
Does it Matter How the Sausage is Made?

Leah Reich

For the first three and a half months of this year, I spent a not-inconsiderable amount of time and energy preparing to be deposed as a fact witness for a civil case against my former employer, about the harm social media does to teenagers. The specific case isn't important – people have filed many, many such lawsuits against social media companies in recent years, so many that they frequently require multidistrict litigation – so I won't link to it or provide any details. Sorry to be selfish on this one, but if I'm going to attract any more attention from Meta's attorneys (hi guys!), I'd prefer to do so after I write a book, and in such a way that I get great publicity and my work immediately shoots to the top of all the best-seller lists.

Anyway, there's not much more to tell: My deposition was cancelled only days before it was scheduled, so all this is less exciting than I'm making it out to be. I don't even bring it up because I want to talk about the lawsuit per se, or whether social media causes harm to teens (I feel my readers know where I stand on this, no?). Rather, it was the process of preparing for the deposition that gave me a better understanding of both the outsider perception of product development and the blind spots I sometimes had as an employee. I was reminded of all this thanks to a conversation I had with a friend and former colleague when we caught up over dinner this week. Even if I wanted to say more about AI after last week's newsletter, I realized I first need to take a step back. One of the things that made me want to keep writing about AI was the way tech writers so easily conflated different types of products and features, lumping them together under a single concept. But unpacking that requires a little more foundation.


When it comes to writing about tech, I'm in an interesting position. I don't want to say it's unique, but it's not super common – I have a PhD, so I'm familiar with academia, but I've also worked as a writer and worked in tech for a long time. When I say I worked in tech, I mean I worked on actual product and feature development. There are other people out there with this combo, but most people who write about tech have little experience working at tech companies, and when they do, it's not always on the product side. This isn't necessarily a bad thing. The outsider perspective is hugely important for holding the industry accountable; it's much harder to do that when you've bought into the bullshit or have a serious conflict of interest.

(Similarly, academics and product people are not always interested in [dare I say skilled at?] writing for more "mainstream" audiences. I can say this with some level of confidence, since "she wants to write for mainstream publications" was one of the big criticisms my dissertation committee lobbed at me when I went to defend. I heard them talking about it when they convened behind closed doors before the defense started, and it was said in the sneeriest tone you can imagine. Joke was on them though, since many years later my NYT op-ed on Tinder was published on the very day there was a session about "how to get more sociologists into mainstream publications" at the annual American Sociological Association meeting.)

Now, the majority of mainstream end users/readers don't really care about inside-baseball tech industry stuff or the nuts and bolts of how products get made. I'm not even a mainstream end user, and a lot of the time I don't really care to read about it. Most of us care about bigger issues, like whether the product itself is good and thus worth our time and money, as well as whether it's bad for us, the people we care about, and the world more generally. So the lack of insider experience, or the specifics a writer might get wrong, don't always matter. But sometimes the specifics do matter. More than that, sometimes the outsiders are themselves insiders, but in a different way. I don't want to say it's an echo chamber, because that feels like a loaded phrase these days, but writers do tend to write for their audiences. People who look to The New Yorker to understand tech are probably different from people who look to, say, Ars Technica or even The Verge. So a lot of times I will see one set of opinions on Bluesky, where I see a lot of writers, journalists, and non-big-tech people, only to go on LinkedIn and see a very different set of opinions.

This doesn't even cover the vast swaths of "normies" for whom topics of this sort are low on their list of daily concerns or interests. (If you are a "normie," please do not take offense. This is something you should 100% celebrate being.) I talk about this all the time, not only in last week's newsletter and in others before, but literally all the time: When you get beyond the survey data or the anecdotes, people have complicated relationships with tech, including AI. Tech is not something they're thinking about all day long, like those of us who work on or write about it. They vary in terms of their interest level, knowledge, and savviness. During my time at Mozilla last year, I asked every research participant I spoke to whether they considered themselves to be technologically savvy. I then asked them to define "technologically savvy" for me. Did it include computer software? Hardware? Navigating the Internet? Figuring out apps on their phones? People who use Firefox tend to be somewhat more tech-oriented than other browser users (but not always!), and even among this user base there was still plenty of variance when it came to actual knowledge and tech savviness.

How much does the average person actually need to know about how products get made, or who has authority over decisions in tech companies, or what any of the specifics really look like? What does it matter what the intentions are or who is ultimately to blame when the end results are always the same, and when the same people benefit? As my very wonderful lawyer said during one of our many conversations, "If there were a company called Cocaine Inc. that supposedly only wanted to make a product that was so much fun to use you couldn't stop using it, would it matter whether an element of compulsiveness was intended or even implied? Would it matter if the end result, which is addiction and significant harm, were the same?" In many ways it wouldn't. But he and I also agreed that if you could better understand how the company came to create that product, and the decisions that went into it, and who knew what, you might be able to do more to push back – to construct better arguments, to make better decisions, maybe even to build better alternatives.


During all the lawsuit prep, someone asked me if Instagram intentionally designed its algorithms so that teen girls would be served more content about harmful things like eating disorders. I said, you can't think about it like that. No one is sitting in their office like Mr. Burns with their fingers tented going, "Ahhh yes, let's destroy the psyches of little girls, we will make millions!!!!" I mean, maybe they are, but that tends to happen after the fact, when they learn that fixing a bad problem for users means creating many other bad problems for them as a business. It's tempting to think this is happening, because there are certainly bad actors in tech, and because we've all read tell-all memoirs and previous lawsuits full of damning quotes from people actively admitting that they don't want to stop the shitty thing because it's only shitty for users, not for the company. But I implore you to remember that, as in any profession, there is a wide range of people who work in tech. Some are mercenaries, some are calculating. Some are incredibly smart. Some are not. And just like in every other industry, there are many who are normal, average people who very frequently have no idea what's going on. Sometimes when I read all the accusations leveled at tech, I compare them with my lived experience and I cannot square the two.

Genuinely, I cannot fathom almost anyone having some big vision or malicious strategic plan, because I can't fathom most people having a vision or strategic plan at all. You would be shocked at how little actual strategy there is, how few long-term product visions there are. But if you did learn more about that, it might actually help you better understand why so many products feel so shitty all the time. They're frequently designed by org chart, which means teams are working on stuff based on how the company is organized, rather than on some cohesive sense of where the product should be headed, what the user needs, or what can create long-term, sustainable success.


In product development, everything is connected, and there are always tradeoffs and unintended consequences. Sometimes those tradeoffs are technical. Things can't always be built, because engineering decisions made long ago preclude certain new directions, or would lead to a lot of "tech debt," which basically means a lot of stuff someone will inevitably have to go in and fix in order to do the new thing. That requires time and resources that most teams would rather spend building fun new shiny things, because fun new shiny things are more enjoyable to build, and also because that's how you get better bonuses and promotions.

Sometimes tradeoffs are political. Let's say you want to build a feature that encourages teenagers to actually talk with each other, but that feature will lead to them spending less time in another important part of the app. There are two different teams responsible for these different things, and they each have their own metrics, and very often the team that wins is not the one making the feature that will benefit users, or even benefit the business. Sometimes the decision is so short-sighted it will take your breath away. Sometimes it's as simple as the winning team being the one managed by the person the decision maker hired five years ago and thinks shits gold.

Really, no one sets out to make a specific teen girl eating disorder algorithm. Instead, as you know, they want to make the app more fun to use so that you will use it more often and will stay on it for longer. Then, hopefully you'll actually do more stuff when you're there. All this is important for metrics such as daily active users (DAU), session length (how long you keep the app open), and anything related to engagement or interaction. Because for social media, it's not just how often you open the app or how long you stay on it, it's also what you do while you're on the app. It's not enough for people to only consume content.
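Since this newsletter is partly about how the sausage gets made, here's a minimal sketch, in Python, of how a couple of these metrics might be computed from raw usage events. Everything in it – the event log, the field names, the logic – is hypothetical, a toy for illustration rather than anyone's actual analytics pipeline:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log of (user_id, event_type, timestamp) tuples.
# Purely illustrative; not any company's real instrumentation.
events = [
    ("u1", "open",  datetime(2024, 5, 1, 9, 0)),
    ("u1", "close", datetime(2024, 5, 1, 9, 12)),
    ("u2", "open",  datetime(2024, 5, 1, 20, 30)),
    ("u2", "close", datetime(2024, 5, 1, 20, 31)),
]

day = datetime(2024, 5, 1).date()

# Daily active users: distinct users with at least one event that day.
dau = len({user for user, _, ts in events if ts.date() == day})

# Session length: elapsed time between an "open" and its matching "close".
open_times = {}
session_seconds = defaultdict(list)
for user, event, ts in events:
    if event == "open":
        open_times[user] = ts
    elif event == "close" and user in open_times:
        session_seconds[user].append((ts - open_times.pop(user)).total_seconds())

print(f"DAU: {dau}")  # -> DAU: 2
for user, lengths in session_seconds.items():
    print(f"{user}: avg session {sum(lengths) / len(lengths):.0f}s")
```

Notice what a dashboard built on numbers like these can and cannot see: it knows you opened the app and stayed twelve minutes; it has no idea whether those twelve minutes were good for you.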

Metrics are hugely important because they are largely how the company measures the health of the product. Please note that I don't think this is how the company should measure the health of the product. If it were up to me, I would want to measure more qualitative aspects of the experience and maybe build products that didn't define success at such an inhumanly massive scale. But really, again, it's the same as in many industries. Fans of Mad Men will remember when London Fog told Don and Sal that it wasn't enough that "two out of every three raincoats sold the year before had London Fog stitched on the inside pocket."

Algorithms are usually designed to show you different kinds of content, ideally content you actually want to see and interact with so you will keep coming back, etc. The algorithm takes different signals from you to determine if it's getting the content right. Are you clicking on it? Sharing it with friends? Liking it or commenting on it? The more you engage with it, the more the algorithm is like, ok cool, this seems to be stuff they want to see. But the algorithm doesn't necessarily know when the content is actively harmful, or when you're looking at stuff compulsively. Maybe it should, but even when it does, a lot of people know how to outsmart machines – a point I think is important to keep in mind, and which we should come back to later. Here's an example: If you want to watch bootleg Broadway videos, you will learn that people call them "slime tutorials." Why? Because Broadway bootlegs will get taken down. But slime tutorials are hugely popular and lucrative for video content creators, and no algorithm on earth is going to hide or penalize them.
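To make that feedback loop concrete, here's a deliberately tiny Python sketch. The signal weights, topic labels, and scoring are all made up for illustration – no platform's ranking code looks like this – but it captures the mechanic: engagement raises a score, and the score decides what you see next.

```python
# Toy version of the loop described above. The weights and structure are
# invented; real recommender systems are vastly more complex. The blind
# spot, however, is real: nothing here knows whether content is harmful,
# only that you engaged with it.

SIGNAL_WEIGHTS = {"click": 1.0, "like": 2.0, "comment": 3.0, "share": 4.0}

def record_signal(affinity: dict, topic: str, signal: str) -> None:
    """Each engagement signal bumps your affinity for that topic."""
    affinity[topic] = affinity.get(topic, 0.0) + SIGNAL_WEIGHTS[signal]

def rank_feed(candidates: list, affinity: dict) -> list:
    """Order candidate (post_id, topic) pairs by learned affinity."""
    return [post for post, topic in
            sorted(candidates, key=lambda c: -affinity.get(c[1], 0.0))]

affinity = {}
record_signal(affinity, "diet content", "click")
record_signal(affinity, "diet content", "like")   # more signals, more affinity
record_signal(affinity, "cat videos", "click")

# The feed now leads with the topic you engaged with most –
# whether or not it's good for you.
print(rank_feed([("p1", "cat videos"), ("p2", "diet content")], affinity))
# -> ['p2', 'p1']
```

The point of the sketch isn't the math, it's the absence: there is no variable anywhere in that loop for "this is hurting someone."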

These algorithms and products are designed to try to engage millions or even billions of users. The unintended consequence of an algorithm that gives you more of what you supposedly like is that if you like harmful content, you are going to get more of it. And if you tweak the algorithm or change the product to keep this from happening, you tweak and change it for millions or even billions of users who may not be getting harmful content. Maybe you unintentionally censor content that shouldn't be censored, or you piss off people who think you're keeping them from seeing the stuff they want to see, or you tank metrics, or you mess up a business deal, or any number of things. It's not that it's impossible to fix some of these major issues, it's that someone has to be willing and able to make that decision and then to deal with the immense responsibility and the many, many repercussions of carrying it out. Because of how tech products get built, it's not always easy to point to who is responsible, who knew what or knew better, who will ultimately champion something like this and make it happen. Thanks to this lawsuit, I realized that when we work on products and complain that we're all trapped in silos, unable to collaborate or create a cohesive experience, it's almost by design. The more you work on specific, isolated pieces of the product with nice-sounding business goals like "connect people to their friends" or "help them find a podcast they might like," the less you're able to factually declare whether you were working on anything that was intended to make the product harmful in some way.

Maybe none of this matters. Maybe at the end of the day, the big names who are in charge are ultimately responsible for these products, no matter what happens along the way. If these products are harmful, maybe we don't care if they weren't intended to be harmful. But I don't think so. If we're going to dismantle any of this and try to rebuild, or to build something better, I think we have to know how the machine works so we can be smarter than it. We need some slime tutorials of our own.

Until next Wednesday!

Lx
