I Think I Figured Out Why AI Sucks So Much

Leah Reich
Did I promise to write a specific newsletter last week? Does anyone remember? (Please don't go check.) Yes, I did, and I should know better. Chris, my apologies, but I won't be writing about your comment this week because—like a total jerk—I'm going to write about a comment yours inspired. Blame Ruth. (Just kidding! Sorry, Ruth, you're great.)

Meets Most is a reader-supported newsletter (no ads, no affiliate links). If you enjoy it, you can become a paid subscriber. Your support means a lot! Thanks for being here.

Rather, I'm going to let Ruth be my starting point, the way "Chris's pitch-perfect reply" (her words!) was hers. Here's a quote from what I can only describe as an extremely valid existential panic in the form of a comment (complimentary):

[U]ntil AI can read the minds of a diverse and ever-changing group of leaders, *nobody knows* what the final AI should be doing...
Our customers are not only expected [to] tell us what's wrong with our product, but also how to build it to begin with. In some cases, they also have to agree to pay for new products before they're released and give us their likeness for marketing, then repeat the process for every other update. 
On the one hand, it feels good to incorporate the "voice of the customer" into so much of what we do, but on the other hand, we're only doing it for imminent upcoming releases. What happened to understanding the root concerns and needs and building a long-term strategy that fixes upstream issues and surprises and delights customers from the get-go?

Research, as most of you know (and as Ruth definitely knows), is meant to be a structured endeavor. There are methodologies, hypotheses, protocols, and plans. Call the social sciences as soft as you like, but there is still some science in them, and there's rigor in the research itself. As a broad discipline, UX research also insists on rigor to a certain degree. I don't necessarily agree that it needs rigor (spicy opinion), but I bring this up simply to remind you that the work I did as a product researcher in tech companies was supposed to be scientific-ish. But for better or worse, I've always been a little loosey-goosey when it comes to things like rigor. That's partly because we weren't doing double-blind medical studies to cure cancer, but also because as a researcher I should always be listening, paying attention, and holding onto the threads of the broader insights I'm weaving.

We cannot have science without art, and we cannot have art without a little magic.

As a researcher, the job is to look beyond anecdote to find insights. You find those insights by looking for patterns in data. But it's really easy to define data too narrowly. A lot of people—too many people—focus only on the patterns and insights from a specific methodology or a specific project. They've got a narrow focus, even if they think they're looking at a big picture. You can still learn plenty that way, but it constrains you, like being a musician who plays learned pieces well but who can't improvise. In a research interview, your brain has to stay a little loose. You have a set of questions to ask a participant, but you can't adhere to them too closely because you need to be flexible enough to jump around, like if they answer questions you were going to pose later, or if they say something that you want to probe into more deeply.

You also need to stay loose enough to hear when someone says what I refer to as "The Thing." (Creative naming is not my strong suit.) What's "The Thing"? It's a comment or an answer that pings your spidey sense. It plucks a thread you've been holding onto across multiple projects, or an idea you've been noodling on in the background. It's a gestalt, a gift that allows you to piece together everything else you hear from all the participants and come up with a really fucking good insight.

It may not surprise you to know that The Thing is not a part of rigorous research. You are absolutely not supposed to focus on what one person says. I don't care, because for me The Thing is crucial. If your brain is loose enough to hear that little ping! when someone makes an off-handed remark, it means your brain has been paying attention, collecting data, running it through, and seeing the bigger picture.

Which brings me back to Ruth's comment. I read her comment last week, and then I reread it yesterday, after I read this Verge piece by Elizabeth Lopatto called "Silicon Valley has forgotten what normal people want." Here's Ruth inside the industry, reasonably freaking out about her role, about the impossible expectations she's supposed to meet, and about the complicity of representing the user but having to sort of betray the trust the user has given you. She's also voicing the frustration we all have, which I can sum up as "How the fuck are we supposed to know what to build when you, the leadership that makes all this money, have no fucking idea what you're talking about or doing?" Note to current and future employers: These are my words, not Ruth's, please hire her and pay her well.

Meanwhile, here's Elizabeth Lopatto, voicing related but different frustrations held by many (most!) who are outside the industry. Why are you guys building all this garbage? Why are you continually shoveling shit at us we don't want or need? Whatever happened to solving problems, caring about what users enjoy, making good stuff that people actually want to buy?

I went back to Ruth, then to The Verge, then to Ruth again. And then I heard that little ping!

Do you know why Silicon Valley doesn't make things people want? Because it never really did, and now it's stuck in a terrible mess of its own making. Bear with me while I explain.

There's an overriding, almost nostalgic sentiment I see a lot when it comes to the tech industry: Once upon a time, tech made cool products that we were all excited to use and pay for, like the halcyon days of Apple with the seamlessness of Mac computers or the delight of the iPod. Form followed function, and technology became easier and more fun to use. Yes, this was stuff we "wanted." But it was not stuff we needed, nor was it stuff that was built with any sort of customer input or to solve any existing user problems. These products were built for Apple and Steve Jobs. Apple was notorious for not doing any sort of user testing, and for relying on its own design and engineering principles. In fact, this was pretty true of a lot of the industry, before user research really existed as a discipline. From The Verge:

Within recent memory, people who made software and hardware understood their job was to serve their customer. It was to identify a need, and then fill it. 

This is a real misconception about Silicon Valley. It comes in part from the industry's own self-mythologizing, and in part from some of the tools it champions, like the ever-present Jobs To Be Done framework. Tech told everyone it was user-centric, that it wanted to create products that delighted users and solved real problems, that it wanted to change the world in some meaningful way. And like, I don't blame anyone for believing that at the time or looking back on it as if it were true! The industry did make a lot of really great products that ostensibly solved some problems, and there was a time when it was fun to use computers and be online.

But when you start to work your way backwards, you start to see the pattern emerge. Let's stick with Apple as the example. Earlier I asked a friend this question:

Do you think it’s fair to say that Apple, in the era of Steve Jobs returning, the iPhone etc., was a company that succeeded more in delighting users than in solving their problems in some fundamental way?

At first, he was dubious. After all, the Mac had the feeling of solving existing problems. Sure, I said, but the problems it solved were existing tech problems—problems created by the industry itself. The Mac didn’t solve a problem, it just made doing the shit we were already doing so much nicer and more fun.

Next I asked, "What problem did the iPhone or iPad solve?" The iPhone, he argued, "seemed like such a new thing that it sort of invented the whole category of problems the device itself also solved?" Exactly. And the App Store invented problems that only existed because Apple created an entirely walled-garden ecosystem. Sure, laptops allowed us to leave our desks. Then the iPhone and other smartphones showed us the possibility of computing without even having to be on an actual computer. It seemed so freeing, until we realized each way of being untethered—first from the desktop, then from the desk entirely—somehow made us more tethered than ever. Which is why the industry has to come up with a new solution to that problem.

But here's what I hadn't thought of until I read Ruth's comment, describing all these tech leads who don't have any idea what these new products should actually do:

Over and over again, we—as consumers but also as people building the tech itself—kept going "oh wow yeah that's so cool!" or "oh wow yeah that solved a problem!" We didn't fully see (or we didn't care yet) that the new cool thing solved a problem for the industry, or a problem created by the industry. We rewarded this method of problem solving, so companies kept using it.

Worse, they kept using it in the most selfish way, which delivers short-term rewards with increasingly diminishing returns. After my team made Add Yours at Instagram, the entirety of Meta wanted to rubber-stamp Add Yours features into every possible product. That will only work for so long. Add Yours was successful because it actually did tap into some human needs—to feel more secure sharing stuff, to know what to share, to feel welcome and invited to participate, to feel like we're all doing stuff in community with others rather than posting content in a vacuum. But the industry does not really want to do that work! It really, truly doesn't! Please trust me on this! It wants to churn out as many replicas as it can until people are sick of that thing and it no longer makes them money.

And this is where, for me, it gets really interesting:

At one time, the industry's model to profit was to sell things to willing buyers. This is the era we all remember so fondly, even if it still wasn't an era in which we were really taken into consideration—the industry just made better shit. But it moved beyond that model a while ago, first by taking all our data, then by taking all our content and our conversations and our creativity. Silicon Valley found itself at the limits of how far beyond the original sell-for-profit model it could go. Even with everything it strip-mined from us and from humanity more broadly, often without our knowledge or consent, it turns out that ad revenue can't keep up forever. So it needed to find a way to push even further to make money.

This is the position Silicon Valley finds itself in now. It's trying to build products based both on this false mythology of itself as an industry and on this model of product development. This is why Ruth's comment smacked me right in the face when I revisited it. Tech leaders don't have any idea what the final AI should be doing. No one has any idea how these AI products can really make money. They aren't building products that might actually solve some of our problems because they've never done that. They can only solve the problems they've created, and now the biggest problem they've created is not knowing how they're going to keep making money off us.

Until next Wednesday.

Lx