Are You Better Than ChatGPT?

The other day, during an unexpectedly deep conversation, a stranger made a confession: Only that morning, just before we met, she'd felt so uncertain and in need of guidance that she asked ChatGPT for advice.
Hold up. Before you judge her, do me a favor and pause for a moment. Think back to the last time something really troubled you. Maybe you made a serious mistake – and I mean big, like you lost a major sum of money or slept with someone you shouldn't have or messed up big time at work. You were worried about something but were too ashamed to ask anyone you knew. What did you do? Where did you turn? Be honest with yourself. When you have a problem, would you ever look for answers online?
I'll go first: I would. I have! I've looked for information, perspectives, comparisons, opinions, anything that could help inform me, make me feel better, or help me make a decision. Sure, much of what I've found wasn't useful or even correct, but I've been alive long enough and online long enough to have developed some necessary tools, critical thinking among them, to sort the wheat from the chaff. (Yes, I'm still listening to a lot of Trollope.) Even so, I have occasionally fallen for a piece of information later revealed to be false. I've gotten excited about technologies and companies that turned out to be less than they claimed or even to be harmful. I've found it difficult to resist behaviors and products I swore I'd avoid and boycott forever. And I have certainly felt proud of myself for swearing off one thing, only to find out later I was using something else that was arguably worse.
What's more, if you asked me to explain every single tool or product I use, and how it works, I couldn't. Do I know with certitude how search engines work, what content they have access to and how they prioritize results? Maybe? I certainly don't think I could identify with absolute confidence which chatbots were RPA (robotic process automation) agents and which were GenAI agents, or which used NLP (natural language processing) and which didn't. I couldn't explain what all of that means without a quick Google search, which I just did, and even then I had to resist the temptation to read the AI summary at the top. To be completely transparent, I didn't even think about the fact that there are different kinds of online "agents" and chatbots until a very kind person reminded me.
In fact, were I not someone who had spent the majority of her career working in tech or tech-adjacent spaces, after almost an entire lifetime easily co-existing with computers – to a higher degree than the vast majority of Gen Xers, thanks to a father who began working with computers in the 1960s – what then? Why would I even prioritize understanding any of this, since major tech writers and editors don't appear to? Why not ask ChatGPT for advice? Where else is a person supposed to go? Reddit will tell you to get divorced immediately, Google will direct you either to some garbage or to Reddit, and everyone on Bluesky will demonstrate how they're a much better human than you are because they would never, ever demean themselves or devalue their relationships by asking an AI chatbot to help them plan a child's birthday party.
God. Now that I think about it, maybe I shouldn't have told that lovely stranger to stop asking ChatGPT for advice.
But I did tell her that, and I certainly stand by it. ChatGPT, I told her, is a tool that's designed to mimic human thinking and communication. It takes text written in human language – our natural way of speaking – and analyzes it. Then it predicts the likeliest next words based on billions of patterns and generates a response. (Hence the "Gen" in GenAI.) This kind of chatbot might summarize, translate, engage, or advise, depending on your request. But one big problem is that you don't know what information has been used to train the AI model, or whether that information is any good. You also don't know what rules and labels were defined by the human beings who originally designed the model. What if the chatbot has been trained on incorrect or dangerous information? What if the rules and labels were faulty or biased? How much should you trust or verify the information you receive? How will you even know that you should verify it before you trust it?
This is especially challenging because "AI" has been slapped on everything, and the media seems to treat all these tools like they're equal. But a GenAI tool like ChatGPT is different from an "agent" or bot that's preprogrammed. They both use natural language processing (NLP), but a preprogrammed bot mimics human actions instead of human thinking. These bots use something called robotic process automation, or RPA, in which specific tasks must be performed in a specific order. This is kind of like a voicemail menu, where you can only select from the preexisting options, or yell AGENT! REPRESENTATIVE! I WANT TO TALK TO A HUMAN and finally get transferred to someone you hope is a human. An AI chatbot is different. It has natural language processing capabilities, so it can understand human language, and it can learn as it goes. It identifies new patterns with each set of inputs we give it, and it generates new types of responses after its initial "training" and deployment. So whenever you ask something and then respond to the information it generates, the chatbot takes everything you type and learns from it, so it can then answer questions for someone else.
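For readers who like to see the distinction concretely, here's a toy sketch. Everything in it is invented for illustration – no real product works this simply, and none of these functions come from any actual library – but it captures the difference between a preprogrammed, menu-style bot and a pattern-predicting generator:

```python
def rpa_bot(choice: str) -> str:
    """A preprogrammed bot: fixed options in a fixed order, like a voicemail menu."""
    menu = {
        "1": "Checking your account balance.",
        "2": "Transferring you to billing.",
        "3": "Transferring you to a (hopefully human) representative.",
    }
    # Anything outside the menu gets the same canned fallback, forever.
    return menu.get(choice, "Sorry, I didn't understand. Please choose 1, 2, or 3.")


def toy_generative_bot(prompt: str, patterns: dict) -> str:
    """A cartoon of generation: pick the statistically likeliest continuation
    of the last word seen, based on whatever 'training' patterns it was given."""
    last_word = prompt.lower().split()[-1]
    continuations = patterns.get(last_word)
    if not continuations:
        return "..."
    # Real models score billions of learned patterns; this toy just returns
    # the most frequent continuation it has seen for that word.
    return max(set(continuations), key=continuations.count)
```

The toy generator's answer depends entirely on the patterns it was fed – `toy_generative_bot("I feel so", {"so": ["lost", "lost", "happy"]})` comes back with "lost" – which is the whole point of the worry above: garbage or bias in the training data becomes garbage or bias in the advice.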
It was at this point that she said she was going to delete ChatGPT from her phone.
There's a reason I said a GenAI chatbot will mimic human thinking and communication. Wacky as it sounds, human beings also do things like recognize patterns, look for contextual clues, assume meaning, and learn from heavily biased sources and teachers. We learn over time, or we should. To be honest, a lot of people don't. How many people do you know with mental models in desperate need of a major update? How many people do you know who can listen to another person's questions and problems, for free no less, and try to help them to the best of their ability in a way that is clear and non-judgmental?
I have spent the majority of my career asking people about their opinions, perspectives, problems, and experiences. At my tech jobs, whenever I would interview a participant who had kindly agreed to take part in some user research project, I would begin by telling them that the one thing I could never be an expert in was their experience. I needed their expertise and would take it back to the teams I worked with so we could improve the feature or product or whatever we were working on. This didn't mean I reported every individual perspective or piece of feedback. Like an algorithm or AI model, I looked for patterns across all the participants and all the data I collected. But unlike an algorithm or AI model, I could both empathize with my participants' perspectives and experience a Gestalt. I could speak with my participants in a way that relied not solely on their prompts but on my ability to connect with them, to engage with them and help make them comfortable. In return, a participant would say something so powerful it would unlock an entirely new perspective, or over the course of multiple research projects I would be able to shift my understanding in a way that wasn't simply about pattern matching or analyzing the millions of individual parts, but about seeing how the pieces reconfigured themselves or created something that was somehow greater than their sum might have suggested.
More importantly, as a human being, I could strive to understand someone beyond the existing mental models I was trained on in order to understand what they actually needed. One of the most basic tenets in user research – and in a way one of the most profound – is much the same as in therapy. The problem someone tells you about is usually not the true, much deeper problem. (Another tenet is that what people say they do and what they actually do are often not the same.) In order to get to the deeper problem, the researcher has to ask a lot of smart questions and pay close attention to the answers. Most people and certainly most companies have only so much interest in and capacity for learning about a deeper problem, as we have discussed. As a result, one of the more basic tenets in product development is one I find gets ignored far too often: The "obvious" solution to the surface-level problem is actually a solution to another, similar problem, one that's a close-enough corollary. This is a reason why a lot of tech products you use frequently make you annoyed enough to ask whether anyone who makes the product actually uses it.
And more than almost any other reason (and there are many others!), this is why I think asking ChatGPT for advice is a truly horrible idea. You are engaging with a tool that has the capacity for neither cognitive empathy nor Gestalt. It cannot ask you for a deeper understanding, in part because it's been built by people who too frequently rely on close-enough corollary solutions in an industry that doesn't reward anyone for deeper understanding. In other words, you're asking a robot that has been programmed to give you the closest approximation of a solution to your surface-level problem.
But I still don't blame people for doing it, because how would most people know? Even if ChatGPT came with a warning, many people wouldn't care. Having observed countless people while they used a product I was working on, I can confidently say most people – and that includes me – do not read or pay attention to most of the information an app presents to them. This should not come as a surprise, because when was the last time you read a user manual? Plus, the average person who neither works in tech nor spends most of their time online in tech-adjacent or -related spaces doesn't care in the same way you or I might. Whenever I work on a product I remind the teams I work with: Unlike us, who are paid to think about this app, it is not a priority for most of our users. It's not even close to the top 100 things they're thinking about during the day, and if they are thinking about it, they're thinking about the value they get out of it, not the experience or the features. Tech users can be pretty savvy, but even their own concept of what savviness means differs – hardware? software? apps? navigating the web? – plus their information is now coming at them from any number of sources, from traditional media to TikTok and beyond. There's no cohesive message, just an ever-churning sea of soundbites and viral moments competing for primacy.
Plus, what's the alternative? Is everyone lonely because of tech, or is everyone lonely because doing the work of connecting with other human beings is hard, and we're all tired and pushed and broke? Yesterday I saw this whole Bluesky thread that set me off, one tech writer dunking on another tech writer (and rightfully so) for sharing dumb ideas he imagined a GenAI agent could possibly handle (but actually couldn't). I say dumb in part because it's selfish and irresponsible to be a prominent journalist for a major publication and publicly engage in cheerful brainstorming for billion-dollar companies, thus influencing the ways average tech users perceive a tool like ChatGPT, which is especially problematic when many of them don't actually know what the fuck it really is and why they should care (see above). To be fair to him, many of the holier-than-thou responses in the ensuing thread also drove me nuts. I'm glad all of those people love planning children's birthday parties and are good, devoted, hyper-present parents who never for a moment get distracted by their devices or think reading to their children is boring. Just because you wouldn't do something, or anecdotally everyone you know would do something else, doesn't mean either of those things is true or correct. Sometimes, thinking about how people use tech requires you to put yourself and your own preferences aside, whether you're a big-time editor or a rando in the responses.

How would anyone know that? Well, anyone who is not ostensibly beholden to, like, journalistic ethics? Our lives online are structured such that we ask machines to answer our questions, not people. People are there to hear our opinions and takes, or to make us feel like idiots for being vulnerable online. This has absolutely filtered into offline life too, so it can't be surprising that anyone would go to ChatGPT for advice.
Plus, how do you know what modern AI is even capable of, what distinguishes different kinds of AI, or what their strengths and weaknesses are? Most people don't. I barely do, although I will write more about this in the coming weeks. Having done user research, I do know that many people don't necessarily want AI to help them, but many are also overwhelmed, short on time and resources, in need of help, and these tools are being foisted upon them relentlessly with very little warning. Thanks to a major shift in the industry, there are now fewer researchers asking those people what they really need or what they understand about the products on offer. Even if they did ask, what could product teams do? What incentive is there for anyone to design or to use products that could actually help them? Those products are expensive to build and maintain and also can't make investors happy by showing the impossible: Eternal growth.
For a long time I have been noodling on a newsletter about algorithms, and how you don't really hate them, you just hate bad ones. Bad ones are selfish, which you can feel. Bad algorithms show you stuff for short-term wins. They keep you on an app a little longer for now, but they don't make you want to go back to the app or think of it as a place for discovering cool new stuff. If anything, they make you want to use the app less over time, because you feel like you're seeing the same stuff over and over without breaking out of whatever weird bubble you've found yourself in.
Like the other day on Instagram, whose algorithms are sometimes pretty selfish, I liked a few posts from nail artists (as in, manicures) and spent a few minutes going down a nail art rabbit hole. Now my entire explore page is 75% nail art, even though I am not someone who regularly engages with or cares about nail art? I will say it's better than the super weird plastic surgery and celebrity face comparison posts that have filled my explore page for months now, all because I click on those out of curiosity and thus send a signal to the app that makes it think I want more of that content. But I don't want to write about algorithms and signals now! I bring this up because algorithmic selfishness results in a bad experience, and users are right to point it out.
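To make "selfish" concrete: in caricature – this is not any platform's real ranking code, and every name and number here is made up – a short-term-greedy recommender and a slightly less selfish one can differ by a single term:

```python
def selfish_score(post: dict) -> float:
    """Rank purely on predicted immediate engagement: more minutes now, full stop."""
    return post["predicted_watch_minutes"]


def less_selfish_score(post: dict, recent_topics: list) -> float:
    """Same engagement signal, but penalize topics the user has already been
    shown repeatedly, so the bubble can occasionally break."""
    repetition_penalty = recent_topics.count(post["topic"]) * 0.5
    return post["predicted_watch_minutes"] - repetition_penalty
```

The penalty weight (0.5 per repeat) is arbitrary; the point is only that without some term pushing back against repetition, the highest immediate-engagement item wins every single slot, and the explore page becomes nail art forever.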
You know what though? Selfishness and self-centeredness are everywhere, and we need to point that out more too. People and companies are rewarded for both. Even products and platforms that don't have algorithms can feel selfish and alienating. What gets a bigger response: Thoughtful analysis or a bad hot take? Engaging honestly with someone online or dunking on them? Encouraging people to ask questions and have challenging conversations or letting everyone get mad and yell at each other? It's easy to say "the Internet sucks" or "that's why we shouldn't be so online," but guys, we are the Internet, and the Internet has absolutely invaded real life too. If we don't care about it then the bots and webcrawlers and Groypers have won on all fronts, and that is a future I want no part of.
We are up against so much. It feels crazy to suggest that we can change any of this. But I guess I'm nuts because at the very least I think we have to try. I can't go door to door explaining why ChatGPT is not a good therapist, but I have some ideas of how we start to reimagine life online. I'll share those ideas soon but I'd also like to know what you think is needed.
In the meantime, if any publication wants to hire me so I can force some of our preeminent tech journalists into doing a bit of good old fashioned user research, let me know. It might do you some good to listen to someone other than yourselves.
Until next Wednesday.
Lx
Leah Reich | Meets Most Newsletter