
Maybe the "A" in AI Sometimes Stands For Asshole

Leah Reich

Earlier today, I had an appointment with my favorite doctor. As is our habit, we spent my entire visit discussing something totally unrelated to my health. I lay face down on a procedure table while he stuck extremely large needles into my back to—as he once described it—give my nerve endings a little "sous vide" that makes them take a break from constantly sending pain signals to my brain. Through a rapid series of tangentially related stories about his family and mine, we somehow got to discussing the value of ChatGPT and of GenAI more broadly.

Now, I have played with ChatGPT and other tools a few times in the past. This year I've avoided using them, mostly due to ethical concerns: the stolen materials used to train models, the way these tools are being pushed on all of us so relentlessly, the way there are seemingly no guardrails or any sort of oversight, as if we're not still neck-deep in the damage unleashed upon us by the same industry through social media! But this conversation with my doctor made me reconsider whether I should take such a hardline stance, both as someone who writes about humans and tech and culture and as someone who loves, and I mean LOVES, pattern recognition and pattern matching. Part of his argument was that ChatGPT is essentially a pattern recognition assistant, able to match patterns at a scale and speed inconceivable to the human brain. And you know when someone says something you already know, but they say it in a way that makes you stop and reconsider it? This was that.

Stopping and reconsidering in this way is something I find very powerful. It's like taking a breath when you're about to lose your shit: A pause that can help you see a situation more clearly. Maybe you see your unfair assumptions, or a perspective you couldn't consider because you had An Idea about it all that locked your thinking into one position. Like putting a different lens on your camera, or realizing the wallpaper in a room is actually made up of tiny different details that you were only reading as one big ugly design. You might not end up taking a good photo or changing your mind about the wallpaper, but for a minute you stepped outside your standard way of thinking or perceiving. It's like the classic trope of "oh wow, the nerdy girl wearing glasses was actually hot all along!" You couldn't really see her until you stopped seeing her through your own pre-existing perspective.

I should pause here and be clear: I actually think machine learning, large language models (LLMs), and generative "artificial intelligence" are all extraordinary technological leaps. Some of it is fascinating, next-next-next-level stuff that could have incredible potential in helping humans do important work. I just also happen to think these technologies are being built by an industry that has shown time and time again how little it cares about the humans who use these products and the many aspects of our lives these products upend. But if working in tech taught me anything, it's that you may not change much from the inside, but you still learn a lot that makes you better equipped to explain and critique the industry to people who can help bring about change. So I think my doctor finally convinced me to play more with these tools and get a stronger sense of what they can and can't do.

Anyway, as we were talking, he jokingly said that ChatGPT is a great therapist, and I immediately replied "Oh my god, don't even joke about that, people are genuinely using it that way." This turned us to the topic of "what is therapy, anyway"—and of course therapy is, in part, pattern recognition. Helping people see their own patterns of behavior, the patterns in their lives, the patterns in the behaviors of the people in their lives, the way these patterns are similar to other people's patterns, and what they can do to change these patterns.

But what else does a therapist do? Or at least, what should a therapist do, assuming they're a good person who is good at their job? Hopefully, ideally, they have emotional intelligence as well as intellectual, cognitive intelligence. They can pay attention to non-verbal visual cues, to auditory cues like tone or volume, to tells that a person isn't giving the full picture or is an unreliable narrator. ChatGPT can't do that. It can only do half a therapist's job, because it's a machine computing mathematical probabilities based on whatever someone tells it, whatever its computations are, and whatever sources it has access to. It just happens to be packaged as a cool companion, described in language that makes it seem awfully human. And if you are crying, do you want me to sit across from you and comprehend you? Or do you want me to sit across from you and compute?

Now, as we've discussed before, you and I (and my doctor) already know this about ChatGPT. But a lot of people don't, or they've been taught not to care by the same industry that had a hand in destroying the communication skills of an entire generation and offering them quick solutions and viral success instead.

Regardless of whether any of us can explain how an LLM works or what the technical specifications are, we can at least help other people understand that the technology is a machine and a tool. No matter how it's packaged or described, it doesn't think or feel. It takes the information you give it and generates guesses based on billions of pieces of data. It uses computational processing to mimic human output.

I know that last sentence seems obvious. But stop and look at it without immediately rushing to all your assumptions about GenAI, whether good or bad. What do you see? Generative artificial intelligence elevates one type of intelligence and prioritizes it over the rest. It takes the intelligence that most closely mirrors the intelligence of the people who created it—brilliant computational minds—and gives it the tools to gesture towards what a lot of these people frankly struggle with. The so-called "soft skills." Connecting with other humans. Articulating themselves. Having feelings and feeling understood.

Well, we can solve that! Just be smarter than everyone and mimic human behavior. You'll probably get promoted.

The simple fact that they took machine learning and renamed it artificial intelligence tells you everything. It's like planting a flag! We claim intelligence! (Like how data analysts rebranded themselves as data scientists and made user researchers take the name that gets paid less.) To the people who build GenAI, intelligence is about solving big problems through skills and languages the rest of us don't know. This is so central to the entire system, in fact, that to really maximize a generative AI tool, you need to learn the machine's language. You need to ask the right questions and use the right prompts. You need to conform to the tool in order for the tool to effectively work for you.

This is the ultimate example of the industry solving a problem for the people in the industry, and assuming everyone else will understand it and feel the same way about it. Because why wouldn't you? How can you not get it? How can you not think the same way? How can people who do not value or prize emotional intelligence understand that there are many types of intelligence and that what seems obvious to you may be opaque to me?

Do you ever think about what intelligence is? What it means to be intelligent, what we classify as intelligence, and what sorts of intelligence we value as a society? I think about this a lot. But maybe you have a more vibrant social life than I do. There are more kinds of intelligence beyond the two I'm writing about here. Have you ever known someone who hated school and who other people said was kind of dumb, and then they turned out to be, like, some mechanical genius or musical prodigy or extraordinary dancer? Conversely, have you ever been in an exercise class with that one person the teacher constantly corrects, who cannot seem to do one move correctly or in time? Zero brain-body connection. No body intelligence. When I was in high school, I was lucky enough to still be able to take vocational arts classes, but I wasn't lucky enough to exist in a society that made me feel vocational arts were a viable career path for a smart person. I'm great with my hands and have incredible reflexes, but no one came down to the machine shop in 1991 to tell all the students, "Hey, you guys are very intelligent."

To be honest, I think the tech industry values that kind of intelligence way more than it does emotional intelligence. And that's why we're in this position. If we can build incredible machines that mimic human behavior and emotions, well folks, isn't that enough?

Jimmy Fallon: "And do you use ChatGPT when raising your baby?" Sam Altman: "I cannot imagine figuring out how to raise a newborn without ChatGPT."

— More Perfect Union (@moreperfectunion.bsky.social) December 9, 2025 at 10:01 AM

You probably saw this Sam Altman video from Jimmy Fallon's show, in which he says he can't imagine raising his newborn without using ChatGPT. I made myself watch that clip a few times, trying to get my brain outside the perspective of "Christ, what an absolute asshole this guy is" and "have you considered just being a human being?" What's funny to me about Sam Altman is that, whenever I am forced against my will to learn something new he's said, I find he's actually given a useful insight into human behavior, almost against his own will. I'm not even sure if he sees what he says as this kind of insight, or if he disregards it because his vision is also locked into one perspective, which is "win the powerful rich guy super smart computer competition."

What he says in this clip is similar to his whole "ChatGPT is a better diagnostician but I'd still go to a human doctor" commentary some months back. He says, here we have all this incredible technology. I could be asking it big questions, having it solve really important problems, and instead I'm asking it why my kid loves dropping his pizza on the floor. Well gosh, Sam. Maybe you're doing that because this is extremely normal human behavior. Even when people should be focused on something else "bigger" or "more meaningful," they'd rather figure out why the girl they have a crush on doesn't like them, or why their teenager won't talk to them, or why they can't seem to succeed at work no matter how hard they try. It's almost like no matter how hard you try to come up with bigger and better systems that provide more answers faster than ever, people still need to do the work that a machine can't do for them: Figure out how to be human.

And honestly, what's more meaningful than that?

Until next Wednesday.

Lx
