
My Whole Thing is Everyone's Whole Thing Now

Leah Reich
polaroid self-portrait, 2009

Many (oh god, so many) years ago, when I was in my late 20s and was finally taking therapy seriously, I had a truly wonderful therapist. To this day, he was one of the best therapists I've ever known. During a session, he looked at me and said, "Leah, you are harder on yourself than anyone I have ever known." I looked back at him and said, "You're a therapist and I assume you hear people beat themselves up all the time, so this really says something about me, huh."

Meets Most is a reader-supported newsletter (no ads, no affiliate links). If you enjoy it, you can become a paid subscriber. Your support means a lot! Thanks for being here.


It would be nice to tell you that I've softened since then and become kinder to myself. But despite all the work I've done—and it's a lot, we'll get to some of it one day—that whole "beating myself up" thing is one of the hardest nuts to crack. Like last week, I felt terrible about the newsletter I sent out. It was the worst newsletter I've written yet, it didn't make sense or say what I wanted it to, I'm letting everyone down, and worst of all, I sent it out a few minutes late so it wouldn't have the Wednesday timestamp. An obvious catastrophic failure across the board.

So you can imagine my surprise when that newsletter received multiple long and extremely thoughtful comments. Isn't it funny how that works?


I read the comments and sat with them for a while before I responded. At first I felt bad about that too, like I was doing a crappy job with community building and encouraging engagement (ugh), but then I thought: Why not respond here, in the newsletter itself, so we can all have this conversation together? Ghost, unlike Substack, isn't a closed system with built-in social tools, which means it doesn't have an active hive-like feel, something I've talked about before. Sure, Ghost is part of/connected to the Fediverse. But I bet more than one of you doesn't do anything on or related to the Fediverse. Hell, I bet more than one of you doesn't know what the Fediverse even is. Which, for the record, is extremely normal and fine! Even so, our online habits have changed a lot in the past decade. Closed systems like Substack and other forms of social media have conditioned us away from earlier mental models and behaviors, like intentionally going to a blog to read a post and leave a comment. Again, you might not even know this newsletter gives you the option to leave a comment if you're only reading it in email. Without either that mental model or the heady activity and avenues for boosting visibility on a platform like Substack, the newsletter itself can feel lonely and comments can get lost.

So I decided to try and have those conversations here, where more of you can read and maybe respond. While the newsletter is mine, what I'm talking about is—for better or worse—a group project. We all need to talk about it. Let's see if I can facilitate a little of that vibe here.

The first comment I saw was from Sebastian, and I want to pause and thank Sebastian, because his comment really reassured me. It made me feel like maybe I hadn't sent out The Worst Newsletter Anyone's Ever Written, and it also meant at least one other person out there saw the outline of what I was gesturing at. Even better, it was a person who doesn't work in tech (I don't think), and who also cares about language and writing and young people! Then came two other great comments, and I thought ok, maybe I'm onto something here.

(Three comments may not sound like a lot but getting comments here at all is amazing to me. They're intentional comments, not people trying to get visibility for their own work on a social media platform. Plus, I get great comments from smart, kind people. These new comments were lengthy, too, which reaffirmed what I wrote about the benefit of a smaller, more engaged group of readers over a large, disconnected one.)

As someone who was also a college writing instructor, albeit briefly, I really felt the ambivalence of Sebastian's comment, as well as the glassy-eyed fear.

As a college writing instructor, it's hard to stay calm or clearheaded about this. On the one hand, I *know* we're on one particularly unsettling portion of the pendulum's arc. There will be a swing back towards the "soft skills" the humanities have to offer, Marc Andreessen's lack of an inner life notwithstanding. 
On the other hand, I'm worried that, before our administrators here at Neoliberal U realize they should have stayed the course, they'll eliminate my whole program, on the theory that if we have chatbots to clean up the writing, what do we need writing instruction for? And telling them "cultivation of an inner life" is not going to cut it.

Honestly, it was hard enough to get college students to care about writing 20 years before ChatGPT ever showed up. This is anecdotal, but I found that my ESL college students had significantly better English writing skills than most of my students who spoke only English. As you probably know, there's a larger structural issue at hand: Reading has been destroyed in the US by a flawed teaching method, and it's not hard to imagine how much more writing has suffered. I dealt with all the usual suspects, like overt plagiarism, no grammatical knowledge, and a total lack of motivation or interest in writing, so I cannot imagine teaching college students now.

But! I also had a lot of incredible, interested students in a wide range of classes who wanted to learn and who wanted to work. I watched more than one lightbulb turn on in a way that I could tell was permanent. Which is why what Sebastian ended with is what feels most important to me.

I want to make sure students develop enough muscle memory that they can get along without the technology, while also retaining enough critical thinking skills to avoid falling for the LLM's sycophancy.

This is why I keep poking at the idea that, amidst all this chaos and institutional rot, and even with these enormous structural pressures crushing our hopes, dreams, and livelihoods, there are still things we can do as individuals. This is why I am persisting with writing about an idea that does feel crazy sometimes: What if we start small. What if, while we're all losing our jobs (and minds) and teetering on the brink of WWIII, we focus on the things we're good at and can control. What if Sebastian keeps trying to teach students to think critically and to care about writing? What if he keeps trying to reach the students who want to hear it? Because I know that in every class, there's at least one student whose eyes will light up in a way they haven't before, and who knows what that one student might turn around and do.

It's interesting because, even though Gary commented as a programmer instead of a writing instructor, there are echoes of the same ambivalence. Gary's whole comment is great, and you should read it, because he's also an AI skeptic whose livelihood is threatened but who finds at least some value in the tools he uses. Maybe what I love about these comments is the fact that we're a couple of humans finding a quiet corner to share all this nuance. No one is an AI booster, no one is writing an angry screed about the ghouls wasting billions. We're individuals who are like, well, yeah sure I'm mad and worried, but I also need to be able to do something. And what if the thing I'm good at doing is ultimately the key to surviving all this once the actual madness ends and the dust settles?

In particular, I liked some of the specifics Gary pointed out:

It's still crucial that the person writing and refining the prompts actually understands the code that is being generated. The implications for the security and maintainability of code deployed by a developer who does not understand it are terrifying. But we all know people are vibe coding this way en masse, and I'm already noticing popular websites and apps becoming more unstable recently, with bugs and outages. Is that a coincidence? That old "fast, cheap, good; pick any two" triangle still holds, and we've basically all decided to choose fast and cheap.

In some ways, this feels like the inevitable outcome of these tools given the industry that built them. Yes, I know, these ghouls are wasting billions, so how can I call them cheap? Because the industry decided, over the years, to cheap out on a lot of this deep knowledge and understanding in many critical areas of creating products. All the billions being spent aren't going toward building an actual set of tools that humans can or even want to use; they're going toward building the next big infrastructure so the industry can continue to profit. Work like content design or user research that's concerned with the human experience of tech, and with making sure that humans understand the tech? That's not really being done. I was extremely creeped out by Claude's supportive, friendly "conversation" when I used it—creeped out because I immediately forgot it wasn't a person named Claude, and I shouldn't want to hear it tell me I did a good job! Eventually, I think some version of that work will have to be done again, by actual humans. But what version? What can we work on now that will get us there?

Gary also mentions a limitation of these tools, one I ran into myself when using Claude:

Another limitation I've noticed is that AI-generated coding seems to exhibit something similar to the "last mile problem" in shipping or logistics, where the final bit of a journey is inherently inefficient and eats up a hugely disproportionate amount of time. For example, I can use Google AI Studio to whip up a really impressive and functional alpha or beta for a website or app. And it's astounding, to have something usable appear from nowhere in a few minutes. But the more I work to refine the results, to add features, debug weirdness, or adjust the interface, the less impressive the process gets.

Forgive me, but it's like having an intern who is both extremely intelligent and extremely dumb. You know who I'm talking about. Someone who has a lot of skills they learned by the book but has never practiced them in the world and who also doesn't have a ton of common sense.

Eventually, the only sensible thing to do is to give up, export the code and start modifying it manually. But now I have hundreds of lines of code in front of me that I didn't write, and even for an experienced dev, that takes time to study, understand, and reverse engineer before I can jump into modifying it. So in the end, time was saved, yes. But not as much as it might have seemed originally. Expertise is still required to build something that actually matches provided requirements, specifications and designs. But do hiring managers and clients understand this, or care? They may not.

I don't think they will care, at least not for a while. Like Sebastian says, we're in the wild pendulum swing of AI, and this pendulum is more like a wrecking ball. When enough shit falls apart, maybe someone will figure out that we need humans to be part of this process. But again, what will that look like? What can we do while all this is happening? I think it's some version of what we're doing here, as stupid as that sounds.

Only I don't think it's stupid, because of this last comment, from someone I used to work with about 12 years ago. Do you remember when I wrote about the connected car startup, and how I had to show a technician that regular drivers didn't understand cars the way he did? One of the engineers I worked with popped up after all this time. Not only did he remember the exact story I shared in that newsletter, he told me why that story matters. Here's part of his comment:

I build the data infrastructure that sits behind AI products — pipelines, schemas, the invisible plumbing. And here's what nobody in the AI hype cycle talks about: the single biggest factor in whether an AI agent gives you something useful or gives you garbage is how well a human being described the data to it beforehand. Not the prompt. The metadata. Column names, table descriptions, documentation. The boring stuff...
You and I both remember [the story]. I'm doing the same translation work now, just for a different kind of non-expert. The LLM is the driver who has no idea what the check engine light means.
Your dad is right that this is spec-writing from the '50s. And you're right that natural language skills matter. But I'd push it one step further: the people best positioned for this moment aren't just the ones who can write good prompts. They're the ones who can describe the world clearly enough that a machine — or a scared driver, or a confused user — can actually make sense of it. That was your whole thing at Automatic. Turns out it's the whole thing now, too.

First of all, if you want to know whether this made me cry, the answer is yes. Not only because it meant so much to be seen and remembered like this, but also because this "one step further" is what I wanted to say last week. I just felt too nervous to say it, because I worried it sounded too naive. Because I still sometimes think, after all this time, that because I don't do the technical work I don't really understand these systems. But I feel this one really deeply. The people who can describe the world clearly are the people who can help solve this problem. What can they do? I don't know yet. Maybe you'll help me figure it out.
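In the meantime, let me try to make his "boring stuff" concrete, even though this isn't my work. Here's a rough sketch of what "describing the data" might look like in practice. Everything in it is hypothetical (the table, the column names, the descriptions are all invented for illustration), but the shape is the point: a human writes the plain-language part, and the agent inherits whatever care went into it.

```python
# A hypothetical sketch of "the boring stuff": human-written metadata for one table.
# Every name here (trips, engine_code, etc.) is invented for illustration only.

TABLE_METADATA = {
    "table": "trips",
    "description": "One row per completed drive, as recorded by the car's adapter.",
    "columns": {
        "trip_id": "Unique ID for a single drive, ignition on to ignition off.",
        "started_at": "UTC timestamp when the ignition turned on.",
        "distance_km": "Distance driven, in kilometers (not miles).",
        "engine_code": "Raw OBD-II fault code, e.g. 'P0301'; empty means no fault.",
    },
}

def describe_for_agent(meta: dict) -> str:
    """Turn the human-written descriptions into the context an AI agent reads."""
    lines = [f"Table `{meta['table']}`: {meta['description']}"]
    for name, desc in meta["columns"].items():
        lines.append(f"- `{name}`: {desc}")
    return "\n".join(lines)

print(describe_for_agent(TABLE_METADATA))
```

If nobody writes those descriptions, the agent sees a bare column called engine_code and guesses. That's the check engine light all over again.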

Until next Wednesday, friends.

Lx
