Everyone's An Expert Now
If I were to be completely honest with you, there were a lot of things I disliked about being a user experience researcher. Don't get me wrong, it was often a good and interesting job. It was just a frustrating one. As a user researcher, you have a lot of responsibility to influence others, but almost no authority. You want product teams to build in a way that benefits the actual people who use the product, but you don't get to decide that's what will happen. You often don't even get to be in the room with the product managers, engineers, and designers when those decisions are made. You—unlike all your teammates—are required to bring the team with you on "the user journey," which means involving them in every step of the process, hyping it up for them, convincing (nagging) them to participate, posting daily updates about what you've learned on Slack or Teams or Workplace or whatever. Then you leave it up to them to make all the important choices. And even with all that, you regularly work with people who still don't know what a user researcher does, what value you bring to product development, or why they should listen to you.
As I used to joke: If you love spending at least 50% of your time explaining to your own coworkers the specific value you bring to the company, then UX research is the discipline for you.
But user research is also a very special job. Unlike every other role in product development, as a user researcher you have a real opportunity to put people first. You're the one talking to and listening to people, especially if you're a qualitative researcher. The only other team at a company that regularly hears from and communicates with a product's users is customer support—an incredibly important function that, when done well, can be hugely beneficial to a product's success. Customer support also gathers a lot of data about user problems, needs, and demands. It's a different kind of data and a different kind of user interaction, though, because the focus as a whole is on resolving and reporting immediate issues.
So what makes user research special? In a way, a researcher is a sort of product therapist. Everyone who uses a product—and just as importantly, everyone who doesn't use a product—has opinions about that product. Not just opinions but ideas, perspectives, problems, issues, questions, and so on. All these people are out in the world, living their real lives and doing a million things wholly unrelated to the product, but somehow that product is still a part of their existence. Or you want it to be. Given the chance and a little encouragement, most of them want to talk, whether about the product, about themselves, or about their lives.
When building a product, it's tempting—just like in therapy—to start with or focus only on The Problem, or at least what you think is The Problem. But most of the time, the trouble a user is having with a product, or the workaround they've found for an issue, or the use case they were hoping the product would address is, in a way, the least important part of the conversation. It's the entry point to figuring out what is and isn't working, what they're looking for that they don't entirely know how to articulate, or what's really going on. That way, in the long run, you find ways to keep the problem from happening at all, rather than always patching over the same, seemingly surface-level issue.
There's only one way to do this work, and this is perhaps what's most special about user research: You, the researcher, are not the expert. You get to invite expertise from others, including from people who may feel very much like the opposite of an expert, especially when it comes to tech. There's always a bit of boilerplate that researchers use at the start of every in-depth interview or concept test, about how "there are no wrong answers" and "if you don't want to answer a question we can skip it." I would always add this:
"One of the things I love best about this job is that I get to ask people like for your expertise in something I can never, ever be the expert in: Yourself. Your opinions, your perspective, but most importantly, your experience. What you share with me about yourself, your life, and this product will help the teams I work with understand how, why, and what to build for more kinds of people."
As you might remember, I did my PhD dissertation on doctors in clinical settings whose offices (mostly internal medicine and OB/GYN) were transitioning from paper to electronic medical records. My primary method was ethnography, so I conducted—you're not ready for this—400 hours of participant observation, along with 50 interviews with doctors, medical assistants, and administrators. This might sound crazy, but I love watching people work, especially when they finally forget you're observing them. Most of all, I loved interviewing them. I loved letting someone else be the expert, and listening. This is one reason I wrote this last year, on AI.
After I posted last week's newsletter, I immediately got three more comments from three different and equally thoughtful people. Obviously it made me happy (more please!), but what it really did was confirm something else I was thinking: Amidst all the loud voices and so-called experts, among the AI boosters and AI doomers, there are a lot of people who just want to have realistic conversations about what's happening and what we can do about it. There are a lot of actual experts and really smart people who I want to listen to and learn from. Like Indy, who said this:
Being an old tech guy now, I do feel there has been a culture change. Lots of moments you could date it to, but for today I'll choose the release of Gmail into the world as "an extended beta." There begins the infuriating culture of "just throw the tech out there and see what users make of it." Infuriating because there's a kernel of truth - technology is always an interaction, it's arrogant to think a designer can understand everything users might do with the new affordances. Yet at the same time, it has become an excuse for loading the user with all that work and also an excuse for terrible quality control.
PLEASE SAY THIS LOUDER FOR THE PEOPLE IN THE BACK, INDY
I think about this all the time. Like, imagine if an appliance company designed a new oven with a new door that didn't close correctly, but hey, it still got hot and baked stuff, so they let people buy it anyway. Or a car with a whole new braking system that worked...well, most of the time. And then! What if those companies had the temerity to make users do the work of reporting these problems, even fixing them or paying for upgrades? I don't even know if this is only because the industry has chosen cheap (although it is largely this, as Indy says, because it's cheaper to have users test a product than to hire good QA people), or also because mantras like "perfect is the enemy of good" and the pressure to ship, ship, ship have made MVPs (minimum viable products) more minimum and less viable.
Buffy's thinking is along the same lines, with thoughts I've heard from many friends forced to use AI at work in increasingly stupid ways. All of the "work" they turn over to AI has to be babysat, checked, checked again, and ultimately corrected, like an unqualified intern who only got the spot because their parent knew someone who knew someone. One thing Buffy said that I really liked was this follow-up to Izzy's comment about "describing the world clearly enough that a machine...can make sense of it."
But the problem is, you're not describing it for a scared or confused human, you're describing it for a machine that has no emotion at all, and whose only goal-seeking rule is to "say something that might reassure or inform your human user." And oh, by the way, completely make stuff up if that furthers the goal.
Earlier today I saw this opinion piece on AI and language that made me think about this comment again, in yet another way: The machine is not only working based on specific rules, instructions, and goals, using pre-supplied information (often of unknown provenance) and/or making stuff up; it's also only able to communicate all of that in whatever language style it's been trained on. This is true in terms of both computer language and natural language. As the Guardian piece says, LLMs are trained entirely on written language, which as we all know is wildly different from spoken language. Sure, Reddit gets scraped constantly, and some communication on Reddit is more casual and conversational-ish, but it's not like spoken language. Similarly, I haven't programmed in years, but I know there are many, many ways to write code. Just like prose, there's good and bad code, efficient code, janky code, you name it. So it's not just that we have to describe the world to an emotionless machine that wants to inform the human user; we have to communicate in a way the machine understands. We have to prioritize the machine's languages over our own.
Which brings me to Chris's comment, one you should read in its entirety. In fact, I hope you'll forgive me, but I'm going to pause here and write about that comment next week, because there's a lot in there, and if I dig in now this will go for another 2000 words. Plus, maybe more of you will have some thoughts on all this, and I can keep this party going.
Until next Wednesday.
Lx