The Global Fraud Economy
Last week, a news story caught my eye, even though it seemed to fly under the radar of general discourse. It was one of those blink-and-you-miss-it headlines on the Apple News app, which helpfully refreshes at random intervals and in random ways, giving me headlines I don't want for hours and then disappearing headlines I'd like to click on so quickly that I can barely remember what to search for before I get distracted by something new the app pushes at me. I love to consume content mindfully and with intention on the internet.
Anyway, the story was about Meta and all the money the company makes from facilitating fraud on a gargantuan scale by showing users fake ads, ads for fraudulent investment schemes and other scams, and ads for banned goods. "Banned goods" is a vague phrase, so you should know that based on my extremely cursory internet search, "banned goods" include but are not limited to: banned medical products (even as Meta blurs or blocks ads for abortion pills), gun silencers, "nudify" apps that allow users to create sexually explicit deepfakes, gambling and sports betting, pornographic and sexually explicit ads (all the examples in that article are blurred, don't worry), and "AI girlfriend" apps.
Need I remind you that if you are a user on any of these platforms, your content may be immediately removed for the slightest hint of nudity or anything else that violates the terms of service. You won't be able to contest this. You might even be banned from using the app. Meanwhile, this is happening.

Now, Meta obviously does not show these ads intentionally, but show them it does. According to internal estimates, it sometimes shows them to the tune of 15 billion scam ads a day across all platforms. Fifteen billion. That is a mind-boggling number. Beyond the very serious issue at hand—users being shown and clicking on scammy ads or sexually explicit deepfakes, Meta making money from these views and clicks—it's also a reminder of just how many ads are being shown every day on Meta products alone. Fifteen billion is only a fraction of that total.
Oh, and by "all the money the company makes," I mean upwards of 10% of Meta's total revenue, according to those same internal estimates. So, you know, about $16 billion. (These numbers have since been walked back.)
Now, I should be fair. While I've seen some questionable ads on Instagram, I've also seen crazy scam ads on non-Meta platforms. Recently, during a random YouTube rabbit hole, I got served a series of super weird deepfake ads about natural erectile dysfunction remedies, and one of them featured... Oprah?
And, you know, Meta has tried to remediate the issue to a certain degree. They've done internal assessments, developed automated systems that not only flag questionable ads and marketers but also predict whether those advertisers are likely to be scammy, taken down millions of ads, reduced user reports of scam ads, and penalized marketers who are likely to be scammers by charging them higher ad rates.
I'm sorry. They do what?
They charge higher ad rates. The idea is that higher rates will be a deterrent for anyone who might—might! not absolutely certainly!—be looking to advertise gambling or firearm supplies or sex. In other words, the company actually makes more money from these ads than from other ads.
The cursory internet search I mentioned above also produced a number of results on sites like Reddit and Quora, among others, full of people trying to figure out how to report these ads at all, let alone how to report them effectively and get them taken down. To be clear, these posts and comment threads are not data, and many of them might not even be real. So they don't prove anything about Meta's reporting systems or the company's willingness to remove questionable or overtly problematic ads. But seeing them, and thinking about experiences I've had as a user, makes me wonder how Meta reduced user reports of scam ads by 58%. Were there in fact fewer scam ads to report? Or did the changes in content moderation policies earlier this year have an impact on ads? Did users feel more discouraged from reporting, or confused by the process? I don't know the answers to any of these questions, but they would be interesting to dig into.
But! I'm not here to accuse Meta of doing anything shady when it comes to user reports, or of secretly trying to earn more money from scam ads. (Hello to any lawyers who may be reading!) I just want to talk about these ads, and about the fact that a company is able to make billions of dollars in part by providing a platform for advertisers who may be actively scamming and defrauding the users of those platforms. Including users who try to report it, whose own content has been taken down for ostensibly violating terms of service, who are vulnerable or at risk. Or the advertisers may not be scamming anyone; it's really hard to be certain.
When I worked at big tech companies, it was often hard to explain to non-tech workers (and frankly even to many tech workers) why companies made products and made decisions that users hated. Occasionally, it was a case of a necessary change to navigation, an honest attempt to fix an issue even though it would be impossible to please everyone, or a choice that was based on a particular engineering decision the company had committed to years ago. It's not always possible to build the thing you want. But the funny thing you learn is that it usually is possible to build the thing a business partner wants, or a particular VP, or a major partner, or an advertiser.
Sometimes this is even explicit. At one job (not at Meta), I pushed back when another team kept telling mine that they were taking a feature we had worked on and pushing it to production. I told them that in every single study we had conducted, users didn't like the feature. I made it clear that the feature would actively worsen the experience for the people who paid to use the product. The other team kept insisting, and it was only after they said I was being obstinate and "kind of bitchy" (for advocating for our users) that someone cleared things up: someone in the C-suite had personally told the team to ship the feature because they'd promised it to business partners. I said, "Oh. Why didn't you guys tell us this to begin with? Even I know there are some battles you can't win."
When you work in tech long enough, if you have any level of detachment and cynicism (and any sense of ethics or morals), you start to see patterns. All those new legal policies that provide cover, all those public statements touting efforts to mitigate the problem, full of results that sound so impressive. But who really holds these companies accountable for enforcing those policies? How do we know the problem is being solved when we have no other numbers or metrics to compare against? How much money would the company stand to lose if the problem were fixed to the full extent possible? To the public it always seems like not enough is being done, but once you've been on the inside long enough, you can see that it's actually just enough being done: Look how hard we're trying! It's such a tough, intractable problem! Who could possibly solve it! :jazz hands:
A few weeks ago, I finally read Tinker Tailor Soldier Spy by John le Carré for the first time, and ever since I've been obsessed with his work. If you know me, you know I am a wildly curious person. I can't just know about one part of a thing, I have to know about as many parts of the thing as possible. I'm also very tangential, as evidenced by this seemingly unrelated aside, but also by the fact that, while exploring the le Carré universe, I decided I should also read a non-fiction book called A Spy Among Friends, about Kim Philby, one of the famed Cambridge Five double agents who spied for the Soviets. The first chapter is about Philby's close friend Nicholas Elliott, who was also in the secret service and who was one of the people Philby most betrayed. There's a line in that first chapter that struck me. Elliott, later described as a child "of the Empire," "was born to rule (though he would never have expressed that belief so indelicately), and membership in the most selective club in Britain seemed like a good place to start doing so."
Tech, as we've learned the hard way, has been our 21st-century Empire. You might think this is crazy, because no one was "born" to tech. But that's always been part of the ethos behind the industry: a special Shangri-La, a meritocracy in which certain people succeed because of their obvious natural-born talents, a selective club of special geniuses. This being the United States, instead of class we just call it money. And there are a lot of ways to take money from dupes and dummies.
It's not just the ads. It's NFTs, crypto, and the insane proliferation of sports betting sites, new versions of the old sleight of hand and back room bookies that we've now spit-shined and legitimized. It's the products that suck you in and ruin your attention span, then push ten different things at you to distract you and make you forget whatever it was you'd intended to do. But it's also the ads. The relentless barrage of shit that you click on even when you don't mean to, the ads that skirt the rules, the ads that remind you that you're not only trading your data for a free service, you're also allowing the company to pick your pockets, access your life savings, and flash you in the bargain.
You already know that the tech industry has attracted people who think they're smarter than everyone else. Now you know they also think the rest of us are total dummies, while they're born to rule in this new Empire, fully entitled to take our money. They've been doing it for a while now! No wonder nothing is beautiful, and everything hurts.
Until next Wednesday.
Lx
Leah Reich | Meets Most Newsletter