
Follow the Money: How Digital Ads Subsidize the Worst of the Web


Jul 29, 2020

There’s a lot going on this summer. The presidential race is building steam, civil rights protestors are still in the streets, the pandemic is taking a nasty turn, Hamilton is on Disney+. Amid all those news events—and partly because of them—businesses, activists, and lawmakers are zeroing in on an issue that seems less dramatic but is still pretty important: digital advertising, the underlying financial model of the open internet.

The highest-profile example is the Stop Hate for Profit campaign, which has convinced some major advertisers, including the likes of Verizon and Unilever, to pause their spending on Facebook until the company takes dramatic steps to deal with the spread of hate speech on its platform. But how exactly does this stuff turn a profit? The answer goes far beyond Facebook’s content policies.

The Re:Targeting series is made possible by the Omidyar Network. All WIRED content is editorially independent and produced by our journalists.

“A lot of those debates, when you track them down to their technical causes, it inevitably boils down to advertising technology,” said Aram Zucker-Scharff, the ad engineering director for The Washington Post’s research, experimentation, and development team. “So many of the problems that people are talking about on the web right now, these are problems that arise out of detailed and persistent third-party, cross-site user behavior tracking.”

There’s a lot to unpack there. Over the next few weeks, WIRED will be looking at the various ways the modern digital advertising market underwrites the proliferation of harmful, divisive, and misleading online content while simultaneously undermining real journalism. To start, we need to understand the three main categories of ad tech and the position each fills in the food chain of online garbage.

Social Media

Companies like Facebook and Twitter make almost all of their money from ads. Hence the Stop Hate for Profit boycott: The loss of advertising revenue, the thinking goes, is the only thing that could make the world’s biggest social network change how it deals with racism and disinformation. But what exactly is the relationship between advertising and social media’s bad actors? It’s not as though white supremacists on Facebook are making money from their posts. The economics are a bit more complicated.

Critics of Facebook have long argued that while the platform doesn’t monetize hate or disinformation directly, its reliance on microtargeted advertising encourages that stuff to exist. A social network that’s free for users makes money in proportion to how much time those users spend on the platform. More time means more opportunities to serve ads and to collect data that can be used to help advertisers target the right people. And so for a long time, social media companies have designed their platforms to keep people engaged. One thing that tends to hold people’s attention really well, however, is polarizing and inflammatory content. This isn’t exactly surprising; consider the old journalistic mantra “If it bleeds, it leads.” An algorithm that prioritizes keeping users engaged might therefore prioritize content that gets people riled up—or that tells people what they want to hear, even if it’s false. Even if advertising isn’t directly funding divisive or false content, that stuff is keeping people on the platform. Facebook’s own internal review concluded, for example, that “64% of all extremist group joins are due to our recommendation tools.”

The other issue is with the substance of the ads themselves—particularly political ads. The same features of a platform built around engagement and microtargeting can make paid propaganda especially potent. In June, for example, Facebook took down a Trump campaign ad that featured an upside-down red triangle reminiscent of a Nazi symbol. Data from Facebook’s Ad Library shows that the campaign tested several variations of the ad, using different artwork; the triangle one appeared to perform the best. In other words, Facebook’s algorithm optimized for an ad that Facebook ultimately decided violated its own policies.

“Facebook’s entire business model is an optimization of a robust data-mining operation extending across much of our lives to microtarget ads against the cheapest and most ‘engaging’ content possible,” said Jason Kint, the CEO of Digital Content Next, a trade organization representing publishers (including WIRED parent company Condé Nast), in an email. “Sadly, the content that tends to
