Big tech, epic fail
Covid-19 misinformation means that social media is bad for our health. Thankfully, regulators have noticed.
It’s true that there has always been plenty of conspiratorial craziness out there. Long before Twitter, Torquemada stoked fear and conducted a torturous campaign across Spain that lasted years. The fear that travelled from town to town as the Inquisition moved around the country persecuting Muslims and Jews needed no amplification on social media.
But it’s certainly the case that social media has made it easier for the craziness to be shared. Even a cursory examination of Facebook, Twitter or YouTube suggests that so-called mis- and disinformation (MDI, for short) is a major component of the content on social media platforms.
Some of it is laughable. I came across a recent tweet claiming that the World Economic Forum’s logo features three hidden sixes (extremely well hidden, if my eyes don’t deceive me) and therefore carries a secret reference to the number of the beast. Even ignoring this trope’s lazy revisiting of the devil-in-a-seat-of-power storyline presented by the seventies horror movie The Omen, it’s frankly absurd.
But it fans the flames. As networks of followers like and share these meaty main courses, algorithms and happenstance introduce side orders of 5G fear, QAnon conspiracy and false claims of election rigging from elsewhere on the madness menu, leaving a foul taste in the mouth and serious heartburn. Until now, those who post such messages have largely been left alone by Big Tech, unless they offended or threatened someone. That is now changing.
Emptying the swamp
After years of being warned to self-regulate or face regulation, Big Tech, and social media platforms in particular, is being threatened left, right and centre. President-elect Biden, who has “never been a fan of Facebook,” wants to revoke Section 230 of the ‘Communications Decency Act of 1996’, which shields social media platforms from responsibility for the content posted on them.
Here, the UK Government has just announced the creation of a ‘Digital Markets Unit’ within the Department for Digital, Culture, Media and Sport (DCMS) to curtail the power of Big Tech. While it appears that its main remit is to remove the market dominance of Big Tech and encourage competition, given its place within the DCMS, it’s hard to see it not getting pulled into debates around questionable content.
Most worryingly of all for Big Tech, the European Union’s long struggle with tech will reach new heights when the bloc publishes its draft ‘Digital Services Act’ before the end of the year. This seeks to address the way that tech has “created new risks to citizens and society at large, exposing them to a new range of illegal goods, activities or content.”
The background to these and other measures to address the power of Big Tech is, of course, its market domination and the way it squeezes out any competition. This was the subject of a recent US Congressional investigation which found that the tech giants have abused their dominance in the marketplace. But there’s more. Notions of individual privacy, for a start, raise questions about whether the users of online services are not so much customers as products for sale. The ownership of their data is, therefore, a key consideration, and provides another reason for authorities to be seen to do something.
But in addition to these dry, courtroom-friendly subjects that invite scrutiny of Big Tech, there is also the unhealthy miasma of MDI, and the apparent inability of Big Tech’s social media platforms to control the content they allow to be shared and liked.
And, while there is some excuse for ignoring low-grade craziness like the World Economic Forum ‘666’ tweet mentioned previously, something else happened this year that has brought social media’s failure to control MDI to the fore: Covid-19.
The textbook nuttiness of the World Economic Forum logo with its hidden message was posted to the Twitter account of a print newspaper in the UK called “The Light”, which claims a distribution of 100,000 copies. It is packed with wild stories claiming that Covid-19 is a scam, that vaccination is a means-of-control conspiracy, and that facemasks not only fail to prevent the transmission of the virus but are dangerous in themselves. And, in a sign of how these issues now bridge the online and physical worlds, the newspaper is highly active on Facebook and Twitter.
In July of this year, the Center for Countering Digital Hate (CCDH) highlighted 409 “Anti-Vaxx” social media accounts with a total of 58 million followers. In a separate study, the CCDH discovered that the proportion of individuals who would definitely refuse a vaccine was higher among those who rely on social media to learn about Covid-19, compared to those who rely on traditional media (35% vs. 24%).
The difference between Covid-19 MDI and the usual conspiracy fare is that it represents a real threat to public health. So regulators are getting involved in policing and removing this poisonous stuff because the social media platforms have largely failed to remove it themselves.
A social disease
Researchers at the Oxford Internet Institute suggest that the world of online information we increasingly inhabit should be considered a factor in our health, much like education, income, environment and access to healthy food. In effect, being online and on social media has an impact on our health, claim researchers, as there is “a direct correlation between exposure to poor information quality and poor health outcomes.”
Earlier this year, Facebook promised to remove false content or conspiracy theories about Covid-19 that could be harmful, adding in a now-deleted news item that this was “an extension of our existing policies to remove content that could cause physical harm.”
Yet it was revealed this week that Facebook failed to spot more than 2,000 ads and posts that breached the UK Advertising Standards Authority’s rules, including ads that “preyed on public health fears and anxieties” and offered up quack cures for the disease. A further 1,500 ads were removed because they promoted regulated items for sale. The ASA says it is moving to more proactive monitoring of social media instead of relying on consumer complaints, and noted Facebook’s cooperation in removing the ads, citing the “sheer volume” of “worrying content” as the reason it was hard for Facebook to remove the content itself.
But the volume argument doesn’t hold much sway if one considers the results of an Oxford Internet Institute study of Covid-19 related MDI which revealed that, of 32,607 Facebook communities that had posted links to MDI videos, “Just 250 communities have provided the same amount of visibility, in terms of shares, to misinformation than all the remaining communities combined.” Two hundred and fifty. An intern could identify them and pull down the accounts in a couple of days. Why can’t Facebook do it?
Facebook certainly has the money to pay for content moderators. And it’s hard to imagine that the company, which has an estimated 52,000 data points for each user, cannot better identify and remove dangerous content on its platform using its very own algorithms. So its failure to take action on Covid-19 related MDI suggests that it either doesn’t care, or won’t do anything unless pushed into it by a body such as the ASA.
How else could one explain the fact that, despite Facebook’s promises to remove harmful Covid-19 content, after the CCDH reported those 409 “Anti-Vaxx” social media accounts with 58 million followers to Facebook, Twitter and YouTube, it subsequently discovered that only six accounts had been removed: less than 1 per cent of the network it had identified?
Mark Zuckerberg, Facebook’s CEO, has argued that private companies like his should not be “the arbiter of truth”, so perhaps the company’s abdication of responsibility for the content on its pages is a conscious decision, and the company wants the greater scrutiny now knocking at its door.
Free to air
Perhaps not. Perhaps Facebook and other social media platforms ultimately champion freedom of speech over anything else, even when it is dangerous to public health. Judging by the many “Keep Britain Free” groups and accounts that seem to have emerged over the last few months in reaction to lockdown, for many social media users, it is indeed a matter of personal liberty and the freedom to speak out about it.
Yet the openness of many of these users to a wide and absurd range of conspiracy theories suggests a low level of online literacy and a resulting inability to tell truth from fiction. Education and, yes, intervention from the social media platforms and, in their absence to date, regulators, are essential.
And those online-savvy and sophisticated social media stars who shout about free speech (or seem to be driven by the clicks this content attracts) might consider the words of the authors of the same Oxford Internet Institute study referenced earlier:
“We have all made personal sacrifices during the pandemic to protect the most vulnerable in society from the Sars-COV-2 virus. We should also be willing to sacrifice a fraction of the right to freedom of speech to protect those who are most susceptible to online health mis/disinformation. This makes tackling the MDI a matter of public health importance.”
It’s not so much a matter of free speech, then, but saving lives at the expense of clicks.