Citizen Digital Foundation

Misinformation thrives when ‘what is effective’ is trained to beat ‘what is right’.


“Falsehood flies, and truth comes limping after it.” – Jonathan Swift

In recent years, the amplification of disinformation and misinformation on digital media has swayed elections and fuelled racism, misogyny, paedophilia, polarisation and genocide. A single WhatsApp message led to 24 deaths in India. COVID-related misinformation across the world has led to people consuming lethal substances, doctors and nurses being beaten up, and widespread racism and bigotry. A pioneering 2018 BBC study in India revealed that 36.5% of messages shared within private networks were scares and scams around conspiracies, technology, money and health, followed by 29.9% that were national myths around cultural preservation and common-man stories. The multi-method study identified coordinated political disinformation activities on Facebook and Twitter that harness and validate pride of identity, and tap into fears and insecurities, to further polarise religious and ethnic beliefs and stoke nationalistic narratives.

This raises the question: why would platforms, intentionally or otherwise, facilitate the amplification of disinformation and misinformation? To answer it, we must first understand the two a little better. Disinformation, or propaganda, is false or misleading information, or distorted narratives, generated intentionally to serve political, profit or reputational interests. Misinformation is distinguished from disinformation in that it is not intentional: it includes any misheard or misinterpreted information, sometimes with its origin in disinformation. Both, however, prompt us to pass them along, either by validating our social and cognitive biases or by carrying an association of ‘newness’, ‘value’ or ‘pride’.

According to Renée DiResta, a disinformation expert who led research for the US Senate’s investigation into Russian interference, three key factors fuel the velocity and virality of disinformation on today’s digital platforms, unlike propaganda at any other time in history. First, where disinformation in the era of the World Wars and earlier pandemics relied on multiple, expensive and geographically limited channels of mass communication, the ‘deep mass consolidation’ possible through just four or five top digital platforms delivers quick, large-scale reach and mobilisation far more organically. The second, ‘targetability’, is the unprecedented ability of these platforms to deliver, for a nominal price, engaged audiences profiled by their likes, hopes, aspirations, sense of identity, fears, insecurities and vulnerabilities. And finally, there are opportunities for large-scale ‘gaming of algorithms’: misuse and manipulation of the platforms’ architecture itself through new forms of information warfare such as coordinated inauthentic activity (bots), forum sliding, consensus manufacturing, troll farms and, of course, hacking. Misinformation, by its nature, then becomes the wind in the sails of disinformation.

In their mission to connect communities across the globe to each other’s products, beliefs, ideas, resources, solutions and expressions, the top digital platforms might not have seen algorithmic optimisation taking on such insidious forms and proportions, until it did. With the competitive pursuit of advertising revenue and a quest for global power and monopoly, delivering ever more engaged audiences was a natural progression, one that transformed the platforms from solution providers into the world’s largest attention merchants. Engineers designing the platforms’ algorithms were tasked primarily with increasing ‘time spent watching/using’, and the success of these intelligent algorithms was measured by continued, effective engagement rather than correctness or users’ interests. Around 70% of YouTube’s one billion daily views are recommended content, not what users searched for, says Guillaume Chaslot, a former engineer on the YouTube team that wrote its search and recommendation algorithms. He points out, “YouTube is something that looks like reality, but it is distorted to make you spend more time online. The recommendation algorithm is not optimising for what is truthful, or balanced, or healthy for democracy.”
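To make that point concrete, here is a minimal, hypothetical sketch of an engagement-only ranking objective. This is not YouTube’s actual system; the class, function and numbers are all illustrative assumptions. What matters is what the objective omits: accuracy plays no role in the score, so content that holds attention outranks content that is true.

```python
# A minimal, hypothetical sketch of engagement-optimised ranking.
# All names and numbers are illustrative assumptions, not any platform's real code.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # model's estimate of how long a user will watch
    factually_accurate: bool        # known to the platform, but unused below

def rank_for_engagement(candidates: list[Video]) -> list[Video]:
    """Order candidates purely by predicted watch time.

    Note what the objective ignores: truthfulness, balance and user
    wellbeing play no part in the score, so sensational falsehoods
    that hold attention outrank sober, correct reporting.
    """
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)

feed = rank_for_engagement([
    Video("Measured report on vaccine trials", 3.1, True),
    Video("SHOCKING 'cure' doctors won't tell you", 9.4, False),
])
print([v.title for v in feed])  # the sensational falsehood ranks first
```

However the real systems are built, the underlying dynamic is the same: whatever signal the objective rewards is what the feed fills with, and engagement rewards outrage and novelty at least as readily as truth.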

In light of the dire consequences of disinformation and misinformation facilitated by digital platforms, Facebook, YouTube and Twitter have taken several corrective measures over the last few years, including investing in new technology and human intervention to check the spread of algorithmically amplified disinformation, misinformation and hate speech. However, as long as business models remain engagement-centred, these measures remain reactive, no match for the sophistication of the original machinery, which is powered by far bigger investments because it feeds those business models. What’s more, most of these measures are actively applied only in English-speaking countries, spurred by harsh media criticism and its impact on bottom lines. In countries like South Africa, Kenya, Brazil, Myanmar, Indonesia and India, Facebook, Twitter, YouTube and WhatsApp, lacking nuance and expertise in their vernacular-language services to boot, have catalysed misinformation to lethal levels. When awe shrouds awareness and pride of identity trumps critical inquiry, the soil remains fertile and inviting for big tech to keep experimenting with their profit-driven algorithms.

Become a Good Tech Champ!

Support our work to make children safer online and to make the internet a productive place for everyone.