Citizen Digital Foundation

Large corporations have too much information about you – and that could be a problem.

Profiling and Targeting

Can a harmless quiz on a social media platform alter the future course of a nation or impact the geopolitics of an entire continent? A few years ago, we might have dismissed this as science fiction, but the Cambridge Analytica scandal taught us that we are now living in the dystopian, technology-governed future we once feared.

In the 2010s, a British company called Cambridge Analytica (CA) used an innocuous app called ‘This Is Your Digital Life’ to gather the personal data of up to 87 million Facebook users – data that was then used on behalf of its clients for targeted political advertising. When users signed in through their Facebook accounts to take the app’s personality quiz, they unwittingly gave away access to their public profiles, page likes, birthdays, contact details, locations, newsfeeds, timelines, and messages. Using this data from unsuspecting individuals and, by extension, their friends and families, Cambridge Analytica’s algorithms were able to build fairly accurate psychological profiles of millions of users.
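To make the mechanics concrete, here is a minimal, purely illustrative sketch – not CA’s actual system, whose details remain private – of how something as mundane as page likes can be turned into a trait prediction. Every page, user, and label below is invented for demonstration.

```python
# Toy illustration of trait prediction from page likes.
# Nothing here reflects Cambridge Analytica's real models;
# the data and labels are fabricated for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one user; each column answers "did this user like page N?" (1/0).
likes = np.array([
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
])
# Hypothetical quiz results: 1 = user scored high on some personality trait.
trait = np.array([1, 0, 1, 0])

# Fit a simple classifier linking page likes to the trait.
model = LogisticRegression().fit(likes, trait)

# A person who never took the quiz can now be profiled from likes alone.
new_user = np.array([[1, 0, 1, 1, 0]])
print(model.predict_proba(new_user))  # estimated probability of the trait
```

Scaled up from five pages and four users to thousands of signals and millions of profiles, this is the basic shape of psychographic profiling: people who never consented to a quiz can still be scored from their digital trails.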

In the run-up to the 2016 presidential election in the US, the Donald Trump and Ted Cruz campaigns are believed to have used this invaluable data to micro-target voters with advertisements that pandered to their opinions, fears, prejudices, and biases, potentially swaying the outcome of the election. Cambridge Analytica has also been accused of influencing the outcome of the Brexit referendum in the United Kingdom by offering similar data-driven analysis to the ‘Leave’ campaign. When a whistleblower from CA revealed these chilling details about how data was used to create psychographic profiles of users and target them, it sent shockwaves around the world.

CA might have shut down in the aftermath of this enormous scandal, but the episode opened the floodgates for the misuse of data to target users for commercial, political, and ideological gain. As you read this, digital companies – social media platforms, search engines, e-commerce websites, news and entertainment websites, and every ‘convenience’ app and website we use for life, work, and play – continue to harvest our personal data and use it in questionable ways. Based on the digital footprints we leave behind in thousands of places, companies are able to use a potent mixture of behavioural science and cutting-edge artificial intelligence to exploit the vulnerabilities in our personalities.

Though the companies that use our data insist that profiling and targeting are done with well-intentioned, ‘user experience’-driven objectives – improving the relevance of the content we see, recommending things that might interest us, serving us ads for products and services that are aligned with our lives – there is a fundamental imbalance built into this system. As adult humans with the ability to think and discern what’s right for us, we believe that our actions are conscious choices, born of free will. But mega corporations with sophisticated science and technology at their disposal perhaps know us better than we know ourselves – an alarming prospect. Armed with this data, they have the ability to shape our thoughts, emotions, behaviours, and actions, and to manipulate them for specific purposes. With the power to influence what we watch and read, who we engage with, what products and services we buy, and which political parties and ideologies we support, digital companies wield enormous power over our individual and collective lives.

In the face of this unequal relationship with technology, the notion of free will becomes debatable. Do we actually like the things we like and want what we want, or have the algorithms, over time, trained us to prefer apples over oranges, or vice versa? Do we support a particular party and ideology because of our innate political beliefs, or are we being manipulated to? Do we indeed hate a certain group of people, or are we being ‘trained’ to hate them? Are our fears and insecurities rooted in reality, or have they been seeded there, only to be exploited later for commercial or political gain? Do we buy products and services because we need them, or because we are bombarded with compelling ads that give us FOMO?

The root of the deep polarisation that’s evident online – vicious comment-section wars, trolling, hate speech – also perhaps lies here. Rather than show us balanced perspectives that enable independent, critical thinking, the platforms ‘learn’ to show us content that confirms (and strengthens) our existing biases, creating echo chambers where we hear no voices except those that mirror our own. The content that the algorithms curate for us in this manner has the profound ability to divide and isolate us, often in irreversible ways, while benefiting those with specific agendas.
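A toy sketch can show how quickly such a loop closes. Assume – purely for illustration; the catalogue, topics, and scoring rule are invented – a feed that ranks items by how often the user has already engaged with each topic, so every click pushes that topic higher next time.

```python
# Toy echo-chamber loop: items from topics the user already clicked
# rank higher, so each click narrows what the user sees next.
# The catalogue and topics are invented for illustration only.
from collections import Counter

catalog = [
    ("A", "politics-left"), ("B", "politics-left"),
    ("C", "politics-right"), ("D", "sports"), ("E", "sports"),
]

def rank_feed(history, catalog):
    # Score each item by the user's past engagement with its topic.
    return sorted(catalog, key=lambda item: history[item[1]], reverse=True)

history = Counter()
for round_no in range(3):
    feed = rank_feed(history, catalog)
    clicked = feed[0]            # the user clicks the top-ranked item
    history[clicked[1]] += 1     # ...which reinforces that topic's rank
    print(round_no, [name for name, _ in feed])
```

After a single click, one topic owns the top of the feed, and nothing in the loop ever surfaces a dissenting view. Real recommender systems are vastly more sophisticated, but the engagement-optimising incentive is the same.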

Advertising and propaganda are not new. Different groups of people have always used them for commercial or socio-political gain. But the scale and precision with which they are used to influence us today are unprecedented.

This becomes especially pronounced when you take into account the power differentials between the rich and the poor, the educated and the less educated, the technology-savvy and the new adopters, the young and the old.

But awareness is the first stepping stone to action. As we navigate digital spaces, it’s important that we retain a healthy dose of scepticism – who stands to gain from my online actions? Why should I read, watch, like, share, comment on, or believe a particular piece of content? What can I do to reclaim my time, energy, headspace, and agency, so that my beliefs and opinions are my own?