
YouTube and YouTube Kids: The White Rabbit leads Alice to a point of no return

We’ve all been there. We look up a specific creator, video or topic on YouTube, and as we scroll down the list of recommendations, one video catches our eye. It’s not too far off from what we typed into the search bar, but it’s not exactly what we’re looking for either. In some cases, we end up clicking on the video anyway, sometimes out of curiosity and sometimes on impulse. This scenario applies to young children who use YouTube as well. However, when subjects that appeal to a child are intentionally subverted on platforms like YouTube and YouTube Kids, the consequences can be far-reaching. But how exactly does this kind of child-targeting content make its way into a child’s viewing experience? And how much can parental intervention do to prevent it? In a quest to find answers to these questions, I embarked on a jaunt through YouTube and YouTube Kids, masquerading as Alice, a hypothetical nine-year-old Indian child who consumes content in English as well as in regional languages, namely Hindi and Malayalam.

But before we delve into what I discovered, I’d like to point out that this phenomenon is neither recent nor specific to children who are ‘too curious for their own good’. It has been reported on as early as 2015, by The Wall Street Journal, and it has been tailored to worm its way into every child’s viewing experience, irrespective of their tastes. However, what makes it particularly sinister is how it is discreet enough to make its way to a child’s viewing experience without a parent’s knowledge, and without the child realising what they are consuming. But what exactly is this ‘phenomenon’, you ask? You might have heard of targeted advertising, whereby advertisers on YouTube target children by placing ads in videos made for children. What I’d like to draw readers’ attention to, from my paper on the subject, is how targeted content is far more sinister and ubiquitous on YouTube and YouTube Kids, more so in regional languages.

…what makes it particularly sinister is how it is discreet enough to make its way to a child’s viewing experience without a parent’s knowledge, and without the child realising what they are consuming.

Before beginning my sojourn with YouTube, I established that Alice’s identity online would be shaped by four hypothetical contexts relating to her parent(s). This would bring in, to a certain extent, the necessary nuance of how every child’s experience online is influenced by their environment and other varying factors. In the first context, I placed Alice in a household where her parent(s) had given her uninhibited access to YouTube, leaving her unsupervised with her device for long stretches. In the second, Alice’s parent(s) turned on “Restricted Mode” in an attempt to filter out content unsuitable for children. In both these contexts, however, Alice’s parent(s) chose to give her access to YouTube in place of YouTube Kids. In the third scenario, Alice’s parent(s) chose YouTube Kids for her, thinking it would provide a safer environment, but overlooked potential pitfalls by neglecting to disable the search bar and autoplay features. Finally, in the fourth scenario, Alice’s parent(s) took a more cautious approach: they allowed her to access YouTube Kids, but only through selected channels, ensuring that Alice could neither use the search bar nor watch videos through autoplay, essentially killing the White Rabbit before it could lead her down a dangerous rabbit hole. My methodology is outlined in more detail in my paper on the subject.

To prevent my subjectivity from colouring the results, I applied the film rating system established by the Motion Picture Association of America (MPAA) as a benchmark for determining whether a video was inappropriate for little Alice. For context, only G-rated content would qualify as appropriate for her, as she was consuming content unsupervised in all four contexts.

Now let’s talk about what Alice found: not in the murky depths of YouTube, but floating around right on the surface. While her searches like “princess elsa” and “peppa”, and even the first results for these searches, remained innocent, she was quickly confronted with a disturbing chain of recommended videos, most of them bearing puzzling, incoherent titles inundated with keywords. Searching for “malayalam cartoon” introduced Alice to the colourful world of “Indian Village Barbies”, where dolls were often confronted with gruesome accidents and left to bleed out on the road. Looking up “hindi cartoon” led Alice to “Adbhut Kahaniyan”, a channel replete with videos containing egregious storylines and graphic thumbnails.

I’d also like to share some screenshots of the content that Alice came across in Malayalam, Hindi and English respectively. If you’re a parent reading this blog, or otherwise responsible for a young child: if your child looked for “elsa” or “peppa”, is this what you would want them to find? Were you aware that your child might be led to such content on YouTube and YouTube Kids?

A disturbing pattern I observed in all these cases was that more often than not, the content of the first search result wasn’t a direct threat to a child, but the videos recommended immediately after were always perversions of the child’s search term. This phenomenon, evidenced and documented in detail in my paper, persisted in English as well as in regional languages, with the content in regional languages not being filtered at all in Restricted Mode. However, in the fourth context, where Alice’s caregivers pulled out all the stops by disabling the search bar and autoplay and subscribing only to selected channels, she did not encounter any harmful content.

…more often than not, the content of the first search result wasn’t a direct threat to a child, but the videos recommended immediately after were always perversions of the child’s search term.

But what happens when parents have not been educated about the perils that plague the internet, even on platforms like YouTube that have content moderation policies in place? What happens when a child is left alone to explore the alleys of YouTube, where videos that elude its safety filters lure children into consuming harmful content that warps healthy childhood experiences? In a country like India, with its plethora of cultures and languages, are YouTube’s filters advanced enough to keep children safe from harmful content in regional languages? And ultimately, in the absence of a guaranteed safe browsing experience for children, do we have enough regulations in place to demand more accountability from platforms?

In this context, it is significant to note that these observations were recorded at the time of the Google for India 2023 Summit, where Mira Chatt, Head of Government Affairs & Public Policy for YouTube India, said that YouTube had been making “proactive efforts to remove harmful content” through moderation, using a “combination of technology and people across several Indian languages”. She also said that YouTube removed more than 2 million videos that violated its policies between April and June 2023, and that “notably over 80 per cent of these had 10 or fewer views”. As per my recorded observations, however, these efforts appear largely ineffective to date.

In 2018, Donna Volpitta, founder of the Center for Resilient Leadership, shared a concerning insight with CNBC:

Children who repeatedly experience stressful and/or fearful emotions may under-develop parts of their brain’s prefrontal cortex and frontal lobe, the parts of the brain responsible for executive functions, like making conscious choices and planning ahead.

Unfortunately, this is just the tip of the iceberg where the impact of violent, sexual and subversive content on children is concerned. I’d like to close with a quote from Malik Ducard, YouTube’s Global Head of Family and Learning Content in 2017, who told The New York Times that inappropriate videos were “the extreme needle in the haystack” and that, in the case of parental control mechanisms, “parents are in the driver’s seat”. But what he did not say was that when a parent is unaware that they have to be in the driver’s seat in the first place, the needle in the haystack will inevitably make its way into their child’s viewing experience.

Aditi Pillai

Aditi Pillai is a former researcher at Citizen Digital Foundation. Her work focuses on the intersection of policy with emerging challenges in online child safety and AI governance in the Global South.
