Citizen Digital Foundation

Is the resounding male gaze in chatbots an opportunity for responsible anthropomorphism?

The omnipresence of chatbots in modern life has reached a point where some companies are marketing them as replacements for real human connection. As we grapple with the ethical implications of imparting human-like values to these faceless digital assistants, a critical question arises: are we unconsciously perpetuating gender stereotypes by the very design of these chatbots?

Once upon a virtual assistant

Some time back, voice assistants came under similar fire: they carried female-sounding names and were seen as perpetuating gender biases. Think Alexa, Siri, and Cortana, at your service in an instant, but also subservient by design. “Obedient and obliging machines that pretend to be women are entering our homes, cars, and offices,” pointed out Saniye Gülser Corat, director of gender equality at UNESCO, when the organisation released a 2019 report on voice assistants and gender.

Sophisticated scammers often leverage the same principle. Disinformation researcher Wen-Ping Liu, who investigated China’s role in influencing Taiwan’s elections, found a flurry of fake accounts, most of them adopting a female persona. “You want to inject some emotion and warmth, and a very easy way to do that is to pick a woman’s face and voice,” says Sylvie Borau, a marketing professor and online researcher in Toulouse, France. Borau adds that people tend to view women as more agreeable and warm, whereas men are seen as competent, and therefore as more hostile and threatening.

This issue becomes more pronounced when paired with the criticism the tech industry has faced since its early days for largely excluding women. According to data from Accenture, the ratio of women to men in tech has declined over the past 35 years, with half of the women who enter the field dropping out by the age of 35. With a lack of diverse voices at the decision table, the product that gets built is often a one-dimensional embodiment of its makers’ perspective.

How people fold these assistants into their daily lives is another topic that has generated buzz from time to time. Borau’s research found that users treat chatbots differently based on their perceived sex, with female chatbots far more likely to be subjected to sexual threats and harassment.

Writer Rachel Withers penned an article for Slate, stating her position plainly in the headline: “I Don’t Date Men Who Yell at Alexa.” “It matters how you interact with your virtual assistant, not because it has feelings or will one day murder you in your sleep for disrespecting it, but because of how it reflects on you,” Withers writes, adding that even if Alexa and other voice assistants are not human per se, we do treat them as though they were. So what if they do not display human-like emotions and cannot retaliate when crass words are thrown at them? Is it fair for humans to abuse this power dynamic?

The pitfalls of humanising AI connections

There has been a push to humanise AI connections, with innovations pitched to make chatbot interaction feel on par with human friendship. Take a look at AI companionship chatbots, whose selling point is simulating friendship, intimacy, and romantic love. Often, the added draw is that the bot is fully customisable, from the personality traits it exhibits to its outer appearance. An array of chatbots has been built on this premise: to simulate purposeful connections that mimic real-life interactions in the wake of a loneliness epidemic.

But in the end, giving a human skin to artificial intelligence leads us back to the question we started with: do we get the privilege of treating a chatbot badly just because the conversationalist on the other side is not capable of processing emotions? And, given the grim reality of rampant sexual abuse and assault in the real world, which affects women disproportionately, is this simply an outlet for people to verbalise aggression and dump it on their chatbot pal?

According to a report by Futurism titled “Men Are Creating AI Girlfriends and Then Verbally Abusing Them,” people took to Reddit to post their interactions with their personalised chatbots, some of which read like emotionally abusive exchanges. “Some users brag about calling their chatbot gendered slurs, role-playing horrific violence against them, and even falling into the cycle of abuse that often characterises real-world abusive relationships,” the article notes.

The ethics of AI friendship is a different question altogether, but while we evaluate the validity and fruitfulness of such connections, there is also a need to reflect on how technology has not shaken off real-world biases and stereotypes. Are we at risk of normalising behaviours towards AI that could spill over into real-world interactions?

Trade-offs of making education ‘attractive’

Malar AI, touted as the ‘World’s First Autonomous AI University Professor’, is a chatbot trained on the entire engineering syllabus of Annamalai University that answers students’ queries on WhatsApp. The popularity of this chatbot points to the need for more dynamic and accessible learning systems in India. While the intent may be good, the visualisation of this chatbot as a svelte and sultry female, oddly decked up as a bride, points to bias on the part of the prompter and those who cleared it, along with the male gaze it caters to. Claims that the avatar was inspired by ‘Malar miss’, a popular Malayalam movie character, reinforce this gaze. While some educators feel this would generate great interest in learning among students, it begs the same question: Whose interest? What kind of interest? And should it come at the cost of reinforcing the objectification of women?

Malar AI answers up to 20 questions for free, after which users must pay to access its features. A report by IndiaAI (an initiative of the Ministry of Electronics and Information Technology) says, “Like humans who work to sustain themselves, Malar operates with an awareness of its operational costs, particularly the expenses incurred by its AI server.” To claim that Malar “operates with an awareness”, like humans who inherently have a motive to sustain themselves, obscures the fact that the chatbot is itself a human invention. The AI “teacher” is a byproduct of research conducted by human minds, and any cost incurred by its operation is a concern for the brains behind Malar AI, not for the bot, which is not a sentient being in the C-suite making decisions for a company. Such terminology shifts responsibility for actions away from the operators and misplaces it onto assistive technologies.

Visualisation of Malar AI based on a popular movie character

The fallout of celeb chatbots

If anthropomorphising interactive technologies is deemed necessary at this stage, what degree of responsibility do their makers bear? The feature Meta launched in 2023 and later withdrew, which borrowed the likeness of celebrities and influencers and turned them into chatbots, presented a multitude of pitfalls: the possibility of parasocial relationships, advertisements and promotional material (which can themselves be unethical) pushed through a celebrity’s face, and the chance of conflicting endorsements between celebrities and their chatbot personas. But then, what is more lucrative than cashing in on emotional vulnerability and social disconnection?

The way forward

Per the Media Equation Theory of Clifford Nass and Byron Reeves, people tend to respond to media as they would to real people. We carry real-world social dynamics into chatbot and voice assistant interactions without questioning it, so a humanised persona becomes a double-edged sword. We also see an inaccurate estimation of chatbots’ abilities. Their now-standard role as first-line customer assistance is exhausting people, with reaching a live agent described as a “luxury”. Would it be wise to let chatbots assume the position of first-line respondent in cases that need to be handled with sensitivity?

On gender stereotypes, diversifying the decision-making table is a head start. “Male data makes up the majority of what we know and what is male comes to be seen as universal. Whereas, women and non-binary genders are positioned as minorities, invisible by default,” notes an article in Becoming Human discussing the feminisation of AI.

Ascribing sentience to chatbots (calling them “empathetic” or “compassionate”) is deliberately misleading: AI does not embody such traits, which remain exclusively human. Companionship chatbots in particular occupy murky ground that blurs the many differences between humans and generative AI. Navigating this responsibly means setting expectations straight from the start, with companies being diligently transparent about the limitations of their products.

Yashna Kumar

Yashna Kumar is a student of journalism with features in The Print and Stumble By Kommune. She likes to write on how internet culture shapes personal experiences.
