Citizen Digital Foundation

Whitepapers

GPQA: A Graduate-Level Google-Proof Q&A Benchmark

This article presents GPQA, a dataset of 448 multiple-choice questions in biology, physics, and chemistry, written by domain experts and designed to be challenging for both human experts and AI systems. It aims to support the development of scalable oversight methods, enabling human experts to supervise and elicit truthful information from AI systems whose capabilities surpass their own.

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

This article from the ACM Digital Library discusses the risks associated with the development of increasingly large language models. It emphasizes the need for careful consideration of environmental and financial costs, responsible data curation, and exploration of research avenues beyond simply scaling up model size.

Beyond Memorization: Violating Privacy via Inference with Large Language Models

This article examines the privacy risks posed by large language models (LLMs). It presents a study showing that LLMs can infer personal attributes from text, such as location and income, with high accuracy, raising concerns about privacy protection in the digital age.

Ethical Approaches to Closed Messaging Research: Considerations in Democratic Contexts

This article examines the ethical challenges researchers face when studying election-related communications through encrypted messaging apps like WhatsApp, Signal, and Telegram. It reviews four research models and presents key questions regarding public parameters, participant safety, and researcher obligations to navigate the privacy concerns inherent in such closed messaging spaces.

Empowering Innovation Through Responsible AI Governance

The emergence of artificial intelligence (AI) and machine learning (ML) has set the stage for a transformative future, promising unprecedented advancements across industries. However, with great opportunity comes great responsibility. The development of responsible and trustworthy AI technologies is of paramount importance, particularly considering the role that these technologies play in powering the future of work.

User Insights into Recommendation Algorithm Reporting

This article explores the perspectives of social media users on algorithmic transparency. It discusses the need for meaningful transparency reports that inform users about recommendation algorithms, capturing the insights and values users believe such reports should reflect.

Search Engine Manipulation Effect – Robert Epstein and Ronald E. Robertson

The results of these experiments demonstrate that (i) biased search rankings can shift the voting preferences of undecided voters by 20% or more, (ii) the shift can be much higher in some demographic groups, and (iii) search ranking bias can be masked so that people show no awareness of the manipulation. We call this type of influence, which might be applicable to a variety of attitudes and beliefs, the search engine manipulation effect.

Why does junk news spread so fast on social media? Algorithms, Advertising and Exposure in Public Life - Knight Foundation

The term “fake news” has risen to prominence in the post-truth world, and is often used as an umbrella term to describe a wide range of problematic content, from accidental misinformation to purposefully misleading and deceptive information. The term is also used discursively to describe the swath of incendiary and outrageous headlines, hate speech, hyper-partisan content, and political propaganda that have partially characterized the post-truth world.

Hubspot Education Solutions - Whitepaper on Digital Distractions

Smartphones enable easy access to social media and gaming platforms, including on school grounds. Phone addiction (compulsively checking a smartphone throughout the day), social media addiction (constant engagement with social media platforms), and internet addiction (difficulty distinguishing between the virtual and physical worlds) all present significant challenges for schools. The World Health Organisation recently added ‘gaming disorder’ to its classification of diseases.

Internet intermediaries and online harms: Regulatory Responses in India

While the increased use of the Internet has propelled the growth of the digital economy and democratised access to information, it has also given rise to complex policy challenges surrounding the need to address harmful online content and conduct. At the heart of this challenge lies the issue of ‘intermediary liability’: should entities that transmit, carry, or distribute third-party (user) content and enable online interactivity be held responsible for harmful acts carried out on or through their services?

A Contextual Approach to Privacy Online - Helen Nissenbaum

This article explores present-day concerns about online privacy, but in order to understand and explain on-the-ground activities and the anxieties they stir, it identifies the principles, forces, and values behind them. It considers why privacy online has been vexing, even beyond general concerns over privacy; why predominant approaches have persisted despite their limited results; and why they should be challenged. Finally, the essay lays out an alternative approach to addressing the problem of privacy online based on the theory of privacy as contextual integrity.

The Ethical Use of Persuasive Technology - Stanford Behaviour Design Lab

While our research has moved on from persuasive technology to focus on designing for healthy behavior change, we believe it is important to continue highlighting the ethical contributions in the field of persuasive technology, so that those responsible for designing persuasive technologies can do so ethically.
