Privacy Law
Words Matter: Addressing Bias in Digital Discourse
By: Kaavya Shanmugam
3L, Santa Clara University School of Law
Language is often perceived as a neutral medium, yet it embeds historical and societal biases that significantly influence legal interpretation and outcomes. Much like society, language evolves over time, reflecting historical events, power dynamics, and cultural attitudes. We now see privacy law terminology adopting and reinforcing these biases.
Language is not neutral because it is deeply intertwined with the history and social structure of the societies in which it is used. Moreover, dominant groups often shape the language used to describe and define other groups, which can marginalize or misrepresent those groups’ experiences and identities. For example, in criminal law, the term “black market,” used to describe illegal trade, has been questioned for its potential to reinforce negative associations with blackness. Another example is the “Black Codes,” the post-Civil War laws that restricted African Americans’ rights. It is important to recognize how seemingly neutral terms can carry unintended connotations and reinforce biases in the legal sphere.
Similarly, the technology industry has grappled with biased terminology, as seen in the recent shift from “dark patterns” to “deceptive patterns” by Dr. Harry Brignull, who coined the original term. Dr. Brignull even rebranded his website from darkpatterns.org to deceptive.design. While the intent was to highlight deceptive practices, the choice of the word “dark” carries unintended implications. In many cultures, darkness is associated with negativity, danger, or evil, which can unintentionally reinforce harmful stereotypes and racial biases by equating darkness with undesirable traits. Moreover, the term lacks neutrality, implying a moral judgment that may get in the way of objective discussions about these design tactics. It also raises concerns about cultural sensitivity, as darkness holds different connotations across societies. Still, the shift reflects a growing consensus toward more precise terminology for describing deceptive user interface practices, while highlighting the widening gap between legal language and industry standards.
It is crucial to note that the older terminology remains prevalent in current legislation. In fact, only three state privacy laws explicitly address these deceptive practices, and all of them continue to use the term “dark patterns”: the California Privacy Rights Act (CPRA), the Colorado Privacy Act, and the Connecticut Data Privacy Act. This highlights the pressing need for legal language to evolve alongside our understanding of these issues. As we craft future legislation and amend existing laws, we should consider adopting the term “deceptive patterns.” This change would not only align our legal language with current expert consensus but also demonstrate a commitment to creating inclusive, unbiased laws that accurately describe the practices we aim to regulate.
From a policy perspective, the use of “dark patterns” is problematic for several reasons. The global nature of the digital economy creates pressure to use terminology that translates effectively across cultures and languages, and “dark patterns” may not convey the same meaning in every context, potentially complicating international collaboration on this critical issue. To effectively combat these practices, we need to educate the public with clear, understandable terms. Adopting “deceptive patterns” better aligns our language with the true nature of these practices, allowing us to focus on the core issue – the deliberate deception of users – without getting sidetracked by debates over terminology or unintended cultural implications.
The use of biased terminology can also have psychological effects on users and consumers. Exposure to biased language can impact decision-making processes, self-perception, and even performance. In the context of “dark patterns,” the term itself may unintentionally prime users to expect malicious intent, potentially increasing anxiety and reducing trust in digital interfaces overall. Additionally, the repeated use of such terminology in tech and legal contexts can normalize these biases, perpetuating their effects across various domains of society. By shifting to more neutral language like “deceptive patterns,” we can mitigate these psychological impacts and foster a more inclusive digital environment.
In conclusion, the shift from “dark patterns” to “deceptive patterns” is more than a matter of semantics. Adopting more precise and inclusive terminology facilitates clearer communication across global markets and focuses attention directly on the manipulative nature of these practices. As we move forward in crafting and enforcing legislation to protect consumers in the digital age, let us lead with language that is as clear, inclusive, and effective as the protections we aim to provide.
About the author: Kaavya is a third-year law student at Santa Clara University School of Law, where she is pursuing the Privacy Certificate. She is deeply interested in the dynamic field of data protection and digital rights and is eager to make significant contributions to this crucial area after graduating in May 2025.
Disclaimer: The views expressed in this student article are solely those of the author and do not represent the opinions of the California Lawyers Association Privacy Law Section. This article is for educational purposes only and should not be considered legal advice. Readers should consult qualified legal professionals for specific guidance.