

Student’s Corner
This is a special section of our website dedicated to showcasing the analytical skills and insightful perspectives of law students with a keen interest in privacy law. Here, you’ll find a diverse collection of articles, essays, case analyses, and commentary written by students from various law schools. Our goal is to provide a platform for aspiring legal professionals to share their thoughts on privacy law, enhance their writing skills, and engage with a broader audience.
Why Contribute?
- Share Your Insights: Whether you’re passionate about data protection, cyber security, or privacy regulations, the Student’s Corner is your space to express your views.
- Build Your Legal Portfolio: Published work on our site can enhance your legal writing portfolio, providing valuable exposure and experience in the field of privacy law.
- Engage with the Legal Community: Join a community of like-minded peers and receive feedback from readers and fellow legal enthusiasts.
The call for student volunteers and submissions is currently open.
For more information and to volunteer, email the Privacy Law Section Publications at privacypublications.cla@gmail.com
STUDENT ARTICLES
Differing Data Minimization Standards: Comparing California’s CCPA and Maryland’s MODPA
By: Joe Brown
3L, Santa Clara University School of Law
Introduction
Data minimization is one of the core principles of privacy. While expressed differently across jurisdictions, the message has remained similar: entities should only collect, retain, and process personal data that is necessary for a specific purpose. This concept was designed to reduce the risk associated with excessive data collection by ensuring that those who collect data do not gather more information than they need. Yet time and again, the failure to follow this principle has played a central role in some of the worst privacy incidents to date. For example, Marriott experienced a series of data breaches from 2014 to 2018 that was exacerbated by a lack of data minimization practices. These breaches exposed over 339 million guest account records, 5.25 million unencrypted passport numbers, and 5.2 million guest records worldwide. Much of the exposed data was either unnecessary for Marriott to collect or was held significantly longer than needed. As part of its FTC settlement, Marriott was ordered to create a data minimization policy requiring personal information to be retained only as long as reasonably necessary to fulfill the purpose for which it was collected. In response, states have considered redefining data minimization beyond the current majority standard. The state that has taken the boldest approach is Maryland.
This article will explore the current majority data minimization approach, examine Maryland’s newly adopted standard, and compare California and Maryland’s data minimization standards.
How The Term Data Minimization Came to Be
The idea of data minimization was first introduced by the Organisation for Economic Co-operation and Development (OECD) in its Privacy Guidelines of 1980. These guidelines laid the groundwork for almost every global privacy framework we know today. The idea was first expressed as the Collection Limitation Principle, which stated: “There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.” From there, the principle evolved into what we know today as data minimization.
Eventually, the principle of data minimization was codified in Article 5(1)(c) of the General Data Protection Regulation (GDPR). Under GDPR, personal data must be “adequate, relevant, and limited to what is necessary in relation to the purposes for which they are processed.” Article 5(1)(c) placed a legal obligation on controllers to avoid collecting more data than they need.
While the United States lacks a comprehensive federal privacy law, individual states have implemented their own privacy frameworks over the years, all of which include some form of a data minimization standard. Of the 19 states that have enacted a comprehensive data privacy framework, 15 have echoed a similar “GDPR approach” to data minimization. For example, Virginia’s VCDPA, which follows the GDPR approach, requires that a controller “limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which the personal data is processed, as disclosed to the consumer.”
While most of these states have followed the GDPR approach, some, like California and Colorado, have created slightly different standards. One state in particular that has departed from the norm and created a stricter data minimization standard is Maryland. Maryland’s new definition of data minimization could have significant consequences for how data is collected.
Maryland’s New Approach
Maryland’s data minimization approach, found in the Maryland Online Data Privacy Act (MODPA) of 2024, has deviated from traditional standards in a few ways. It is less permissive in how companies can use collected data and virtually prohibits the sale of sensitive personal data.
First, under MODPA Section 14–4607(B)(1), “a controller shall limit the collection of personal data to what is reasonably necessary and proportionate to provide or maintain a specific product or service requested by the consumer.” This standard provides little to no leniency in how companies can use the data they collect and leaves open whether secondary uses such as targeted advertising, product development, or analytics are permissible.
In addition, Maryland has taken steps to prohibit the sale of sensitive personal data. Under MODPA Section 14–4607(A)(1), controllers cannot collect, process, or share sensitive data about a consumer unless it is “strictly necessary to provide or maintain a specific product or service requested by the consumer to whom the personal data pertains.” Again, this is a much higher standard than the traditional consent-based norm. This heightened threshold is an intentional policy shift toward stronger safeguards for sensitive information, giving Maryland consumers assurance that their sensitive data will not be sold.
Maryland’s approach has privacy professionals wondering whether this new standard will create a wave of stronger data minimization standards or remain an isolated event. Many privacy advocates have applauded Maryland for taking a stronger consumer protection approach, while industry groups have raised concerns about the feasibility of redefining the common state-level data minimization standard. MODPA becomes effective on October 1, 2025, so by the end of the year there may be better insight into how Maryland enforces this new standard.
CCPA and Maryland Distinctions
California’s data minimization standard, as codified in Section 7002 of the CCPA regulations, is slightly different from both Maryland’s and those of the 15 states that follow the GDPR standard. Section 7002 states that “a business’s collection, use, retention, and/or sharing of a consumer’s personal information shall be reasonably necessary and proportionate to achieve the purpose for which the personal information was collected or processed.”
The “reasonably necessary and proportionate” standard is accompanied by a set of factors in Section 7002(d) that a business collecting data can look to when evaluating its practices:
- The minimum personal information that is necessary to achieve the identified purpose.
- The possible negative impacts on consumers.
- The existence of additional safeguards for the personal information to specifically address the possible negative impacts on consumers.
Compared to Maryland’s approach, which generally permits data processing only when it is strictly necessary to provide a requested product or service, California allows for a more context-specific analysis. The two models reflect different policy priorities: Maryland pushes data minimization to its limits, while California evaluates data minimization practices through a case-by-case, context-driven analysis.
Conclusion
Maryland’s deviation from the prevailing data minimization standard adds another layer of complexity to the evolving U.S. privacy landscape. How other states and potentially the federal government respond to Maryland’s approach may determine whether it sets a new precedent or remains an outlier. As AI systems increasingly rely on vast amounts of data, the tension between innovation and data minimization has become more contested than ever.
Words Matter: Addressing Bias in Digital Discourse
By: Kaavya Shanmugam
3L, Santa Clara University School of Law
Language is often perceived as a neutral medium, yet it embeds historical and societal biases that significantly influence legal interpretation and outcomes. Much like society, language evolves over time, reflecting historical events, power dynamics, and cultural attitudes. We now see privacy legal terminology adopting and reinforcing these biases.
Language is not neutral because it is deeply intertwined with the history and social structure of the societies in which it is used. Moreover, dominant groups often shape the language used to describe and define other groups, which can marginalize or misrepresent those groups’ experiences and identities. For example, in criminal law, the use of the term “black market” to describe illegal trade has been questioned for its potential to reinforce negative associations with blackness. Another example is the term “black codes,” used to describe laws that restricted African Americans’ rights after the Civil War. It is important to recognize how seemingly neutral terms can carry unintended connotations and reinforce biases in the legal sphere.
Similarly, the technology industry has grappled with biased terminology, as seen in the recent shift from “dark patterns” to “deceptive patterns” by the person who coined the term, Dr. Harry Brignull. Dr. Brignull even rebranded his website from darkpatterns.org to deceptive.design. While the intent was to highlight deceptive practices, the choice of the word “dark” carries unintended implications. In many cultures, darkness is associated with negativity, danger, or evil, which can unintentionally reinforce harmful stereotypes and racial biases by equating darkness with undesirable traits. Moreover, the term lacks neutrality, implying a moral judgment that may get in the way of objective discussions about these design tactics. It also raises concerns about cultural sensitivity, as darkness holds different connotations across societies. The current shift reflects a growing consensus around more precise terminology for describing deceptive user interface practices, while highlighting the widening gap between legal language and industry standards.
It’s crucial to note that this terminology is still prevalent in current legislation. In fact, only three state privacy laws explicitly address these deceptive practices, and all of them continue to use the term “dark patterns.” These include the California Privacy Rights Act (CPRA), the Colorado Privacy Act, and the Connecticut Data Privacy Act. This situation highlights the pressing need for legal language to evolve alongside our understanding of these issues. As we craft future legislation and amend existing laws, it’s important that we consider adopting the term “deceptive patterns.” This change would not only align our legal language with current expert consensus but also demonstrate our commitment to creating inclusive, unbiased laws that accurately describe the practices we aim to regulate.
From a policy perspective, the use of “dark patterns” is problematic for several reasons. The global nature of the digital economy creates pressure to use terminology that translates effectively across different cultures and languages. The term “dark patterns” may not convey the same meanings across different contexts and cultures, potentially complicating international collaboration on this critical issue. To effectively combat these practices, we need to educate the public with clear, understandable terms. Adopting the term “deceptive patterns” better aligns our language with the true nature of these practices. This change allows us to focus on the core issue – the deliberate deception of users – without getting sidetracked by debates over terminology or unintended cultural implications.
The use of biased terminology can also have psychological effects on users and consumers. Exposure to biased language can impact decision-making processes, self-perception, and even performance. In the context of ‘dark patterns’, the term itself may unintentionally prime users to expect malicious intent, potentially increasing anxiety and reducing trust in digital interfaces overall. Additionally, the repeated use of such terminology in tech and legal contexts can normalize these biases, perpetuating their effects across various domains of society. By shifting to more neutral language like ‘deceptive patterns’, we can mitigate these psychological impacts and foster a more inclusive digital environment.
In conclusion, the shift from ‘dark patterns’ to ‘deceptive patterns’ is more than a matter of semantics. Adopting more precise and inclusive terminology facilitates clearer communication across global markets, and focuses directly on the manipulative nature of these practices. As we move forward in crafting and enforcing legislation to protect consumers in the digital age, let us lead with language that is as clear, inclusive, and effective as the protections we aim to provide.
About the author: Kaavya is a third-year law student at Santa Clara University School of Law, where she is pursuing the Privacy Certificate. She is deeply interested in the dynamic field of data protection and digital rights and is eager to make significant contributions to this crucial area after graduating in May 2025.
Disclaimer: The views expressed in this student article are solely those of the author and do not represent the opinions of the California Lawyers Association Privacy Law Section. This article is for educational purposes only and should not be considered legal advice. Readers should consult with qualified legal professionals for specific guidance.
Identity & Criminal Check Verification: Potential Solution to Bridging the Safety Gap in Dating Apps
By: Kiara J. Patiño Navarro, CIPP/US
Santa Clara University School of Law, 2024 J.D. & Privacy Certificate Candidate
A lot has changed since Grindr launched in 2009 as one of the few geolocation dating apps. Dating apps now have a plethora of filters to ensure users can find matches that align with their preferences and feel secure in their interactions. These filters include, but are not limited to: age, race, height, gender orientation, religion, hobbies, dating goals, relationship preferences, and a verified selfie check.
What filter is currently missing? A verified identity and criminal records check.
In this piece, I will discuss the importance of identity and criminal verification checks, the need for them in the current digital landscape, how they can be implemented, and how their implementation could impact privacy compliance.
Safety Concerns
There are justifiable safety concerns in the use of dating apps, especially for women.
A 2019 ProPublica report found that more than a third of women participants reported being sexually assaulted by someone they met through an online dating platform. And usage is high: Pew Research Center reported in 2023 that 53% of participants under 30 had used a dating site or app, compared to 37% of participants aged 30 to 49. With the pervasiveness of dating app usage, it is unsurprising that 60% of Americans support background checks on dating apps. Unfortunately, 57% of women participants reported that online dating is “not at all/not too safe.”
Dating apps such as Hinge and Tinder are aware of these risks. Hinge’s and Tinder’s terms of use ask users to ensure they are eligible before creating an account; by creating an account, users seemingly confirm that they have not been convicted of, or pleaded no contest to, a felony or any crime involving violence, including sex crimes, and that they are not required to register as a sex offender.
Online Dating Social Contracts
Before the explosion of online dating and the use of dating apps, social contracts in the dating world provided some comfort via informal verification. A “social contract” is an implicit agreement among the members of a society to cooperate for social benefits.
Social contracts in the dating world vary by community. For instance, dating norms in many parts of the world were shaped by the social contracts of close-knit communities.
In his 1996 article “On the Expressive Function of Law,” Cass R. Sunstein describes how, in close-knit communities, a defector who violates norms will probably feel shame, which is an important motivational force for compliance, especially considering the high social “tax” communities can enforce through informal punishments like ostracization.
There are some places in the world where dating apps are not useful or encouraged. For example, my family’s small town of San José de Gracia in the highlands of Jalisco, Mexico, has a population of less than 10,000. This close-knit community is a place where social capital and reputation carry over multiple generations. A phrase used in town, “pueblo chico, infierno grande” (small town, huge hell), reflects the social pressure to act exemplary or live in hellish exile.
In larger metropolitan areas, by contrast, folks typically don’t have a tía (aunt) or a busybody neighbor who can provide some assurance in dating through informal verification.
In the digital world, the closest forms of community-based safety nets are regional, women-led dating advice groups on social media platforms, where members warn others about their experiences using dating apps, particularly regarding sexual assault and catfishing.
Not a New Concept
Identification verification is not a new concept in dating apps.
The League is a dating app aimed at career-driven professionals. An identity check is required for all users, accomplished by linking their LinkedIn profile. This check relies on users being truthful on their public LinkedIn profiles. The waitlist is around 50,000 people in the Bay Area, and each profile on the app is personally reviewed before going live.
The League’s model is a good first step and it would be beneficial for other dating apps to invest resources into trustworthy criminal and identity verification checks rather than relying on user-provided public representations.
Privacy Considerations
If dating apps conducted criminal and identity checks, privacy considerations could include:
- Dating apps map users’ identities by encouraging them to build detailed profiles, including their hobbies, sexual orientation, and religious and political beliefs. Some of this information is sensitive personal information under some privacy laws, and if a dating site conducted criminal and identity checks, it would likely be collecting even more sensitive personal information, which may go against data minimization principles. Mozilla recently reported that most dating apps (80%) share or sell users’ personal information and won’t guarantee users the right to delete their data.
- Dating apps would need to exercise special care to safeguard users’ sensitive personal information while balancing the need for the physical safety of their other users. For example, the California Consumer Privacy Act (“CCPA”), as amended by the California Privacy Rights Act (“CPRA”), provides California users the right to limit the use of their sensitive personal information to certain limited purposes. Under Section 1798.135 of the CCPA, companies must provide a notice of the right to limit or provide an alternative opt-out link that is accessible to users. Thus, as users provide the information needed for background checks, companies must ensure that users are given notice of the right to limit or are provided with an opt-out link.
- Companies should consider disposing of data soon after verifying users’ identities to safeguard users’ information. Because users have the right to correct under the CCPA, the data retention period could factor in the time needed to correct a user’s information if they were unjustifiably denied the verification badge.
In addition to these privacy concerns, wide-scale implementation of checks may be cost-prohibitive. However, users could be charged a one-time fee for a verified criminal and identity check badge, affording them peace of mind in dating.
Conclusion
With a growing number of younger people using dating apps, these apps bear a societal responsibility to shape a safer dating experience. While identity and criminal verification checks may not entirely prevent sexual assault, they would likely reduce its incidence and provide a more trustworthy and transparent space for users.
Disclaimer: The views expressed in this student article are solely those of the author and do not represent the opinions of the California Lawyers Association Privacy Law Section. This article is for educational purposes only and should not be considered legal advice. Readers should consult with qualified legal professionals for specific guidance.