Privacy Law
PRIVACY LAW REVIEW – WHAT YOU NEED TO KNOW (April 2022)
CLA’s Privacy Law Section summarizes important developments in California privacy and beyond.
Message From the Chair
We hope you enjoy this month’s Privacy Law Section newsletter and thank you for your interest and membership in our section. We are excited to share with you our enthusiasm for all things data privacy and cybersecurity and to provide a platform for California privacy lawyers and other professionals to engage in thought leadership around the issues of the day.
We welcome all members of the California Lawyers Association to become involved in our section’s committee work. You can find out more about what we do on our website at https://calawyers.org/section/privacy-law/. Email us at privacy@calawyers.org if you are interested in any of this work.
Committee to Assess California Legal Specialization in Privacy
We are happy to announce that the California State Bar is considering creating a new legal specialization in Privacy Law. Members of the CLA Privacy Section were involved last year in drafting an application to the State Bar that encouraged it to consider creating a privacy specialization. Our application was well received and, as a result, the State Bar is convening a 13-person committee to explore the creation of the specialization.
The Legal Specialization unit is currently seeking licensees, professionals, and others with expertise in Privacy Law to participate in the State Bar of California’s new Consulting Group on the Establishment of a Legal Specialization in Privacy Law (Privacy Law Group).
The Privacy Law Group will be composed of several volunteer attorneys and public members. Members will meet four to six times per year to study the practice area and provide recommendations to the California Board of Legal Specialization about the possibility of adding Privacy Law as a State Bar legal specialization area. The Privacy Law Group will also draft certification standards if it is recommending adding this practice area. Apply by Wednesday, June 1, 2022. Please submit applications to appointments@calbar.ca.gov. For questions, please contact legalspec@calbar.ca.gov.
We hope you have a wonderful May. Look for some new programs coming soon!
Sheri Porath Rockwell
Chair, CLA Privacy Law Section
The Privacy Law Section is proud to continue Privacy Talks: Interviews with California Privacy Leaders
This month, we interviewed Jeewon Serrato. Jeewon is Partner and Co-Chair of the Digital Transformation and Data Economy practice at Baker & Hostetler LLP. Jeewon served as the inaugural Chair of the Privacy Law Section and currently serves on the Section’s Executive Committee. Click here to watch our interview with Jeewon.
Cyber Incident Reporting Act Imposes New Cybersecurity Responsibilities on Critical Infrastructure
By Cody Venzke
On March 15, 2022, President Joseph Biden signed H.R. 2471, the Consolidated Appropriations Act, 2022, into law. Division Y of the Appropriations Act, titled the “Cyber Incident Reporting for Critical Infrastructure Act of 2022” (the Act), establishes new cybersecurity reporting requirements for the owners or operators of critical infrastructure.
Under the Act, an “entity in critical infrastructure” must report “substantial cyber incidents” and ransomware payments to the Cybersecurity and Infrastructure Security Agency (CISA). Although the Act’s requirements must be developed through rulemaking by CISA, it is poised to impose new cybersecurity reporting requirements across dozens of industries.
The Act applies to “covered entities,” which it defines as “an entity in a critical infrastructure sector, as defined in Presidential Policy Directive 21” (PPD-21). The 2013 PPD-21 and related law define “critical infrastructure” as “systems and assets, whether physical or virtual, so vital to the United States” that their incapacity or destruction would debilitate “security, national economic security, national public health or safety, or any combination of those matters.” PPD-21 identified 16 “critical infrastructure sectors,” including communications, energy, healthcare, and government facilities, among others. PPD-21 and subsequent legislation designated a “Sector Risk Management Agency” (SRMA) for each of those sectors to coordinate with the Department of Homeland Security in the protection of their respective sector’s cybersecurity.
Covered entities are required by the Act to report “covered incidents,” which it defines as a “substantial cyber incident,” and ransomware payments. The Act incorporates the existing definition of a “cyber incident” under the Homeland Security Act of 2002, 6 U.S.C. § 659, as an “occurrence that actually or imminently jeopardizes, without lawful authority, the integrity, confidentiality, or availability of information on an information system.” Covered entities must report covered incidents or ransomware payments to CISA within 72 or 24 hours, respectively; however, the statutory deadline for reporting covered incidents begins to run only after the entity “reasonably believes a covered cyber incident has occurred.”
Although the Cyber Incident Reporting Act incorporates existing definitions of “critical infrastructure” and “cyber incidents,” the scope of those terms is subject to refinement under future rulemaking by CISA. At minimum, the future rulemaking must provide “clear description[s]” of:
- the “types of entities that constitute covered entities,”
- the “types of substantial cyber incidents that constitute covered incidents,”
- the “specific required contents of a report” for a covered incident or a ransomware payment, and
- procedures for submitting required reports.
CISA’s rules must be promulgated through a notice of proposed rulemaking (NPRM) and a final rule. CISA is required to publish an NPRM within 24 months of the Act’s enactment in consultation with the SRMAs, the Department of Justice, and “other Federal agencies.” The final rule must be published within 18 months of the NPRM. The Act stipulates that “subsequent rules” promulgated following the final rule must “comply with the requirements under chapter 5 of title 5, United States Code, including the issuance of a notice of proposed rulemaking under section 553 of such title,” known as the Administrative Procedure Act (APA). The requirement that subsequent rulemaking meet the notice requirements of the APA suggests that the initial NPRM and final rule need not do so.
The Act includes other provisions, including privacy protections for reports provided under the Act, a pilot program for ransomware vulnerability warnings, and increased coordination and information sharing by the National Cybersecurity and Communications Integration Center. The Act’s reporting requirements for covered entities do not come into effect until “the dates prescribed in the final rule issued” by CISA.
Ninth Circuit Holds Data Scraping is Legal in hiQ v. LinkedIn
By Jennifer Oliver
The Ninth Circuit Court of Appeals has yet again held that scraping data from public websites is not unlawful. hiQ Labs, Inc. v. LinkedIn Corp., decided on April 18, affirms the court’s previous decision that plaintiffs may not rely on the Computer Fraud and Abuse Act (“CFAA”) to enjoin third parties from scraping data from their websites. Data scraping refers to the extraction of data from websites, whether public facing or not. Because the practice is not per se illegal, parties must rely on statutes like the CFAA to protect that data.
That is precisely what LinkedIn did, serving multiple cease and desist letters on hiQ for scraping its members’ data, and restricting hiQ’s access to its website. In response, hiQ filed a complaint against LinkedIn, alleging LinkedIn’s behavior was anticompetitive and violated state and federal laws. Among other things, hiQ alleged that LinkedIn was improperly attempting to exercise monopoly rights over personal data made publicly available by its users, and that hiQ did not violate users’ privacy rights when it scraped that data. In response, LinkedIn argued that hiQ’s claims should be preempted by the CFAA.
The district court granted hiQ’s request for a preliminary injunction, and in September 2019, the Ninth Circuit affirmed the lower court’s ruling. In that decision, the Ninth Circuit found that LinkedIn computers are publicly available and therefore there was no access “without authorization” in violation of the CFAA. LinkedIn filed a petition for writ of certiorari to the Supreme Court, which was granted in June 2021. The Supreme Court issued a summary disposition, vacating the Ninth Circuit’s previous judgment and remanding the case for additional consideration in light of the Court’s ruling in Van Buren v. United States, which held that an individual who has legitimate access to a computer network but accesses it for an improper or unauthorized purpose (in that case, a police officer retrieving information about a license plate in exchange for money) does not violate the CFAA.
The Ninth Circuit held that the decision in Van Buren reinforced its prior holding, referring to the Court’s finding that there is a potential violation only if authorization is required and has not been given. On a publicly available website, the Ninth Circuit found, there are no rules or access permissions to prevent access, and therefore accessing that publicly available data cannot violate the CFAA.
These rulings call into question the future of all data scraping litigation. Companies that maintain publicly available information on their websites should be advised that they likely cannot use the CFAA to prevent third parties from scraping that data, regardless of what is or is not allowed in their terms of use. It now appears that the only way to assert CFAA claims to prevent scraping is to require prior authorization or credentials to access the data. In the wake of hiQ and Van Buren, victims of data scraping must rely on state common law claims to protect that data.
Stay tuned to the CLA Privacy Section’s monthly updates for more reports on those cases.
California’s Biometric Information Bill (SB 1189) – To Be, or not to Be: That Is the Question
By Alyona Eidinger
As reported in the February issue of the Privacy Law Review, Senator Bob Wieckowski introduced SB 1189 (Biometric Information) in the California Senate on February 17, 2022. This bill, partially based on Illinois’s Biometric Information Privacy Act (“BIPA”), establishes very specific rules for the collection, disclosure, and sale of biometric information by a “Private Entity.” The definition of a private entity is very broad and includes “an individual, partnership, corporation, limited liability company, association, or similar group, however organized;” it excludes “a federal, state, or local government agency or an academic institution.”
SB 1189 defines biometric information as “the data of an individual generated by automatic measurements of an individual’s unique biological or behavioral characteristics, including a faceprint, fingerprint, voiceprint, retina or iris image, or any other biological characteristic that can be used to authenticate the individual’s identity.”
SB 1189 permits private entities to “collect, capture, purchase, receive through trade, or otherwise obtain a person’s biometric information” to “provide a service requested or authorized by the subject” or for “another valid business purpose specified in the written policy.” Prior to collection, the private entity must (1) inform the individual in writing of the “biometric information being collected, stored, or used” and “the specific purpose and length of time for which the biometric information is being collected, stored, or used,” and (2) receive from the individual a signed “written release,” which is defined as “specific, discrete, freely given, unambiguous, and informed written consent.” The written release must be a stand-alone consent, not combined with “another consent- or permission-seeking instrument or function.”
SB 1189 prohibits selling, leasing, trading, or otherwise profiting from the disclosure of a person’s biometric information, as well as using it for advertising.
SB 1189 permits disclosure in three situations: (1) the individual provides a “written release” that authorizes the disclosure and contains information about the data being disclosed, the reason for the disclosure, and the recipients of the biometric information; (2) the disclosure “completes a financial transaction requested or authorized by” the individual; or (3) the disclosure is required by law or made pursuant to a valid warrant or subpoena.
SB 1189 requires a private entity in possession of biometric information to “develop and make available to the public a written policy establishing a retention schedule and guidelines for permanently destroying the biometric information” on or before September 1, 2023.
SB 1189 requires a private entity to use a reasonable, industry-specific standard of care to “store, transmit, and protect from disclosure biometric information.”
SB 1189 creates a private right of action, allowing for either statutory damages ($100-$1,000 per day) or actual damages, whichever is greater. In addition, an individual may also recover punitive damages, reasonable attorney’s fees and litigation costs, as well as any other relief (including equitable or declaratory) that the court deems appropriate.
Procedural History
The Senate Judiciary Committee passed SB 1189 on April 5 and re-referred the bill as amended to the Senate Committee on Appropriations, which heard it on April 25, 2022.
Committee on Appropriations Hearing Outcome
At the beginning of the hearing, Chair Senator Anthony J. Portantino underscored that the Committee was not going to relitigate the policy of the measures presented on the agenda. Rather, the Committee was going to hear the testimony as to the bills’ fiscal impacts. There were 25 bills on the April 25 agenda, including SB 1189, all of which were suspense file candidates.
The author of SB 1189 waived presentation, and there was no one in the room to testify in support of or opposition to SB 1189. However, several members of the public called in during the hearing. One caller expressed “strong support” for the bill on behalf of the California Council on American-Islamic Relations (CAIR); another caller, representing the National Payroll Reporting Consortium (NPRC), opposed it. Absent any public testimony as to the fiscal impact of SB 1189, Chair Portantino placed the bill on the suspense file.
Demystifying “Suspense File”
The suspense file process has been a part of the Committee Rules since the mid-1980s. It is a way to consider the fiscal impact of a measure on the state. The bill’s fiscal impact must meet certain thresholds to trigger a referral to the suspense file. The Committee, “by a majority of the members present and voting, shall refer to the Suspense File all bills that would have a fiscal impact in any single fiscal year (i) from the General Fund (including general obligation bond funds) or from private funds of $50,000 or more; or (ii) of $150,000 or more from any special account(s) or fund(s).” Bills that meet either of the suspense file thresholds will be placed on the suspense file after testimony is taken at a regular-order hearing.
The Committee’s analysis for SB 1189 indicates that although the “cost pressures to the judicial branch” cannot be determined with certainty due to numerous factors influencing the costs, they are estimated to be “in the millions or tens of millions.” The Committee on Appropriations examined the court filings generated by the BIPA litigation to gauge the fiscal impact of SB 1189.
From “Suspense File” to “Suspense Hearing”
Bills placed on the suspense file are then heard and voted on during the so-called suspense hearing. The suspense hearing is a vote-only hearing with no testimony, and the bills are heard alphabetically by author. Such a hearing occurs before fiscal committees are due to hear and report bills to the Floor. Thereafter, the suspense-file bills either move on to the Floor for further consideration or continue to be held in committee and under submission.
Based on the 2022 Tentative Legislative Calendar, the last days for fiscal committees to hear and report to the Floor bills introduced in their house are May 20 and August 12. Therefore, one can expect the Senate Committee on Appropriations to return to voting on SB 1189 around either of these deadlines.
To put the process in perspective, it is illuminating to consider the details from last year. In 2021, the Senate Committee on Appropriations conducted suspense hearings on May 20 and August 26, with 357 and 322 bills, respectively, on the agenda for the Committee to consider. On average, the members of the Committee voted at a pace of approximately 3.5 to 4 bills per minute!
As SB 1189 now goes “to suspense without objection,” its subsequent fate remains to be determined by the Senate Committee on Appropriations at a later suspense hearing.
Adtech Privacy Update Vol. 4: DSA, ATT, Amazon’s Alexa, and Google ‘Data Safety’
By McKenzie Thomsen
Lately, it seems that with every passing month the adtech world is further shaken up by privacy regulation. When I sit down to write these updates, I make a list of what has happened in adtech privacy since my last article, and every time I end up with a ridiculously long list. It reminds me of the song “We Didn’t Start the Fire” by Billy Joel: it’s an ever-growing list, with so much on it that there’s no time to dig into each topic. (I suppose that could be said about privacy in general, too.) Even so, I’m going to try to elaborate on four items from my list:
(1) the Digital Services Act’s impact on digital advertising;
(2) Amazon is selling transcripts of your Alexa interactions to serve you ads; and, on the mobile side of things,
(3) researchers found that Apple’s ATT… is ineffective; and
(4) Google has released the ‘Data Safety’ section (its version of Apple’s Privacy Nutrition Labels).
The DSA is about to change digital advertising in the EU
You’ve likely already heard of the EU’s Digital Services Act (DSA). On Saturday, April 30, the final terms were agreed upon. We won’t know the final text for a little while, but here’s what we know so far. The DSA is designed to impose legally binding content controls on digital platforms, specifically related to illegal content, transparent advertising, and disinformation. ‘Very Large Online Platforms’ (VLOPs) and ‘Very Large Search Engines’ (VLSEs) (think Google, Meta, Amazon) have heightened obligations.
Let’s get into the effects the DSA will have on adtech (a rundown originally written by Eric Seufert). The DSA will:
- Ban targeted advertising to minors. The specifics around ‘knowledge’ of a user’s age are still unknown.
- Ban the use of sensitive data for targeted advertising. Digital platforms “shall not present advertising to recipients of the service based on profiling… using special categories of personal data” as defined by the GDPR.
- Require digital platforms to provide users with meaningful information about how their data will be monetized, along with an opt-out mechanism. And, unsurprisingly, there’s a “dark patterns” aspect: “Refusing consent shall be no more difficult or time-consuming to the recipient than giving consent.”
- Require digital platforms to disclose the sponsor of an ad and the targeting parameters used in serving the ad.
- Require VLOPs to maintain records on targeted advertising. Specifically, VLOPs must maintain: (a) the content of the ad, (b) the sponsor of the ad, (c) the period during which the ad was exposed, (d) the targeting parameters used in serving the ad, and (e) the total number of people to whom the ad was exposed, broken out by targeting group. All of this data must be made available via API, as sketched below.
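To make the record-keeping requirement concrete, here is a minimal sketch, in TypeScript, of what such a repository record and its API might look like. The interface, field names, and endpoint are hypothetical illustrations of items (a) through (e), not anything specified in the DSA’s text.

```typescript
// Hypothetical shape of a single ad-repository record covering items (a)-(e)
// above. All names are illustrative, not statutory.
interface AdRepositoryRecord {
  adContent: string; // (a) the content of the ad (e.g., creative text or URL)
  sponsor: string; // (b) the person or entity on whose behalf the ad ran
  displayPeriod: { start: string; end: string }; // (c) ISO 8601 timestamps
  targetingParameters: Record<string, string>; // (d) parameters used to serve the ad
  reachByTargetingGroup: Record<string, number>; // (e) recipients, broken out by group
}

// The DSA requires the records to be available via API; a read endpoint
// might look something like this (purely illustrative).
async function fetchAdRecords(
  apiBase: string,
  sponsor: string
): Promise<AdRepositoryRecord[]> {
  const res = await fetch(`${apiBase}/ad-repository?sponsor=${encodeURIComponent(sponsor)}`);
  if (!res.ok) throw new Error(`Ad repository request failed: ${res.status}`);
  return (await res.json()) as AdRepositoryRecord[];
}
```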
Big changes are coming, and we’ll see how digital platforms adapt (or exit the EU). The DSA’s restrictions will go into force 15 months after being voted into law or on January 1, 2024, whichever is later.
Amazon is selling your Alexa interactions to provide you with targeted ads
This may not be surprising, but researchers conducted a study and found that Amazon and third parties (including advertising and tracking services) collect data from your interactions with Alexa and share it with as many as 41 advertising partners. The shared data is then used to infer user interests and serve targeted ads on Amazon platforms as well as non-Amazon platforms (think: the web generally). This type of data is in demand; researchers found it tends to get “30X higher ad bids from advertisers.”
Amazon confirmed that it does use voice data from Alexa interactions for targeted advertising, but says that it shares transcripts of interactions rather than the recordings themselves. Admittedly, that’s better, but it would be best if Amazon actually disclosed its data practices. And of course, Amazon didn’t just admit to these practices outright; a spokesperson refuted the study altogether, stating, “many of the conclusions in this research are based on inaccurate inferences or speculation by the authors, and do not accurately reflect how Alexa works.”
Apple’s ATT is ineffective (we figured, but now it’s confirmed)
In a study called “Goodbye Tracking? Impact of iOS App Tracking Transparency and Privacy Labels,” researchers detailed the shortcomings of Apple’s App Tracking Transparency (ATT). Below are the (crazy) highlights.
- Apps are still tracking. Apparently, ATT had very little impact on apps tracking users. Oof. In some cases, tracking libraries are contacted ‘at the first app start,’ which is an indicator that apps are ‘tracking’ prior to the ATT consent request. (Tracking libraries record events, e.g., when a user clicks on a link or moves to another page, and send that information to a third party via an API; see the sketch after this list.)
- Some apps use a ‘User ID’ and collect location. This information can be combined with other information to build a device-specific profile on a user. This circumvents ATT and contradicts its purpose entirely.
- Apps are blatantly fingerprinting. Apps are creating their own User IDs, sharing them with third parties (who are receiving User IDs from other apps), and identifying the user with other data points. Apple has been informed about this and has done nothing to stop it.
- Many apps’ ‘Privacy Nutrition Labels’ are inaccurate and contradict their posted privacy notice.
- Apple is tracking you for profit. Apple has admitted it is collecting significant device-specific information (information that, through ATT, it won’t allow apps to collect) and combining it with other information to build advertising cohorts on Apple’s SKAdNetwork.
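To illustrate the tracking-library behavior flagged in the first bullet, here is a minimal sketch of the kind of event call such a library can make at first app start, before any ATT prompt appears. The endpoint, payload shape, and names are hypothetical and not drawn from any specific SDK.

```typescript
// Minimal sketch of a third-party tracking call: an event is packaged with an
// app-generated User ID and sent to a third-party collector. All names are hypothetical.
interface TrackingEvent {
  eventName: string; // e.g., "app_start", "link_click"
  userId: string; // app-generated User ID (the fingerprinting vector noted above)
  timestamp: number; // Unix epoch milliseconds
  properties?: Record<string, string>; // e.g., page visited, coarse location
}

async function trackEvent(event: TrackingEvent): Promise<void> {
  // If this fires at first app start, the user has effectively been
  // tracked before the ATT consent request is ever shown.
  await fetch("https://collector.example-tracker.com/v1/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Example: called on launch, before the ATT dialog appears.
void trackEvent({ eventName: "app_start", userId: "a1b2c3", timestamp: Date.now() });
```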
Google’s ‘Data Safety’ section: better or worse than Apple’s ‘Privacy Nutrition Labels’?
Google has begun rolling out the ‘Data Safety’ section (Google’s version of the Apple Privacy Nutrition Labels), which it originally announced in May 2021.
Google’s Data Safety section requires app developers to disclose what types of data are collected, the purpose for collection/use/sharing, whether that data is shared, the security measures taken to protect the data, and whether the app has committed to following Google Play’s Families Policy. Developers can also choose to disclose whether the security practices they’ve implemented have been validated against a global security standard (such as the Mobile Application Security Assessment (MASA)).
In contrast, Apple’s Privacy Nutrition Labels are more formulaic. The Privacy Nutrition Label is divided into three sections: (1) “Data Used to Track You”; (2) “Data Linked to You”; and (3) “Data Not Linked to You.” For each section, the developer must state the types of data collected/used by the developer and/or any third-party partners, as well as for what purpose. The main difference is that Apple has predetermined exactly what ‘types of data’ and ‘purposes’ are available for app developers to enter, so a developer must attempt to match its practices to one or more of Apple’s predefined types and purposes, regardless of whether there are discrepancies in how a term applies or ‘gray’ areas in how it is applied (e.g., an app may ‘track’ as defined by the GDPR, but not as defined by the CCPA). The comparison table below summarizes the two approaches, and a short sketch after it models the structural difference.
| | Google’s ‘Data Safety’ section | Apple’s Privacy Nutrition Labels |
| --- | --- | --- |
| Required Disclosures | ● Data collected ● Purpose for data collection ● Whether data is shared ● Security practices ● Whether a user can request data deletion ● Whether the app has committed to following Google Play’s Families Policy ● [optional] Whether security practices have been validated against a global security standard | Three sections: ● “Data Used to Track You” ● “Data Linked to You” ● “Data Not Linked to You.” For each section, the developer must state the types of data collected/used by the developer and/or any third-party partners, as well as for what purpose. Both the types of data and the purposes for collection/use are predetermined, predefined terms that app developers select from. |
| Enforcement Measures | Self-attestation; however, Google says it will verify. | Self-attestation, but following bad press Apple has stated it will routinely audit. |
| Effective Date | July 20, 2022, but the section is already rolling out (any app that has not completed the section will be listed as having “No info available”). | December 8, 2020. |
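To make that structural difference concrete, here is a minimal sketch modeling the two disclosure formats in TypeScript. The sample values are drawn from Apple’s published label categories, but the field names and overall shapes on both sides are hypothetical simplifications, not the actual App Store Connect or Play Console schemas.

```typescript
// Hypothetical, simplified models of the two disclosure formats.
// Apple: developers pick from fixed, predefined data types and purposes.
type AppleLabelSection =
  | "Data Used to Track You"
  | "Data Linked to You"
  | "Data Not Linked to You";
type AppleDataType = "Contact Info" | "Location" | "Identifiers" | "Usage Data"; // sample of Apple's fixed list
type ApplePurpose = "Third-Party Advertising" | "Analytics" | "App Functionality"; // sample of Apple's fixed list

interface AppleLabelEntry {
  section: AppleLabelSection;
  dataType: AppleDataType; // must be one of Apple's predefined types...
  purposes: ApplePurpose[]; // ...and purposes, even where the fit is imperfect
}

// Google: disclosures are framed as questions the developer answers.
interface GoogleDataSafetyForm {
  dataCollected: string[]; // what types of data are collected
  purposes: string[]; // why the data is collected/used/shared
  shared: boolean; // whether data is shared with third parties
  securityPractices: string; // e.g., "encrypted in transit"
  deletionRequestSupported: boolean; // whether users can request deletion
  followsFamiliesPolicy: boolean; // Google Play Families Policy commitment
  independentSecurityValidation?: string; // optional, e.g., a MASA review
}
```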
We’ll see how trustworthy these are with time (not to mention how useful they are to customers).
Conclusion
As you can see, there were lots of Billy Joel ‘fires’ this month. And this is just the highlight reel.
State of Privacy Legislation in OK, MD, and CT: No. No. Go!
By Kewa Jiang
Oklahoma: Computer Data Privacy Act
Status: Failed to pass.
On March 23, 2022, the Oklahoma House passed the Oklahoma Computer Data Privacy Act (HB 2969) by a vote of 74-15. However, HB 2969 failed to advance past the Senate Judiciary Committee in April 2022. Senator Julie Daniels (R-Bartlesville), who chairs the Senate Judiciary Committee, declined to grant the bill a hearing.
Senator Daniels called the data privacy bill “very complicated” and “much more expansive in terms of the problems I think it would create and the relationships between businesses and consumers that would be affected.” In response, House Representative Josh West (R-Grove), a co-author of HB 2969, expressed frustration that several bills, including HB 2969, died in committee.
HB 2969 is a new version of HB 1602, the Computer Data Privacy Act, which similarly was not granted a Senate Judiciary hearing in 2021 and died in committee. HB 1602 was also sponsored by Rep. West and Rep. Collin Walke (D-OKC) and would have required “internet technology companies to obtain explicit permission to collect and sell personal data.” HB 2969 would likewise have required businesses to obtain consent before collecting consumer data, required that consumers be informed of their right to opt out of personalized advertising, and restricted the sale of consumer data.
Maryland: Biometric Data Privacy Bill
Status: Failed to pass.
On March 19, 2022, the Biometric Data Privacy bill (HB 259) passed the Maryland House of Delegates and moved to the Senate for consideration. However, the legislature adjourned on April 11, 2022, without passing the bill.
HB 259 is the latest version of a biometric privacy bill introduced in Maryland by Delegate Sara Love. Last year, Delegate Love withdrew a similar bill after it failed to advance past committee hearings.
HB 259 is based on the Illinois Biometric Information Privacy Act and would have limited the collection, use, retention, and sharing of covered biometric data of Maryland residents by private entities and processors. The bill defined biometric data to include fingerprints, voiceprints, eye retinas or irises, or any unique biological patterns or characteristics that can be used to identify a specific individual. An individual alleging a violation involving their biometric information by a private entity would have been able to bring a civil suit.
Connecticut: An Act Concerning Personal Data Privacy and Online Monitoring
Status: Passed.
Connecticut joins the growing trend of state-level consumer privacy protection bills. On April 20, 2022, the Connecticut Senate unanimously passed Senate Bill 6. The bill then passed the House on April 28, 2022 (144-5 vote). Once signed by the governor, the bill’s provisions will go into effect on July 1, 2023.
SB 6 is sponsored by Senator James Maroney (D-Milford) and Senate Majority Leader Bob Duff (D-Norwalk). While a prior iteration of the consumer privacy bill failed to pass the House in 2021, both senators continued their efforts to push for consumer privacy protection.
SB 6 is generally based on the Colorado Privacy Act (CPA) but is comparatively less strict than California’s Consumer Privacy Act. The bill establishes a comprehensive framework that addresses a broad range of issues, such as dark patterns, data brokers, children’s online privacy, and opt-out mechanisms. The bill will impact companies that hold data on at least 100,000 Connecticut residents.
Some highlights of the consumer rights included in SB 6 are the rights to access personal data, correct inaccuracies in collected data, request deletion of collected personal data, and obtain a copy of the consumer’s personal data. The bill also requires companies to provide consumers the ability to opt out of personal data processing for targeted advertising. Any collection of “sensitive data” will require consumer consent before it is processed. Sensitive data is broadly defined to include a consumer’s racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation, citizenship or immigration status, genetic or biometric information, data collected from a known child, or precise geolocation data.
Japan Amends Its Privacy Law With Important Changes
By Weiss Hamid
On April 1, 2022, Japan began enforcing the amendment to its Act on the Protection of Personal Information (“APPI”). The APPI was originally adopted in 2003, making it one of the world’s earlier data protection regulations. However, with the passage of the EU’s General Data Protection Regulation (“GDPR”) and China’s Personal Information Protection Law (“PIPL”), Japan has now overhauled its own data protection law to meet the current privacy climate.
Below are the significant changes to the APPI:
Applicability
Much like the GDPR, the APPI applies not only to businesses operating in Japan but also to personal information handling business operators that control and/or process the personal data of individuals located in Japan.
The Amendment expands the scope of covered businesses even further. The prior version of the APPI indicated that the law only applied to business operators that stored the information of at least 5,000 identifiable individuals on at least one day during the previous six months. Now, there is no minimum limit on the database size.
Categories of Data Protected
The APPI traditionally protected personally identifiable information (“PII”), such as name, date of birth, email address, and biometric data. The Amendment expanded this category by including “Individual Identification Codes,” which cover numbers (like a driver’s license number), symbols, or codes (e.g., a fingerprint).
The Amendment also introduces a new category of personal information referred to as “special care-required personal information,” which is effectively “sensitive” information. This includes information about an individual’s race, creed, medical history, criminal record, social status, or any other information that may lead to social discrimination. In order to collect or process this information, a business must obtain a user’s prior, opt-in consent.
The Amendment further introduces the concept of pseudonymous information, which relates to an individual but is processed in a manner that does not identify a specific individual unless collated with additional data. Businesses are not obligated to delete pseudonymously processed information derived from personal data, and may retain it for potential future statistical usage.
Finally, the Amendment adds another new category, “personal-related information,” which covers information related to an individual that does not fall within the scope of personal (or pseudonymous) information. Personal-related information includes information that can be used to identify an individual if connected to other information; cookies and IP addresses would likely fall into this category. The Amendment requires the consent of the individual when a third party acquires personal-related information as “personal data.”
Data Transfers
The Amendment also places restrictions on data transfers to companies outside of Japan. Similar to the GDPR, the Amendment states that a cross-border data transfer can take place only if (1) the overseas recipient is located in a country that has a level of data protection equivalent to Japan’s and has established a personal information protection system with the transferring company, or (2) the company obtains the user’s prior opt-in consent.
To meet the first threshold, there must be a contract in place between the two companies that outlines the “necessary measures” obligating the receiving party to maintain and process the personal information in compliance with the APPI. To provide effective consent under the second threshold, the transferor is required to provide detailed information about the transfer prior to obtaining consent, including (a) the country in which the recipient is located, (b) information on that country’s legislation for the protection of personal information, and (c) information on the measures taken by the recipient to protect personal information.
Mandatory Data Breach Notification
Under the Amendment, businesses must now report data breaches to the Personal Information Protection Commission (“PPC”) if the breach involves any of the following: (1) sensitive personal information, (2) personal information that is likely to cause financial or property damage (e.g., credit card information), (3) unauthorized access to a data server or malware infection by a third party, or (4) more than 1,000 affected individuals.
A business is required to “promptly” provide initial notice to the PPC. A second notification is required within 60 days if the breach involved more than 1,000 affected individuals’ personal information, or 30 days if the breach falls within any of the other three above categories.
A third and final notification must provide the PPC a summary of the incident, categories of personal information involved, the total number of affected individuals, the cause of the breach, the extent of damages, and any actions taken by the company since the breach occurred.
The company is also obligated to provide notice to affected individuals as soon as possible, though there is no firm deadline. A company may publish information regarding the breach on its website if individual notice is practically difficult to make.
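As a rough illustration of the tiered timeline described above, here is a hypothetical helper that applies the 30/60-day rule for the second PPC notification. The category names and logic are a simplification for illustration only, not statutory text.

```typescript
// Hypothetical helper applying the Amendment's tiered second-notification
// deadline as summarized above. Categories mirror the four triggers:
// (1) sensitive data, (2) likely financial/property damage,
// (3) unauthorized access or malware, (4) more than 1,000 affected individuals.
type BreachTrigger = "sensitive" | "financialHarm" | "unauthorizedAccess" | "largeScale";

function secondNotificationDeadlineDays(triggers: BreachTrigger[]): number {
  // 30 days if the breach falls within any of the first three categories;
  // otherwise 60 days where the only trigger is the >1,000-individuals threshold.
  return triggers.some((t) => t !== "largeScale") ? 30 : 60;
}

// Example: a malware incident affecting 5,000 people is on the 30-day clock.
console.log(secondNotificationDeadlineDays(["unauthorizedAccess", "largeScale"])); // 30
```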
Penalties for Non-Compliance
Failure to comply with APPI regulations can result in financial penalties of up to JPY 100 million (roughly USD 800,000 at the time of writing). The penalty can also include imprisonment for up to a year, and the PPC may publicly disclose the violation.
CPPA to Hold Pre-Rulemaking Stakeholder Sessions
By Andrew Scott
On April 13, the California Privacy Protection Agency (CPPA) announced that it will hold Pre-Rulemaking Stakeholder Sessions via Zoom video and telephone conference. The stakeholder sessions will be held on May 4, May 5, and May 6, 2022. These sessions are intended to give stakeholders an opportunity to speak on topics relevant to the upcoming rulemaking, assisting the Agency as it develops regulations.
The CPPA indicated that signup is not required to attend and listen, nor to participate in general public comment. So that the Agency can accommodate as many participants as possible, speakers will be required to adhere to time limits.
On May 3, the CPPA released the program and schedule for the stakeholder sessions:
DAY 1: WEDNESDAY, MAY 4, 2022
9:00 am: Automated Decisionmaking
12:00 pm: Lunch Break
12:30 pm: Businesses’ Experiences with CCPA Responsibilities
3:00 pm: Consumers’ Experiences with CCPA Rights
4:00 pm: General Public Comments
DAY 2: THURSDAY, MAY 5, 2022
9:00 am: Data Minimization and Purpose Limitations
10:00 am: Dark Patterns
11:30 am: Lunch Break
12:00 pm: Consumers’ Rights to Opt-Out
2:30 pm: Consumers’ Right to Delete, Correct, and Know
4:00 pm: General Public Comments
DAY 3: FRIDAY, MAY 6, 2022
9:00 am: Consumers’ Rights to Limit Use of Sensitive Personal Information
10:00 am: Processing that Poses a Significant Risk to Consumers
11:00 am: Cybersecurity Audits and Risk Assessments
12:30 pm: Lunch Break
1:00 pm: Audits Performed by the Agency
2:30 pm: Additional Topics
3:30 pm: General Public Comments
The staff present for the Pre-Rulemaking Stakeholder Sessions includes Ashkan Soltani, Executive Director; Brian Soublet, Interim General Counsel; Vongayi Chitambira, Deputy Director of Administration; and Trini Hurtado, Conference Services Coordinator, California Dept. of Justice.
The U.S. Signs Declaration with 60+ Partners that Includes Prioritizing Privacy on the Internet
By Andrew Scott
On April 28, 2022, the United States, together with 60-plus partners from around the globe, launched the Declaration for the Future of the Internet. The Declaration sets out the vision and principles of a trusted Internet.
The 60-plus partners that signed the Declaration made a political commitment to “support a future for the Internet that is open, free, global, interoperable, reliable, and secure.” In the White House Fact Sheet, the U.S. stated that the Declaration reaffirms and recommits its partners to fostering privacy.
The Declaration recognizes that “people have legitimate concerns about their privacy and the quantity and security of personal data collected and stored online.” As such, the Declaration indicates that its partners intend to work toward an environment that secures and protects individuals’ privacy.
In its vision, the Declaration aspires to have an internet that “is developed, governed, and deployed in an inclusive way so that unserved and underserved communities, particularly those coming online for the first time, can navigate it safely and with personal data privacy and protections in place…” Moreover, the Declaration states that “[d]igital technologies should be produced, used, and governed in ways that enable trustworthy, free, and fair commerce; avoid unfair discrimination between, and ensure effective choice for, individual users; foster fair competition and encourage innovation; promote and protect human rights.”
The principles stated in the Declaration are not legally binding; however, the principles should be used “as a reference for public policy makers, as well as citizens, businesses, and civil society organizations.” The principles include commitments to the following:
- Protect human rights and fundamental freedoms of all people;
- Promote a global Internet that advances the free flow of information;
- Advance inclusive and affordable connectivity so that all people can benefit from the digital economy;
- Promote trust in the global digital ecosystem, including through protection of privacy; and
- Protect and strengthen the multistakeholder approach to governance that keeps the Internet running for the benefit of all.
What global privacy needs, ultimately, is a multilateral solution. While this Declaration does not provide a robust framework that can be embedded into a legal system, there does appear to be a vision and a set of principles that could lay the groundwork for countries, including the United States, to work more cooperatively with industry, academia, and other global stakeholders.
Interestingly, the U.S. signing this Declaration shows another recent “commitment to protecting and respecting human rights online and across the digital ecosystem.” (See the United States and European Commission Joint Statement on the Trans-Atlantic Data Privacy Framework and the Executive Order on Ensuring Responsible Development of Digital Assets.)
Of note, the following countries endorsed the Declaration: Albania, Andorra, Argentina, Australia, Austria, Belgium, Bulgaria, Cabo Verde, Canada, Colombia, Costa Rica, Croatia, Cyprus, Czech Republic, Denmark, Dominican Republic, Estonia, the European Commission, Finland, France, Georgia, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Jamaica, Japan, Kenya, Kosovo, Latvia, Lithuania, Luxembourg, Maldives, Malta, Marshall Islands, Micronesia, Moldova, Montenegro, Netherlands, New Zealand, Niger, North Macedonia, Palau, Peru, Poland, Portugal, Romania, Serbia, Slovakia, Slovenia, Spain, Sweden, Taiwan, Trinidad and Tobago, the United Kingdom, Ukraine, and Uruguay.
The U.S. Department of Commerce Intends to Leave APEC for New Global CBPR Forum
By Andrew Scott
On Thursday, April 21, 2022, the U.S. Department of Commerce released a statement by Commerce Secretary Raimondo on the Establishment of the Global Cross-Border Privacy Rules (CBPR) Forum.
The statement indicates that the Department of Commerce intends to withdraw from the APEC Cross Border Privacy Rules (CBPR) System and join a newly formed Global CBPR Forum. It states that “the Global CBPR Forum reflects the beginning of a new era of multilateral cooperation in promoting trusted global data flows.”
In addition to establishing a CBPR system, the Forum intends to establish a Privacy Recognition for Processors (PRP) system as well as “first-of-their-kind data privacy certifications that help companies demonstrate compliance with internationally recognized data privacy standards.” The other countries involved in the new initiative are Canada, Japan, the Republic of Korea, the Philippines, Singapore, and Chinese Taipei (these economies currently participate in the APEC CBPR system).
The Global Forum and the current APEC CBPR and PRP Systems will not be related. According to the CBPR Privacy Rules Declaration FAQ, the Forum “intends to establish an international certification system based on the APEC CBPR and PRP Systems, but the system will be independently administered and separate from the APEC Systems.”
According to a CBPR Privacy Rules Declaration FAQ, the objectives of the Global CBPR forum are the following:
- Establish an international certification system based on the APEC Cross Border Privacy Rules and Privacy Recognition for Processors Systems. It would be administered separately from the APEC system;
- Support the free flow of data and effective data protection and privacy through promotion of the global CBPR and PRP Systems;
- Provide a forum for information exchange and co-operation on matters related to the global CBPR and PRP Systems;
- Periodically review members’ data protection and privacy standards to ensure Global CBPR and PRP program requirements align with best practices; and
- Promote interoperability with other data protection and privacy frameworks.
Finally, all approved Accountability Agents and certified companies will automatically be recognized in the new Global CBPR Forum on the same terms that they are recognized within the APEC CBPR and PRP Systems.
While it has not been stated why the U.S. and the other countries have pulled out of the APEC CBPR and PRP Systems, it can reasonably be assumed that the U.S. (along with the other economies joining the initiative) sees the new Forum as an opportunity to expand participation in the CBPR framework globally (the CBPR Declaration FAQ indicates the Forum will “pursue interoperability with other data protection and privacy frameworks”).
Benefits of the CBPR System
In short, the APEC CBPR system is a regional, multilateral cross-border data transfer mechanism. The new CBPR System will continue to be a government-backed data privacy certification that companies can join to demonstrate compliance with internationally recognized data privacy protections. For controllers, the CBPR system provides a framework to ensure the protection of personal information transferred among participating APEC economies. The CBPR enables controllers that collect, access, use, or process data in APEC economies to develop and implement uniform approaches within their organizations for global access to (and use of) personal information.
The CBPR system is not just a benefit for big, multinational technology companies, but for companies across all sectors of the economy, and for micro, small- and medium-sized businesses, workers, and consumers as well.
For consumers, the CBPR provides trust and confidence that their personal information is transmitted and secured across borders (just as the PRP does for clients that choose certified processors).
For governments, the CBPR helps to assure there are no unreasonable impediments to cross border data transfers while at the same time protecting the privacy and security of their citizens’ personal information domestically and, in cooperation with foreign governments, internationally. The use of an Accountability Agent to demonstrate compliance by evaluating privacy policies and practices only helps legitimize the framework even more.
Benefits of the PRP System
The APEC PRP system allows processors to demonstrate their ability to effectively implement a controller’s privacy obligations related to the processing of personal information. The PRP also enables information controllers to identify qualified and accountable processors, as well as to assist small- or medium-sized processors to gain visibility and credibility.
What to Expect for Companies Currently APEC PRP or CBPR Certified
Companies certified under the CBPR system should continue to enjoy current data transfer benefits under the new Global system and can expect increased transfer benefits as economies outside of APEC recognize the new global system.
If anything, the Global CBPR Forum should promote expansion of the CBPR and PRP Systems beyond APEC to increase the international data transfer benefit as well as facilitate data protection.
Hopes for the Future
As the new system develops and establishes its autonomy, it could position itself in several ways that leverage participating economies to benefit companies. For example, it would be great to see the Global CBPR Forum capitalize on the potential for an interoperable system that makes it easier to move personal information across international borders. Currently, some governments have embedded the CBPR in their legal systems, treating transfers to CBPR-certified companies in third countries as adequate.
Additionally, it would be great to see a company’s Global CBPR certification treated as a mitigating factor by a DPA in the event that a fine must be imposed.
Finally, one hopes this will attract more economies to participate in the Forum. The Global CBPR Forum is intended to be open, in principle, to those jurisdictions that accept the objectives and principles of the Global CBPR Forum as embodied in the Declaration. It would be a coup for the new system if it brought in important economies, including the United Kingdom, India, and, of course, Europe.
European Data Protection Board Publishes Guidelines on Dark Patterns in Social Media Platforms
By Paul Lanois
In March 2022, the European Data Protection Board published its draft “Guidelines 3/2022 on Dark patterns in social media platform interfaces: How to recognise and avoid them” (the “Guidelines”) for public consultation. While these guidelines were drafted specifically with social media platforms in mind, they also provide recommendations and guidance relevant to the design of any website or application, given that the existence of a “dark pattern” may constitute a breach of certain GDPR requirements, such as those governing consent.
The Guidelines define “dark patterns” as “interfaces and user experiences implemented on social media platforms that lead users into making unintended, unwilling and potentially harmful decisions in regards of their personal data.” According to the Guidelines, dark patterns aim to influence users’ behaviors and can hinder their ability “to effectively protect their personal data and make conscious choices,” for example by making them unable “to give an informed and freely given consent.”
In order to assess whether a design or interface constitutes a “dark pattern,” the Guidelines indicate that the principle of fair processing laid down in Article 5(1)(a) of the GDPR should serve as a starting point, along with the GDPR’s principles of data protection by design and by default, transparency, data minimization, and accountability.
The guidelines divide the “dark patterns” into six categories:
- Overloading: Users are confronted with too much, for example too much information or too many options or possibilities, prompting them to share more data or allow personal data processing against the expectations of the data subject.
- Skipping: Design that presents things in a way that makes users forget, or not think about, data protection aspects.
- Stirring: Design intended to make users base their choices on emotions or visual nudges.
- Hindering: Design that prevents or blocks users from becoming adequately informed of the processing activities, or that makes data management harder to achieve.
- Fickle: Inconsistent or unclear design that makes it hard for users to navigate data management settings or to understand the purpose of the data processing.
- Left in the dark: Design that hides relevant information or data protection settings, or leaves users unsure of their data protection and accompanying rights.
For example, the Guidelines provide that when users are in the process of registering themselves and creating an account, any language used that delivers a sense of urgency or sounds like an imperative could have an impact on their “free will” and constitute a “dark pattern,” even when in reality providing the data is not mandatory. As an illustration, the Guidelines give the following example: “The part of the sign-up process where users are asked to upload their picture contains a “?” button. Clicking on it reveals the following message: “No need to go to the hairdresser’s first. Just pick a photo that says ‘this is me’.” While the Guidelines recognize that the aim of such wording in the sign-up process is simply to motivate users and to seemingly simplify the process for their sake (i.e., no need for a formal picture to sign up), “such practices can impact the final decision made by users who initially decided not to share a picture for their account.”
The Guidelines further recommend against using humor in ways that could misrepresent potential risks and, as a result, invalidate the information actually provided. The Guidelines offer the following example: “A cookie banner on the social media platform states “For delicious cookies, you only need butter, sugar and flour. Check out our favourite recipe here [link]. We use cookies, too. Read more in our cookie policy [link]”, along with an “okay” button.” According to the Guidelines, the context and humor overshadow the data protection information in the example, whereas the relevant information should be presented upfront in order for consent to be deemed valid and informed.
These guidelines are subject to public consultation and the final version of the guidelines may therefore differ from the current draft. In any case, these guidelines provide a good insight as to the approach that European data protection authorities are likely to adopt in future investigations.