Antitrust and Unfair Competition Law

Competition: Spring 2017, Vol 26, No. 1

ASSESSING DAMAGES IN PRIVACY CASES: A PANEL DISCUSSION WITH ANDREW SERWIN, JAY EDELSON AND GARRETT GLASGOW

Moderated by Dominique-Chantale Alepin1

The growth of the internet and other technologies and the explosion of available data have put privacy and cybersecurity cases in the spotlight. These cases raise a host of thorny questions. Among the most difficult are questions concerning what damages are, or should be, available to plaintiffs. The Golden State Institute was pleased to feature a distinguished panel of thought leaders in this area to discuss this important and fascinating topic.

The panel

  • Andrew Serwin is a partner at Morrison & Foerster and Global Co-chair of Morrison & Foerster’s market-leading Privacy + Data Security group. He advises a number of Fortune 500 and emerging companies alike, with a particular emphasis on: international compliance; health privacy; mobile; behavioral advertising; the Electronic Communications Privacy Act and wiretap issues; electronic marketing concerns; social media; and compliance with FTC requirements. He also handles some of the highest-profile data security incidents and privacy enforcement and litigation matters in the world.
  • Jay Edelson is the founder and Chief Executive Officer of Edelson, PC. He is widely recognized as a leader in consumer class action litigation, technology law, corporate compliance issues, and consumer advocacy. He has appeared on dozens of local, national, and international television and radio programs, and was named one of "40 Illinois Attorneys Under 40 To Watch" by the Chicago Daily Law Bulletin. Mr. Edelson’s involvement with the first class action settlement for text message spam earned him the nickname "the Spam Slammer" and he was also named a "Legal Rebel" by the American Bar Association Journal. He is regularly asked to advise legislators on issues related to his practice, including consumer issues involving the recent federal bailouts and technology issues such as those involving mobile marketing.
  • Garrett Glasgow is a Senior Consultant at NERA Economic Consulting. He specializes in applied economic and statistical analyses related to market competition, intellectual property, and environmental cases. He holds a Ph.D. in Social Sciences from the California Institute of Technology. Prior to joining NERA, Garrett was an Associate Professor of Political Science at the University of California, Santa Barbara.

[Page 161]

MR. SERWIN: I would like to provide an overview of 100 years of privacy theory and 50 years of Federal Trade Commission law. I want to start with how we monetize privacy in a damages model. Privacy is really a societal norm that expresses concern over the collection, protection, processing, and retention of an individual's information. Privacy has nothing to do with a company's information. Cybersecurity is a much broader field in certain ways. When we look at privacy, we are really dealing with these societal norms. And different societies come to different conclusions about those societal norms.

In the United States, there have been two theoretical constructs. First, we have Privacy 1.0: Warren and Brandeis and the right to be let alone.2 The basis of Warren and Brandeis's model was the instant camera, which implicated the right to be let alone. In FTC parlance, this comes down to "notice and choice." You should have areas where you are safe from having your privacy invaded. You should have choice around whether someone takes a picture of you with an instant camera and then publishes it in a newspaper.

Second, we have Privacy 2.0: Dean Prosser. Prosser, writing in the 1960s, was concerned about a court-created mess. As he noted, "Judge Biggs has described the present state of the law of privacy as 'still that of a haystack in a hurricane.' Disarray there certainly is; but almost all of the confusion is due to a failure to separate and distinguish . . . four forms of invasions, and to realize that they call for different things."3 The four privacy torts described by Prosser are: (1) intrusion upon seclusion; (2) appropriation of name or likeness; (3) publicity given to private life; and (4) publicity placing a person in false light. Europe is different.

In Europe, privacy is seen as a fundamental human right, a view stemming from World War II and some of the databases that were used then. The U.S. view, by contrast, has a lot to do with a property view of information.

The FTC is the main privacy regulator in the U.S. It is not a data protection authority in the European sense, but it does have consumer protection jurisdiction, and privacy is included within that. The FTC uses "deception" and "unfairness" to bring enforcement actions against companies in the privacy and cybersecurity space. Deception comes down to a misrepresentation that is likely to mislead consumers acting reasonably. But unfairness has nothing to do with representation; it has to do purely with consumer harm. Was there consumer harm, and was there an offsetting benefit to competition or to consumers? If the injury is not reasonably avoidable, that is a basis for FTC action.

When we look at Privacy 1.0, the "notice and choice" Warren and Brandeis model, that really ties to the FTC's deception authority. The consumer was told something that wasn't true, and the consumer made a choice based on that deception. Unfairness, because it is a harm-based model, applies both to the Prosser privacy torts and to cases where a representation is present.

MR. EDELSON: I wanted to provide an overview of the privacy cases that are being brought. Privacy is a large world that can be split into two buckets. One is the common law. In the common law, most people think of data breach cases. But there is also a "silent piece": data security cases. Over the last year, plaintiffs' attorneys have started bringing cases when they see an existing vulnerability in a company's website, even though there hasn't necessarily been a data breach.

[Page 162]

The reason these are "silent" cases is they are brought under seal. They are brought under seal because if there’s a vulnerability, you don’t want to alert the hackers to it, as that would do a lot of harm to the class. Those cases are working through the courts now. Although there has been some law developed, not everybody knows about it. There’s a big fight about when those cases get unsealed. So that’s another part of the common law.

The second part of privacy is the statutory privacy cases. The laws surrounding privacy are an odd patchwork of state and federal laws. They tend to be passed reactively. For example, one of the most famous laws, the Video Privacy Protection Act,4 was passed because the video rental history of Robert Bork, the Supreme Court nominee, had been published in the newspaper. Senators freaked out, saying, "Oh, my goodness, we don't want our constituents to know what we have been renting." That precipitated the passage of many of these statutes.

And the patchwork of state laws is largely a result of this reactive lawmaking. For example, Alaska cares very much about genealogical information. It is the only state in the country that has a law addressing that.5 Illinois cares about biometric information.6 Illinois and Texas are the only states that have statutes addressing that. Michigan cares about what publications you read, what music you listen to, and what videos you watch.7

Statutory cases all have large statutory penalties associated with them. Statutory damages can range from $500 per person for a Telephone Consumer Protection Act case,8 to, in Alaska, $100,000 if you intentionally make a profit from disclosing genealogical information.9 And since the cases are almost uniformly brought as class actions, damages are quite large.

The issue of large damages is what precipitated Spokeo, Inc. v. Robins.10 The Chamber of Commerce looked at large privacy damages cases and thought it was not fair for businesses to face huge class action lawsuits when all that they did, according to them, was merely violate a statute. They argued to the Supreme Court that there needed to be something more: "real-world harm."

In Spokeo, the Supreme Court did something fairly clever. The Supreme Court wanted to separate the good cases from the bad cases. The Court held that you have to look at the harm that’s being alleged and ask, is that the type of harm that Congress cared about? If so, then you have standing in federal court, and you can bring suit. If not, you can’t.

[Page 163]

A lot of lawyers think it is great when these cases are dismissed from federal court because they think they go away. But they actually don’t. They go to state court. That is the ultimate irony of Spokeo. Generally, defense attorneys want to fight the weakest case in federal court. But some of these weak cases, because of Spokeo, are ending up in state court. And some jurisdictions are a lot more lax than federal court.

Spokeo also held that intangible harms could count, including the risk of a future harm. That could include a future data breach, if there's enough of a connection. Although that part seems good for plaintiffs, it may ultimately be our undoing if we pay too much attention to it.

MR. GLASGOW: I want to discuss six different types of economic damages that have been discussed and/or tried in various privacy cases: (1) actual financial injury, (2) the cost of mitigating potential future harm, (3) risk of future harm, (4) alternative purchase decisions, (5) lost value of information, and (6) lost value of privacy.

First, we have actual financial injury. In one sense, this is the most straightforward type of damages to calculate. For example, if there was a data breach and credit card information was stolen, we would need to figure out which consumer charges were fraudulent and total them up. There is less speculation here than with the other damages theories.

However, causation is more difficult with actual financial injury. Although we can look up the fraudulent charges, causality is a lot harder to determine. Fraudulent credit card charges are fairly common; how can we link a charge to this particular case? When there's been a data breach and a consumer notices a fraudulent credit card charge that emerges at a later date, was that unlawful charge due to that data breach, or to a different data breach?

While it is a difficult question to tackle, there are some ways of proving causation in the aggregate. We can look at aggregate patterns to see if, among the customers of a particular company, there is a certain baseline rate of fraudulent charges. If that rate goes up right after a data breach, that is evidence that the breach caused the additional fraudulent charges.
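The aggregate approach just described can be sketched as a before-and-after comparison of fraud rates. This is only an illustrative sketch; the function name and all figures below are hypothetical, and a real analysis would control for many more factors.

```python
# Hypothetical sketch: compare the baseline rate of fraudulent charges before
# a breach with the rate after it, using a two-proportion z-test.
from math import sqrt

def fraud_rate_z_test(pre_frauds, pre_accounts, post_frauds, post_accounts):
    """Did the fraud rate rise after the breach? Returns (excess rate, z-statistic)."""
    p_pre = pre_frauds / pre_accounts        # baseline fraud rate
    p_post = post_frauds / post_accounts     # post-breach fraud rate
    p_pool = (pre_frauds + post_frauds) / (pre_accounts + post_accounts)
    se = sqrt(p_pool * (1 - p_pool) * (1 / pre_accounts + 1 / post_accounts))
    return p_post - p_pre, (p_post - p_pre) / se

# Invented figures: 0.2% of 500,000 accounts saw fraud in the quarter before
# the breach, 0.5% in the quarter after.
excess, z = fraud_rate_z_test(1_000, 500_000, 2_500, 500_000)
print(f"excess fraud rate: {excess:.4f} (z = {z:.1f})")
```

A large z-statistic suggests the post-breach jump in fraud is unlikely to be baseline noise, which is the kind of aggregate causation evidence the panel describes.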

MS. ALEPIN: Do plaintiffs allege this type of actual financial injury in privacy cases? And in what situations are plaintiffs able to allege this type of damage?

MR. EDELSON: Not many. These types of damages are usually alleged in individual actions. But data breach cases are difficult to bring as individual actions; the economics are against it. And when people have fraudulent charges on their credit cards, the banks generally pay for them. So there would not be actual damages in those cases.

However, there are some actual-damage data breach class actions. For example, in Greene v. MtGox, Inc., et al.,11 the plaintiff alleged actual financial injury. That case involved virtual currency: the data that was breached included hundreds of millions of dollars' worth of bitcoin. And that was clear, direct actual damages.

[Page 164]

And the theory of actual financial injury is also brought as a class action on behalf of the banks.

MS. ALEPIN: Mr. Serwin, are there particular problems that you see with alleging this type of injury in a privacy class action lawsuit?

MR. SERWIN: Sure. Recently, Article III standing arguments have been made at the beginning of privacy class actions. Although these standing arguments are dealt with at the 12(b)(6) stage, the same arguments are also made at summary judgment and class certification because of the causation issues. Judges are beginning to scrutinize whether there is causation on a class-wide basis. This is especially true in the credit card cases where consumers are, in most cases, not liable for fraudulent charges. Those are the cases where it is harder to come up with a damages theory, because usually consumers are reimbursed by the banks.

MR. GLASGOW: The second theory of damages is the cost of mitigating potential future harm. As with actual financial injury, it is very easy to tally the costs consumers incur to mitigate potential future harm: for example, the cost of credit monitoring, or the time spent changing passwords or closing accounts and opening new ones. These are the kinds of things that are easy to keep records of, and we can assign actual dollar amounts to these tasks.

Here, causation will be less of a concern than with actual financial injury, because we can see consumers undertaking these activities in response to the data breach or other privacy violation. Something happens that causes someone to be concerned about their personal data, concerned about their privacy, so they undertake actions to try to protect it.

But as discussed before with actual financial injury, a lot of times the consumers don't actually bear this cost. One of the standard playbooks after a data breach now is to offer free credit monitoring to anyone affected. So in many cases, just as with banks reimbursing fraudulent credit card charges, the costs of mitigating future harm are often covered by the company that suffered the data breach or that put consumers at risk.

MS. ALEPIN: So are lawsuits seeking to recover time that people spend, for example, changing their passwords on multiple different sites after a data breach?

MR. EDELSON: I feel like with this first comment I am going to come across as a defense lawyer. I swear I am a plaintiff's lawyer. I am really excited about a couple of the damages theories. On the mitigation costs for future harm, I think the courts have it wrong, but the courts are pretty clear.

To me, the big damage in a data breach is the upending of someone’s life. If you deal with clients, it is awful. If you told someone you can either spend months trying to fix your credit and change all your passwords, or you can have $10 stolen from your wallet, they would say, I’d rather lose the $10. But courts don’t recognize that. Courts have not been receptive to the idea that consumers can recover for their lost time.

[Page 165]

MR. SERWIN: I think a discussion of Clapper v. Amnesty Int’l USA would be instructive.12 Clapper was a warrantless surveillance case. And the question was what was the harm if you didn’t know you were surveilled under a classified program?

The reason that case is so relevant is there is always this issue of how do you know? If I go buy credit monitoring, and I can’t actually tie up causation, is that actual harm or am I manufacturing harm?

And there's a variety of different views that people have on that. That is where a lot of these causation issues really get brought out, I think.

MR. GLASGOW: The third damages theory is the risk of future harm. We talked about mitigating potential risks or mitigating the effects of some kind of privacy violation. But what about the fact that consumers have simply been put at risk of future harm? They haven't undertaken any activity yet; they haven't purchased credit monitoring. But the fact that their data has been leaked into the wild and some hackers have their hands on it has created a future risk. Their financial security is now diminished because of a data breach or another privacy violation.

To calculate these damages, we look at expected future loss. For example, suppose there was a data breach and some information is stolen. We have seen enough data breaches now, so there’s a lot of information out there about different types of information being stolen. So for these types of data breaches, we can ask what kinds of future financial losses or problems do customers tend to encounter? If you have certain types of information stolen, what is your future expected risk?

We will have a probability of some sort of loss, or some kind of estimate of what the average expected loss is for people who had this type of data stolen. And from an economic perspective, calculating expected future loss is not very hard to do. For example, in Remijas v. Neiman Marcus Group, LLC,13 the court held that loss from future harm could be recoverable. In that case, information from about 350,000 payment cards was stolen, and, very quickly, about 9,000 fraudulent charges appeared. Because of the high incidence of fraudulent charges, it looked like there was a high risk of future harm.
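The expected-loss arithmetic described above is just a probability of loss times an average loss. A minimal sketch, using the Remijas incidence figures from the text but an invented average loss amount:

```python
# Hypothetical sketch of an expected-future-loss calculation. The incidence
# figures echo the Remijas numbers in the text; the $200 average loss per
# incident is invented for illustration.

def expected_future_loss(cards_exposed, observed_frauds, avg_loss_per_fraud):
    """Expected loss per exposed consumer = P(fraud) x average loss per incident."""
    p_fraud = observed_frauds / cards_exposed
    return p_fraud * avg_loss_per_fraud

# 9,000 fraudulent charges among 350,000 exposed payment cards, with an
# assumed average uncompensated loss of $200 per incident.
per_card = expected_future_loss(350_000, 9_000, 200.0)
print(f"expected future loss per card: ${per_card:.2f}")
```

The per-consumer figure would then be aggregated across the class, subject to the caveats that follow about credit monitoring and reimbursement.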

But the same caveats are going to apply. For example, offering free credit monitoring will tend to mitigate some of this future risk.

MR. EDELSON: Risk of future harm is the fool’s gold of the Spokeo case. A lot of data breach attorneys were super excited when they read the Supreme Court’s opinion that consumers could have standing for intangible harms including potential risk of future harm. People were slapping me on the back saying "Great job."

But my view is that's the wrong road to go down. There are two strategies in bringing data breach cases. One strategy is to allege a cause of action that will get by a motion to dismiss and then settle the case. By alleging a risk of future harm, we are just going to get by a motion to dismiss. There are certainly reasons that defendants will settle a data breach case, but settlements based on risk of future harm tend to be really bad.

[Page 166]

For example, in the Target data breach settlement, which stemmed from one of the most high-profile data breaches we've seen, virtually no money went to class members. The reason is that the damages theories class counsel were pushing involved a risk of future harm.

But there are other lawyers who take a different view. They are asking how to come up with a damages model that is not just going to survive a motion to dismiss, but is going to survive summary judgment and trial, and will be able to recover actual money for class members.

MR. SERWIN: One of the ironies of this theory is, Jay is right. It is great to say you can have future economic harm, but you can easily start carving the class up into people who have maybe never been part of a breach, if such a person exists, versus someone who has been in five or ten. So how do you really know, when the same information has been lost five times, what caused the alleged risk or probability of loss in the future?

I think future economic harm is a difficult damages theory to use in privacy cases. Although it might survive a 12(b)(6) motion to dismiss, if you actually had to go to trial on it and you had class members that had been in ten breaches with the same data, I don’t know how you would actually prove it out if you had to.

MR. GLASGOW: The fourth way to conceive of damages in a privacy case would be an alternative purchase decision. Would the consumer have purchased the product if the true privacy policy or the data breach risk was known? You entered into a contract with a company and purchased something from them, but whether through a misleading privacy policy or poor data management practices, the company exposed your personal information.

The concept of an alternative purchase decision is actually pretty common in consumer class actions where there have been misrepresentations or product defects. In those cases, plaintiffs allege that, had they known the product was defective, they would not have purchased it, or they seek their money back or compensation for being misled in the purchase.

There are some well-known methods for calculating damages in these types of cases. Essentially, we conduct a survey to see what types of values consumers place on different characteristics of the product. We then ask, if one of those characteristics was not actually available at the time, how much less would the consumers have wanted to pay, or would they have just purchased something different?

A well-known case where this damages method was used was In re: LinkedIn User Privacy Litigation.14 That case involved a data breach affecting purchasers of LinkedIn Premium. In examining the theory of damages, the court punted, holding that because there was nothing different about the privacy policy between LinkedIn free and LinkedIn Premium, consumers didn't purchase any "extra privacy" with LinkedIn Premium.

[Page 167]

The alternative purchase decision theory of damages could be used in cases where a privacy policy was misrepresented or where there’s been a data breach and people didn’t understand the actual chance of such a breach.

MR. EDELSON: My law firm, Edelson PC, represented the plaintiffs in LinkedIn. And there was a revised opinion after the one Mr. Glasgow discussed. We moved for, and were granted, reconsideration of the order granting defendant's motion to dismiss. But a better example of this damages theory in practice was Resnick v. AvMed, Inc.15 AvMed involved health insurance premiums and allegedly insufficient data security practices. We looked there at how much people would pay for health insurance if they knew that all their health records could be exposed. We hired an expert to do surveys. And lo and behold, people said they really wouldn't want to pay for that product. So better theories result in better settlements. When we settled AvMed and LinkedIn, because our arguments were based on diminution of value and benefit of the bargain, we would have been able to get a class certified and go to trial. And our settlements recognize that. Those were the first settlements where people started getting money without having to show any other harm, because they overpaid for the service.

MS. ALEPIN: But are those damages limited to those where consumers pay for the service?

MR. EDELSON: If you don’t pay, it is a free service, and it is a much more abstract argument in terms of what you lost. Although you might still be able to point to consideration paid, and diminution of consideration. But I was a philosophy major in college, and that one would be a big stretch for me to even be able to articulate. And the courts have said, such as in LinkedIn, if you’re suing over something that’s free, it is going to be hard to claim that you lost value in it.

MS. ALEPIN: Mr. Serwin, any thoughts?

MR. SERWIN: A lot of what the Internet is based on is free services. So the alternative purchase decision theory will really only be used in certain cases.

The other thing is that we have been going down this path in the U.S. of pricing privacy. But the notion that you would pay more for privacy has been rejected by the consumers of the big companies. So I think one of the challenges with causation today is proving that actual consumers, or at least enough consumers in a class, would make a different buying decision. And that’s where I think you will have dueling experts and different theories on what the value was, but also who would have made a different decision.

MR. EDELSON: I want to rebut that. I don’t think it actually matters about the individual. If I buy a watch and I think I’m getting a watch worth X, and in fact, it is missing a feature, even if I don’t care about the feature, I still got less than what I paid for. That’s the theory.

[Page 168]

But we have not seen class certification decisions on that. We have seen motions to strike class allegations based on that theory, which plaintiffs have won. But it is an argument that we will obviously deal with down the road.

MR. GLASGOW: A fifth theory of damages is based on the lost value of information: for example, in a data breach, or where a company is sharing personal data with third parties. Clearly hackers think this information is valuable; they go to some effort to steal it. And companies that share, sell, and market this information also find it valuable. So the question becomes: are customers being deprived of the economic value of their information because hackers are stealing it and companies are selling it? Can we put a value on somebody's personal information?

We have seen experts try to estimate how much this information is being bought and sold for, although this is not something that's often done openly. In In re Zappos.com, Inc.,16 the experts tried to estimate the black market price of stolen records, and they came up with as much as $30 to $45 per record.

But in In re: Yahoo! Inc. Customer Data Security Breach Litigation17 we see a very different number. Hackers are selling records in batches, say $100 for 100,000 records, which works out to a tenth of a cent per record. Some people think the price is so much cheaper because that data breach happened quite a while ago, and maybe all the valuable information has already been mined and the accounts have been closed.

But the challenge with that approach is demonstrating that the consumer has actually lost out because of the data breach. Can plaintiffs demonstrate that there was an actual market in which consumers would profit from their personal information? Sure the hackers can steal it and use it, and the marketers can use it, but are the consumers missing out in some way, and how does this affect what we think about claiming the lost value of information as damages?

MR. EDELSON: We made this argument roughly a decade ago in Claridge v. RockYou, Inc.18 There, the Court actually, I don't want to say accepted it, but allowed us to get by a motion to dismiss while expressing a tremendous amount of skepticism. Later the judge said that she regretted that decision.

But the lesson that I took, and it’s exactly what Mr. Glasgow was saying, is that that argument was a little bit ahead of its time. I believe that there actually is a market that is burgeoning where people can now go and sell their personal information individually.

For example, I can take information about myself, my name, my age, my address and actually sell it to data miners and get a certain amount of money for that. In England there’s a company that’s been doing this for a while, and other companies are starting to experiment with it. This reconciles with the view of Silicon Valley. Silicon Valley understands that if they have access to your data, it is worth money, and companies are willing to buy it from you.

[Page 169]

Once people start to accept that this value of personal information makes sense, I think we are going to see that argument brought more often in front of the courts. I do think it is a real theory that hopefully will be accepted by the courts in the coming years.

MR. SERWIN: I think, as it stands right now, this is a theory that is difficult at best. I think one of the challenges down the road is that the data that might be the most valuable, people wouldn’t sell. You are not going to sell your Social Security number and name to a hacker. So there’s not really a market for that.

MR. EDELSON: But people would buy it, right?

MR. SERWIN: But I don't think the plaintiff is being deprived of the economic value of their information, because they are not going to sell it. Then you get into a debate: if there's data that could arguably be sold, it is probably lower-value data, and may only have value in an aggregated way for a company. I think this is a fight we are going to have down the road, but I think there are some inherent challenges. There is data that is breached that people just inherently wouldn't sell. It might support other theories, but if you're not going to sell it, you haven't lost economic value.

MR. EDELSON: What a profound irony it would be if the really important data is the stuff we will not allow damages for, and only the simple stuff is compensable.

MR. GLASGOW: So the sixth and final theory is a loss of privacy. Does privacy itself have an intrinsic value? I am not aware of any cases where anyone has actually attempted to calculate the economic value of privacy, but I can take two minutes to walk through how that might be done.

First, would there be any precedent for saying privacy itself has any kind of intrinsic value? In economic damage cases or natural resource damage cases, economic experts have analyzed the intrinsic value of the environment. This is known as "existence value" or "passive use value." This is the kind of value that individuals get just knowing that the environment is pristine.

For example, in the Exxon Valdez case, there was a very large oil spill in a remote area that most people would never visit, and there was not a lot of economic activity there. But people were still very upset that this pristine environment had been damaged. And in those cases, courts have actually explicitly ruled that lost environmental existence value can be compensable. So this could lead to a theory where we'd say there's an existence value from privacy that comes from the knowledge that your personal data is secure and hasn't been stolen by hackers or shared by third parties in some unauthorized way.

Of course, there could be some argument that just because your data has been stolen does not mean it has been accessed. We see a lot of this in hard drive cases: your hard drive has been stolen, but you can't prove anybody looked at it. So just because there was a data breach and records were stolen, you can't prove that your records were actually viewed.

So is loss of privacy something we can claim as damages, and how could we go about calculating it? I will give you a very quick example. My colleague, Sarah Butler, and I just wrote an academic article looking at how one would calculate the value of privacy, particularly for personally identifying information and non-personally identifying information.19

[Page 170]

We did a hypothetical survey in which consumers shopped for streaming video services. The hypothetical video streaming services had different characteristics, one of which was the data sharing policy. We asked whether consumers would rather the company kept all their data secure, or whether it would be okay if the hypothetical company shared non-personally identifiable information like which movies you watch, or also shared personally identifiable information such as your email address. We examined three possible privacy policies to see how people make trade-offs: whether they would still purchase a service that shared some data, whether they would pay less for it, or whether they are indifferent. This is called conjoint analysis, which is regularly used in courts for lots of class actions and consumer-related cases.

Thinking about the options you see here, which streaming service would you be most likely to purchase? (1 of 11)

  • Option A: catalogue of 5,000 movies and 2,500 TV episodes; TV episodes available next day, movies in 3 months; commercials: yes; privacy policy: share usage data; monthly fee: $8.99
  • Option B: catalogue of 5,000 movies and 2,500 TV episodes; TV episodes in 3 months, movies in 6 months; commercials: none; privacy policy: share both usage and personal data; monthly fee: $10.99
  • Option C: catalogue of 2,000 movies and 13,000 TV episodes; TV episodes in 3 months, movies in 6 months; commercials: none; privacy policy: collect but do not share data; monthly fee: $8.99
  • Option D: catalogue of 2,000 movies and 13,000 TV episodes; TV episodes next day, movies in 3 months; commercials: yes; privacy policy: share usage data; monthly fee: $6.99
  • NONE: I wouldn't choose any of these.

The above slide is just an example of the choice task we gave our survey respondents. The whole point is that consumers are making trade-offs: how much would you pay for the service, what's the privacy policy, how many videos are available, how quickly content becomes available, and so on. We are treating privacy as just another feature of the product: how much privacy does this product offer you, and what kind of trade-offs between money and privacy are consumers willing to make?

What we can take away from this analysis is a calculation of willingness to pay. We came up with some interesting results that we didn't expect. First, consumers on average didn't value the non-personally identifiable information very much. We estimated 85 cents a month, and that estimate was statistically insignificant, so we couldn't even rule out zero. In this hypothetical, if what was shared was just the movies being watched (not tied to your account), that kind of aggregated data would not give us much of a basis for claiming damages. People didn't seem to care very much if somewhere out there was an anonymous list of all the shows consumers watched in the last month.
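The willingness-to-pay figures quoted here come from dividing a feature's estimated utility coefficient by the price coefficient. A minimal sketch of that arithmetic follows; the coefficients are assumptions chosen only so the output lines up with the dollar figures discussed in this passage, not the study's actual estimates.

```python
# Willingness to pay from choice-model coefficients: the change in monthly
# fee whose utility effect exactly offsets a feature's part-worth.
# Illustrative assumed coefficients (not the study's actual estimates):
beta_price = -0.40          # utility per $1 of monthly fee
beta_share_usage = -0.34    # sharing non-personally identifiable data
beta_share_both = -2.20     # also sharing personally identifiable data

def wtp_to_avoid(beta_feature, beta_price):
    """Price cut that compensates consumers for the feature's disutility."""
    return beta_feature / beta_price

print(round(wtp_to_avoid(beta_share_usage, beta_price), 2))  # prints 0.85
print(round(wtp_to_avoid(beta_share_both, beta_price), 2))   # prints 5.5
```

In the study itself, only the second figure was statistically distinguishable from zero, which is why the anonymous usage data alone offered little basis for damages.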

[Page 171]

But when we added personally identifiable information, the numbers shot up and became statistically significant. It came out to about $5.50 a month that people would be willing to pay to avoid having their personally identifying information shared along with their non-personally identifying information.

And their willingness to pay for "privacy" was similar to the willingness to pay for other products that people found desirable: more content, faster availability and not showing commercials. So theoretically, at least, we can look at different kinds of privacy policies and what kind of information is being shared out there and place a value on that information. This could be the core for some kind of damage calculation.

MR. EDELSON: This theory is very similar to the idea of diminution of value, but it gets around the problem of what if there’s a free service.

MR. SERWIN: But one of the things about privacy is that it is not binary. I understand what Mr. Glasgow is doing here, but I have done a lot of research on demographics and privacy, and people have different views.

So even if someone is privacy centric, they may care about different sets of information. Not everybody is going to have the same view. There are some people who will not care, fundamentally not care, if a company has their information, but they sure care if the government does, and vice versa. So I think one of the challenges in this space, just from a theoretical basis, is that just because I care about privacy, I may not care the same way you might.

And it does break down by certain demographic features, and I think that’s one of the challenges, frankly, from a regulatory standpoint and a company standpoint when you try to roll these privacy policies out. How do you understand who your customer is? How do you make sure you are meeting their expectations to the extent you can even discern them?

MR. EDELSON: I think that is going to be one of the big arguments in the next couple years, especially when it comes to class certification.

Our view representing plaintiffs is if I buy a car and it doesn’t have seatbelts in the back and the fair market value of a car without seatbelts in the back is less than one with seatbelts in the back, I get to claim damages. It does not matter if I have kids or not, does not matter if I am ever going to use it. I bought something worth less than what I thought I was paying for. I think the defense bar desperately wants there to be subjective views on privacy because then the cases all go away. But I do not think that’s how the law works.

[Page 172]

——–

Notes:

1. Assistant Regional Director for the Federal Trade Commission's Western Region, focusing on consumer protection and competition issues, including data security and privacy law enforcement. Prior to joining the Federal Trade Commission, Dominique was a private practitioner for over a decade, representing technology companies in government investigations, litigation, and class actions.

2. Privacy 3.0–The Principle of Proportionality, 42 U. Mich. J.L. Reform 869 (2009).

3. Id. (citing William L. Prosser, Privacy, 48 Cal. L. Rev. 383 (1960)).

4. Video Privacy Protection Act of 1988, Pub. L. 100-618, 102 Stat. 3195.

5. Alaska Genetic Privacy Act, 2004 Alaska Sess. Laws ch. 176, § 1.

6. Biometric Information Privacy Act, 2007 Ill. Laws 994.

7. Video Rental Privacy Act, 1988 Mich. Legis. Serv. 378 (West).

8. Telephone Consumer Protection Act, 47 U.S.C. § 227(b)(3)(A) (2012).

9. ALASKA STAT. § 18.13.020 (2016).

10. Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016).

11. No. 1:14-cv-01437 (N.D. Ill.).

12. 568 U.S. ___; 133 S. Ct. 1138 (2013).

13. 794 F. 3d 688 (7th Cir. 2015).

14. 932 F. Supp. 2d 1089 (N.D. Cal. 2013).

15. 693 F. 3d 1317 (6th Cir. 2012).

16. 108 F. Supp. 3d 949, 954 (D. Nev. 2015).

17. In re Yahoo! Inc. Customer Data Security Breach Litigation, No. 5:16-md-02752 (N.D. Cal.).

18. 785 F. Supp. 2d 855 (N.D. Cal. 2011).

19. Glasgow, Garrett, and Sarah Butler. 2017. "The Value of Non-Personally Identifiable Information to Consumers of Online Services: Evidence from a Discrete Choice Experiment." Applied Economics Letters. 24(6): 392-395.
