Antitrust and Unfair Competition Law

Competition: Fall 2018, Vol 28, No. 1

ANTITRUST IS ALREADY EQUIPPED TO HANDLE "BIG DATA" ISSUES

By Abiel Garcia1

The term "big data" is everywhere these days. Big data is often cited as an emerging issue that competition lawyers need to study and watch for potential competitive misuse. But what exactly is meant by "big data?" And does it warrant unique antitrust attention?

U.S. Department of Justice Deputy Assistant Attorney General Nigro recently offered one description of big data as "an imprecise, catch-all term that describes a broad range of ideas related to the collection and commercial use of large quantities of information."2 While big data has no precise definition, this paper will focus on large data sets that track consumer actions, and it will attempt to show that most competition questions raised by "big data" are already addressed by the current antitrust structure and do not require new rules or new frontiers of practice. However, recent comments by government representatives, such as FTC Chairwoman Ramirez, suggest that big data presents new challenges for antitrust law to address: for instance, barriers to entry created solely by mass data collection. In this dystopic view, major technology companies, solely by virtue of their data collection activities, could run afoul of the competition laws and deserve extreme remedies like forced data sharing. First, this paper will describe how big data is created. Section 2 will then describe some common characteristics of big data and, ultimately, attempt to create a working definition for it. Section 3 will walk through various conduct-based examples involving big data and discuss how current antitrust law is well equipped to handle them.3 Finally, Section 4 will question the premise that data collection per se could be a source of anticompetitive activity and will argue that the antitrust laws are not well suited to dealing with companies that compile mass data sets.

I. HOW IS BIG DATA CREATED?

Data is created every moment of every day. As this is being written, data is being created by the computer used to write this article, the cell phone sitting next to the computer, and the wired network they are both sharing. It has been reported that there are three and one-half billion sensors in the marketplace, with 18 billion connected devices around the world.4 That number is estimated to grow to more than 30 billion by 2020.5 It was reported in 2013 that almost 90% of the world's digital data then in existence had been generated in the preceding two years.6 Every time someone clicks on a website, taps on a smartphone, logs into a website, uses his or her email as a username, or walks around with a phone in his or her pocket, data is collected and sent to the various companies with which the user has agreed to share (or not share) it. The exponential increase and ubiquitous collection of data stems partly from the plunging cost of data storage, but also from the integration of "smart" objects into daily life. Data, on its own, is comparable to a river or the wind, meaning that data flows easily and freely.7 It is accessible to anyone who can capture it.


Generally speaking, old data is not as valuable as new data, especially for start-up companies, because data loses value over time for firms trying to compete in the digital age.8 Unsurprisingly, as technology changes and shifts, data from two years ago is less helpful than current data for building consumer profiles or targeting ads, as people move, change jobs, climb the socioeconomic ladder, and shift their preferences.9 Because data is available across multiple products, it is also non-exclusive and non-rivalrous.10 Users create data across multiple platforms and products, using various usernames, emails, and identifiers, and because of this, no one firm controls a significant portion of the world's data.11 Data is also easy to collect: anyone can create a website and begin tracking users through a variety of techniques, or create a Wi-Fi integrated product and track data through the product.12

Furthermore, if a consumer uses the same email address or device to interact with various services, companies can track the data across devices and create more complete consumer profiles. Flash cookies, history sniffing, device fingerprinting, cross-app tracking, and cross-device tracking are all tools used to create consumer data profiles.13 Every one of these tools and technologies relays data back to Amazon, Apple, Facebook, Google, Samsung, and other companies. The data is then sorted, and the bundle of all the data collected is called a "data set." These data sets are examples of what we now call "big data."
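To make the mechanics concrete, the following is a deliberately simplified, hypothetical Python sketch of the idea behind device fingerprinting: attributes that are common on their own combine into a near-unique identifier that links separate visits to one profile. Real trackers draw on far more signals; every name and value here is invented.

```python
import hashlib

def device_fingerprint(user_agent: str, screen: str, timezone: str, language: str) -> str:
    """Hash a handful of browser-exposed attributes into a stable pseudonymous ID."""
    raw = "|".join([user_agent, screen, timezone, language])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

# Two visits exposing the same attributes map to the same ID, letting a
# tracker stitch separate page views into a single consumer profile.
visit_a = device_fingerprint("Mozilla/5.0 (Macintosh)", "2560x1440", "America/Los_Angeles", "en-US")
visit_b = device_fingerprint("Mozilla/5.0 (Macintosh)", "2560x1440", "America/Los_Angeles", "en-US")
assert visit_a == visit_b
```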


II. WHAT IS BIG DATA?

From an antitrust perspective, big data means nothing more than a very large data set. There is no "Big Data" akin to "Big Law" or "Big Oil," terms that generally refer to the major corporations within those industries. In academia, however, the four big V's—volume, velocity, variety, and value—often characterize big data.14 The four big V's are standardized characteristics that can be used to compare data sets easily. But make no mistake: each data set is unique, built for a specific purpose using a specific technology.15 Each of the four V's represents an objective characteristic of a specific data set:

  • Volume: the volume refers simply to the number of individual data points collected.
  • Velocity: the velocity refers to how quickly the data is generated, but also to how quickly one can access the data.
  • Variety: the variety refers to the number of unique data points contained within the data set.
  • Value: the value refers to the socioeconomic value that can be obtained from the data set.16

Thus, the four big V's provide useful characteristics for describing common elements of a data set, but they do not provide enough information to perform any meaningful antitrust analysis. Arguably, value is the most important of the four characteristics, because companies would only keep such massive data sets if they could derive value from them. Data sets themselves are less valuable than what a company can extract from them, such as consumer preferences, insights, or correlations. For instance, a data set containing all the information relating to the TV viewing habits of senior citizens would be useless to a shoe manufacturer.
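The gap between describing a data set and analyzing its competitive significance can be made concrete. The hypothetical Python sketch below records four-V profiles for two imaginary data sets; every figure is invented. The profiles are easy to compare field by field, yet they say nothing about substitutability or market power.

```python
from dataclasses import dataclass

@dataclass
class FourVProfile:
    """Objective four-V summary of a data set; all figures here are invented."""
    volume: int            # individual data points collected
    velocity_per_day: int  # new data points generated (and accessible) per day
    variety: int           # distinct attributes captured per record
    value_usd: float       # estimated economic value derivable from the set

clickstream = FourVProfile(volume=5_000_000_000, velocity_per_day=40_000_000,
                           variety=25, value_usd=1.2e7)
tv_habits = FourVProfile(volume=80_000_000, velocity_per_day=500_000,
                         variety=8, value_usd=3.0e5)

# The two profiles are directly comparable, but nothing in them reveals
# whether either data set matters for competition in a particular market.
```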

The colloquial references to "big data" in legal circles seem to refer to massive data sets that reflect complete, unmatched consumer profiles, and which allegedly could lead to anticompetitive harms. FTC Chairwoman Edith Ramirez stated that "there is no question the aggregation of data may have important implications for competition."17 Yet there have not been widespread situations in which "big data" has in fact lessened competition.18 In fact, Deputy Assistant Attorney General Barry Nigro stated that:


The term "big data" is not only imprecise but affirmatively unhelpful to the extent it is used to imply that data is different from other assets and carries with it special obligations. It is important to be precise when considering antitrust enforcement principles and to avoid general terms that may mean different things to different people depending on the circumstances.19

So, if there is no universally accepted definition of big data, then what are people referring to when discussing it, especially if it is not the four big V's? As stated above, big data is nothing more than a reference to the data-gathering practices of technology companies, which feed into the various data sets those companies hold. These various and unique data sets are constantly growing and changing, and they are what make up "big data."

When referring to data sets used by companies, it is useful to distinguish between data gathered internally to help boost company efficiency and data sets that track consumer interactions with companies. Internal metrics data measure the internal data points a company has established as useful for improving its profits and bottom line; these could include employee processing times, time spent on various tasks, and so on. Data sets that track consumer habits and purchases on company websites, on the other hand, are used to tailor the customer experience, whether through targeted ads, different visual presentations on the company's platforms, or customized sales experiences. These data sets could include figures like SKUs bought from a website, locations of customers, and average purchase price per customer.

Data sets that focus on consumer habits are the subject of this article. While it is conceivable that internal company data sets could somehow be used in an anticompetitive way, data sets that describe consumer behavior and provide unique customer insights are the focus of antitrust practitioners today.

III. HOW ANTITRUST HAS TREATED "BIG DATA"

Since big data is nothing more than data sets made up of millions of individual data points, big data can represent an asset for any given company. Big data, like other assets, can be used to leverage an existing network, build new products, or target specific consumers through ads. For example, website sales data can help Nike figure out what products to place in a new retail store by looking at the top 100 selling SKUs, based on shipping locations within a 50-mile radius of the new store.
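A minimal sketch of that kind of query, written in Python against a hypothetical orders table (the table, columns, and function are invented for illustration), might look like the following. The point is only that this is ordinary use of a business asset.

```python
import sqlite3

# Hypothetical schema: orders(sku TEXT, ship_lat REAL, ship_lon REAL, quantity INTEGER)
conn = sqlite3.connect("sales.db")

def top_skus_near(lat: float, lon: float, radius_miles: float = 50.0, limit: int = 100):
    """Rank SKUs by units shipped within a crude square bounding box around a site."""
    deg = radius_miles / 69.0  # roughly 69 miles per degree of latitude
    return conn.execute(
        """SELECT sku, SUM(quantity) AS units
           FROM orders
           WHERE ship_lat BETWEEN ? AND ?
             AND ship_lon BETWEEN ? AND ?
           GROUP BY sku
           ORDER BY units DESC
           LIMIT ?""",
        (lat - deg, lat + deg, lon - deg, lon + deg, limit),
    ).fetchall()

# e.g., for a candidate store site in downtown Los Angeles:
# top_skus_near(34.05, -118.24)
```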


Recently, antitrust practitioners and enforcers have expressed the view that amassing data, and declining to share it with rivals, may be anticompetitive in and of itself.20 Yet this hypothetical is a departure from traditional antitrust fact patterns involving "data." Fact patterns involving data have typically revolved around a dominant firm denying access to previously accessible data sets, such as the Yellow Pages cases, where previously available access to the yellow pages was denied through the use of exclusive licenses.21

European Commissioner for Competition Margrethe Vestager stated that data could "foreclose the market—[it] can give the parties that have them immense business opportunities that are not available to others."22 FTC Chairwoman Ramirez stated that there was "no question that the aggregation of data may have important implications for competition."23 A 2016 Franco/German study on competition law and data stated that the "collection of data may result in entry barriers where new entrants are unable either to collect the data or to buy access to the same kind of data, in terms of volume and/or variety, as established companies."24 The study cites companies such as Facebook and Google as entrenched market participants that could be using such data to stay in power.25

As their statements demonstrate, antitrust officials appear to be imagining a hypothetical in which a competitor proclaims it cannot compete due to a lack of access to data, regardless of whether that data has ever been publicly available. Yet this hypothetical is only one of multiple ways in which data has been used over the past years. The Franco/German study identifies a variety of ways that data could be used to stifle competition: 1) refusal to provide access to data; 2) discriminatory access; 3) exclusive contracts; 4) tied sales and cross-usage; and 5) discriminatory pricing.26 Additionally, other practitioners have commented on how big data could affect competition.27 The recent concern about companies having too much data seems to extend even to companies that have never offered their data to the public. This distinction—between companies that sell or publicly offer their data and companies that use their data only internally—is a key one.

Until recently, antitrust cases involving data have mostly contained fact patterns in which companies previously offered data sets to the public, either for free or as a marketed product, and then later restricted access to the data, or, in the merger context, in which two dominant sellers of data attempted to merge.28 The novel concern enforcers are discussing is one in which companies create internal data sets that are never publicly shared, but that are then claimed to be so necessary to competition that they create barriers to entry.


Both of these scenarios can be handled by the current antitrust laws, but at least with respect to the second, the question is: should antitrust be used to remedy the potential issues? In the first group of situations, where companies restrict access to public or marketed data sets, the antitrust laws are already adequately capable of handling any competitive issues, as such conduct is akin to the types of anticompetitive conduct seen in other industries. The second group of scenarios, where internal data sets are claimed to be necessary, may not be best addressed by the antitrust laws.

A. Data Offered Publicly

Most real-world examples cited by the FTC, the EU, and various practitioners involve situations in which a company sells data as a product and then attempts, through a variety of means, to lock out or restrict access by other companies that rely on the data. In these scenarios, the data is no different from any widget sold in a typical transaction, and as such, traditional antitrust analysis can readily apply and address any competitive concerns. None of these scenarios requires a novel application of the antitrust laws to alleviate potential competitive problems.

1. Discriminatory Access

One of the scenarios commonly raised when discussing issues with big data relates to discriminatory access to data sets. One example cited in the Franco/German study involves Cegedim, the leading provider of medical information databases in France.29 Cegedim refused to sell its main database to customers who used the software of its main competitor in a related software market.30 The French Competition Authority considered such behavior discriminatory and concluded that, since Cegedim's database was the leading data set in the market, the practice had the effect of limiting competition between 2008 and 2012.31

While denying access to an asset can be a problematic practice, denying access to a necessary data set that others have relied upon does not present a novel antitrust issue. The antitrust laws have always been equipped to handle situations in which parties with market power in one area try to leverage that power into another market. The situation described in the Franco/German study is very similar to Eastman Kodak or Aspen Skiing.32 It does not propose a new theory of harm; rather, it repackages an existing theory for cases in which the essential or necessary product is a data set.


2. Exclusive Contracting

Another cited example of how data could be used to stifle competition is the use of exclusive contracting. While real-world examples seem to be exceedingly rare, the Franco/German study cited the use of a series of exclusive contracts by Google in the search-advertising market.33 The public statements made by the EU Commission simply stated that Google imposed "exclusivity obligations on advertising partners, preventing them from placing certain types of competing ads on their websites . . . with the aim of shutting out competing search tools," but did not go further into the use of those contract provisions.34 Other practitioners have posed similar hypotheticals, such as a dominant third-party supplier using exclusive contracts to foreclose competition from other third-party data providers, or the use of exclusive contracts to foreclose a potential competitor from a particular data stream.35

As discussed below, exclusive contracting by dominant firms such as Google has long been addressed and enforced under the antitrust laws.36 The concern with the above-cited examples is not the data set itself, or even the amount of data being collected, but rather the actions of a dominant firm to foreclose others from a particular data set or stream they could previously access. Even assuming a dominant firm could foreclose a data stream to other competitors, a highly individualized review of the foreclosed data set would be needed, since it would be necessary to show that the foreclosed data was so unique that the competitor could not access it anywhere else.

3. Tying Sales or Cross-Market Data Leveraging

Practitioners and the Franco/German study also propose a scenario in which data collection in one market could be used by a company to increase market power in another market.37 Citing the UK Competition and Markets Authority, the study conceives of tied sales "whereby a company owning a valuable dataset ties access to it to the use of its own data analytic services."38 The Cegedim case above could also be positioned as a tying case.

The Franco/German study went further, stating that monopolists with access to unique data, such as a public utility that also sells secondary goods to utility consumers, could use the data to gain an undue competitive advantage.39 The study cited a French Competition Authority action that imposed interim measures on GDF-Suez, a regulated gas provider.40 GDF-Suez had access to gas consumption data that other downstream secondary providers did not, which gave it a competitive edge in making offers to gas consumers in related secondary markets.41


Neither of these scenarios represents a novel theory of harm; both fall within theories that have already been contemplated by the antitrust laws. The use of data as a product, at least in the tying context, is addressed by standard tying analysis and should be treated no differently just because data is one of the products being tied.42

Interestingly enough, the French Competition Authority case involving the monopolist gas provider does present a novel situation, which will be addressed in the following section. Essentially, GDF-Suez had access to a unique data set that could not be accessed by any competitor: the gas consumption data of its consumers. Access to such a unique data set could have downstream effects that raise competitive concerns.

4. Price Discrimination

Finally, the last scenario posed by the Franco/German study is the possibility of companies using data to price discriminate.43 The study advances the proposition that data sets could be used to "set different prices for the different customer groups it has identified thanks to the data collected."44

Price discrimination is not a novel theory of harm, even when it is predicated on the use of data. Grocery stores and discount wholesale stores have been using data tracking to price discriminate since the early 2000s through loyalty and membership cards. The Franco/German study even concedes that "economic analysis also shows that the effects of price discrimination . . . are more ambiguous."45 Some consumers could "end up paying higher prices for a given good or service but some others would receive better price offers than in the absence of discrimination."46 Even when the price discrimination results from "collecting data about their clients" and their purchasing habits, the antitrust laws have always been in place to handle it.
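As a stylized illustration of how loyalty-card data feeds such pricing, consider the hypothetical Python sketch below; the segments, thresholds, and discounts are all invented. The mechanism is mundane, which underscores why it raises no novel theory of harm.

```python
BASE_PRICE = 10.00

def personalized_price(purchases_last_year: int) -> float:
    """Quote a price based on the loyalty segment inferred from purchase history."""
    if purchases_last_year >= 50:
        return round(BASE_PRICE * 0.85, 2)  # heavy-buyer discount
    if purchases_last_year >= 10:
        return round(BASE_PRICE * 0.95, 2)  # moderate-buyer discount
    return BASE_PRICE                       # everyone else pays the list price

print(personalized_price(60))  # 8.5
print(personalized_price(3))   # 10.0
```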

Many of the hypotheticals and real-world examples cited by the Franco/German study, and by practitioners generally, arise from companies selling data as a product or widget and then misusing their market power. In these scenarios, the antitrust laws are more than adequate to handle the typical competition issues that arise from market power abuses. The more interesting question, and the one that seems to be at the forefront of authorities' minds, is the possibility of acquiring so much data that the amalgamation of that data in and of itself creates a barrier to entry for competitors, actual or potential.


B. Data as a Barrier to Entry

The novel scenario that seems to be the elephant in the room is the possibility of one company, such as Amazon, Google, or Facebook, using its massive data stores to keep competition at bay. In an interview with Vanity Fair, Tim Berners-Lee, the man credited with creating the World Wide Web, stated that "Facebook, Google, and Amazon now monopolize almost everything that happens online, from what we buy to the news we read to who we like."47 He implies that the majority of all data produced on the internet is hoarded by these three companies. The question then becomes: does access to this immense amount of data in and of itself create a barrier to entry?

The Franco/German study stated that "refusal to access to data can be anticompetitive if the data are an 'essential facility' to the activity of the undertaking asking for access."48 Chairwoman Ramirez stated that "[w]hether there is a competitive advantage associated with access to a large volume of data will depend on the particular set of facts."49 Other practitioners cite the U.S. v. Bazaarvoice merger challenge—in which company documents stated that the company's ability to "leverage the data from its customer base" was a key barrier to entry—as an example of data access creating a barrier to entry.50 Yet all of these comments fail to take into account exactly what data is being discussed and exactly what the barrier to entry was.

The Franco/German study recognized the flaw in arguing that access to large amounts of data could create a barrier to entry, stating that to make the argument, it would need to be demonstrated that the data set creating the barrier was "truly unique and there is no possibility for the competitor to obtain the data that it needs to perform its services."51 Thus, for data to create a barrier to entry in and of itself, the data set must be unique, unduly burdensome to gain access to, and necessary for competition or potential competition to continue. Such a claim would be akin to an essential facilities doctrine claim.

Only a very limited number of fact patterns would be able to meet this high threshold. While some examples cite the entrenchment of Facebook, Google, and Amazon as data-driven, dominant market players entrenched by their data collection, these companies did not become so large based on their data-driven techniques.

Facebook created a better social network and toppled the likes of Myspace and other social websites due to its network design. Google was not the incumbent search engine when it launched; rather, Microsoft and Yahoo once ran the top search engines. Google became what it is today due to a better search algorithm and network. Amazon became a dominant market force due to its improved delivery and logistics network.


All three of these companies emerged as dominant competitors not because of their ability to mine data and use it to exclude others, but because of their better networks and products. Their product-based success then led them to use data to improve those networks and products.52

The fact pattern imagined by big data commenters, in which the data is unique, unduly burdensome to access, and necessary for competition, seems to have appeared in the case involving GDF-Suez. GDF-Suez was the monopolist gas provider with access to a unique set of data: the gas consumption information of a specific set of consumers. It then used this data to improve its market position in secondary markets, markets in which its competitors did not have access to the same data since they were not the monopolist gas provider. Ultimately, the French authorities ordered the sharing of information between GDF-Suez and its competitors. But the circumstances in GDF-Suez appear to be unique to that particular market. To be analogous, a competitor of Facebook, Google, or Amazon would have to argue that the incumbent had monopolistic access to a consumer data stream, which has already been shown to be highly unlikely.

IV. THE VALUE OF DATA

When discussing big data, it must be remembered that the real value behind companies like Amazon, Facebook, and Google lies not in the data gathering itself, but in their use of data. The algorithms applied to, and analysis of, that data create much more value than the data alone.53 The next competitor to Amazon, Facebook, or Google could do just the same by creating a better network or algorithm that improves consumer results. Even if a court ordered the sharing of data under an essential facilities doctrine claim, there is no assurance that a competitor could compete effectively if it cannot extract value from the shared data set.

A. Antitrust Remedies Would Not Help Create Value for Competitors

Putting aside the privacy law concerns of forcing companies to exchange data without consumers' consent, "there are many reasons to be skeptical of using the antitrust laws to force the sharing of data."54 First, there is no evidence that providing the same data set to competitors would yield better competitive results for consumers or new products.


Another concern, raised by Deputy Assistant Attorney General Nigro, is that forced sharing of data sets would reduce the incentive to invest in innovation.55 Companies in the tech space usually aim to become the dominant solution for a given problem. Companies like Postmates and Uber create a network or solution to a given problem and can then use the data from their consumers to reap rightfully earned profits. As Deputy Assistant Attorney General Nigro stated, "the opportunity to charge monopoly prices—at least for a short period—is what attracts 'business acumen' in the first place; it induces risk taking that produces innovation and economic growth."56 Additionally, if firms know that their potentially unique data sets could be forced into the open, their incentive to innovate on their own is reduced.57

Finally, another reason to hesitate before forcing the sharing of data sets is the administrative challenge presented by such a remedy. As Justice Scalia pointed out in Trinko, "[e]nforced sharing also requires antitrust courts to act as central planners," since ordering forced sharing will require continuing court supervision, inquiry into exactly what data must be shared, and a highly detailed decree.58 "No court should impose a duty to deal that it cannot explain or adequately and reasonably supervise."59

While in some cases forced sharing may be deemed a potential remedy, it is too blunt a tool to require companies to share data when there is no guarantee that doing so would help any competitor, potential or actual. Rather, in novel situations where access to massive amounts of data is the supposed root of the problem, courts should be hesitant to apply a forced sharing remedy and should look to other areas of the law to resolve any issues.

V. CONCLUSION

"Big data" is an imprecise term. It obscures and confuses practitioners and market participants alike when referring to what are essentially large data sets. These data sets are nothing more than very large data assets built by companies in an attempt to better understand their customers and users. The majority of issues that involve data sets are already contemplated by current antitrust theories of harm since many of the examples cited involve companies attempting to leverage data they sell into various markets or denying access to downstream competitors. The antitrust laws are properly set up to deal with these scenarios.


In contrast, antitrust law is not well equipped to handle situations where a company's internal data set is claimed to be essential for competition. This novel claim resembles an essential facilities doctrine claim, but it differs in that most, if not all, data sets are proprietary creations. There are unique situations in which a court may need to order the sharing of a data set to level the playing field among competitors, but in most circumstances, forced sharing of data sets should be discouraged for a variety of reasons. At the end of the day, the dominant internet companies that concern many antitrust practitioners owe their dominance to better networks or algorithms, not to better data access. The data itself plays only a small part in determining competitive outcomes in a data-driven technology space.


——–

Notes:

1. Abiel Garcia is an associate attorney at Gibson, Dunn & Crutcher. The views expressed herein are his own.

2. Bernard A. Nigro, Jr., "'Big Data' and Competition for the Market: Remarks as Prepared for Delivery at The Capital Forum and CQ," at 2 (2017).

3. While data issues have arisen in both mergers and conduct-base cases, this paper will mostly focus on conduct-based issues that involve data.

4. Sam Lucero, IHS Technology, IoT Platforms: Enabling the Internet of Things 5 (Mar. 2016), available at https://cdn.ihs.com/www/pdf/enabling-IOT.pdf.

5. Id.

6. Edith Ramirez, Chairwoman, Fed. Trade Comm’n, Big Data: A Tool for Inclusion or Exclusion, at 1 (Sept. 15, 2014), available at https://www.ftc.gov/public-statements/2014/09/big-data-tool-inclusion-or-exclusion-opening-remarks-chairwoman-edith.

7. Xavier Boutin and Georg Clemens, Defining "Big Data" in Antitrust, Competition Policy International: Antitrust Chronicle 2017, at *6 (Mar. 21, 2017), available at https://ssrn.com/abstract=293897.

8. Lockwood Lyon, The End of Big Data, Database J. (May 16, 2016), available at https://www.databasejournal.com/features/db2/the-end-of-big-data.html.

9. Ajay Agrawal, Joshua Gans, & Avi Goldfarb, Is Your Company's Data Actually Valuable in the AI Era?, Harvard Business Rev. (Jan. 17, 2018), available at https://hbr.org/2018/01/is-your-companys-data-actually-valuable-in-the-ai-era (stating that ongoing value of data usually comes from "the new data you accrue each day").

10. D. Daniel Sokol & Roisin Comerford, Antitrust and Regulating Big Data, 23 Geo. Mason L. Rev. 1129, 1137 (2016).

11. Id.

12. Catherine Tucker, The Implications of Improved Attribution and Measurability for Antitrust and Privacy in Online Advertising Markets, 20 Geo. Mason L. Rev. 1025, 1030 (2013).

13. Edith Ramirez, Deconstructing the Antitrust Implications of Big Data: Keynote Remarks of FTC Chairwoman Edith Ramirez, at 5 (2016).

14. Sometimes "big data" is characterized by three V’s, which does not include value. Boutin and Clemens supra note 6, at 3. It could be posited that the four V’s could help in analyzing whether data sets are similar for substitutability purposes.

15. Allen P. Grunes & Maurice E. Stucke, No Mistake About It: The Important Role of Antitrust in the Era of Big Data, The Antitrust Source 1-14 (2015), http://ssrn.com/abstract=2600051 (last visited June 25, 2018).

16. Id.

17. Ramirez, supra note 13, at 9.

18. There have been cases where data is the consumer good being sold, and data has been at the core of the inquiry, but those cases are ones in which data is seen as a product or "widget" rather than as a necessary resource to compete. See Andres V. Lerner, The Role of "Big Data" in Online Platform Competition, 4-5 (2014), available at http://ssrn.com/abstract=2482780 (describing that claims of "Big Data" presenting competitive concerns are unsupported by real world evidence).

19. Nigro, supra note 2, at 2.

20. Nigro, supra note 2, at 3.

21. GTE New Media Servs., Inc. v. Ameritech Corp., Case No. 1:97-cv-02314 (D.D.C. Oct. 6, 1997).

22. Babette Boliek, Tech Companies, Big Data, and Competition: Diverging Views in the US and EU, AEIdeas (Jan. 11, 2018), available at http://www.aei.org/publication/tech-companies-big-data-and-competition-divering-views-in-the-us-and-eu.

23. Ramirez, supra note 13, at 9.

24. Autorité de la concurrence and Bundeskartellamt, Competition Law and Data (May 10, 2016), available at http://www.autoritedelaconcurrence.fr/doc/reportcompetitionlawanddatafinal.pdf.

25. Id. at 3.

26. Id. at 17-22.

27. See generally Grunes & Stucke, supra note 15.

28. See GTE New Media Servs., Inc., supra note 21; Reed Elsevier NV, et al., File No. 081-0133 (filed Sept. 16, 2008), available at https://www.ftc.gov/sites/default/files/documents/cases/2008/09/080916reedelseviercpcmpt.pdf.

29. Competition Law and Data, supra note 24, at 18-19.

30. Id.

31. Id. at 19.

32. Eastman Kodak v. Image Tech. Servs., Inc., 504 U.S. 451 (1992); Aspen Skiing Co. v. Aspen Highlands Skiing Corp., 472 U.S. 585 (1985).

33. Competition Law and Data, supra note 24, at 19-20.

34. European Commission Press Release, Antitrust: Commission probes allegations of antitrust violations by Google, Nov. 30, 2010, available at http://europa.eu/rapid/press-release_IP-10-1624_en.htm?locale=en.

35. Jay Modrall, Antitrust Risks and Big Data, Competition World, 10-11 (2017).

36. See Standard Oil Co. v. U.S., 337 U.S. 293 (1949); U.S. v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001).

37. Competition Law and Data, supra note 24, at 20.

38. Id.

39. Id.

40. Id.

41. Id.

42. See Jefferson Parish Hospital Dist. No. 2 et al. v. Hyde, 466 U.S. 2, 12 (1984).

43. Competition Law and Data, supra note 24, at 21.

44. Id.

45. Id.

46. Id.

47. Katrina Brooker, Tim Berners-Lee, The Man Who Created The World Wide Web, Has Some Regrets, Vanity Fair (Aug. 2018), available at https://www.vanityfair.com/news/2018/07/the-man-who-created-the-world-wide-web-has-some-regrets.

48. Competition Law and Data, supra note 24, at 18.

49. Ramirez, supra note 13, at 8.

50. Grunes & Strucke, supra note 15 (citing U.S. v. Bazaarvoice, Inc., No. 13-cv-00133-WHO, 2014 WL 203966, at *50 (N.D. Cal. Jan. 8, 2014) (internal quotation marks omitted)).

51. Competition Law and Data, supra note 24, at 18.

52. It could be argued that the data they acquire allows them to purchase companies that could become rivals. While this potentially could be true, the remedy for limiting acquisitions would require government authorities to change the way they analyze mergers, not a new application of existing statutes. See The World's Most Valuable Resource is No Longer Oil, but Data, The Economist (May 6, 2017), available at https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-resource-is-no-longer-oil-but-data.

53. Shelly Blake-Plock, Where’s The Value in Big Data, Forbes.com (April 14, 2017), available at https://www.forbes.com/sites/forbestechcouncil/2017/04/14/wheres-the-value-in-big-data/#1cdfda530dad (stating that a human brain is not well-suited to the digestion of big data but that AI is helping in "tackling the challenge of getting value out of big data").

54. Nigro, supra note 2, at 3.

55. Id.

56. Id. at 4.

57. Id.

58. Id. at 5.

59. There are limited situations where the FTC has ordered the sharing of data, such as the sharing of Risk Evaluation and Mitigation Strategies in the pharmaceutical space. Requirements in this space facilitate compliance with mandates created by Congress in the Hatch-Waxman Act, rather than attempting to regulate a free market. Fed. Trade Comm'n, Brief as Amicus Curiae, Mylan Pharmaceuticals, Inc. v. Celgene Corp., No. 2:14-cv-2094-ES (D.N.J. June 17, 2014); Fed. Trade Comm'n, Brief as Amicus Curiae, Actelion Pharmaceuticals Ltd. v. Apotex Inc., No. 1:12-cv-05473-NLH-AMD (D.N.J. Mar. 11, 2013).
