Intellectual Property Law
Recap of the 48th Annual IP Institute
Three law student reporters, Erika Buenrostro, Mirna Champ, and Jenean Docter, were invited to the institute to report on and summarize the event.
U.S. Privacy Law Update and Overview
By Student Reporter: Erika Buenrostro
Speakers: Hanifa Baporia (Oracle) and David Navetta (Troutman Pepper)
Hanifa Baporia, Managing Counsel at Oracle, and David Navetta, Partner at Troutman Pepper, provided an overview of the evolving landscape of U.S. privacy laws. Their discussion highlighted the shift in privacy enforcement from federal to state regulators, with significant developments in data broker regulations, AI-driven privacy concerns, and litigation trends such as CIPA and geolocation data lawsuits. As privacy laws continue to evolve, businesses must stay informed about federal, state, and local regulations that impact data collection, monetization, and compliance strategies.
The Expanding Scope of Privacy Regulations
One of the most pressing themes discussed was the fluid nature of data regulation. Privacy laws in the U.S. are increasingly fragmented, with rules emerging from various levels of government, including federal, state, and even municipal bodies. For example, New York City has taken the lead in enacting AI-related regulations, signaling a trend toward localized data governance.
A bipartisan approach to enforcement has also emerged, with states like California and Texas taking proactive roles in regulating data privacy. Class action suits related to consumer data rights have become more prevalent, and there is a notable shift in enforcement responsibilities from federal agencies to state attorneys general (AGs). The Texas AG’s recent lawsuit against Allstate for undisclosed geolocation tracking exemplifies this shift.
CCPA 2025 Updates and Data Broker Regulations
California continues to lead in privacy regulation with its updates to the California Consumer Privacy Act (CCPA). The definition of “data broker” has been expanded, and businesses now face stricter compliance requirements. Key updates include:
- A broader interpretation of “direct relationship” and “selling” of consumer data.
- Mandatory annual registration for data brokers, effective January 2024.
- A deletion mechanism to be established by the California Privacy Protection Agency (CPPA) by January 2026.
- Data brokers required to begin processing deletion requests by August 2026.
- Independent audits of data brokers to commence in January 2028.
Comprehensive U.S. State Privacy Laws
As of 2024, 20 states have enacted comprehensive privacy laws, with seven states (New Jersey, New Hampshire, Kentucky, Nebraska, Maryland, Minnesota, and Rhode Island) passing such laws this year alone. These laws create broad obligations for covered entities, moving beyond sector-specific regulations. Notably, Maryland’s Online Data Privacy Act, set to take effect in late 2025, imposes some of the strictest data minimization standards yet, including:
- Prohibitions on the collection, sharing, or processing of sensitive data unless “strictly necessary.”
- A ban on the sale of sensitive data.
- No opt-in or opt-out consumer choice that can override these data-use restrictions.
- Higher standards to prevent AI bias and discrimination in automated decision-making.
Emerging Privacy Litigation Trends
Litigation risks for businesses are growing, particularly concerning consumer data tracking. The California Invasion of Privacy Act (CIPA) has led to an explosion of lawsuits targeting businesses that use session replay technology to track online user activity. Courts are broadening the definition of “trap and trace” surveillance, equating certain metadata collection practices with traditional phone wiretapping.
A significant shift toward metadata-focused litigation is underway, with claims arguing that website analytics and tracking tools violate privacy rights. Recent CIPA lawsuits have been dismissed due to lack of standing, but businesses should still be cautious as courts continue refining their stance on these issues.
Additionally, AGs are leveraging investigative tools to audit companies’ tracking practices. Some businesses have faced legal action for misrepresenting their cookie policies while continuing to collect user data. These “gotcha” moments underscore the importance of clear and truthful consumer disclosures.
Sector-Specific Privacy Concerns
- Healthcare and Reproductive Data Protection
In the wake of the Dobbs decision, there is increased scrutiny over geolocation data tracking that may reveal individuals’ visits to reproductive healthcare providers. Dobbs v. Jackson Women’s Health Org., 597 U.S. 215 (2022).
- Children’s Privacy
The FTC has updated the Children’s Online Privacy Protection Rule (COPPA), while states push forward with age-appropriate design laws. However, many of these laws face First Amendment challenges, leaving their future uncertain.
- AI and Data Monetization
The intersection of AI and privacy regulation is a growing area of concern. AI systems rely heavily on data for training, but businesses must ensure compliance with privacy laws governing notice, consent, and data ownership.
Upcoming Regulations and Enforcement Actions
- Colorado AI Act (effective February 1, 2026): Requires AI developers and users to document their due diligence and ensure that AI decision-making processes are non-discriminatory.
- DOJ Rule (April 2025): Implements restrictions on bulk transfers of sensitive personal data, aligning with international regulations that seek to impose stricter controls on cross-border data flows.
- Washington’s My Health My Data Act: Grants consumers greater control over their health-related data and has already led to legal action against Amazon.
- New York’s Proposed Privacy Law: While establishing strict privacy protections, it notably does not include a private right of action.
Conclusion
As privacy laws continue to evolve, businesses must remain proactive in compliance efforts. With state attorneys general ramping up enforcement actions, new state laws imposing stricter obligations, and increasing litigation risks surrounding data tracking and AI, companies must adapt to a rapidly changing regulatory environment. Implementing robust privacy policies, staying informed on emerging legislation, and leveraging privacy-enhancing technologies (PETs) will be essential in navigating this complex landscape.
Patent Litigation Venue Considerations
By Student Reporter: Erika Buenrostro
Speakers: Oleg Elkhunovich (Susman Godfrey) and Ellisen Turner (Kirkland & Ellis)
Patent litigation requires careful venue selection to ensure that a case proceeds efficiently and strategically. Oleg Elkhunovich and Ellisen Turner discussed the critical factors in determining where to file, including venue rules, judicial tendencies, and potential risks of case transfers. The main takeaway from their discussion was that venue strategy has become increasingly important following the TC Heartland decision, leading to shifts in filing trends away from Texas and a growing reliance on the International Trade Commission (ITC) and Unified Patent Court (UPC) for more efficient enforcement. TC Heartland LLC v. Kraft Foods Grp. Brands LLC, 581 U.S. 258 (2017).
Where to File: Key Considerations
Selecting the right venue involves multiple factors, including ensuring that the venue is proper to avoid transfer, evaluating the time to trial, and assessing whether local rules are favorable to patent holders. Additionally, a defendant’s connections to the district can impact whether a case remains in a chosen venue or is transferred elsewhere. Given these complexities, a strong venue strategy starts with identifying the desired outcome and working backward to select the most advantageous jurisdiction.
Key U.S. Venue Rules
Under 28 U.S.C. § 1400(b), patent cases can only be filed where the defendant resides or has a regular and established place of business. The TC Heartland (2017) decision reinforced that, for domestic companies, residence is defined as the state of incorporation, limiting plaintiffs’ ability to forum shop. The In re Cray decision further clarified what constitutes a “regular and established place of business” through a three-factor test, emphasizing that merely doing business in a district does not automatically establish proper venue. In re Cray Inc., 871 F.3d 1355 (Fed. Cir. 2017).
Since TC Heartland, venue trends have shifted significantly. There has been a decrease in filings in Texas, while California has seen an uptick in cases. Delaware initially experienced a surge in patent litigation but has since declined in perceived patent-friendliness. The once-dominant patent docket in the Western District of Texas under Judge Albright has also seen changes, influencing where patent owners file their cases.
Transferring Cases: Strategic Considerations
Motions to transfer cases should be carefully planned, as venue changes can significantly impact litigation strategy. Understanding the judge’s approach and the risks associated with a particular venue is essential. Additionally, maintaining credibility is critical—courts are less likely to grant subsequent transfer requests if a party is perceived as inconsistent or disingenuous in its venue arguments.
International Trade Commission (ITC) Considerations
The ITC provides a faster resolution for patent disputes compared to district courts and has jurisdiction over importation-related patent infringement issues. However, unlike district courts, the ITC does not award monetary damages—its primary remedy is an exclusion order barring the importation of infringing products. This makes the ITC an attractive venue for competitor disputes where market flooding is a concern. Many ITC cases proceed alongside district court litigation, with plaintiffs first seeking an ITC import ban before pursuing damages in district court.
Unified Patent Court (UPC) in Europe
Launched in June 2023, the UPC serves 18 EU member states; notable non-participants include Spain and Poland, and non-EU countries fall outside the system entirely. The UPC offers centralized enforcement, allowing patentees to enforce their rights across multiple jurisdictions simultaneously. However, a key trade-off is that the UPC can also revoke a patent across all participating countries, posing a greater risk than national courts.
When choosing between the UPC and national courts, strategic considerations come into play. Germany is a popular choice due to its fast processes and injunction-friendly rulings. The pharmaceutical industry frequently uses the UPC, leveraging bifurcation to delay invalidity rulings while securing preliminary injunctions against competitors.
Key Takeaways
Venue selection is a critical aspect of patent litigation, and careful consideration must be given to speed, judicial tendencies, and transfer risks. The ITC is a strong option for blocking imports but does not award damages, making it ideal for competitor disputes. The UPC provides an efficient way to enforce patents across multiple EU countries, but its centralized revocation risk requires strategic opt-in decisions. Ultimately, patent litigation strategy should align with business goals, balancing enforcement speed, legal risks, and the potential for invalidation.
Combating the Gray Market: Trademark Enforcement and Supply Chain Control
By Student Reporter: Erika Buenrostro
Speakers: Deborah Greaves (Offit Kurman) and Morgan Smith (Foran Glennon Palandech Ponzi & Rudloff)
Deborah Greaves and Morgan Smith discussed the key legal and strategic measures for combating gray market goods, emphasizing trademark enforcement, supply chain control, and the significance of proving material differences to strengthen brand protection.
Every year, businesses lose billions due to gray market goods—authentic products imported into the U.S. without the trademark owner’s consent. Unlike counterfeit goods, these products are genuine but sold through unauthorized channels, creating confusion for consumers and undercutting authorized distributors. The legal battle against gray market goods hinges on proving material differences between authorized and unauthorized products, with enforcement strategies focusing on supply chain transparency and trademark protections.
Legal Framework for Combating Gray Market Goods
Companies can combat gray market imports through key legal provisions and proactive enforcement measures. Some of the major considerations include:
- Tariff Act (19 U.S.C. § 1526(a)) – Allows U.S. Customs to stop unauthorized imports but only if the goods differ materially from those intended for U.S. sale.
- Lanham Act (15 U.S.C. § 1124 & § 1114(1)(a)) – Governs trademark infringement by requiring proof that gray market goods are materially different from authorized versions in a way that affects consumer perception.
- First Sale Doctrine – Provides a defense to trademark infringement claims unless the goods differ materially or interfere with the trademark owner’s quality control.
Material Differences: The Key to Enforcement
To block gray market goods, trademark owners must establish material differences—physical or non-physical variations that impact consumer expectations. Courts have found material differences in cases involving:
- Physical Variations – Changes in product quality, ingredients, or packaging (e.g., imported batteries lacking safety warnings or electronics missing serial numbers).
- Non-Physical Differences – Lack of warranties, customer service, or compliance with U.S. regulations (e.g., warranty exclusions in Therabody, Inc. v. Dialectic Distribution LLC, No. 23CV21995 (EP) (LDW), 2024 WL 3355308 (D.N.J. July 10, 2024)).
- Quality Control – If unauthorized sales undermine a brand’s ability to control product quality, courts may reject First Sale Doctrine defenses. Otter Products LLC v. Triplenet Pricing Inc, 572 F. Supp. 3d 1066 (D. Colo. 2021).
Recent cases like Energizer Brands LLC v. My Battery Supplier have reinforced that failing to include necessary safety information or warranty disclosures can establish material differences, strengthening claims under the Lanham Act. Energizer Brands, LLC v. My Battery Supplier, LLC, 529 F. Supp. 3d 57 (E.D.N.Y. 2021).
Proactive Strategies: Controlling the Supply Chain
Legal enforcement is just one tool — companies should also take proactive steps to limit gray market infiltration. Key strategies include:
- Supply Chain Control – Implement tracking systems (e.g., blockchain authentication) to monitor product flow and identify unauthorized resellers.
- Distributor Agreements – Enforce strict distribution terms, audit sales, and include clear prohibitions against unauthorized resale.
- Selective Distribution – Limit the number of authorized sellers and require adherence to quality control programs.
- Material Differences – Strengthen consumer expectations through warranties, loyalty programs, and exclusive benefits tied to authorized purchases.
- Marketplace Enforcement – File takedown notices on platforms like Amazon and eBay to remove unauthorized listings.
- Legal Action – Send cease-and-desist letters, conduct test purchases, and file trademark infringement claims when necessary.
Conclusion
Gray market goods present ongoing challenges for trademark owners. However, by implementing a comprehensive enforcement strategy that combines legal action with supply chain transparency, businesses can safeguard their brand integrity. Strengthening quality control measures and emphasizing material differences will further reinforce consumer trust while mitigating the risks associated with unauthorized imports.
Navigating AI Regulations and Enforcement: Key Considerations for Compliance
By Student Reporter: Erika Buenrostro
Speaker: Steve Millendorf (Foley & Lardner)
Steve Millendorf, Partner at Foley & Lardner LLP, presented key considerations for navigating AI regulations and enforcement, emphasizing critical factors such as upcoming enforcement timelines, the distinction between AI providers and deployers, risk classification of AI systems, and essential compliance obligations for businesses operating in the AI space. As artificial intelligence (AI) technology continues to evolve, so too do the regulatory frameworks governing its development and deployment. With important enforcement timelines on the horizon and significant compliance requirements outlined in emerging legislation, businesses must remain vigilant and informed to effectively navigate the complexities of AI governance and ensure compliance with evolving standards.
Upcoming Enforcement Timelines
Organizations operating in the AI space should take note of key regulatory deadlines. On August 1, 2024, the EU AI Act entered into force, with compliance requirements phasing in thereafter. Additionally, on May 2, 2025, a third draft of the Codes of Practice for General-Purpose AI is expected, setting further standards for AI compliance.
Defining AI Systems
To determine whether a system qualifies as AI, it must meet specific criteria. The system must be machine-based, operate with some level of autonomy, and have the ability to infer outputs from inputs based on explicit or implicit objectives. Its outputs can be decisions that influence physical or virtual environments, and it may exhibit adaptiveness after deployment, meaning it can learn from experience.
Common AI technologies include deep learning, reinforcement learning, machine learning, natural language processing, and neural networks. These technologies are designed to improve and refine their outputs over time.
Systems That Are Not Considered AI
Certain technologies do not qualify as AI under regulatory definitions. Traditional software systems and basic spreadsheets do not meet the necessary criteria, though they may fall under the California Consumer Privacy Act’s (CCPA) Automated Decision-Making Technology (ADMT) requirements. Robotic process automation (RPA) that lacks learning capability is also unlikely to be classified as AI.
Understanding AI Operators: Providers vs. Deployers
Regulations differentiate between two main types of AI operators. Providers are those who develop and market AI systems, ensuring they meet regulatory standards before making them available. Deployers, on the other hand, utilize AI systems developed by providers and are responsible for their implementation and oversight. General-purpose AI models, such as the models underlying ChatGPT, have not been trained for specific tasks and may require additional scrutiny to determine their compliance obligations.
Jurisdictional Scope of AI Regulations
Companies operating in the European Union (EU) are subject to AI regulations. However, non-EU entities may also fall under regulation if their AI targets EU users or if their AI-generated output is used within the EU. This extraterritorial scope means that companies developing AI outside the EU must be aware of how their products are used and whether they fall under regulatory oversight.
Risk Classification of AI Systems
AI systems are classified by risk levels, which impact regulatory obligations. Prohibited AI systems include those that use subliminal techniques to influence consumer behavior, predictive policing based solely on personal characteristics, and untargeted scraping of facial images for AI training. Emotion recognition systems in workplaces and schools, as well as biometric categorization for protected characteristics such as health data, are also banned. However, exceptions exist for law enforcement in cases involving abduction victims, imminent threats, and other specific scenarios.
High-risk AI systems are defined based on their purpose rather than the technology itself. These systems require rigorous assessments and regulatory approvals. Examples of high-risk AI applications include biometric authentication, employment-related AI tools, and law enforcement applications. Certain exemptions apply to AI systems performing narrow procedural tasks, supporting human decision-making, or detecting patterns within datasets.
Transparency and Disclosure Requirements
Providers must disclose when AI interacts with people, generates synthetic media, or creates deepfake content. Deployers must inform users when AI-generated content is used for public interest purposes. Transparency in AI deployment is crucial for maintaining consumer trust and regulatory compliance.
Compliance Obligations for High-Risk AI Systems
Providers must establish AI literacy programs for employees to ensure an understanding of AI risks and limitations. High-risk AI systems must undergo extensive testing and reporting, particularly if they are covered by other regulatory frameworks, such as medical devices, toys, or machinery. Risk management protocols require AI system providers to identify and mitigate foreseeable risks associated with their technology.
Data Governance and Compliance
AI providers must implement data governance practices to ensure transparency in data collection and processing. Special categories of personal data, such as race, religion, and sexual orientation, may be used for bias detection but require additional safeguards. Companies must prove that synthetic or anonymized data is insufficient, enforce strict access restrictions, and ensure that third-party processors do not handle sensitive data.
Technical Documentation and Record-Keeping
AI systems must maintain detailed records regarding their development, functionality, and risk management measures. High-risk AI systems must log and monitor usage, with at least six months of data retention required. Comprehensive documentation ensures accountability and compliance with regulatory expectations.
Human Oversight and Cybersecurity
Human oversight must be integrated into high-risk AI systems to minimize potential risks. AI systems must also be resilient to cybersecurity threats, including model poisoning attacks that corrupt AI training data and hacking attempts that manipulate AI behavior. Security measures must be in place to safeguard AI models from exploitation.
Compliance Responsibilities by Role
Providers are responsible for developing AI systems and ensuring their compliance with regulations. Deployers must train personnel, report incidents, and disclose AI use to relevant stakeholders. Importers must conduct due diligence to verify compliance, while distributors must ensure that AI products meet regulatory requirements before placing them on the market.
Penalties for Non-Compliance
Failure to comply with AI regulations can result in severe penalties. Fines can reach up to EUR 35 million or 7% of a company’s global turnover, whichever is higher. Additional penalties apply for providing misleading information to regulators, with fines reaching EUR 7.5 million or 1% of global turnover. General-purpose AI providers face separate fines for intentional or negligent violations. Importantly, these fines are based on revenue rather than profit, emphasizing the financial risks of non-compliance.
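To make the “whichever is higher” arithmetic concrete, the short sketch below (in Python, using an entirely hypothetical company; the tier figures are those described above) computes the applicable fine ceiling under the two tiers.

```python
def fine_ceiling(global_turnover_eur: float, violation: str) -> float:
    """Maximum fine under the EU AI Act tiers described above.

    Prohibited-practice violations: the higher of EUR 35M or 7% of global
    annual turnover. Misleading information to regulators: the higher of
    EUR 7.5M or 1%. Caps are keyed to revenue (turnover), not profit.
    """
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "misleading_information": (7_500_000, 0.01),
    }
    flat_cap, pct_of_turnover = tiers[violation]
    # "Whichever is higher": compare the flat cap to the turnover share.
    return max(flat_cap, pct_of_turnover * global_turnover_eur)


# Hypothetical company with EUR 2 billion in global annual turnover:
print(fine_ceiling(2_000_000_000, "prohibited_practice"))     # 140000000.0
print(fine_ceiling(2_000_000_000, "misleading_information"))  # 20000000.0
```

Because the caps scale with turnover, a large company’s exposure can far exceed the flat EUR figures, which is why the speakers stressed that the fines are revenue-based.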
Final Compliance Steps
To ensure adherence to AI regulations, businesses must verify all compliance obligations and meet both the letter and spirit of the AI Act. Companies should monitor emerging jurisprudence to stay updated on evolving legal interpretations. As AI regulations continue to develop, organizations must take a proactive approach to compliance, mitigating risk and ensuring responsible AI deployment. By aligning with regulatory frameworks early, businesses can enhance transparency and accountability while fostering innovation in AI technology.
Open Source Licensing in the Age of AI
By Student Reporter: Jenean Docter
Speakers: Heather Meeker and Kate Downing
Heather Meeker and Kate Downing discussed open source licensing in the age of Artificial Intelligence (AI). Downing describes herself as “out-house counsel” and founded a law practice providing legal services to technology companies, tailored to their growth stage. Meeker works on open source licensing and technology transactions, advising clients on IP matters.
Because machine learning models are not human-written instructions—given that the models are trained rather than written—there generally exists no source code. So, when you try to “glue this idea onto AI,” Meeker noted, a non-trivial question arises: what is open source? When trying to apply the goals of the open source movement to AI—transparency and replicability—there are many different considerations.
What Does ‘Open Source’ Mean in Terms of AI?
The Open Source Initiative (OSI) is one of several organizations trying to define what “open source” means in the world of AI. Principally, the way OSI defines it, an open source AI licensee would have the right to use, study, modify, and share—a similar set of rights to open source software.
In other words, a model would be considered “open source” if its creator provided more or less everything that you need to replicate it, with the exception of certain training data, noted Downing.
Possible Regulatory Challenges
However, the concept of “open source” is incompatible with “field of use” restrictions. For instance, providing open access to a model while permitting some uses and prohibiting others would not qualify. This raises a significant question around legal liability: if you allow people to use the model for anything and everything, it’s possible you could be responsible for users violating any number of regulations.
One could easily imagine claims related to speech regulations, tort liability, product liability, IP claims, or even criminal liability. Significantly, the developers who want to release models do not necessarily want to comply with these many regulatory frameworks.
So, when clients want to release an AI model and ask about placing restrictions on that model—which would make it antithetical to open source—the impetus usually revolves around controlling liability, rather than behavior.
With most software, customers sign use contracts allowing developers to disclaim liability for any number of things. But with AI, there are many affected people with whom the developer has no contract and therefore no way to limit liability.
“To be a tech lawyer is a lot more like being a physical product lawyer where you care about real life risk,” Downing noted. She suggested we look at the open source definition skeptically as the open source software definition is centered on the engineer. In trying to apply it to AI, it is “probably correct” that in some cases, we should be centering this definition on the subjects instead.
Downing further observed that under the EU AI Act, open source AI has to be “something that can’t be monetized.”
In the case of AI, it may not be possible to meet both the definition of open source and the needs of regulatory law, commerce, and private business uses, according to Meeker.
The Advantages and Drawbacks of the Open Source AI Approach
I. Standardization
First, Meeker noted that a huge hidden benefit of open source is its high level of standardization. So, a person can quickly learn a great deal about the licensing terms of open source software. Given that open source is a frictionless license, she stated that it “can be extremely powerful in order to make things available to people.”
Developers also broadly understand what “open source” means. The binary definition of “open source or not” allows them to rely on it. Indeed, part of the branding around “open source” in recent decades centered on its existence as a simple and reliable concept.
Yet, Downing added the caveat that because AI is moving so quickly, it can be hard to standardize anything at all. It may be better, she argued, to create an open source AI matrix or grades, instead of a definition. This might be more useful for people to make decisions based on, rather than a binary definition.
II. Regulatory Complications – Don’t be Evil?
Though developers joke about the “don’t be evil” license in the open source software world, it’s a real question in the changing world of AI.
Most licenses include restrictions that are very vague, painting with broad terms like “discriminate,” whose meaning varies depending on which jurisdiction’s law is being referenced. Stating a restriction for the whole world certainly poses a challenge.
As Europe has begun working on AI regulations, AI researchers are trying not to be the people that participated in the Manhattan Project, Meeker noted. But enforcement poses another challenge: if someone uses a license for a prohibited—or even criminal—use, suing that individual for infringement may not curb that behavior. It’s pretty cold comfort to sue for copyright infringement if someone is human trafficking, Meeker elaborated.
III. Unclear Access to Copyright Protections
Machine learning models on their own may not enjoy much—or any—copyright protection. After all, humans do not write them. So, beginning logically from the premise that the models are not copyrightable, the entire licensing model fails.
Thus, Meeker described a tension in open source licensing: it is not clear “what it is that you hang this license on to begin with.” In the past, when developers tried to apply open source licensing to data or hardware, they encountered a fundamental issue about what it is that they are trying to brand. Sometimes they solve this by framing the products as contracts. But in many countries these contracts are not enforceable.
Having a contract instead of a license also disincentivizes suing. As an example, if Party B is using Party A’s model for a prohibited use—and Party A wants to sue for breach of contract in the US—Party A needs to prove personal damages. But if that prohibited use is something awful or criminal (say, human trafficking), it’s likely that Party A would be able to show little to no personal damages.
Meeker further noted that the free software movement redefined what a derivative work was. Today, the software community has advanced ideas about what a derivative work is that many attorneys do not agree with. This is even more confusing when applied to machine learning.
As a practical takeaway, Downing stated, people should not be spending a lot of time writing these licenses. After all, by the time you discover a problem, the developer will have released a new model. Part of the success of open source licensing was that it had Linux; but for machine learning, the landscape is changing too fast.
Common Licenses
Speakers noted that there currently is not significant standardization in machine learning. In particular, the RAIL license morphed into nearly one hundred different licenses, with an appendix of relatively vague prohibitions. As lawyers, we should be concerned that these licenses are extremely vague, suggested Meeker.
Downing noted that some may seek to emulate the Llama 2 community license, but added the caveat that Meta’s strategy here was different from an ordinary company making an AI model. Instead of carefully considering how to monetize it, Meta’s path was to release a model, allow users to improve it, and then mine the data to further fine-tune it.
In the open source space, normally every person receives the license directly from the author of the code—so, there is privity with every user.
Further, Meeker noted there are at least thirty-five pending lawsuits on whether GenAI training constitutes infringement. But, if you train on material under an open source license, Meeker stated, the training cannot be infringement—it’s what you do after that matters.
On the Future of Open Source Software
Meeker cast doubt on the proposition that AI could make coding obsolete. The existence of machine learning models, she said, doesn’t mean there is no longer software. Rather, think of the model like an engine: there must be many parts around it.
But, she said, there is a need for transparency around AI tools which should drive the adoption of open source—not only in the model itself—but in the tooling that you use to build the model. So, software is not going to simply “go away,” but what coding tools can do is absolutely incredible.
Downing, however, stipulated that copyrights and patents in software become less valuable when you can simply ask an AI to make something. Curation, she suggested, may become the big thing.
Comparative Overview of IP Protection through Patents and Trade Secrets
By Student Reporter: Jenean Docter
Speakers: Bridget Smith and Chris Buntel
Bridget Smith and Chris Buntel led a thoughtful discussion comparing two avenues of IP protection: patents and trade secrets. Smith leads IP at Relativity. Buntel’s career focuses on patents and trademarks.
Both Smith and Buntel emphasized that IP attorneys miss opportunities to fully utilize trade secrets. Smith emphasized that trade secrets are not just the information omitted from a patent application—they require additional steps. Trade secret protection can step in where information is already subject to regulations or other necessary secrecy—especially where employee mobility and discretion may be top-of-mind in this startup economy.
Buntel noted that trade secrets started garnering more attention about five years ago. In the old days, he said, you needed a patent to get meaningful damages, but recently trade secret awards have at times exceeded patent damages.
Trade Secrets Provide a Different Type of Protection
Buntel noted that the combination of patents with trade secrets can spark a “nervous, anaphylactic” reaction as many assume they are like oil and water. But with creativity, they do not have to be mutually exclusive.
There is a three-part test for trade secrets. First, the information is secret. Second, it is valuable because it is secret. And, third, the owner must have taken reasonable measures to protect that secrecy.
However, there exists a difference between trade secrets and confidential information, noted Smith. All trade secrets are confidential information, but not all confidential information rises to the level of a trade secret.
There are always trade secrets if you know where to look, Buntel emphasized. For instance, valuable trade secrets might include information competitors would want to know—but that is not eligible for patent protection—such as procedures for managing customer feedback in a regulated industry.
Quality, marketing, and sales department information also comprise many trade secrets. Unpublished patent applications less than eighteen months old also are trade secrets, unless voluntarily disclosed.
Trade Secrets: Considerations
Smith suggested avoiding a patent-first perspective. She recommended beginning with the default presumption that there are almost always trade secrets around the work to be protected. Then, ask what about the information might lead to it being at risk. Only then should patents be considered—and, careful attention should be paid to the links between patents and trade secrets.
Buntel further suggested a variety of factors to consider with this inquiry.
- Is the technology patentable? And, is this the country you want to patent it in?
- Is the technology easy or inexpensive to reverse engineer? Buntel noted that trade secrets make for an interesting story where there is always a bad person on one side of the relationship.
- Patents are slow—so, if technology is changing, your technology might not be covered once it evolves.
- Will your technology still be around through the lifetime of the patent?
- Do you have to make a public disclosure due to government regulations or product approvals?
- Is the value of the technology greater than the patent expense?
- Do you have weak policies or procedures to protect trade secrets? Without the time and effort to protect a trade secret, you may not have a trade secret at all.
- Do you have weak—or nonexistent—employment contracts and Non-Disclosure Agreements with your employees and partners? If you cannot show it in writing—but later allege that you had a trade secret—you have a difficult story to sell.
- Do you have high employee turnover?
- Do you want to spend your money building confidentiality principles (supporting a trade secret) or on aggressive patenting? If it’s a commercial product, you can file for a patent just for marketing value.
On the whole, trade secrets must be proactively managed in order to increase the likelihood of positive litigation outcomes. You have already won or lost five to ten years before the theft occurs, Buntel stated. Setting up policies and training in advance are the entry fee.
Trade Secrets: Practically
One to two decades ago, trade secrets did not need to be as clearly defined as they do today. Today, the trend is to show that you were diligent before there was even a problem.
Many smaller companies fail to realize that they have any trade secrets at all. Smith suggested a top-down approach of ‘what keeps you up at night’. First, ask Senior Directors where they would be most concerned with an employee leaving with sensitive information. Then, talk to those employees to understand the inventory of who knows what, where that information is stored, and what the relevant developmental efforts are.
On the topic of cost, many mistakenly believe that trade secrets are free while patents are expensive. Though there are no government fees for trade secrets, companies that spend nothing to protect an ostensibly valuable asset will not likely be viewed as having taken reasonable measures. This avenue may be cheaper per asset than a back-and-forth with the USPTO. But trade secrets are not free—especially as the amount of litigation grows each year and the damages recoverable for trade secret theft become, in Buntel’s words, staggering.
Further, some believe that after filing the patent application, they are done. But what if an employee develops an improvement later on that is not included in that application? It may not be that the company intentionally withheld the important information—but rather, that the commercial implementation might be in that optimized version. There can be many ways of combining patents and trade secrets.
It is also more productive, according to Buntel, to think about trade secrets by categories instead of pleading standards. He noted that the standard requires reasonable measures rather than perfect measures, and there are many ways to start an analysis of what to protect as a trade secret. Hopefully any given company never has a problem—but, statistically, about half of companies fall victim to trade secret theft.
Trade Secret Pick List
The speakers suggested a trade secret pick list to include economic, legal, and technical considerations. Economic considerations might include business forecasts, market growth projections, or regional growth projections.
Technical considerations may include reverse engineering efforts, product stability testing data, customized software and computer programs, planned concepts, and other issues.
Lastly, legal considerations might include unpublished patent applications, search reports, competitive intelligence and analysis on competing intellectual property, and privileged documents.
In the end, as Buntel noted, the value for clients is increasingly in trade secrets.
Dealing with Jerks and Bullies Ethically
By Student Reporter: Jenean Docter
Speaker: Michelle Galloway
Michelle Galloway led a thoughtful discussion on ethical strategies to mitigate workplace bullying. Galloway is Of Counsel at Cooley LLP, focusing on patents and strategy, electronically stored information and compliance, and risk management.
The concept of zealous advocacy—which can lead to bullying—is interesting, according to Galloway. In actuality, the requirement to be a zealous advocate appears nowhere in any of the rules. It’s a complete fiction, she noted.
Indeed, the only true requirement is to be competent. Under the ABA rule on competency, attorneys must have the mental, emotional, and physical ability necessary to regulate their behavior, comply with rules, and represent their client. The ABA rules also require reasonable diligence and promptness.
Galloway suggested a two-part visualization to deal with bullying: first, imagine spraying oneself with Teflon so nothing sticks; second, choose not to retaliate. At every point, attorneys should ask themselves whether the conduct they contemplate would serve their client. For instance, name calling probably would not lead to a quicker settlement.
Further, name calling and verbal aggression have become more prevalent in today’s online world. People hide behind email and feel more comfortable engaging in antisocial behavior. And prior to the pandemic, more people belonged to bar organizations where they might run into one another in real life.
Sometimes, this can even result in physical action—an important note on the heels of a new ABA opinion stating that if a lawyer is a victim of a crime, they can share information about representation.
When someone is being bullied, about forty percent of people do nothing. Worse, nearly seven percent join the bully. The remainder most commonly talk to the victim to acknowledge what they saw, which is the best thing a witness can do.
Research on bullying within the legal profession is new. In 2019, Illinois released a twelve-month bullying survey including over six thousand respondents. The results were broken down by gender, race, and other categories—though one limitation was that the survey failed to ask about religious discrimination.
The youngest lawyers surveyed reported feeling that they had been bullied at the greatest frequency. About twice as many women reported being bullied compared to men, and more Black, Hispanic, and Asian lawyers reported bullying compared to white lawyers. This statistically significant jump for underrepresented communities signifies that microaggressions can be quite subtle—though stereotypes can impact performance.
Bullying carries real consequences. If a person thinks they are being bullied, their amygdala prompts a cortisol spike, causing blood pressure to increase and negative health effects to follow. Interestingly, people who witness disrespect suffer very similar health consequences even if they were not the target.
Bullying also affects work performance. Attorneys report decreasing their work effort when they are being bullied. Across professions, there are higher rates of safety accidents and customer complaints when bullying is prevalent. Yet, only about twenty percent of lawyers who are bullied report the bullying at work.
California recently moved to align its rules more closely with the ABA ethics rules. In the future, it’s possible that judges may have greater ethical requirements in reporting bullying. Today, judges must report sanctions against attorneys of $1,000 or more—but judges who witness bullying in the courtroom do not always sanction the perpetrating attorneys.
Another area getting a lot of attention is Rule 8.4. The ABA amended Model Rule 8.4 in 2016 to prohibit attorneys from engaging in discrimination or harassment based on a protected class, and California’s counterpart rule identifies more protected classes than the ABA rule. So, showing that a target of bullying was in a protected class can lead to the bully being disciplined by the state bar. And the rule is not even limited to legal work—behavior outside of legal work can also be examined. Today, this might include pointing to activity on social media, including views on politics or DEI.
As for practical advice, Galloway recommended a few tips. First, attorneys experiencing or witnessing bullying should document everything in writing. They should not try to characterize the behavior, just clearly state the facts; judges might interpret characterizations as a mere disagreement between attorneys. Ensuring that mentees and support staff are following rules and regulations is also critical.
With the momentum from more stringent rules, it’s possible that judges will become more scrutinizing of bullying. Galloway noted that lack of accountability is one of the greatest reasons bullying continues to occur.
Nuts and Bolts: Calculating Damages in IP Cases and Valuing IP
By Student Reporter: Jenean Docter
Speakers: Nischa Mody and Doug Bania
Nischa Mody, PhD and Doug Bania provided an engaging discussion of practical methods to calculate damages in IP cases. Mody is a Managing Director at Secretariat, where she specializes in economic valuation on IP, antitrust, and business valuation matters. Bania is the founding principal of IP consulting firm Nevium, where he specializes in IP valuation and damages calculations.
Big-Picture: Valuation Techniques
Mody and Bania noted three general IP valuation techniques: (1) income based; (2) market based; and (3) cost based. The income based method examines expected outcomes and cash flow.
The market based method estimates the value of IP on the market value of similar technology. Mody noted that this method might feel familiar to anyone who has seen the value of a house estimated based on similar homes in the same neighborhood.
Lastly, the cost based method looks at the cost of creating the technology entirely anew or purchasing it. This method is usually the least reliable. For instance, the purchase price would be greater than a license price, likely conveying more intellectual property rights than necessary. Further, rules of thumb once used to conduct business transactions—though no longer welcome in the litigation world—are still used in the valuation world.
The speakers then provided a compelling review of recent cases considering the use of sale prices in valuation, which follows below. Using these purchase prices in a hypothetical negotiation is tricky. Though the purchase price may be greater than a license, it also conveys a greater ability to monetize and use the intellectual property. Yet, there exists a need to determine how to value just the patents in the suit—and not the larger bundle.
Spectralytics, Inc. v. Cordis Corp., 649 F.3d 1336 (Fed. Cir. 2011)
In Spectralytics, the Court of Appeals for the Federal Circuit considered whether a damages ruling was excessive. In doing so, the court allowed a sale price to be used in determining the outcomes of a hypothetical negotiation—but said that all of the back and forth could not be used as background on why that value should be used as a benchmark.
Mody characterized Spectralytics as interesting in that it looked at the back-and-forth to examine what the parties were thinking, but the court only really considered the purchase price in seeking to determine a reasonable royalty.
Headwater Research LLC v. Samsung Electronics America, Inc., No. 2:22-cv-00422 (E.D. Tex.)
Headwater Research is a patent infringement action against Samsung arising from its use of certain wireless communications technology. Here, the court did not permit a purchase price to be used in valuation because there existed a letter of intent and an executed agreement.
Rex Medical L.P. v. Intuitive Surgical, Inc., No. 1:19-cv-00005 (D. Del. 2023)
Even though the main focus of the purchase in Rex Medical was the patents, Mody stressed that it could not be assumed that this price solely included the patents. You have to do some sort of apportionment, she noted, emphasizing that getting down to the final value is one of the most important issues in IP.
My First Shades, Inc. et al. v. Solarna LLC
My First Shades arose between two former partners in the sunglasses industry. In this case, Bania rebutted the economic damages expert by utilizing Google Historic Search. Searching for “unbreakable sunglasses” during the time period at hand, Bania found that the other side did not dominate the Google search results as alleged. This suggested that there should be no economic damages, because the other side never actually dominated the search results.
Shifting from the nuts and bolts of IP valuation to tools, Bania noted that infringement doesn’t mean there’s actually economic harm—even when someone was infringing, we need to determine whether that infringement caused economic harm.
Google Historic Search allows users to query for a phrase based on a specific time period—such as before or after alleged infringement—to determine how search results would have appeared to all users at any given time.
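There is no public API called “Google Historic Search,” but a rough analogue of this kind of dated query can be scripted. The sketch below is a minimal illustration assuming Google’s Custom Search JSON API and its date-range `sort` restriction; the API key, search engine ID, and date window are placeholders, and the results only approximate what a searcher would have seen at the time.

```python
import requests  # assumes the `requests` package is installed

API_KEY = "YOUR_API_KEY"      # placeholder credential
ENGINE_ID = "YOUR_ENGINE_ID"  # placeholder Programmable Search Engine ID

def dated_search(query: str, start: str, end: str, n: int = 3) -> list[str]:
    """Fetch the top-n results for `query`, restricted to documents dated
    between start and end (YYYYMMDD) -- a rough stand-in for reviewing what
    searchers would have encountered during a given period."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": API_KEY,
            "cx": ENGINE_ID,
            "q": query,
            "sort": f"date:r:{start}:{end}",  # date-range restrict
            "num": n,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]

# Hypothetical usage: top results for a disputed phrase during an
# alleged infringement window.
print(dated_search("unbreakable sunglasses", "20120101", "20141231"))
```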
Stone Creek v. Omni Leather
Bania represented the defendants, who manufactured furniture for an Arizona-based company before beginning similar work for an East Coast company. This case asked whether the defendant’s use of the mark caused economic harm; in other words, whether customers were buying the furniture because of the name. Here, Bania utilized Google Keyword Planner to show that neither causation nor damages were appropriate.
Every decision typically starts with a Google search these days, he noted. People don’t go window shopping anymore. Today, if a company chooses to advertise on Google, Google Keyword Planner will tell that company how many people search for the desired term each month and what the “pay-per-click” price will be.
Bania examined the handful of retail sources that were selling the Stone Creek furniture. Setting Google Keyword Planner to the zip code of each store, he looked to see how many people searched for each store within those neighborhoods. When he searched “Stone Creek” in those areas, “nothing was happening,” which suggested that the mark was not driving sales and, therefore, that the defendant’s use of that mark had not caused economic harm to the plaintiff.
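Keyword Planner itself is a point-and-click tool, but the comparison Bania describes is easy to sketch. The snippet below uses entirely hypothetical search-volume numbers (in practice they would come from Keyword Planner, pulled per store zip code) to show the shape of the analysis: if searches for the mark are negligible next to generic category terms in every market, the mark is unlikely to be driving sales.

```python
# Hypothetical monthly search volumes per store zip code (in practice,
# these figures would come from Google Keyword Planner).
volumes = {
    "85001": {"stone creek furniture": 0,  "leather furniture": 2400},
    "85251": {"stone creek furniture": 10, "leather furniture": 1900},
    "86301": {"stone creek furniture": 0,  "leather furniture": 880},
}

for zip_code, terms in volumes.items():
    brand = terms["stone creek furniture"]
    generic = terms["leather furniture"]
    share = brand / (brand + generic)  # brand's share of local search demand
    print(f"{zip_code}: brand share of searches = {share:.1%}")

# A negligible brand share across every market undercuts the claim that the
# mark itself -- rather than the product category -- was driving sales.
```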
Google Keyword Planner is a user-friendly tool that juries understand—and it works, said Bania.
Internet and Social Media Analytics on Public Perception of Johnny Depp
Examining the Public Perception of Johnny Depp with Google Trends and Q Scores
Bania summarized his work examining the change in the public perception of Johnny Depp following the Amber Heard op-ed accusing him of abuse.
Bania first utilized Google Trends, which allows users to see what searches are trending at a certain time. Here, Bania used Google Trends to compare several points in time. First, Bania considered the search activity related to Depp before Heard filed for a restraining order against him. Then, he compared this to Depp’s public perception after the restraining order but before Heard published the op-ed accusing him of violence. Lastly, Bania pulled data to compare these points to Depp’s public perception after the op-ed.
Bania searched for Depp with the Google Historic Search tool at each of these dates, analyzing the top three organic results. The top three results typically get fifty to seventy-five percent of clicks, so they are most likely what people are reading, Bania noted.
Prior to the restraining order, everything was just about Johnny Depp. But, after the order, the narrative shifted to “wife abuser” and the allegations contained in that order. And then, Bania explained, the results after the op-ed switched to “alcoholic,” “bad work ethic,” and “drug addict.” It appeared that the public perception did change.
But this was not the only tool that proved useful here—Bania also employed Q Scores. Q Scores is a company that surveys individuals on their thoughts about public figures: how well known they are, how well liked they are, and how disliked they are. Q score studies are conducted every two years, with the industry relying on them.
Bania ordered Depp’s scores for the same time periods. In 2016—before the restraining order—Depp’s positive Q score was 35, and his negative was -11. After the order, his positive score dropped to 31 and his negative to -16. After the op-ed, his positive score dropped to 29 and his negative to -15. In front of the jury, this constituted evidence that Depp’s public perception changed negatively following the op-ed.
Comparing the Timing of Twitter Activity to Alleged Defamatory Statements
Bania discussed his rebuttal of the other two experts in the case. Opposing experts referenced an orchestrated Twitter attack on Amber Heard. The first expert said the timing and frequency of the tweets was meant to harm Heard.
Before Twitter was sold, you could go in and qualify to get Twitter data to analyze trends. Now, said Bania, it’s different—you have to be very specific. Still, he noted that the number of mentions, hashtags, and negative points associated with Heard did seem consistent with a coordinated effort.
The first opposing expert identified four hashtags that were defamatory, negative to Heard, and consistent with manipulation—spiking when Heard was hitting milestones in her professional career.
These spikes in Twitter activity were not connected to the so-called “Waldman statements”—three statements made to the Daily Mail by Depp’s attorney Adam Waldman that Heard alleged to be defamatory. In fact, the first spike happened before any of the Waldman statements.
Without a connection to the Waldman statements, the opposing expert’s opinions were irrelevant. “This was the story I told before the jury,” noted Bania. Further, replicating the opposition’s data revealed that only two percent of the Twitter hashtags happened concurrently with the Waldman statements.
Bania returned to Google Historic Search to confirm that the “Waldman statements” did not actually cause the negative Twitter hashtags. There appeared no correlation between the alleged defamatory statements and any economic damage suffered by Heard.
Heard initially alleged that she was on the same trajectory as other “A-list” celebrities. But for the Waldman statements, her side argued, she would have been making comparable salaries. Though Heard did not bring this allegation to trial, Bania noted that Q scores could be used to show that she was not comparable to A-list celebrities. Her positive Q score was very low (9), with her negative Q score even worse (-28). Thus, using well-known celebrities to estimate a market salary for her, when she was not in the same universe, would not be appropriate as a reasonable benchmark for estimating lost profits.
When They See Us – Representing the DA in the Central Park 5 Case
Bania described the process of estimating economic damages suffered by the District Attorney who was the subject of When They See Us, a dramatized Netflix miniseries about the infamous “Central Park Five” case.
Here, the relevant damages were reputational damages and but-for damages including lost profits, wages, and future opportunities. For reputational damages, an expert can determine how many people viewed or read a defamatory statement to determine what type of reputational repair program is needed to educate people on the truth, Bania noted.
About fifty scenes in the series were alleged to be defamatory at the time, Bania stated. After submitting a report and conducting an economic damages analysis, the plaintiff’s side took the position that the entire series was defamatory.
The plaintiff had written twenty-four crime novels—including a New York Times bestseller. After the Netflix series, every publisher dropped her. It seemed clear that the proximate cause was the series, Bania concluded. However, given that she did not have the best reputation before the series, it was necessary to separate out the economic harm caused to her specifically by the series, and the judge required the identification of specific scenes as opposed to the series as a whole.
To locate defamatory scenes, Bania used a social media analytics tool called Brandwatch. This technology allows a user to look at themes underlying specific scenes: the user builds a query that sends out a “web crawler” to collect relevant articles and social media posts. This tool revealed sixteen posts directly related to one scene, and 579 related to another.
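Brandwatch’s query language is proprietary, but the counting step it automates can be sketched. Below is a minimal, self-contained illustration with hypothetical posts and scene keywords: each allegedly defamatory scene is represented by a keyword query, and collected posts are tallied against any scene whose terms they contain.

```python
# Hypothetical scene queries and collected posts; a real analysis would run
# boolean queries through a social-listening crawler such as Brandwatch.
scene_queries = {
    "scene_A": ["interrogation", "coerced"],
    "scene_B": ["prosecutor", "withheld evidence"],
}
posts = [
    "That interrogation scene was brutal -- clearly coerced confessions.",
    "The prosecutor withheld evidence? Unbelievable.",
    "Watching episode 2 tonight.",
]

# Tally each post against every scene whose keywords it mentions.
counts = {scene: 0 for scene in scene_queries}
for post in posts:
    text = post.lower()
    for scene, terms in scene_queries.items():
        if any(term in text for term in terms):
            counts[scene] += 1

print(counts)  # {'scene_A': 1, 'scene_B': 1}
```

Per-scene counts like these are what allowed specific scenes, rather than the series as a whole, to be tied to the alleged reputational harm.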
Though the case ultimately settled, Bania is of the opinion that five scenes could be directly connected to economic harm. Netflix ultimately donated $1 million to the Innocence Project, and there is now a disclaimer noting dramatization in the series.
AI (or Die) – The Future is Now
By Student Reporter: Mirna Champ
Speaker: Enrico Schaefer, Traverse Legal, PLC
Panelist Enrico Schaefer, founding attorney of Traverse Legal, PLC, discussed how artificial intelligence is reshaping the legal profession. He emphasized that AI is not a futuristic concept but a current necessity. Lawyers who adopt AI tools will outperform those who do not, as AI unlocks human potential, enhances creativity, and improves legal analysis.
Schaefer noted that law is one of the most language-intensive professions, making it especially susceptible to disruption by large language models like ChatGPT and Claude. He cautioned that employees are already using AI without supervision, creating risks related to privilege and confidentiality. He urged firms to establish clear policies and use vetted tools with proper security settings.
Drawing from personal experience, Schaefer explained how he integrates AI into daily legal practice. His firm uses AI to transcribe meetings, draft emails, generate briefs, and manage case timelines. Tools like Fireflies.AI, Microsoft Planner, and custom GPT assistants help streamline workflow and improve accuracy. He highlighted recent advances in deep reasoning AI models that allow for more thoughtful analysis and research capabilities.
Schaefer discussed how AI enables new legal service models. Clients increasingly expect high-quality results at lower costs, making flat-fee and subscription-based billing more attractive. He described the potential of AI resolution clauses in contracts and outlined how firms can use AI tools to serve a broader client base through automated systems and monthly subscriptions.
In the Q&A portion, Schaefer addressed whether AI-generated work product remains privileged, comparing its use to legal research platforms like Westlaw and Lexis. He maintained that AI tools still require attorney oversight and judgment, which preserves the attorney’s role. He also emphasized that human input remains essential, noting that poor results are often due to inadequate prompts rather than flaws in the AI itself.
The panel concluded with a list of trusted AI tools and resources, including ChatGPT, Claude, Perplexity.AI, Fireflies.AI, and the foundational paper “Attention Is All You Need.” Schaefer encouraged attorneys to take the initiative, develop their skills, and integrate AI into their practice now rather than wait for firm-wide adoption.
Keynote – A Fireside Chat with Kathi Vidal
By Student Reporter: Mirna Champ
Speaker: Kathi Vidal, Winston & Strawn; Immediate Past Director of the USPTO
Moderator: Jeff Smyth, Finnegan
The IP Institute opened with a keynote fireside chat featuring Kathi Vidal, former Under Secretary of Commerce for Intellectual Property and Director of the USPTO, and now a partner at Winston & Strawn. Vidal provided insights into the strategic shifts she implemented during her tenure, emphasizing systemic reform to expand access to the innovation ecosystem and strengthen the IP infrastructure both domestically and internationally.
A central focus of Vidal’s leadership at the USPTO was inclusivity. She launched the agency’s first National Inclusive Innovation Strategy, designed to reduce opt-out rates among underrepresented groups, including women, rural inventors, and economically disadvantaged individuals. She also oversaw reforms to job descriptions and recruitment pipelines to eliminate unnecessary barriers, resulting in a 5% increase in workforce diversity in just one year—achieved without displacing existing personnel.
In addressing pendency and quality, Vidal outlined a data-driven, long-term hiring model. On the patent side, she emphasized the need to hire examiners with relevant technical backgrounds, correct routing misalignments, and improve retention through programs like “Accepted Day.” On the trademark side, she helped resolve longstanding process inefficiencies by facilitating collaboration between union leadership and PTO management. She also highlighted how examiner classification systems and attrition posed serious challenges that could be tackled through systemic fixes and improved training.
AI and automation were recurring themes. Vidal discussed cautious AI adoption, balancing innovation with confidentiality safeguards. She initiated pilot programs involving interview summaries and reasons for allowance to improve prosecution clarity and create a more reliable record for later litigation. She also warned of potential foreign abuse of generative AI to flood systems, pointing to China’s trademark filing history as precedent, and emphasized the need for U.S. diplomacy and cooperation to prevent such misuse.
Vidal also spoke about the PTAB’s role in post-grant litigation. While she praised the caliber of its judges, she acknowledged that stakeholder uncertainty remained, especially concerning discretionary denials and ITC overlap. Her guidance, now rolled back, had aimed to promote predictability for both small and large patent holders. She noted that during her tenure, patents invalidated by the PTAB were clearly deficient under the law, and that changes in approach could impact litigation strategy going forward.
Internationally, Vidal played a leading role in TRIPS waiver negotiations and design treaty efforts, balancing global health access with U.S. innovation incentives. She underscored that global IP alignment is essential for maintaining U.S. leadership.
Finally, Vidal addressed remote work, noting that the USPTO’s flexible model increased productivity and job satisfaction while drawing top AI and tech talent. However, she emphasized the ongoing need to support employee connection and mission alignment.
Now back in private practice, Vidal continues to advance diversity in IP litigation through initiatives such as NextGenLawyers.com and has proposed programs to expand expert witness access at the PTAB to better support smaller inventors.
Patent Damages: A Deep Dive Into Assessing Damages in Patent Cases
By Student Reporter: Mirna Champ
Speakers: Bita Rahebi (Morrison Foerster) and Erin Crockett (Charles River Associates)
This informative session offered a comprehensive look at the complex and evolving landscape of patent damages. Panelists Bita Rahebi and Erin Crockett led attendees through the practical, strategic, and technical considerations involved in assessing damages in patent litigation. With expertise spanning both legal and economic analysis, the panel provided insights into trial preparation, expert coordination, and the methodologies used to quantify reasonable royalties and lost profits.
The presentation opened with a discussion on remedies available in intellectual property disputes, highlighting that in patent cases, damages can include reasonable royalties or lost profits, but never less than a reasonable royalty under 35 U.S.C. § 284. In evaluating lost profits, the panel emphasized the importance of the “but for” analysis, referencing Rite-Hite Corp. v. Kelley Co., and the well-established Panduit factors: demand for the patented product, lack of acceptable non-infringing alternatives, manufacturing and marketing capacity, and the amount of profit the patentee would have made.
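As a stylized illustration of the arithmetic behind the but-for framework (the numbers here are entirely hypothetical), lost profits are commonly computed as the units the patentee would have sold but for the infringement, multiplied by its incremental profit per unit:

```latex
% Hypothetical figures for illustration only
\text{Lost profits} \;=\; Q_{\text{lost}} \times (P - C_{\text{inc}})
\;=\; 10{,}000 \times (\$50 - \$30) \;=\; \$200{,}000
```

Here \(Q_{\text{lost}}\) is the number of diverted sales, \(P\) the patentee’s price, and \(C_{\text{inc}}\) the incremental (avoidable) cost per unit; the Panduit factors supply the evidentiary basis for each input.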
A significant portion of the presentation focused on the Georgia-Pacific factors used to assess reasonable royalty damages. Crockett joked she would tattoo the list of 15 factors, but emphasized the importance of factors 4 (licensing), 5 (commercial relationships), 9–11 (utility, usage, and profit attribution), and especially 15, which—while technically a “hypothetical negotiation”—was described as something experts “made up but believe is real.”
Apportionment also received extensive attention. The panel reviewed acceptable methodologies, such as technical expert testimony, econometric analyses, market surveys, and feature counting. Emphasis was placed on ensuring the patented feature’s incremental value was properly isolated from other features, with reference to Ericsson v. D-Link. Bita Rahebi stressed that rules of thumb like the “25% rule” were insufficient, citing Uniloc v. Microsoft.
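A stylized example (again with hypothetical figures) shows why apportionment matters: the royalty rate is applied only to the share of accused revenue attributable to the patented feature, not to the product as a whole.

```latex
% Hypothetical figures for illustration only
\text{Reasonable royalty} \;=\;
\underbrace{\$100\text{M}}_{\text{accused revenue}} \times
\underbrace{10\%}_{\text{apportioned share of value}} \times
\underbrace{5\%}_{\text{royalty rate}} \;=\; \$500{,}000
```

Skipping the apportionment step in this example would inflate the award tenfold, which is the kind of unsupported shortcut the Uniloc court rejected in striking down the “25% rule.”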
Several statistical techniques were discussed, including hedonic regression, structural break models, and difference-in-differences regression. The panel explored these in context through case studies like VLSI Technology v. Intel Corp., where regression errors made by technical experts led to a remand on damages. Crockett shared how she used a difference-in-differences analysis successfully in a tortious interference case, explaining its utility in identifying the causal impact of a patented feature on sales.
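As a minimal sketch of the difference-in-differences approach (using synthetic data and an invented effect size, not figures from any case), the coefficient on the interaction of “has the patented feature” and “post-introduction” isolates the causal lift in sales from group-level and time-trend differences:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Hypothetical panel: products with/without the patented feature ("treated"),
# observed before and after the feature's introduction ("post").
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})

# Simulate sales with a true causal lift of 5 units for treated products
# after launch, on top of a group-level difference and a common time trend.
df["sales"] = (
    100
    + 3 * df["treated"]                # baseline difference between groups
    + 2 * df["post"]                   # common time trend
    + 5 * df["treated"] * df["post"]   # the causal effect of interest
    + rng.normal(0, 2, n)              # noise
)

# The interaction coefficient is the difference-in-differences estimate
# of the feature's causal impact on sales.
model = smf.ols("sales ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])  # should recover a value close to 5
```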
The market approach and comparable licenses were flagged as tools to be used with caution. The panel advised engaging technical associates early and identifying a clear link between the license and the patented feature. Timing, economic comparability, and the royalty structure were critical considerations. Rahebi warned of the risk that a single excluded license could unravel the entire damages analysis.
Under the income approach, tools such as regression models and firm-conducted experiments were highlighted as particularly persuasive when done in the ordinary course of business. Experts can leverage these models to examine changes in price or volume attributable to the patented feature. The panel also reviewed conjoint analysis and patented feature removal studies, though noted they are less common and sometimes vulnerable to challenge.
Sampling and survey design were also addressed, particularly in the context of non-infringing alternatives (NIAs). Rahebi emphasized the importance of statistical rigor (sample representativeness, sample size, and margins of error), noting a personal threshold of p < 0.10 for testifying to results. Experts, she cautioned, too often rely on convenience sampling without the statistical foundation necessary for admissibility.
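For concreteness, the snippet below computes a margin of error and a significance test for a hypothetical survey proportion, the kind of statistical foundation a p < 0.10 testifying threshold presupposes; the sample and benchmark figures are invented for illustration:

```python
import math
from scipy import stats

# Hypothetical survey: 62 of 200 respondents say they would not have bought
# the accused product without the patented feature.
n, successes = 200, 62
p_hat = successes / n

# 90% margin of error for the sample proportion (normal approximation),
# consistent with a p < 0.10 threshold.
z = stats.norm.ppf(0.95)  # two-sided 90% critical value, about 1.645
moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"{p_hat:.2f} +/- {moe:.3f}")

# One-sample z-test that the true proportion differs from a 25% benchmark.
p0 = 0.25
z_stat = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
p_value = 2 * (1 - stats.norm.cdf(abs(z_stat)))
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```

A convenience sample breaks the representativeness assumption baked into these formulas, which is why such surveys draw Daubert challenges regardless of how small the computed p-value is.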
The panel concluded with strategic considerations from both the expert’s and counsel’s perspective. Both Rahebi and Crockett underscored the importance of early collaboration, detailed deposition prep, and expert feedback loops. Trial readiness required anticipation of Daubert challenges, preemptive documentation, and constant coordination, including weekly calls and pretrial dry runs.
Patent Law Year in Review
By Student Reporter: Mirna Champ
Speaker: Professor Jeffrey Lefstin, University of California College of the Law, San Francisco
Professor Jeffrey Lefstin (UC Law San Francisco) delivered an engaging and analytically rich review of the past year’s most impactful Federal Circuit patent decisions, offering historical context, doctrinal critiques, and strategic insights. A scientist-turned-scholar, Professor Lefstin brought his dual expertise in molecular biology and patent law to bear in exploring shifts in design patent obviousness, Section 101 eligibility, functional claiming, and global licensing jurisprudence.
A highlight of the presentation was LKQ v. GM Global, where the Federal Circuit, sitting en banc, overturned the Rosen-Durling test and aligned the standard for design patent obviousness with the Graham factors traditionally used for utility patents. Professor Lefstin questioned how courts will assess “motivation to combine” in design contexts—where visual appeal, not function, drives innovation. He also explored unresolved issues such as defining the “ordinary designer” and determining what constitutes analogous art when aesthetics dominate the inquiry.
In addressing patent eligibility under §101, Lefstin compared AI Visualize v. Nuance and Contour IP v. GoPro, two decisions that reached opposite outcomes despite parallel fact patterns. He emphasized the lack of coherence in how courts apply the “abstract idea” exception, observing that such unpredictability—particularly in software and diagnostics—chills investment and innovation.
Turning to functional claiming, Professor Lefstin discussed PureCircle v. SweeGen, where claims were invalidated for blending natural phenomena and abstract ideas without disclosing the technical means of achieving the claimed result. Drawing parallels to American Axle, he warned that §101 is increasingly swallowing §112, undermining the statutory separation between disclosure and patentable subject matter introduced in the 1952 Patent Act.
On the topic of prior art, Sanho v. Kaijet clarified that private sales—even if not confidential—do not necessarily constitute “public disclosures” under §102(b). This nuanced interpretation preserves the ability to commercialize inventions while protecting against premature invalidation.
Professor Lefstin also covered important updates on FRAND litigation, highlighting Ericsson v. Lenovo, which reaffirmed the Microsoft test for foreign antisuit injunctions, and Amarin v. Hikma, which revived concerns over “skinny labels” and induced infringement in the pharmaceutical sector.
Looking forward, Lefstin flagged key developments to watch: EcoFactor v. Google (en banc) may reshape the standards for admitting damages expert testimony; and the use of Rule 36 summary affirmances continues to draw scrutiny, potentially inviting Supreme Court review. He also noted the proposed Patent Eligibility Restoration Act, stressing that any legislative fix must balance clarity with innovation incentives across emerging sectors like AI and biotech.