California Lawyers Association

California Lawyers Association Task Force on Artificial Intelligence

Report on AI in the Practice of Law, California

On November 20, 2022, OpenAI released ChatGPT—an artificial intelligence (“AI”) chatbot. ChatGPT quickly became the fastest-growing consumer application in history, estimated to have had 100 million active users within two months.1 While ChatGPT could pass the Uniform Bar Exam and the Multistate Professional Responsibility Exam,2 its hallucination of case law and potential embedded bias3 have left regulators with the task of designing rules that would allow legal practitioners to take advantage of the benefits of AI while managing its potential dangers.

The American Bar Association (“ABA”) quickly sought to formulate guidelines as to how the profession should approach the use of AI. On February 5, 2023, its House of Delegates adopted a resolution at the 2023 ABA Mid-Year Meeting calling on organizations that “design, develop, deploy, and use artificial intelligence . . . systems and capabilities” to embrace the following guidelines: (1) AI products, services, systems, and capabilities should be subject to human authority, oversight, and control; (2) responsible individuals and organizations should be accountable for the consequences of AI use, unless they have taken reasonable measures to mitigate against that harm or injury; and (3) AI products should be transparent and traceable, while protecting associated intellectual property.4 The guidelines allow for the development of AI while ensuring that AI remains within existing legal frameworks of culpability. In other words, humans remain responsible for the oversight and usage of the technology as it develops.

The State Bar of California (“State Bar”) sought a similar balance as it released practical guidance based on existing Rules of Professional Conduct (“Rules”). By mapping the common risks for the profession presented by AI technology directly onto the salient Rules, the State Bar has led the way in assisting the state’s lawyers to integrate this valuable new tool into their practice in a safe way. And in doing so, the State Bar guidelines acknowledge significant legal, ethical, and redistributive consequences—and corresponding duties—that are by-products of the integration of generative AI technology, as it advances, into the state’s existing legal and ethical framework.

II. Challenges For the Legal Profession As AI Develops

The Duties of Competence and Diligence (Rules 1.1 and 1.3) require that lawyers possess sufficient skill and capability, and that they apply their abilities with reasonable aptitude. Intertwined in those responsibilities are recommendations to “understand to a reasonable degree how technology works” and to ensure that “AI-generated outputs . . . [are] carefully scrutinized.”5 As generative AI proliferates and becomes more sophisticated, it will be challenging for lawyers to keep up with changes in the technology and tools available, and to identify and appreciate new risks that arise from those developments. Meanwhile, lawyers who do not utilize these technologies as they develop may put their clients at a disadvantage. It will thus be important to develop continuing practical education and training materials over time, and to make this basic level of information available in a format accessible to all the lawyers of the state.

Similarly, the Duty to Comply with the Law (Bus. & Prof. Code, § 6068(a) and Rules 8.4, 1.2.1) and the Duty of Candor to the Tribunal and of Meritorious Claims and Contentions (Rules 3.1, 3.3) require that lawyers faithfully fulfill their roles as public servants, upholding the law and its institutions, and refrain from engaging in behavior that would erode public trust or facilitate illegal conduct. The proper and ethical use of generative AI as it develops will be a necessary part of meeting the profession’s duties of candor and compliance, as AI technology becomes more sophisticated and AI-produced work product becomes more difficult to distinguish from non-AI-produced work product. Enhanced and accessible continuing legal education and training may be necessary for lawyers to ensure their compliance with these important ethical rules. It may also become more important to impose a duty, where appropriate, on proponents of AI-produced work product to identify it as such.

As for duties between lawyer and client, perhaps the most well-known danger of utilizing AI in the course of law practice involves betraying the client’s secrets by inputting them into ChatGPT and similar AI chatbots. The Duty of Confidentiality (Bus. & Prof. Code, § 6068, subd. (e), Rules 1.6, 1.8.2) requires lawyers to be conscientious about using lawyer-appropriate AI tools and otherwise maintaining client confidentiality when working with any generative AI that trains its models on user input.6 There is currently no requirement that vendors providing AI tools publish lay-person-friendly explanations of this aspect of their platforms and products, making it difficult for lawyers, judges, and other business people to adopt those new technologies without concern about the safety of their confidential information.

The Duty of Communication Regarding Generative AI Use (Rules 1.4, 1.2) requires lawyers to reasonably consult with clients and gain their informed consent regarding whether and how to use AI in connection with the case. The use of AI adds a layer of complexity to the advice and counseling a lawyer must provide to a client. The State Bar’s guidance recommends that lawyers explain the “benefits and risks” of AI use to their client. This means lawyers must be in a position to accurately discuss the types of AI that may be used, and the pros and cons of each. As generative AI improves and its use cases broaden, lawyers will be challenged to provide accurate and useful disclosures.

The Prohibition on Discrimination, Harassment, and Retaliation (Rule 8.4.1) prohibits lawyers from discriminating or retaliating against individuals on the basis of any protected characteristic, whether in relation to an individual representation or to a law firm’s operations. It is already clear that AI trained on biased information will discriminate.7 Lawmakers and regulators have scrambled to issue guidance addressing the issue.8 The unchecked use of biased AI to screen clients or employees, for example, is likely to further exclude individuals already marginalized by the legal system; the use of AI in judicial decision-making may exacerbate existing systemic biases;9 and the inability of AI to process different codes of speech and conduct may render certain claims not cognizable. The responsible use of AI may reduce bias, but its unsupervised usage is likely to further exclude already marginalized individuals. Perhaps even worse, Rules that appear to manage bias without actually doing so may legitimize harmful patterns of discrimination that already exist. As AI develops, practitioners will need to be kept aware of these changing risks and learn how to mitigate or control them.

The Rules governing fees for legal services (Rule 1.5 and Bus. & Prof. Code, §§ 6147–6148) prohibit attorneys from charging unconscionable or illegal fees, or fees under unclear terms. The guidance issued by the California State Bar requires ethical and transparent behavior, allowing lawyers to charge for “costs associated with generative AI” through a “fee agreement . . . [that] explain[s] the basis for all fees and costs, including those associated with the use of generative AI,” but does not allow a lawyer to “charge hourly fees for the time saved by using generative AI.”10 The State Bar appears to take the clear redistributive stance that time saved from the use of generative AI should benefit the client monetarily rather than the lawyer. However, as AI products become more sophisticated, the costs associated with such products may increase and create a differential between the service received by clients who can afford them and the service received by clients who cannot. While not all practitioners will have all technologies available to them, it will be important to establish a reasonable level of awareness, competence, and skill for the use of AI by the state’s lawyers to minimize any such future gap in capabilities.

Finally, the California Lawyers Association Task Force on Artificial Intelligence (“Task Force”)11 recommends that lawyers “analyze the relevant laws and regulations of each jurisdiction in which a lawyer is licensed to ensure compliance with such rules.”12 However, the continuing development of generative AI technologies and tools will likely complicate the analysis of the professional responsibilities owed to other jurisdictions (Professional Responsibilities Owed to Other Jurisdictions, Rule 8.5), particularly as those jurisdictions’ own guidelines develop independently over time. If certain jurisdictions allow the use of AI in ways that others do not, legal practice may even diverge across jurisdictions.

III. Benchmark and Comparison Analysis

Recognizing these future challenges, the Task Force surveyed AI regulations and rules in a variety of jurisdictions to see how those jurisdictions are working to meet those challenges.

In the State of Utah, the Utah Bar Association requires that a licensed attorney’s use of AI be disclosed to, and approved by, the client.13 California, the Silicon Beach to Utah’s Silicon Slopes, has no such mandatory disclosure, but licensed attorneys in California must follow the Rules and be cognizant of the inputs used to create outputs, with privacy, confidentiality, copyright, and trademark concerns in mind. The United States government, through the Executive Office of the President, has issued guidelines for the creation and use of AI,14 while the European Union15 has passed similar legislation and regulatory guidelines.

In sum, there seem to be three common themes and concerns in all AI use, whether in the legal profession or beyond: (1) privacy and confidentiality in using AI to create content (e.g., purchasing the paid AI platform option for a more robust license and other protections); (2) human editing and review of AI content; and (3) appropriate disclosure of AI use upon release, use, or sharing. With the proper application of all three in representation, lawyers in California can protect themselves and their clients to the greatest extent short of refusing to use AI technology, which is becoming increasingly difficult to avoid.

The State Bar has taken the novel approach of applying the existing Rules to an attorney’s use of AI while practicing law.16 The State Bar has already issued a technology rule17 that requires attorneys to keep up with technological advances. Therefore, the State Bar concludes that as long as the attorney is following the Rules, the use of AI is allowed. This approach is guided by the three themes above, which touch on the Rules but also go beyond them to further protect clients’ interests.

The question, however, remains: Should attorneys use AI in the practice of law? That question can only be answered by the individual attorney in their capacity to zealously advocate for their clients. The State Bar has left the question to the attorney, much as the states retain rights apart from the federal government. One important point to consider before using AI is the question of creative honesty. This question has two parts: the disclosure of AI use, and the deeper, more personal question of the human element in creation, which lies at the heart of AI use.

The use of AI creates efficiency and reduces time billed to clients, freeing the attorney to spend time on other clients or personal matters. But does the use of AI take away from the human innovation that leads to learning? AI essentially shortens the path to the answer and removes the “show your work” requirement, akin to skipping the experiment in a school math class. The attorney essentially places an AI chatbot or platform in the role of the research attorney or associate, with the human attorney acting as the partner reviewing the work.

Therefore, the use of AI must come with the attorney’s knowledge that the technology may enhance the work, much as a wrench or a power tool makes tightening a bolt faster than doing it by hand, but it may also take away from the learning embedded in the work, or even the satisfaction in the work completed, which goes to the psychology of practicing law, health, and wellness. In other words, the wrench and the power tool still depend on human knowledge of where the part goes and how it fits, but they make repeating that success less certain from attorney to attorney and generation to generation without a human process. Reliance on AI technology could breed idleness in the attorney, or embolden clients to apply such tools to their own legal work. Either way, without moderation, the attorney’s legal services will either expand with AI use or shrink in the face of industry innovation.

As mentioned previously, the State Bar requires that the use of AI in practice comply with the duty of confidentiality (knowing and protecting one’s inputs into an AI database, platform, or program), the duty of competence (including Mandatory Continuing Legal Education (“MCLE”) on technology), the duty to comply with the law, the duty to supervise, and the duty to communicate. Attorneys may not charge a client for time saved by using generative AI, must maintain candor to the tribunal, must observe the prohibition against discrimination (AI can be biased and untruthful in its outputs), and must follow professional responsibilities in other jurisdictions.

Attorneys must also be aware of the issues surrounding AI, beginning with the tension between reluctance to accept AI as a tool, whether across the legal profession as a whole or case by case, and the rush to adopt it. There is also a need for a human AI bill of rights ensuring that humans utilize AI as a tool and do not become the tool in the process; humans, not generative AI, need to drive use and application. There are also copyright, trademark, licensing, and privacy issues that attorneys should understand before using AI. Attorneys should be focused on training and on understanding the use, limits, and pitfalls of AI.

AI also has a potential for prediction bias in the information it provides, a “being right” bias in which its preference is to provide an answer, and classification and confidence biases as to data and results. Like the Marvel character Drax, AI tends to take inputs literally rather than conceptually, as a human would. Relying too heavily on technology can reverse learning and growth, particularly in future generations. Lawyers should be focused on using AI to elevate their services and practices, using ChatGPT as a tool rather than a crutch. Like analytics in baseball, AI should be a tool, not an end result.

Attorneys should resist the temptation to over-rely on AI and should establish boundaries when using generative AI. Attorneys should help with drafting policies on AI and insist upon regulation. The digital revolution does not, and should not, change the ethics of the honorable legal profession.

In applying the Rules to AI use, the State Bar has struck a good balance between regulation and protection. The Utah State Bar, however, takes it too far by requiring clients to approve of AI use during representation. Client approval places an unfair burden on the client to choose the best strategy, taking tools away from the attorney to practice law and zealously advocate. The equivalent would be asking an attorney not to use spell check or Word documents for drafting because a client wants everything handwritten. Disclosure, where appropriate, is satisfactory, and if a client does not want AI to be used, the client can just as easily terminate the representation, or vice versa. Federal and state regulations can help protect privacy and the public by making sure AI use is safe and in compliance with the law. In using AI, the question should be whether the AI increases the attorney’s performance or decreases human involvement. If the former, great; if the latter, regulation and ethics should limit such use, and attorneys would be wise to avoid it.

The keys to success in using AI as a tool in practicing law in California are the same three themes identified above: (1) privacy and confidentiality in using AI to create content (e.g., purchasing the paid AI platform option for a more robust license and other protections); (2) human editing and review of AI content; and (3) appropriate disclosure of generative AI use upon release, use, or sharing. With all three, attorneys in California can protect themselves and their clients to the greatest extent short of refusing to use AI technology, which might be burdensome considering the rapid pace of change in practicing law and the Technology Rule. Following the Rules and the above guidance may help the attorney stay in compliance while providing the best legal services to their clients.

IV. Current Status of AI Use Within Certain Practice Areas

Summary conclusions:

  1. There appears to be little uniformity in the nature of the adoption and use of generative AI in each substantive legal practice area within the State of California. Each substantive practice area, and further, each individual practice within a given area of substantive expertise, at this point in time uses generative AI in a slightly different manner.
  2. Given the wide variation afoot, the State Bar and California Lawyers Association (“CLA”) can benefit from a concerted, empirical investigation or study concerning the use of generative AI within each of the major substantive legal practice areas common to California attorneys. Given the dynamic nature of the technology, such empirical investigation should be ongoing and subject to periodic updates to the extent possible.

For this initial exploratory foray, the Task Force engaged in anonymous, anecdotal interviews with California practitioners in several prominent, easily accessible substantive practice areas, including intellectual property, trust and estates, land use, and business formation and transactions, as well as some less substantive areas, such as law practice management. Interviewees all expressed a preference for anonymity in the informal interview process, given the nascency of the availability, use, and regulation of generative AI for the legal practice industry and of the Task Force’s inquiry.

During the course of informal interviews, the Task Force prepared a series of preliminary interview questions to guide the inquiry, but did not document specific responses, in order to facilitate anonymity. These initial queries can serve as a starting point for more sophisticated inquiries. The preliminary interview questions typically consisted, at least in part, of the following inquiries:

  1. How is the development of generative AI changing your practice area?
  2. To what extent do you incorporate the use of generative AI into your law practice?
  3. Specifically, how do you utilize generative AI for your practice?
  4. Has this been favorable for your practice?
  5. To what extent do you envision an increase in the use of AI by your fellow practitioners in your practice area?

The interviews were conducted informally, in a conversational manner, over the course of a four-month period. From these interviews, the Task Force determined that there is indeed a very wide range of variation in the nature of the acceptance, adoption, and use of generative AI by California attorneys within each specific substantive legal practice area, and in turn, great variation between individual practices within a given substantive practice area. While some substantive practice areas, such as intellectual property or trust and estates law, appear initially conducive to the use of generative AI (e.g., for purposes of populating forms), and some practices are embracing generative AI technologies, other firms, given the underlying issues and attendant uncertainties and liabilities, are shunning them altogether until a more developed regulatory framework and guidance emerge.

Exemplars / Interview Summaries by Practice Area

Trust & Estates

Our very first informal interview was with a seasoned trust and estates practitioner operating a small firm (under five attorneys) headquartered in California. The interview elicited some interesting findings. First, the practitioner relayed their experience incorporating generative AI software into their practice in order to create efficiencies and better serve clients, e.g., in the preparation of simple wills and trusts. This was especially true as to some of the more rote form-filling functions, where the client responses are somewhat anticipated and the process otherwise lends itself to automation. The resulting efficiencies enabled this practitioner to scale up the business, hire new counsel, and arguably better serve additional clients in the local community as a result of using trust and estates software featuring generative AI capabilities.

Notably, the practitioner was in the process of downsizing their practice and transitioning toward retirement when they discovered that incorporating AI afforded such a transition while, ironically, scaling up the firm’s practice and allowing the hiring of additional practitioners to serve a greater volume of clients in the community requiring such services. This account seems like a “win” for all parties involved, and is notable for the way such AI services at once aid: (1) smaller law firm practices and their practitioners in scaling to meet a greater public demand for critical legal services such as wills, trusts, and estate counsel services; and (2) continued involvement in the practice of law by senior attorneys who might otherwise seek an earlier retirement.

Intellectual Property

One subsequent interview was with a partner at a mid-size (approximately 25-attorney) firm in California. The practitioner acknowledged that the increasing prevalence and use of generative AI in daily legal practice makes it practically impossible not to address. In particular, the practitioner identified uses by third-party support services, e.g., trademark clearance search services and intellectual property investigative service providers, that can greatly help to control the scale and volume of data (and attendant “noise,” or non-relevant results) that would otherwise be generated in response to a given search, and/or appropriately narrow search results to assist counsel in analyzing such results at a more reasonable scale, providing more cost-effective reports and recommendations to clients. In this way, the incorporation of generative AI apparently helps to address the natural tension between the thoroughness of a clearance search and its attendant expense.

In addition, Task Force members interviewed an in-house technology and commercial transactions attorney. The Task Force learned that the use of generative AI tools by in-house attorneys was increasing. ChatGPT was mentioned as one tool; “GC AI” was mentioned as another. Initial use cases included using a generative AI tool to help create a first draft of an agreement, to create checklists, to compare drafts, to summarize and help explain legal documents to non-attorney stakeholders, and to perform preliminary legal research. It was acknowledged, unsurprisingly, that the output of the tools would need to be checked carefully. The current level of use could best be described as exploratory, and it was acknowledged that only a subset of the capabilities of the tools was being investigated and/or used. Generative AI tools capable of allowing users to build profiles so the tools can deliver customized content were viewed as particularly useful. Similarly, tools trained on vetted (closed) knowledge sets and providing outputs responsive to feedback given on previous outputs to similar prompts were viewed as more useful. It was expected that the use of generative AI tools would increase significantly in the coming years. It was acknowledged that generative AI tools will at some point be able to replace some of the more rote functions of in-house transactional counsel but, more so, that these tools could be multipliers, allowing in-house counsel to do more, perhaps with fewer physical resources.

Litigation / e-Discovery

A third interview was with a senior e-discovery litigation attorney employed by a very large multinational firm with multiple offices located in California. This practitioner underscored the importance of the use of generative AI in reducing the scale and volume of data generated in response to discovery requests (e.g., requests for production of documents in the context of electronic discovery). This practitioner relayed that the entire complexion of e-discovery practice is changing as a result of the incorporation of generative AI, and changing for the better. For example, not only is generative AI effective in reducing data and the attendant need for review by humans, it is also greatly reducing the cost of e-discovery document review, document designation, and production. According to the interviewee, this has effectively leveled the proverbial playing field for litigants with disparate economic capabilities, while facilitating discovery in litigation (and resolution), all while making life easier for litigators.

Finally, one attorney interviewee from a “full service” mid-sized business law firm in California disclosed a policy of strict nonuse of generative AI for client matters and marketing at this time. The decision was the result of consultations between the firm’s management group and outside general counsel, and was based on an assessment of the liability exposure attendant at this early stage in the development of such AI tools, generative or otherwise.

Clearly, given such a broad range of practice experiences in relation to the adoption of generative AI, more in-depth study is appropriate. Such research would best be informed, first and foremost, by a comprehensive desk study of all existing publications and guidelines from the ABA, sister state bars, and legal professional associations, and of their findings on attorney use (or non-use) of AI. This should be followed by engagement of the Task Force and its Advisory Committee, and possibly, ultimately, third-party professional survey consultants, in order to more fully develop the methodology and objectives of an eventual study of the use of generative AI by California attorneys that yields actionable data.

In sum, there is wide variation in the nature of acceptance, adoption, and use of AI by different California practitioners. A “one-size-fits-all” approach may not be as effective as an approach that accounts for such variation in practices, thus warranting an appropriately commissioned, empirical survey or study across California attorneys in order to better document the nature of use of generative AI within and across the myriad substantive legal practice disciplines.

V. Suggestions and Proposals

To ensure that the legal profession receives the benefits of AI while managing the dangers, governance must continue to evolve with the technology’s development. The following section articulates tenets of a governance approach for the legal profession in California that would facilitate the ABA principles of oversight, accountability and transparency. It is our view that the ethical development of any governance initiatives must involve all stakeholders, including not only different types of practitioners (i.e., lawyers, judges, legislators), but non-legal community members as well.

A. Growth

As AI develops, the sophistication of AI governance must grow in an iterative cycle of learning and teaching. Responsible entities must continue to pursue understanding of AI and its consequences as the technology and tools become more advanced. Task forces, standing committees, and Offices of AI of entities like the State Bar and other legislative or legal entities may work in partnership with AI developers and vendors and affected stakeholders like business and community leaders to better understand and assess the impacts, including risks, of AI usage. These partnerships may be informal or formal and could be encouraged by providing a state or state bar certification to vendors whose products meet certain levels of disclosure and safety. The knowledge resulting from these inquiries will be especially important to lawyers’ ability to meet their duties of competence, diligence, and communication. Making it easier for the legal industry to keep pace with developments in generative AI will permit lawyers to fulfill their duties to utilize and apply AI and properly explain any uses to clients.

Policy- and decision-makers, as well as industry and community leaders, should be consulted on a regular basis for feedback and comment regarding the overall state of the technology. The uses of AI in the legal industry will be varied. Judges, for example, will use AI very differently than practitioners in large law firms, who will likely use AI differently than non-profit or legal services organizations. Those working alongside attorneys, such as paralegals, court reporters, and discovery vendors, must also be considered. Governance must be designed with and for all legal practitioners, with the input of any populations that might be affected. Constituents of these groups should consider, from their unique perspectives, for example, how to distinguish between AI- and non-AI-produced work product, how to mitigate any biases in AI algorithms, and how to comply with AI regulations in different jurisdictions. A non-exhaustive list of subject areas of particular interest and concern is as follows: cybersecurity, privacy and data protection, access and equity, and ethical and responsible use guidelines.

The output of these efforts will be the consistent provision of up-to-date information for all stakeholders via required law school courses; MCLE requirements; the issuance of ethics opinions by the State Bar and CLA’s Ethics Committee; educational programs for the judiciary; and information posted on public websites, for example to help pro se litigants navigate the use of AI in the legal system.18

B. Governance

The iterative learnings about “growth” should be synchronized with a similarly iterative cycle of newly issued rules of governance. As AI tools and technologies develop, effective rules of governance may have to develop with them. Rules of governance should not be limited to voluntary associations like the ABA, but may require legislation. Following are three non–mutually exclusive angles from which the use of AI in the legal industry may be regulated.

First, the legal profession may choose to amend or supplement existing rules of governance to address new issues raised by the use of generative AI. For example, U.S. District Judge Paul Grimm and Dr. Maura R. Grossman of the University of Waterloo have suggested amending Federal Rule of Evidence 901(b)(9) to specifically address evidence created by AI.19 Similarly, proposals in New York seek to amend New York’s Criminal Procedure Law and Civil Practice Law and Rules to prohibit the use of AI-generated or AI-processed evidence unless it is subject to additional standards of verification.20 Such rules can help serve transparency interests by requiring disclosure and description of AI processes before admitting AI-produced evidence. Amendments or comments to ethics codes, such as the Judicial Ethics Code, may also help legal practitioners to understand their duties.

Second, the use of AI in the legal industry may be regulated through licensing. Utah’s Artificial Intelligence Policy Act, for example, requires licensed or certified professionals to disclose their use of AI to clients and obtain their approval,21 a requirement the Task Force did not agree with and one not imposed by the Rules in California. Legislation requiring AI vendors to obtain licenses to work with legal professionals may be another avenue of regulation. The State Bar could also provide optional certification for lawyers, for example on the condition of fulfilling certain education or training requirements. Obligations may also be imposed through oaths promising principled AI usage, for example when attorneys are sworn in to practice law before an officer of the court. Regulation through licensing can help ensure that professionals working with AI have the requisite knowledge to do so ethically and responsibly.

Third, regulation may be imposed by creating reasonable enforcement mechanisms. Legislation seeking to establish an AI bill of rights for those impacted by AI usage22 may allow for enforcement through licensing, new causes of action, or fines. Utah’s Artificial Intelligence Policy Act is one example of legislation that imposes administrative fines and empowers courts to enjoin unlawful activity and order disgorgement of any money received in violation of the act.23

Critical to the creation of any AI “bill of rights” will be the definition of the affected population and the values advanced. A proposed bill of rights in New York, for example, would declare that “any New York resident affected by any system making decisions without human intervention to be entitled to certain rights and protections to ensure that the system impacting their lives do so lawfully, properly, and with meaningful oversight.”24 The EU AI Act, which entered into force on August 1, 2024,25 is intended to ensure that fundamental rights, representative governments, the rule of law, and environmental sustainability are protected from high-risk AI.26 Third-party enforcement can be another tool to support accountability for the usage of AI products.

C. Collaboration

Woven throughout the iterative processes of “growth” and “governance” should be efforts to collaborate across professions, organizations, and jurisdictions in order to facilitate the development of ethical rules and to ensure the administrability of a uniform legal system. Cross-jurisdictional collaboration will be especially important as the data, algorithms, and outputs of AI move across borders. Efforts to work across jurisdictions may take the form of AI summits or conferences to share best practices, the tracking and monitoring (and sharing) of legislation and governmental agency regulations, or the identification of governance trends. Conversations must also include non-legal stakeholders. Partnerships with technology companies will be especially important as AI developers and vendors drive the evolution of generative AI. Legal organizations may work with AI vendors to ensure that they understand the risks of feeding certain kinds of legal data (e.g., criminal history reflecting “structurally biased application of laws, policies or practices”) to AI27 and that AI vendors are sufficiently transparent such that legal practitioners can evaluate any AI tool’s neutrality. Working with the AI industry ensures that AI tools develop according to lawyers’ and clients’ needs.

Partnerships with leaders from other industries or communities can help lawyers assess the impacts of lawyers’ use of generative AI on different professions and demographics and adjust their rules of governance accordingly. Listening to feedback allows the legal industry to make informed decisions about governance and to assess whether the impacts of its rules are as it intends. Only in collaboration with different industries and communities can lawyers effectively harness the productive power of AI, while avoiding its pitfalls.


1 Catherine Thorbecke, A year after ChatGPT’s release, the AI revolution is just beginning, CNN (Nov. 30, 2023, 10:32 AM EST), https://www.cnn.com/2023/11/30/tech/chatgpt-openai-revolution-one-year/index.html.

2 Darla Wynon Kite-Jackson, 2023 Artificial Intelligence (AI) TechReport, ABA, https://www.americanbar.org/groups/law_practice/resources/tech-report/2023/2023-artificial-intelligence-ai-techreport/.

3 Larry Neumeister, Lawyers blame ChatGPT for tricking them into citing bogus case law, AP (June 8, 2023, 8:25 PM PDT), https://apnews.com/article/artificial-intelligence-chatgpt-courts-e15023d7e6fdf4f099aa122437dbb59b.

4 AMERICAN BAR ASSOCIATION, RESOLUTION 604 (2023); see also AMERICAN BAR ASSOCIATION, RESOLUTION 512, “Generative Artificial Intelligence Tools” (2024).

5 THE STATE BAR OF CALIFORNIA, STANDING COMMITTEE ON PROFESSIONAL RESPONSIBILITY AND CONDUCT, PRACTICAL GUIDANCE FOR THE USE OF GENERATIVE ARTIFICIAL INTELLIGENCE IN THE PRACTICE OF LAW (2023).

6 Id.

7 Lyle Moran, Pretrial risk-assessment tools should only be used if they’re transparent and unbiased, warns ABA House, ABA JOURNAL (Feb. 14, 2022, 2:58 PM CST), https://www.abajournal.com/news/article/resolution-700#:~:text=Resolution%20700%2C%20which%20was%20approved,structurally%20biased%20application%20of%20laws%2C.

8 EEOC, EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness, U.S. EQUAL EMPLOYMENT OPPORTUNITY COMMISSION (Oct. 28, 2021), https://www.eeoc.gov/newsroom/eeoc-launches-initiative-artificial-intelligence-and-algorithmic-fairness.

9 Matthew Stepka, Law Bots: How AI is Reshaping the Legal Profession, ABA, Business Law Section (Feb. 21, 2022), https://businesslawtoday.org/2022/02/how-ai-is-reshaping-legal-profession/.

10  THE STATE BAR OF CALIFORNIA, supra note 5.

11 Jeremy M. Evans, California Lawyers Association, “California Lawyers Association Launches Task Force on Artificial Intelligence” (2023), https://calawyers.org/california-lawyers-association/california-lawyers-association-launches-task-force-on-artificial-intelligence/.

12  Id.

13 UTAH STATE BAR, “Using ChatGPT in Our Practices: Ethical Considerations” (2023) https://www.utahbar.org/wp-content/uploads/2023/05/ChatGPT-article.pdf.

14 THE WHITE HOUSE, “FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (October 30, 2023), https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/.

15 European Commission, “Shaping Europe’s digital future” [AI Act] (August 8, 2024) https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.

16 THE STATE BAR OF CALIFORNIA, https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf.

17 THE STATE BAR OF CALIFORNIA, Rules of Professional Conduct, Rule 1.1 Comment (March 22, 2021) https://www.calbar.ca.gov/PORTALS/0/DOCUMENTS/RULES/RULE_1.1.PDF.

18 See, e.g., TASKFORCE FOR RESPONSIBLE AI IN THE LAW, INTERIM REPORT TO THE STATE BAR OF TEXAS BOARD OF DIRECTORS 5, 7, 13 (2023).

19 TASK FORCE ON ARTIFICIAL INTELLIGENCE, REPORT AND RECOMMENDATIONS OF THE NEW YORK STATE BAR ASSOCIATION 67–68.

20  Id. at 63.

21 UTAH STATE LEGISLATURE, S.B. 149 ARTIFICIAL INTELLIGENCE AMENDMENTS (2024), https://le.utah.gov/~2024/bills/static/SB0149.html.

22 See, e.g., TASK FORCE ON ARTIFICIAL INTELLIGENCE, supra note 19, at 64; THE WHITE HOUSE, BLUEPRINT FOR AN AI BILL OF RIGHTS (2022).

23 UTAH STATE LEGISLATURE, supra note 21.

24 TASK FORCE ON ARTIFICIAL INTELLIGENCE, supra note 19, at 64.

25 Directorate-General for Communication, AI Act enters into force, EUROPEAN COMMISSION (Aug. 1, 2024), https://commission.europa.eu/news/ai-act-enters-force-2024-08- 01_en.

26 TASK FORCE ON ARTIFICIAL INTELLIGENCE, supra note 19, at 71.

27  See Moran, supra note 7.


Jeremy M. Evans, Chair
California Lawyers Association Task Force on Artificial Intelligence
Immediate Past President, California Lawyers Association
President, California Lawyers Foundation
CEO, Founder & Managing Attorney, California Sports Lawyer®

Dr. Chris Mattmann, Vice Chair
California Lawyers Association Task Force on AI
Chief Technology Officer

Nicole Bautista, CEO & ED
California Judges Association

Diane Cafferata, Partner & Author
Quinn Emanuel Urquhart & Sullivan, LLP

Joshua de Larios-Heiman, California Lawyers Association Board of Representatives, Privacy Law Section
Owner and Managing Attorney, Data Law

Christopher Passarelli, Past Chair, Business Law Section, California Lawyers Association
Partner, Dickenson, Peatman & Fogarty

John Pavolotsky, Chair, Intellectual Property Law Section, California Lawyers Association
Stoel Rives, LLP
Of Counsel

A special thank you to the California Lawyers Association Task Force on Artificial Intelligence and the Advisory Committee (Jeffrey A. Streiffer, Brett Cook, Christopher D. Hughes, Daniel C. Kim, Farshad Ghodoosi, Jordanna Thigpen, Joy Murao, Leonard Sansanowicz, Mark Porter, Elissa D. Miller, Osha Meserve, Perry Segal, Suzanne Weakley, Uzzi O. Raanan, Hon. Khymberli Apaloo, Gary Rudolph).

California Lawyers Association
400 Capitol Mall, Suite 650
Sacramento, CA 95814

Copyright © 2024. All Rights Reserved.

