California Lawyers Association
Navigating AI Risks and Ethical Concerns When Engaging With Third-party AI Providers
By Emanuela Canegallo*
The use of Artificial Intelligence (“AI”) systems by lawyers, both in firms and within in-house legal teams, has increased significantly in the last few years. Lawyers increasingly leverage AI technology to assist with a wide range of tasks, including legal research, litigation support, predictive analysis, contract drafting, legal document review, due diligence, and client correspondence management.
But what is an AI system? An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.[1] Generative AI (“GAI”) is a subset of AI that uses advanced computer algorithms to generate content, such as text, audio, images, and video, that approximates content created by humans.
While AI offers significant benefits by enhancing efficiency and automating tasks, it poses risks of privacy violations, security threats, bias, and copyright infringement. These risks stem from several factors, including the large amount of information processed and the lack of transparency and explainability as to how a specific AI model works. This concern helps explain the regulatory framework now developing.
The European Union enacted a comprehensive AI regulation in 2024 (the “EU AI Act”)[2] and some U.S. states, including California, have followed the EU’s lead by passing AI bills governing certain aspects of AI.[3] Most recently, President Trump issued an Executive Order (“EO”)[4] with the stated intent to take steps towards a uniform national framework for AI, primarily by attempting to limit the negative consequences of a patchwork of different state-level AI laws. Moreover, existing privacy laws[5] and security standards[6] also apply to AI technologies.
In this landscape, a thorough third-party risk assessment is paramount to ensure that AI providers have in place all measures necessary to comply with applicable AI regulations, privacy laws, and security standards.
Lawyers need to appreciate the risks and the benefits of AI when used in clients’ matters, especially in complying with their ethical obligations. On November 16, 2023, the State Bar of California issued the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (the “State Bar Guidelines”)[7] to provide guiding principles to lawyers using GAI in their legal practice, urging lawyers to adopt appropriate safeguards to continue to protect their clients and to act in conformity with the California Rules of Professional Conduct (“Rules”).
This article first describes the ways in which lawyers have been using AI-powered tools. It then discusses how various Rules might be implicated in that use. Finally, it highlights the relevance of a third-party risk management program to mitigate the risks generated by the use of AI.
How Do Lawyers Use AI-Powered Tools?
There are many ways in which lawyers use AI-powered tools to improve productivity and provide better legal services for their clients.[8]
Legal Research. Compared to traditional legal research tools, AI-powered tools are faster, more efficient, and now highly sophisticated. They can scan vast datasets in a very short time, help with drafting and summarizing documents, generate direct answers complete with source citations, compare laws across multiple jurisdictions, and produce comprehensive reports or memos.
Electronic Discovery. AI-powered Electronic Discovery tools can help lawyers sift through a large volume of digital data to identify, classify, and review electronically-stored information (ESI) for legal proceedings and investigations. They can also assist lawyers in categorizing documents in a case, producing case narratives, and identifying key issues in litigation.
Litigation Support and Predictive Analysis. This type of AI tool focuses on evidence management, trial preparation, and deposition analysis. AI is also used to predict the outcome of litigation through the analysis of the case and the identification of patterns in current and past data, based on case law, public records and jury decisions.
Contract Management and Review. AI-powered contract management tools focus on contract lifecycle management, with the ability to draft, review, and analyze contracts; generate redlines; create playbooks; highlight non-standard clauses; flag deadlines; and identify critical provisions.
Due Diligence Review. AI can also be used to automate due diligence review for complex corporate transactions, assisting lawyers in reviewing a large number of documents and contracts, and in identifying and summarizing key clauses from contracts.
Practice Management and Client Communication. AI-powered tools can help to automate routine tasks such as managing calendars, drafting emails, and providing legal document translations. There are also AI-powered virtual receptionist and chatbot services that can help with client intake forms, phone calls, and appointment scheduling.
While the use of AI-powered tools streamlines workflows and enhances automated processes, this efficiency comes with downsides, including costly implementation, privacy and ethical concerns, and the risk of biased or otherwise inaccurate outputs (e.g., the well-publicized “hallucinations”) that require review and verification by lawyers.
What Rules Are Relevant in the Use of AI?
Having outlined the main ways lawyers leverage AI in their legal practice, this article now turns to how the use of AI-powered tools can jeopardize compliance with the Rules and which safeguards lawyers should adopt.
Rule 1.1 (Competence)
Pursuant to Rule 1.1, a lawyer shall perform legal services with competence. Rule 1.1, comment [1] states that “[t]he duties set forth in this rule include the duty to keep abreast of the changes in the law and its practice, including the benefits and risks associated with relevant technology.” The State Bar Guidelines recognize that Rule 1.1 requires a lawyer to ensure competent use of the technology, including the associated benefits and risks. Also, a lawyer should understand to a reasonable degree how the technology works, its limitations, and the applicable terms of use and other policies governing the use and exploitation of client data by the product.[9] ABA Formal Opinion 512 on Generative Artificial Intelligence Tools, issued on July 29, 2024 (“ABA Formal Opinion 512”)[10] clarifies that lawyers must remain vigilant given the fast-paced evolution of GAI tools, and consider reading about GAI tools targeted at the legal profession, attending relevant continuing legal education programs and consulting others who are proficient in GAI technology.
Risk of failure of AI tools
AI-powered tools present an inherent risk of failure. Absent appropriate human supervision, AI may generate inaccurate or biased outputs, or even fabricated content, the so-called “hallucinations” (i.e., fictitious citations, quotations, and misattributions of law). AI tools are trained on large datasets and rely on learned patterns to perform their functions. Poor quality, incompleteness, or bias in the training data may cause inaccurate or biased outputs. Other causes of failure include the inability of the AI tool to understand the factual context or to distinguish between subtly different inquiries, poor integration of the AI tool into the existing workflow, or degradation of the AI tool over time.
Hallucinations. Fictitious cases cited in support of briefs, and misleading or inaccurate citations or quotations resulting from AI hallucinations, have become more frequent and stem from lawyers’ failure to check the validity of the research.[11] This may expose lawyers to sanctions and reputational damage,[12] and may also trigger violations of Rule 3.1 (Meritorious Claims and Contentions) and Rule 3.3 (Candor Towards the Tribunal), as described below.
Bias. When AI-powered tools are trained on biased data, there is a high risk that the content generated by AI is also biased and discriminatory. For instance, certain AI practice management and client communication tools used for client screening may generate discriminatory outputs when trained on biased data and, as a result, lead to violations of Rule 8.4.1 (Prohibited Discrimination, Harassment and Retaliation). Rule 8.4.1(a)(1) provides that “In representing a client, or in terminating or refusing to accept the representation of any client, a lawyer shall not … unlawfully harass or unlawfully discriminate against persons on the basis of any protected characteristic”. Further, Rule 8.4.1(b)(1)(i) states that “In relation to a law firm’s operations, a lawyer shall not … on the basis of any protected characteristic, … unlawfully discriminate or knowingly permit unlawful discrimination”. Therefore, lawyers are required to engage in continuous learning and implement adequate policies to identify, report, and address bias in order to avoid discriminatory practices in violation of Rule 8.4.1.
Requirement of Review and Correction. All AI tools that lawyers might use present a risk of error. The duty of competence requires a lawyer to carefully review, correct, and validate the outputs, including, for example, verifying the accuracy of a contract review performed with AI, the correctness of a deposition summary, or the truthfulness of case citations generated by AI and used in support of briefs in court or legal advice, to name a few.
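By way of illustration only, the following Python sketch shows one way a firm might surface citation-like strings in an AI-generated draft so that each one is flagged for manual verification. The pattern is a deliberately rough, hypothetical assumption that will miss many citation formats; such a script can supplement, but never substitute for, the lawyer’s own review.

```python
import re

# Hypothetical sketch: extract citation-like strings from an AI-generated
# draft so each one can be verified by a human against an authoritative
# source. The pattern is an illustrative assumption, not a standard, and
# will miss many citation formats.
CASE_CITATION = re.compile(
    r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.'\s]{1,30}\s+\d{1,5}\b"  # e.g. "114 Cal.App.5th 426"
)

def citations_to_verify(draft: str) -> list[str]:
    """Return the unique citation-like strings found in the draft."""
    return sorted(set(CASE_CITATION.findall(draft)))

if __name__ == "__main__":
    draft = "See Noland v. Land of the Free L.P., 114 Cal.App.5th 426 (2025)."
    for cite in citations_to_verify(draft):
        print("VERIFY MANUALLY:", cite)
```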
Lawyer Professional Judgment
A lawyer can competently represent a client only when the lawyer’s independent professional judgment is preserved. The exercise of a lawyer’s professional judgment is a combination of knowledge, skill, experience, and ethical principles that allows lawyers to make sound decisions in the client’s interest. Lawyers retain full responsibility for the representation, and therefore over-reliance on AI-generated outputs should be avoided. As clearly stated in ABA Formal Opinion 512,[13] “lawyers may not abdicate their responsibilities by relying solely on a GAI tool to perform tasks that call for the exercise of professional judgment” and “lawyers may not leave it to GAI tools alone to offer legal advice to clients, negotiate clients’ claims, or perform other functions that require a lawyer’s personal judgment or participation.” The risks of lawyers’ uncritical reliance on AI-generated content are not limited to providing incompetent representation or inaccurate legal advice; such reliance may also erode lawyers’ core skills and their ability to exercise professional judgment. As of this writing, AI tools have proven incapable of replicating the fundamental human skills required to exercise professional judgment (e.g., moral judgment, ethical reasoning, negotiation skills, deep contextual understanding, risk perception and tolerance, analytical thinking, and emotional intelligence). The ability to make sound judgments and to weigh different and conflicting positions remains exclusive to human beings. Unlike other technologies, the use of AI in legal practice may lead lawyers to gradually lose the capacity to make the judgments now delegated to AI. This is a significant concern, and lawyers must remain vigilant with such rapidly evolving technology.
Rule 1.3 (Diligence)
Rule 1.3(a) mandates that lawyers act with reasonable diligence in representing a client. Rule 1.3(b) provides that “reasonable diligence shall mean that a lawyer acts with commitment and dedication to the interests of the client….” With reference to the use of AI, the duty of diligence requires lawyers to provide thorough and prepared representation by actively ensuring the technology is used responsibly and its output is accurate and reliable. Considering that AI-powered tools may not understand the factual context, lawyers must critically analyze the AI generated content and verify that the AI outputs are not only free from errors, but also aligned with clients’ interests. As stated in the State Bar Guidelines, lawyers must apply diligence and prudence with respect to facts and law and ensure that AI-generated content accurately reflects and supports the interests and priorities of the client in the matter at hand, including as part of advocacy for the client.[14]
Rule 3.1 (Meritorious Claims and Contentions) and Rule 3.3 (Candor Towards the Tribunal)
Pursuant to Rules 3.1 and 3.3, a lawyer cannot bring or defend frivolous proceedings and cannot knowingly make false statements of law or fact or fail to correct false statements previously made to a tribunal. When lawyers file a case based on false information generated by AI-powered tools or submit a brief containing references to non-existent cases or inaccurate citations generated by AI, without verifying the validity of the research, lawyers risk not only violating Rule 1.1, but also Rules 3.1 and 3.3.
AI outputs must be carefully reviewed so that information presented to the tribunal is not false, including analysis and citations. A lawyer should also check for any rules, orders or other requirements in the relevant jurisdiction that may necessitate the disclosure of the use of generative AI.[15]
Several federal district judges have issued judicial standing orders on AI requiring disclosure of the use of AI to generate any portion of a brief, pleading, or other filing, and a separate certification that the filer has reviewed the source material and verified that the artificially generated content is accurate and complies with the filer’s obligations pursuant to Federal Rule of Civil Procedure 11,[16] which imposes a personal and non-delegable responsibility on attorneys to validate the truth and legal reasonableness of the papers filed in the action.[17]
Rule 8.4 (Misconduct)
Lawyers’ ethical duties established by Rule 8.4 may also be violated when lawyers submit to court fictitious cases and quotes generated by AI. Rules 8.4(c) and (d) provide, respectively, that it is professional misconduct for a lawyer to “engage in conduct involving dishonesty, fraud, deceit or reckless or intentional misrepresentation” and “in conduct that is prejudicial to the administration of justice.”[18]
Rule 1.6 (Confidential Information of a Client)
Lawyers are bound by the duty of confidentiality towards their clients under Rule 1.6 and California Business and Professions Code Section 6068(e)(1). As part of their duty of confidentiality, lawyers must make reasonable efforts to protect client confidential information from unauthorized access, or inadvertent or unauthorized disclosure.
When a lawyer inputs client confidential information into an AI tool to perform tasks such as legal research, contract review, or litigation support, the risks to which the client’s information is exposed vary based on several factors, including the security protocols of each AI tool and whether or not the client’s information is used to train the AI tool. When the client’s information is used to train the AI tool, and that tool is public (e.g., Copilot, ChatGPT), there is a high risk that the client’s information might later be disclosed as output to users outside the lawyer’s law firm, which may constitute unauthorized disclosure of client confidential information.
It is of paramount importance that, before inputting client confidential information into an AI-powered tool, lawyers understand the capabilities and limitations of the tool and carefully review the AI provider’s privacy policy and security protocols, including data retention policies, as well as the terms of service, to determine how the AI tool is trained, where the information will be stored, and how confidentiality is protected. The client’s informed consent to the lawyer’s use of AI may also be required, which in turn may require lawyers to provide their clients with a detailed description of the benefits and risks of the AI tool and how it will be used.
The State Bar Guidelines[19] provide that “a lawyer must not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections” and further state that “a lawyer must anonymize client information and avoid entering details that can be used to identify the client.” Accordingly, it is also crucial that lawyers adopt adequate precautions to prevent clients’ confidential information from becoming part of the AI’s training set, by confirming that the written agreement with the AI provider expressly excludes the use of client confidential information to train the AI tool.
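As a purely illustrative sketch, the Python snippet below shows the general idea of a redaction pass that strips obvious client identifiers before text is submitted to a third-party tool. The patterns and helper names are hypothetical assumptions; real anonymization requires far more robust methods (e.g., named-entity recognition plus human review) and, per the Guidelines, a tool with adequate confidentiality protections.

```python
import re

# Hypothetical illustration only: a minimal redaction pass a firm might run
# before sending text to a third-party AI tool. Real anonymization requires
# far more robust techniques; this sketch only shows the general idea.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    sample = "Per Jane Doe (jane.doe@example.com, 555-123-4567), the claim..."
    print(redact(sample, client_names=["Jane Doe"]))
    # -> "Per [CLIENT] ([EMAIL], [PHONE]), the claim..."
```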
Rule 1.4 (Communication with Clients)
Rule 1.4(a)(2) states that a lawyer shall “reasonably consult with the client about the means by which to accomplish the client’s objectives in the representation.” [emphasis added] Lawyers’ duty to communicate with their clients includes providing any information that is relevant for the representation and that is necessary to permit a client to make informed decisions about the representation. In light of Rule 1.4 and considering AI tools as “means” used by lawyers to represent clients, lawyers must disclose their use of AI when requested by the client and when such disclosure is necessary for the client to make informed decisions regarding the representation.
As highlighted earlier in this article, when lawyers intend to input information related to the representation into an AI tool, they must communicate and explain to their client the risks and benefits of using AI and how the AI tool works, and must obtain the client’s informed consent to that specific use of AI. Similarly, lawyers must consult clients about the use of AI when its output will influence a significant decision in the representation, such as when a lawyer relies on AI to evaluate potential litigation outcomes or jury selection.[20] Lawyers are also required to disclose the use of AI when such use may have an impact on the basis of the lawyer’s fee and its reasonableness.
Lawyers should generally keep open communication with their clients regarding the use of AI and, in accordance with the State Bar Guidelines,[21] lawyers should review any applicable client instructions or guidelines that may restrict or limit the use of generative AI.
Rule 1.5 (Fees for Legal Services)
When engaging with AI providers, lawyers must also be mindful of Rule 1.5 and charge their clients only for the actual time spent[22] on the matter, without adding an extra charge for the time saved by using AI or for the time spent learning to use AI. The cost of AI tools varies based on the tool and the services provided. Lawyers may bill the cost of a particular AI tool to a client when the tool is used for a specific task performed only for that client; when the tool is used generally to maintain the legal practice, it should be treated as general office overhead.[23] Considering the difficulty of determining how to charge clients for the use of AI, the best approach is a written fee agreement with the client explaining the basis for all fees and costs, including those associated with the use of AI.
Supervisory Duties: Rule 5.1 (Responsibilities of Managerial and Supervisory Lawyers) and 5.3 (Responsibilities Regarding Nonlawyer Assistants)
When employing AI tools in their legal practice, lawyers also have supervisory responsibilities pursuant to Rules 5.1 and 5.3. Managerial lawyers must make reasonable efforts to ensure that clear policies on the permissible use of AI are implemented, and that adequate training is provided to lawyers and non-lawyers within the firm. The same duties apply to outsourced work. When outsourcing services to AI providers, lawyers are responsible for ensuring that AI providers adequately protect the security and confidentiality of information related to the representation. For this purpose, lawyers should conduct a third-party risk assessment, as described in the remainder of this article.
Third-Party Risk Management
Before selecting an AI provider, lawyers must evaluate the specific tool against their ethical and practice needs, based on a reasonable understanding of (i) how the tool works; (ii) the benefits and the risks of the tool; and (iii) how the tool uses and retains information during and after termination of the services.
A third-party risk management program, including standard processes to assess, monitor and mitigate the risks introduced by AI providers, constitutes a structured approach to managing third-party risks. In this respect, ABA Formal Opinion 477R[24] highlights the requirement of a “process” to assess risks, identify and implement appropriate security measures responsive to those risks, verify that they are effectively implemented, and ensure that they are continually updated in response to new developments.
As a result, the scope of a thorough risk assessment of AI providers should include the following (a structured sketch of such a checklist follows the list):
- How information is collected, stored, and used, and whether it is shared with any third parties.
- How the AI model is trained, and whether the provider allows customers to opt out of having their data used for model training.
- Technical security measures (e.g., data encryption at rest and in transit, access control and authentication measures to limit access to information on a “need-to-know” basis, antivirus software, and firewalls).
- Third-party privacy and security policies and procedures.
- Completion of a privacy and security training program by the third party’s employees.
- Completion of regular audits to identify and document potential vulnerabilities, compliance issues, and past incidents, if any.
- Whether the AI provider engages any subcontractor in connection with the services, to ensure that subcontractors have in place adequate measures to protect the confidentiality and security of the information.
- Whether the third party operates within a privacy legal framework (e.g., GDPR[25]) imposing strict requirements for data processing.
- Compliance with any applicable laws based on the scope of the services (e.g., cross-border data transfer).
- Consistency of the published terms of use with the agreement executed with the AI provider.
- Third-party reputation and reliability.
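To make such an assessment repeatable and documented, a firm might encode the checklist above as structured data, so that every AI provider is reviewed against the same criteria and open items are tracked. The following Python sketch is a hypothetical illustration; the class names, questions, and vendor name are assumptions, not a standard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of encoding the assessment items above as structured
# data. Field names and questions are illustrative assumptions.

@dataclass
class AssessmentItem:
    question: str
    satisfied: bool = False  # set True once the provider demonstrates compliance
    notes: str = ""          # evidence reviewed, contract section, etc.

@dataclass
class VendorAssessment:
    vendor: str
    items: list[AssessmentItem] = field(default_factory=list)

    def open_issues(self) -> list[AssessmentItem]:
        """Items the provider has not yet satisfied."""
        return [i for i in self.items if not i.satisfied]

if __name__ == "__main__":
    assessment = VendorAssessment(
        vendor="ExampleAI (hypothetical)",
        items=[
            AssessmentItem("Is client data excluded from model training by written agreement?"),
            AssessmentItem("Is data encrypted at rest and in transit?"),
            AssessmentItem("Are retention and deletion policies documented?"),
            AssessmentItem("Do subcontractors meet equivalent confidentiality standards?"),
        ],
    )
    for item in assessment.open_issues():
        print(f"OPEN: {item.question}")
```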
Conclusion
The use of AI has several positive effects: it increases efficiency, permits task automation, improves legal research, and enhances e-discovery and predictive case analysis. However, AI also carries risks (e.g., data security, privacy, bias) and presents challenges to lawyers’ ethical duties. Lawyers must approach the use of AI in client matters with caution, understand AI’s capabilities and risks, adopt necessary safeguards, communicate with clients, and always oversee and validate AI outputs in the exercise of their professional judgment.
* Emanuela Canegallo is an accomplished data privacy, security and tech transactions attorney with extensive experience in building and leading privacy programs for global corporations. Emanuela currently serves as Director of Privacy for Viking Cruises. Prior to Viking, Emanuela was with Hard Rock International, and earlier with Accenture. Previously, she practiced law for several years in international law firms in Milan, Italy, supporting clients in intellectual property law and data privacy. The views expressed herein are her own.
[1] See OECD (2024) “Explanatory Memorandum on the updated OECD definition of an AI system” Artificial Intelligence Papers, No. 8, OECD Publishing, https://doi.org/10.1787/623da898-en.
[2] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj.
[3] Cal. Bus. & Prof. Code § 22757, https://leginfo.legislature.ca.gov; Colo. Rev. Stat. § 6-1-1701 et seq. (2024), https://leg.colorado.gov/bills/sb24-205.
[4] Exec. Order “Ensuring a National Policy Framework for Artificial Intelligence” (12.11.2025), available at https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy.
[5] Cal. Civ. Code §§ 1798.100–1798.199, https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?division=3.&part=4.&lawCode=CIV&title=1.81.5&.
[6] National Institute of Standards and Technology (2023) “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”, https://www.nist.gov/itl/ai-risk-management-framework.
[7] State Bar of California (2023) “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law”, https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf; see also CLA-Task Force on Artificial Intelligence (2024) “Report on AI in the Practice of Law” available at https://calawyers.org/california-lawyers-association/california-lawyers-association-task-force-on-artificial-intelligence/.
[8] ABA House of Delegates, Resolution 112 (2019), available at https://www.americanbar.org/content/dam/aba/directories/policy/annual-2019/112-annual-2019.pdf.
[9] State Bar Guidelines, above, at page 2.
[10] ABA Comm. On Ethics & Prof’l Responsibility, Formal Op. 512 (2024) available at https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.
[11] Lacey v. State Farm General Insurance Co., 2:24-cv-05205 (C.D. Cal. 7.24.2025).
[12] Noland v. Land of the Free L.P., 114 Cal.App.5th 426 (2025) and People v. Alvarez, 114 Cal.App.5th 1115 (2025). Both opinions resulted in monetary sanctions against lawyers for filing briefs that contained fabricated legal citations and quotations generated by AI, and required that the State Bar be notified.
[13] ABA Comm. On Ethics & Prof’l Responsibility, Formal Op. 512 (2024) available at https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.
[14] State Bar Guidelines, above, at page 3.
[15] Ibid.
[16] See Standing Order for Civil Cases Assigned to Judge Stanley Blumenfeld, Jr. (C.D. Cal., March 1, 2024) available at: https://www.cacd.uscourts.gov/sites/default/files/documents/SB/AD/1.%20Civil%20Standing%20Order%20%283.1.24%29%20%5BFinal%5D.pdf. Moreover, a bankruptcy court in California has issued similar guidelines for the use of AI in documents submitted to the court. See Bankruptcy General Order no. 210 (Bankr. S.D. Cal., Nov. 18, 2025) available at https://www.casb.uscourts.gov/sites/casb/files/documents/general-orders/General%20Order%20210_2025-11-18.pdf.
[17] See Fed. R. Civ. P. 11(b).
[18] See, e.g., Lacey v. State Farm General Insurance Co., 2:24-cv-05205 (C.D. Cal. 7.24.2025).
[19] State Bar Guidelines, above, at page 2.
[20] ABA Comm. On Ethics & Prof’l Responsibility, Formal Op. 512 (2024) available at https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.
[21] State Bar Guidelines, above, at page 4.
[22] Ibid.
[23] ABA, Formal Op. 512, above, page 13.
[24] ABA Comm. On Ethics & Prof’l Responsibility, Formal Op. 477R (2017) “Securing Communication of Protected Client Information” available at https://www.americanbar.org/news/abanews/publications/youraba/2017/june-2017/aba-formal-opinion-477r–securing-communication-of-protected-cli/.
[25] Regulation EU 2016/679 (General Data Protection Regulation) issued by the European Parliament and Council of the European Union, OJ L 119, 4.5.2016.
