
How Law Firms Can Safely Leverage AI in 2026

Ken Satkunam, CISM

March 19, 2026 · 10 min read

In February 2026, a federal appeals court sanctioned an attorney $2,500 after discovering 21 instances of fabricated quotes and misrepresentations in an AI-generated brief she submitted to the 5th Circuit. When confronted, the attorney initially denied using AI at all. The court was unsparing: "AI-generated inaccuracies in filings have become a pressing issue within the legal system, despite nearly three years of media coverage on similar occurrences." As of early 2026, a database maintained by a French legal data scientist had catalogued 239 instances of AI-produced inaccuracies in US attorney filings. Meanwhile, the Clio 2025 Legal Trends Report found that 79% of legal professionals use AI in some capacity—yet only 40% are using legal-specific AI solutions with appropriate data protection, down from 58% in 2024. That gap between adoption and compliance is where serious risk lives. This article is for managing partners and firm administrators who want to capture AI's genuine productivity benefits while staying on the right side of their professional obligations.

What Does ABA Formal Opinion 512 Require of Lawyers Using AI?

On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512—Generative Artificial Intelligence Tools—the first comprehensive national guidance on lawyers' ethical obligations when using AI. The opinion does not prohibit AI use. It does make clear that no AI tool changes a lawyer's professional obligations. Formal Opinion 512 identifies six areas where existing Model Rules apply directly to AI:

  • Competence (Rule 1.1): Lawyers must understand the capabilities and limitations of AI tools before using them—including how they handle data, what their error rates look like, and what hallucinations are and how they occur. "Using AI" is not the same as understanding AI. Competence requires the latter.
  • Confidentiality (Rule 1.6): Using a public AI tool that trains on user inputs—such as free-tier versions of ChatGPT or other consumer-grade tools—to process client information likely violates the duty of confidentiality. Data entered into these platforms may be retained, accessed by employees of the provider, or used to train future models. Lawyers must conduct vendor due diligence before entering any client information into an AI tool.
  • Communication (Rule 1.4): Clients generally want to know when AI is being used in their matters. The Clio 2025 Report found that 78% of clients want AI use disclosed—yet 35% of legal professionals said they rarely or never disclose it. Florida's Opinion 24-1 mandates disclosure when AI use affects client billing. Other states are moving toward broader disclosure requirements.
  • Supervision (Rules 5.1 and 5.3): Partners and supervising attorneys must establish policies governing AI use by associates and non-lawyer staff. Telling an associate to "use AI to draft this" without training, oversight, and verification structures is an ethics violation, not a time-saver.
  • Candor toward tribunals (Rule 3.3): AI-generated legal citations, case summaries, and arguments must be verified before submission to any tribunal. The proliferation of AI hallucination sanctions makes this non-negotiable. A lawyer cannot avoid responsibility for a fabricated citation by blaming the AI.
  • Reasonable fees (Rule 1.5): If AI dramatically reduces the time required to complete a task, billing the same hours as if the work had been done manually may be unreasonable. Firms need to think carefully about how AI use affects their billing practices and fee agreements.

What Are the Biggest Risks of AI Hallucinations in Legal Work?

AI hallucinations—instances where a language model generates confident-sounding but entirely fabricated content—represent the most immediate and embarrassing risk of AI use in legal practice. The problem is structural: large language models generate statistically plausible text, not verified facts. They do not know what they do not know, and they will cite a case that does not exist with the same apparent confidence as citing a real one.

The consequences in legal practice are severe:

  • Court sanctions: The 5th Circuit sanctioned the attorney in February 2026 despite finding no bad faith—the court's frustration was with the continuing pattern of AI reliance without verification. Earlier landmark cases, including Mata v. Avianca (2023), established that AI hallucinations in filings warrant sanctions, and courts have become less sympathetic, not more, as awareness of the risk has grown.
  • Malpractice exposure: An AI-generated brief or contract that contains a fabricated precedent or mischaracterizes a statute creates direct malpractice exposure. The client hired an attorney, not an algorithm, and the attorney is accountable for what is filed in their name.
  • Client harm in transactional work: AI-generated contract clauses that misstate legal standards, produce incorrect jurisdictional analysis, or fail to identify relevant precedent can cause direct harm to clients in deals and transactions—often without the firm or client discovering the error until it is too late.

Effective protection against hallucinations requires a firm-wide verification policy: every AI-generated citation must be verified in Westlaw, Lexis, or another authoritative legal database before use. Every AI-generated legal analysis must be independently reviewed by an attorney who understands the underlying law. "The AI said it" is not a defense, professionally or legally.
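Firms that want to operationalize this rule can automate the "flag" step, even though verification itself must stay human. Below is a minimal, hypothetical Python sketch — the regex and the `unverified_citations` helper are illustrative assumptions, not a real Westlaw or Lexis integration — that surfaces citation-like strings a reviewer has not yet confirmed:

```python
import re

# Illustrative pattern covering a few common federal reporter formats only;
# a production tool would need far broader coverage.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.[234]d|F\. Supp\. [23]d)\s+\d{1,4}\b"
)

def unverified_citations(draft: str, verified: set[str]) -> list[str]:
    """Return citation-like strings in the draft that no attorney has
    yet confirmed in an authoritative legal database."""
    return [c for c in CITATION_RE.findall(draft) if c not in verified]

# Hypothetical example with a made-up citation:
draft = "As held in Smith v. Jones, 123 F.3d 456, the standard applies."
print(unverified_citations(draft, verified=set()))  # → ['123 F.3d 456']
```

A script like this only narrows the checklist; it cannot tell a real case from a fabricated one, which is exactly why the human verification step remains mandatory.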

Which AI Tools Are Appropriate for Law Firms to Use?

Not all AI tools carry the same risk profile. The critical distinction is between legal-grade AI platforms designed for law firm use and consumer/generic AI tools that were not built with the legal profession's confidentiality obligations in mind.

Legal-grade AI platforms typically offer:

  • Data isolation—client inputs are not used to train the model or shared with other users
  • Data processing agreements or BAAs that meet professional responsibility standards
  • Integration with verified legal research databases, reducing (though not eliminating) hallucination risk
  • Audit logging and access controls appropriate for legal IT governance

Platforms in this category that have been adopted at scale include Harvey (used by large and mid-size firms, with 85% of users reporting that they complete work faster and power users saving an average of 36.9 hours per month), Clio Duo (built into a widely-used practice management platform and approved by 100+ bar associations), and Microsoft Copilot for Microsoft 365 when deployed in an enterprise configuration with appropriate data governance settings. An AllRize 2025 survey found that 89% of law firms already use Microsoft infrastructure, making a properly configured Microsoft 365 Copilot deployment a natural path for many firms—though only 2.4% have achieved seamless AI integration across their applications.

Consumer tools—free or low-cost versions of ChatGPT, Gemini, Claude, or similar platforms without enterprise data processing agreements—should not be used for any work involving client information. The Illinois Attorney Registration and Disciplinary Commission published a guide specifically warning that "public tools that are operated and controlled by an entity other than the lawyer or law firm may lack the proper ethical safeguards." The Clio 2025 data showing a decline in legal-specific AI use (from 58% to 40%) suggests many firms are moving in the wrong direction on this point.

What Do State Bar AI Disclosure Requirements Look Like in 2026?

As of March 2026, more than 35 state bar associations have issued formal guidance or opinions on AI in legal practice, and the pace is accelerating. While ABA Formal Opinion 512 provides the national framework, state-specific requirements vary:

  • Florida (Opinion 24-1): Mandates disclosure when AI use affects client billing or costs.
  • California: Published a Practical Guide emphasizing that competence requires understanding large language model risks before use, including hallucinations. Attorneys must conduct due diligence on any AI tool before using it for client work.
  • Texas (Opinion 705, February 2025): Human oversight is mandatory for all AI-generated legal work. AI-generated content must be verified before filing. Confidentiality requires attorneys to never share client information with unsecured AI tools.
  • Multiple states in 2026: States that were in the task force phase in 2025 are expected to issue formal opinions, with the trend converging around the ABA framework plus state-specific disclosure and billing requirements.

Disclosure requirements are expanding beyond billing to encompass disclosure whenever AI is used to generate substantive work product. Firms should review their engagement letter templates to include standard AI use disclosure language and obtain informed client consent for AI-assisted work—both as an ethics compliance measure and as a client relationship management best practice.

How Should Law Firms Build an AI Use Policy?

An AI use policy is no longer optional for law firms. ABA Formal Opinion 512 explicitly requires firms to establish AI policies under the supervisory obligations of Rules 5.1 and 5.3. A partner who allows associates and staff to "figure out AI on their own" is not just leaving efficiency gains on the table—they are creating documented ethics exposure.

A compliant AI use policy should include:

  • Approved tool list: Identify specifically which AI tools are approved for which types of work. Generic AI tools should not appear on this list for any client-related work without a data processing agreement reviewed by counsel.
  • Data handling rules: Explicit prohibition on entering client names, case details, privileged communications, or identifiable case facts into any non-approved tool. This rule must apply to attorneys, paralegals, and all support staff.
  • Verification requirements: Mandatory independent verification of all AI-generated legal citations, case summaries, and factual assertions before use in any client deliverable, filing, or communication.
  • Disclosure standards: When and how to disclose AI use to clients and courts, including template language for engagement letters and filing certifications.
  • Billing guidelines: How to handle billing for AI-assisted work—particularly when AI substantially reduces time spent on a task that would historically have been billed hourly.
  • Training requirements: All attorneys and staff who use AI must receive training on the policy, the tools, and the hallucination risk before using any AI tool for client work.
  • Quarterly review cycle: Given the pace of regulatory development, AI policies should be reviewed at minimum quarterly in 2026.
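An approved tool list is easier to enforce when it is encoded as data rather than buried in a PDF, because intake scripts and onboarding checklists can then consult it consistently. A minimal sketch, assuming hypothetical tool names and work categories (illustrative only, not endorsements or a real firm policy):

```python
# Approved-tool registry keyed by work type. Names and categories are
# placeholders; a real policy would be maintained by firm counsel and IT.
APPROVED_TOOLS = {
    "legal_research": {"harvey", "clio duo"},
    "document_drafting": {"microsoft 365 copilot"},
    # Consumer tools are deliberately absent: no client work, per policy.
}

def is_approved(tool: str, work_type: str) -> bool:
    """True only if the tool appears on the approved list for this work type."""
    return tool.lower() in APPROVED_TOOLS.get(work_type, set())

print(is_approved("Harvey", "legal_research"))      # approved
print(is_approved("chatgpt free tier", "legal_research"))  # not approved
```

The design choice worth noting is the default-deny posture: anything not explicitly listed, including an approved tool used for an unlisted work type, is rejected.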

What Cybersecurity Controls Does AI Use Require?

Beyond ethics compliance, AI use in law firms creates specific cybersecurity requirements that IT and firm administrators must address:

  • Shadow AI prevention: The AllRize 2025 survey found that 38.8% of law firms have no AI integration with existing applications, and another 31.8% report only limited integration. Where the firm provides no sanctioned tools, attorneys and staff tend to fill the gap themselves by independently downloading and using tools outside IT governance. This "shadow AI" adoption introduces data leakage risks that traditional security controls may not catch, so endpoint monitoring and acceptable use policies must specifically address AI tool use.
  • Vendor due diligence: Every approved AI tool requires a security review of the vendor's data handling practices, breach history, encryption standards, and contractual data protection commitments before approval.
  • Access controls: AI-generated work product should be treated as firm work product for access control purposes—retained in firm-controlled systems, not in personal cloud storage or AI platform histories.
  • Audit logging: Maintain records of what AI tools were used, by whom, and for what matters. This creates accountability, supports supervision obligations, and provides documentation in the event of a dispute about the AI's role in producing work product.
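An audit record does not need to be elaborate to be useful. The following is a hypothetical sketch of a minimal AI-use log entry; the field names and schema are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class AIUsageRecord:
    """One AI-use event: who used which tool, on which matter, and
    whether an attorney verified the output. Fields are illustrative."""
    user: str
    tool: str
    matter_id: str
    purpose: str
    output_verified: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIUsageRecord(
    user="associate@example.com",
    tool="approved-legal-ai-platform",
    matter_id="2026-0042",
    purpose="first draft of motion outline",
    output_verified=True,
)
print(json.dumps(asdict(record)))  # append to a firm-controlled log store
```

Even a simple append-only log like this supports the supervision obligations of Rules 5.1 and 5.3 and gives the firm documentation if the AI's role in a work product is ever disputed.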

What Should Law Firms Do Next?

The legal profession's AI adoption wave is not coming—it is already here. The question for managing partners in 2026 is not whether to use AI but whether your firm is using it in a way that protects clients, satisfies your ethical obligations, and captures the genuine efficiency benefits these tools offer. Firms that adopt AI thoughtfully and with governance in place will be more competitive than those that avoid it entirely—and far better positioned than those who adopt it without guardrails.

At NorthStar Technology Group, we help law firms evaluate, select, and deploy AI tools with the security architecture, data governance, and training programs that ABA Formal Opinion 512 and state bar guidance require. For a deeper look at how AI fits into a firm's broader technology strategy, see our article on agentic AI for law firms. To discuss your firm's AI readiness and cybersecurity posture, visit northstartechnologygroup.com/services.


About the author

Ken Satkunam, CISM

President & Founder, NorthStar Technology Group

Ken has spent over 25 years in IT leadership, serving in roles from technical support to CIO for organizations as large as 23,000 employees. He founded NorthStar Technology Group in 2000 to help regulated organizations build secure, compliant, and operationally resilient technology environments. Ken holds the Certified Information Security Manager (CISM) credential from ISACA and is the co-author of the Amazon best-seller "Cyber Attack Prevention." He has been quoted in industry publications including eWeek and DM News, and NorthStar has been recognized on the Inc. 5000 list in both 2024 and 2025.

