The OpenAI Lawsuit: Emerging AI Liability Risks and Insurance Considerations

March 2026

The recent filing of Nippon Life Insurance Company of America v. OpenAI Foundation et al. (Case No. 1:26-cv-02448, N.D. Ill., filed March 4, 2026) represents a significant and distinctive development in AI liability litigation. While OpenAI has faced numerous prior suits, primarily centered on copyright infringement, data misuse, and other issues, this case stands out as a prominent civil action directly alleging that a consumer-facing generative AI (ChatGPT), by providing personalized legal assistance to a pro se litigant, enabled the unlicensed practice of law under Illinois statute (705 ILCS 205/1), tortiously interfered with a settlement contract, and aided an abuse of process. Nippon claims these actions caused approximately $300,000 in compensatory damages (primarily defense costs and related expenses), plus additional reputational damage from the public, inflammatory filings, with total exposure further amplified by a demand for $10 million in punitive damages.

Key Allegations in the Complaint 

The underlying facts, as pled: A claimant, Graciela Dela Torre, settled a long-term disability benefits dispute with Nippon in January 2024, executing a full release and securing dismissal with prejudice. In 2025, after expressing regrets and receiving confirmation from her attorney that the settlement was final and could not be reopened, Dela Torre uploaded the attorney communications to ChatGPT. The AI purportedly analyzed them as “gaslighting,” invalidated her attorney’s advice, encouraged her to terminate counsel, drafted Federal Rule of Civil Procedure 60(b) arguments to vacate the settlement (alleging coercion and newly discovered evidence), and assisted in preparing a motion to reopen the case.  

Dela Torre then fired her attorney and, proceeding pro se, filed the motion to reopen, relying on ChatGPT's guidance and drafts. When the court denied reopening in February 2025 and reaffirmed the settlement's validity, the filings allegedly continued with ChatGPT's ongoing assistance. A new lawsuit was initiated in February 2025 (Case No. 1:25-cv-01483), initially against other parties. Per the complaint, ChatGPT then helped amend the pleading in March 2025 to add Nippon as a defendant, reasserting claims covered by the prior release and thereby allegedly breaching it. The docket reportedly saw over 44 motions, memoranda, demands, petitions, and requests, plus 14 standalone judicial notices, many claimed to be meritless, procedurally improper, and drafted with ChatGPT's assistance. Nippon attributes these filings to an ulterior motive of sustained harassment and revenge, driven by the claimant's ongoing grievances, rather than any pursuit of legitimate relief.

Nippon seeks declaratory relief, injunctions barring OpenAI from providing legal assistance in Illinois, $300,000 in compensatory damages (tied to relitigation expenses and related harm), and $10 million in punitive damages.

Insurance Coverage Analysis 


These claims bridge professional liability and potential oversight/governance exposures, testing the fit between Technology Errors & Omissions (Tech E&O) and Directors & Officers (D&O) insurance.

Tech E&O is likely positioned as the primary responder. These policies cover negligence, errors, omissions, or failures in professional services/products causing third-party financial loss. The allegations center on ChatGPT’s output—legal research, analysis, advice, and document drafting—that allegedly induced breach and enabled downstream harm, including hallucinated citations (e.g., a nonexistent “Carr v. Gateway” case). This aligns with classic E&O triggers for defective tech services.  

A key consideration is the need to closely examine Tech E&O exclusions when assessing coverage for generative AI/LLM activities. Many policies contain exclusions for professional services of a traditional nature (e.g., legal, medical, or accounting advice), rendering of opinions, or activities that could be construed as practicing a licensed profession. In this context, where an LLM generates tailored legal arguments, drafts pleadings, or provides case-specific guidance, particularly to a pro se user in live proceedings without professional oversight, carriers may argue these fall under excluded "professional services" rather than covered tech outputs. Policyholders must review definitions of "professional services," "media content," or "technology services" carefully, and consider negotiating narrower exclusions or AI-specific endorsements, to avoid coverage denials when AI performs functions adjacent to regulated professions. Note that while the complaint ties compensatory damages to $300,000 in legal fees and costs, it also references unquantified "other damages" like reputational harm, which could push total exposure higher if proven.

D&O coverage could play a secondary or overlapping role, particularly for claims implicating executive or board-level decisions on AI design, safety protocols, or policy timing (e.g., the restriction on “tailored legal advice” added only in October 2025). Side A (individual) or Side C (entity) might respond to alleged mismanagement contributing to the product’s deployment.  

The tortious interference with contract claim warrants particular attention. Under Illinois law, it requires a valid and enforceable contract, the defendant's knowledge of that contract, an intentional and unjustified inducement of breach, an actual breach, and resulting damages. Nippon alleges OpenAI/ChatGPT was aware of the settlement terms via user prompts yet generated arguments and drafts that facilitated the challenge, potentially framing this as a strategic or governance-level failure in how the AI was built and constrained, rather than a mere output error. This element could lean toward D&O territory in some policy interpretations, highlighting the blended nature of the exposure.

Potential Intersection with Employment Practices Liability (EPL) 


While this particular case does not appear to trigger Employment Practices Liability exposures, it is not far-fetched to envision scenarios where AI/LLM outputs intersect with EPL risks. For example, if an employer’s use of generative AI in HR processes (e.g., performance reviews, hiring recommendations, or disciplinary drafting) produces biased, discriminatory, or harassing content, it could give rise to claims of wrongful termination, harassment, or disparate impact. Algorithmic “advice” or document generation in employment contexts might blur lines between tech errors (E&O) and employment wrongs (EPL).  

It is therefore essential that insurance programs avoid broad EPL exclusions (or, better yet, include a carve-back or affirmative coverage extension) when Tech E&O and EPL sit in separate towers. Without this, allocation disputes could arise, leaving gaps in protection for emerging hybrid risks.

The Case for Imagining Combined D&O + Tech E&O Structures 

These intersections, where a single AI output spirals into both governance-level fallout and professional-service-style harm, recall the financial institutions (FI) world, where D&O and E&O have long been blended into unified policies. In FI spaces (think banks, asset managers, mutual funds), the lines between management decisions (D&O) and professional services/errors (E&O) blur so naturally across fiduciary duties, investment advice, regulatory scrutiny, and back-office operations that separate towers often lead to messy allocation fights, gaps, or finger-pointing when a claim hits. Combining them under one form with consistent terms, shared limits, and coordinated handling became the smart, practical fix: it simplifies claims, reduces disputes, and better matches how risks actually manifest in that ecosystem.

While nothing like that mature blended structure exists yet on the tech/AI side—carriers are still mostly bundling Tech E&O with cyber/media or layering on AI endorsements/exclusions—it’s an intriguing thought experiment. As our lives, work, creativity, and decisions become ever more intertwined with AI (generative tools advising, automating, influencing at every layer), why wouldn’t we see similar policy mergers? Risks that once sat in neat silos—algorithmic defects causing third-party loss (Tech E&O) vs. board oversight failures or “AI washing” (D&O)—start overlapping in real, messy ways. A hallucinated legal draft leads to tortious interference; unchecked model bias triggers discrimination claims; governance lapses around safety protocols amplify downstream harms. The boundaries dissolve, just like they did in finance. 

Go a bit further down the rabbit hole—or peel back another Russian doll—and you land somewhere trippy: at the heart of it all could sit the AI itself as the ultimate risk assessor. Imagine an insurance company’s own AI (or a network of them) deeply understanding these converging exposures—modeling probabilities across hallucinations, bias cascades, regulatory tangles, reputational ripples—in real time. It doesn’t just underwrite; it anticipates, prices, and even shapes the blended coverage forms needed to close the gaps. The policy evolves with the risk, because the AI “gets” the entanglement better than any human silo ever could. 

This isn't about copying FI forms verbatim (Tech E&O isn't FI E&O, and AI risks are uniquely wild) but about letting imagination run: as AI permeates everything, insurers who dare to merge what we've kept separate could create something new, fluid, and truly protective. Brokers and carriers bold enough to experiment, pushing custom blends, AI-specific carve-backs, and unified towers, might just redefine coverage for an era where the lines aren't blurring… they're dissolving.

Practical Recommendations 

  • Audit current placements: Confirm Tech E&O definitions capture generative AI outputs; scrutinize exclusions for professional-adjacent activities (especially in pro se or unrepresented contexts); review D&O for professional-services exclusions and entity coverage breadth.  
  • Engage early on integrations: For AI-focused clients, explore carriers offering coordinated or blended forms, or advocate for custom structures (including EPL carve-backs where relevant).  
  • Risk mitigation: Document AI governance, safety testing, and usage policies to strengthen underwriting and claims positions. 

This lawsuit remains in its infancy (OpenAI has called the complaint “without merit”), but it underscores the need for forward-thinking coverage in an evolving risk landscape. Brokers and carriers who anticipate these convergences—and structure solutions accordingly—will best support the next generation of AI innovators. 

Connect with Katie directly via LinkedIn: https://www.linkedin.com/in/katherine-pope-esq/ 
