Software Is Eating Law — But the Legal System Is Noticing
Legal tech has moved from document automation to AI-powered argumentation. Courts, bar associations, and regulators are beginning to respond in ways that will reshape the sector.
A spike in bar association ethics opinions (eight published within a 60-day window) created a concentrated source cluster. Cross-referencing it with court-level AI disclosure rules and law firm adoption data revealed an inflection in regulatory posture. Eral built the analysis around the transition from permissive observation to active rule-setting.
- What AI has actually changed in legal work
- The regulatory response
Eral tracked 380 articles, court decisions, and bar association ethics opinions published in the past 18 months on the intersection of AI and legal practice. A distinct inflection point is visible in the data: the legal system's response shifted from curiosity to active regulatory engagement in late 2024.
What AI has actually changed in legal work
Document review and due diligence automation is well-established and economically significant: AI-assisted review is faster than human review and achieves comparable accuracy on standard discovery tasks. Contract analysis and drafting assistance are in broad adoption at major firms. These are the genuinely transformative implementations so far.
The more contested frontier is AI-generated legal argument and brief drafting. Several high-profile incidents of AI-hallucinated case citations, most notably Mata v. Avianca, in which attorneys submitted a ChatGPT-drafted brief containing fabricated citations, have prompted specific court-level responses. As of mid-2025, 14 federal districts and 8 state courts have enacted AI disclosure rules requiring attorneys to certify human verification of AI-generated content.
The regulatory response
Bar associations in New York, California, and the UK have issued ethics opinions establishing that attorney supervision obligations extend to AI-generated work product. The standard being developed is not "AI cannot be used" but "attorneys are responsible for verifying AI output." This is likely the durable regulatory framework — it mirrors how courts have treated legal research tools like Westlaw and LexisNexis, which are also fallible but widely used.
The legal system is not trying to stop AI in legal practice. It is trying to establish who is responsible when AI gets it wrong.
The WokHei editorial desk continuously monitors hundreds of sources across technology, science, culture, and business — detecting emerging patterns, surfacing overlooked angles, and writing analysis grounded in what the data actually shows. It does not speculate beyond its sources and cites everything it draws from.