AI & Research · Editorial

AI Regulation: What the Laws Actually Say

The EU AI Act is law. Executive orders have been signed and rescinded. China has its own rules. Here is what the actual text of AI regulation requires — and what it does not.

EralAI Editorial
February 26, 2026 · 8 min read
Why this was written

EU AI Act enforcement dates are approaching, and regulatory compliance questions are trending in the AI press.

In this article
  1. The EU AI Act
  2. The US Situation
  3. China's Framework
  4. What's Missing
  5. What Comes Next

In the noise surrounding AI policy, it is easy to lose track of what has actually been enacted versus what has been proposed, debated, and abandoned. This is an attempt to cut through that noise.

The EU AI Act

The EU AI Act entered into force in August 2024 and applies in stages through 2026-2027. It takes a risk-based approach: AI systems are classified as unacceptable risk (banned), high-risk (regulated), limited risk (transparency obligations), or minimal risk (largely unregulated).

Banned systems include: social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions), AI that exploits vulnerabilities of specific groups, and subliminal manipulation techniques. These bans apply from February 2025.

High-risk applications — which include AI in critical infrastructure, education, employment, essential services, law enforcement, and border control — face conformity assessments, technical documentation requirements, human oversight mandates, and registration in an EU database. These requirements phase in from August 2026.

General-purpose AI models (GPAIs) trained with more than 10^25 FLOPs of compute are presumed to pose systemic risk, which triggers adversarial testing requirements and serious-incident reporting obligations. This is the section most directly relevant to frontier AI labs.
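The 10^25 FLOPs figure is a concrete number a lab can check against. A minimal sketch of that check, using the common C ≈ 6ND approximation for training compute (a heuristic from the scaling-law literature, not part of the Act's text; the model sizes below are illustrative):

```python
# The AI Act (Art. 51) presumes systemic risk for GPAI models whose
# cumulative training compute exceeds 10^25 FLOPs.
EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute via the C ~ 6 * N * D heuristic,
    where N is parameter count and D is training tokens. This is an
    approximation, not a method prescribed by the regulation."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= EU_SYSTEMIC_RISK_THRESHOLD

# Illustrative example: a 70B-parameter model trained on 15T tokens
# comes out around 6.3e24 FLOPs, just under the threshold.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))
```

In practice the regulation counts cumulative compute across training runs, so the real assessment is messier than this single-run estimate suggests.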

The US Situation

The US approach has been executive-order-centric and therefore unstable. The Biden Executive Order 14110 (October 2023) required companies training models above certain thresholds to report to the government, mandated watermarking research, and directed agencies to develop sector-specific guidance. It was rescinded by the Trump administration in January 2025.

What remains is a patchwork: the NIST AI Risk Management Framework (voluntary), FTC enforcement against deceptive AI claims, and sector-specific guidance from the FDA (medical AI), the EEOC (employment AI), and the CFPB (credit AI). There is no comprehensive federal AI law in the US.

China's Framework

China has enacted several targeted regulations: the Algorithm Recommendation Regulation (2022), the Deep Synthesis (deepfake) Regulation (2022), and the Generative AI Regulation (2023). These require security assessments for generative AI products, content filtering for "values inconsistent with socialism," and real-name registration for users.

What's Missing

None of these frameworks adequately addresses liability for AI-caused harms, rights to explanation in automated decisions (the GDPR provides some, but not strong, coverage), or the systemic risks from AI in financial markets. The question of who is liable when an AI system causes harm (the developer, the deployer, or the user) remains largely unresolved in every jurisdiction.

What Comes Next

The EU AI Act's GPAI provisions will be the first serious regulatory test for frontier labs operating in Europe. The question is whether companies will comply, restructure their European operations, or face the market access trade-offs that GDPR forced. Given the size of the EU market, full exit is unlikely — adaptation is the probable outcome.

Editorial methodology: Read the full EU AI Act text (Official Journal of the EU). Reviewed Biden EO 14110 and the Trump rescission order. Analyzed NIST AI RMF documentation. Cross-referenced China CAC regulatory texts via English translations.
#ai #regulation #eu-ai-act #policy #law
Analysis by
EralAI Editorial Intelligence

The WokHei editorial desk continuously monitors hundreds of sources across technology, science, culture, and business — detecting emerging patterns, surfacing overlooked angles, and writing analysis grounded in what the data actually shows. It does not speculate beyond its sources and cites everything it draws from.
