AI Regulation: What the Laws Actually Say
The EU AI Act is law. Executive orders have been signed and rescinded. China has its own rules. Here is what the actual text of AI regulation requires — and what it does not.
EU AI Act enforcement dates are approaching, and regulatory compliance questions are trending in the AI press.
- The EU AI Act
- The US Situation
- China's Framework
- What's Missing
- What Comes Next
In the noise surrounding AI policy, it is easy to lose track of what has actually been enacted versus what has been proposed, debated, and abandoned. This is an attempt to cut through that noise.
The EU AI Act
The EU AI Act entered into force in August 2024 and applies in stages through 2026-2027. It takes a risk-based approach: AI systems are classified as unacceptable risk (banned), high-risk (regulated), limited risk (transparency obligations), or minimal risk (largely unregulated).
Banned systems include: social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions), AI that exploits vulnerabilities of specific groups, and subliminal manipulation techniques. These bans apply from February 2025.
High-risk applications — which include AI in critical infrastructure, education, employment, essential services, law enforcement, and border control — face conformity assessments, technical documentation requirements, human oversight mandates, and registration in an EU database. These requirements phase in from August 2026.
General-purpose AI models (GPAIs) above a certain capability threshold (10^25 FLOPs training compute) face systemic risk designations, adversarial testing requirements, and incident reporting obligations. This is the section most directly relevant to frontier AI labs.
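To make the threshold concrete, a common rule of thumb estimates transformer training compute as roughly 6 × N × D FLOPs, where N is parameter count and D is training tokens. The sketch below uses that approximation to check hypothetical models against the Act's 10^25 FLOP line; the model sizes are illustrative, not references to any real system.

```python
# Back-of-envelope check against the EU AI Act's 10^25 FLOP systemic-risk
# threshold for general-purpose AI models. Uses the widely cited
# ~6 * N * D approximation for transformer training compute
# (N = parameters, D = training tokens). All figures are illustrative.

SYSTEMIC_RISK_THRESHOLD = 1e25  # cumulative training compute, in FLOPs


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6 * N * D rule of thumb."""
    return 6 * params * tokens


def crosses_threshold(params: float, tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds 10^25 FLOPs."""
    return training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD


# A hypothetical 70B-parameter model trained on 15T tokens:
print(training_flops(70e9, 15e12))    # ~6.3e24 -- below the threshold
# A hypothetical 400B-parameter model trained on 15T tokens:
print(training_flops(400e9, 15e12))   # ~3.6e25 -- above it
```

The point of the exercise is that the threshold is a compute proxy for capability, not a direct capability test: a lab can estimate its regulatory exposure from parameter count and dataset size before training begins.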
The US Situation
The US approach has been executive-order-centric and therefore unstable. Executive Order 14110 (October 2023), signed by President Biden, required companies training models above certain compute thresholds to report to the government, mandated watermarking research, and directed agencies to develop sector-specific guidance. It was rescinded by the Trump administration in January 2025.
What remains is a patchwork: NIST AI Risk Management Framework (voluntary), FTC enforcement against deceptive AI claims, sector-specific guidance from FDA (medical AI), EEOC (employment AI), and CFPB (credit AI). There is no comprehensive US federal AI law.
China's Framework
China has enacted several targeted regulations: the Algorithm Recommendation Regulation (2022), the Deep Synthesis (deepfake) Regulation (2022), and the Generative AI Regulation (2023). Together these require security assessments for public-facing generative AI products, content controls to filter material inconsistent with "core socialist values," and real-name registration for users.
What's Missing
None of these frameworks adequately addresses: liability for AI-caused harms, rights to explanation in automated decisions (GDPR's Article 22 offers partial coverage, but its scope is narrow), or systemic risks from AI in financial markets. The question of who is liable when an AI system causes harm — the developer, the deployer, or the user — remains largely unresolved in every jurisdiction.
What Comes Next
The EU AI Act's GPAI provisions will be the first serious regulatory test for frontier labs operating in Europe. The question is whether companies will comply, restructure their European operations, or face the market access trade-offs that GDPR forced. Given the size of the EU market, full exit is unlikely — adaptation is the probable outcome.