Apr 20, 2026
By Katherine Grillaert, Chief Strategy Officer, Assessed Intelligence
If you want to understand the gap between AI safety rhetoric and regulatory reality, consider what it means that two of the most consequential AI accountability decisions this month were made not in legislatures or agencies, but in corporate strategy meetings.
The week of April 14, 2026 offered two instructive examples of a familiar strategic logic: powerful companies shaping the conditions under which government oversight operates, often in ways that limit its reach. These cases raise hard questions about who bears risk and who holds power over deployment, and not all of those questions have easy answers.
Liability and Safety: The Illinois Debate
Illinois Senate Bill 3444 proposes shielding AI laboratories from civil liability for “critical harms,” defined in the bill as mass casualties, serious injuries to at least 100 people, or property damage exceeding $1 billion, provided the lab has published a safety framework and the harm was neither intentional nor reckless. The leading frontier labs are sharply divided.
OpenAI supports the bill. The position isn’t obviously indefensible. Proponents of liability caps can point to precedent: the nuclear and pharmaceutical industries have operated under modified liability regimes paired with stringent regulatory oversight, on the theory that unlimited exposure would deter socially beneficial but inherently risky development. Applied to AI, the argument is that catastrophic-liability exposure could chill development in ways that ultimately harm the public — and that self-published safety frameworks, however imperfect, create at least some accountability pressure through reputational risk.
Anthropic opposes the bill, arguing that removing liability eliminates one of the most powerful incentives for safety. Governance researchers have raised a related concern: when labs can self-certify compliance by posting a document online and then claim immunity from catastrophic harm, the “safety framework” requirement functions as a checkbox, not a constraint — especially if no regulator evaluates the content or implementation of those frameworks.
Both positions reflect genuine disagreement about how to design accountability for systems capable of diffuse, hard-to-attribute harm. Both also happen to serve the strategic interests of the companies holding them. That doesn’t make either wrong, but it is relevant context when evaluating the arguments on their merits.
Here is where the rhetoric-reality gap opens most clearly. The bill’s safety framework requirement sounds like accountability. In practice, it asks labs to self-certify compliance with standards they wrote themselves, with no independent verification and no regulator assigned to evaluate whether those frameworks actually bind behavior. That is not a safety mechanism. It is the language of safety wrapped around a liability shield.
What should give Illinois lawmakers particular pause is a specific structural feature: an automatic preemption clause that voids the bill the moment Congress passes overlapping federal legislation. Critics warn that this inverts the usual logic of state regulation. Instead of persisting as a floor that federal law can strengthen, the Illinois standard would dissolve on contact with any overlapping federal statute, however weak, after having normalized a self-certification model that Congress could simply adopt. The preemption clause doesn’t just limit liability. It limits the possibility of stronger accountability later.
When the developers of frontier AI disagree sharply on liability for mass casualties, the appropriate response from lawmakers is not to pick a side based on who lobbied harder. It’s to recognize that safety rhetoric without independent verification is not safety policy — and to demand more than a self-published framework as evidence of practices that actually bind behavior.
Timing as Strategy: Meta’s Name Tag
The second case illustrates a different mechanism for the same gap — one that requires no legislation at all.
This week, more than 75 civil society organizations, including the ACLU and EPIC, raised alarms over Meta’s reported plans to roll out “Name Tag,” a real-time facial recognition feature for its Ray-Ban smart glasses. The feature would allow wearers to silently identify strangers in public. Proponents point to potential accessibility benefits, including for people with face blindness (prosopagnosia). But it poses serious risks for abuse survivors, people in witness protection, LGBTQ+ individuals in unsupportive environments, and others with a legitimate interest in not being identified by strangers on the street.
The feature itself will be debated on its merits. The accountability concern lies in the reported execution strategy. An internal Reality Labs memo, first reported by the New York Times in February 2026 and cited in the April civil society open letter, revealed that Meta had weighed launching during a “dynamic political environment,” calculating that civil society organizations and regulators would have their attention focused elsewhere.
This is worth sitting with. Meta did not conclude that Name Tag was safe or that regulators had evaluated it and found no objection. It concluded that regulators would be too distracted to push back effectively. The safety question and the regulatory capacity question were treated as separate — and the latter was actively managed.
Aggressive product timing is not uncommon in tech. Companies routinely move fast in ambiguous regulatory environments. But there is a meaningful distinction between launching before rules exist and deliberately timing a launch to avoid oversight capacity you know exists but expect to be distracted. The former is opportunism. The latter is a strategic choice to ensure the gap between rhetoric and reality never closes — at least not before the product is already in market.
The feature may clear existing legal bars under current U.S. privacy and biometrics law. It may even prove to have legitimate uses that outweigh its risks. Those questions should be decided through a process where relevant watchdogs have the bandwidth to engage. The memo suggests Meta calculated they wouldn’t.
What the Gap Actually Costs
These two cases share a common dynamic: the gap between AI safety rhetoric and regulatory reality is not accidental. It is, in part, actively maintained — through liability frameworks that use the language of safety to limit accountability, and through deployment strategies that treat regulatory distraction as a resource to be managed.
That dynamic isn’t unique to AI. It tests whether regulatory institutions can keep pace with industries where technical complexity consistently outstrips oversight capacity. But it is particularly acute here, where the systems in question are capable of catastrophic harm at scale, and where the accountability structures being set now will be difficult to revise once they are normalized.
The liability structures and deployment strategies being institutionalized today will define what accountability looks like for years. That is not a reason for fatalism. It is a reason for urgency.
Three questions follow directly from these cases. Should liability frameworks for catastrophic AI harm rely on self-certification, or require independent verification — by regulators, third-party auditors operating under statutory authority, or both? Should biometric surveillance features in consumer products face pre-market review rather than post-hoc enforcement? And how should regulatory agencies build enough resilience into their oversight capacity that distraction stops being a viable product launch strategy?
Those questions are harder to answer than identifying bad actors. But they are the right questions — and they are ones that existing regulatory traditions, from product safety law to financial supervision, have at least begun to work through.
The gap between AI safety rhetoric and regulatory reality will not close on its own. State lawmakers, federal regulators, and civil society organizations all have a role in ensuring that managing that gap doesn’t quietly become the industry’s default operating mode — and that safety means something more than a document posted online.
Assessed Intelligence delivers vCISO and vCRAIO leadership, ARISE Framework™ implementation, and continuous assurance through the OPERATE retainer. If your organization is deploying AI and needs governance that operates at the speed of your systems, speak with an advisor.