Assessed Intelligence | Your Secure and Responsible Technology Partner

Across The Industry April 1, 2026 · Policy, Regulation & AI Industry Developments


POLICY & REGULATION


Tags: News | United States · Date: March 20, 2026

White House Releases National AI Legislative Framework, Calls on Congress to Override State Laws

On March 20, the Trump administration released its National Policy Framework for Artificial Intelligence, outlining seven legislative recommendations for Congress to adopt a single federal standard and preempt state-level AI laws deemed to impose undue burdens on innovation. The White House stated its intent to convert the framework into signed legislation within the year.

The framework advises against creating a new federal AI regulatory body, directing Congress toward sector-specific oversight through existing agencies and industry-led standards. It addresses six priority areas: child safety and data protections for minors; intellectual property and creator rights; free speech and anti-censorship guardrails for AI platforms; AI infrastructure and energy cost protections for ratepayers; workforce development and AI literacy; and enabling innovation through regulatory sandboxes. The framework is a policy document and does not impose new compliance obligations or displace existing state law. Congressional action remains required for any preemptive effect.

Democratic lawmakers introduced the GUARDRAILS Act the same day, which would repeal the underlying December 2025 executive order and block federal preemption of state AI regulations. Senator Schatz is expected to introduce companion legislation in the Senate. Organizations operating under California, Colorado, Texas, or other state AI frameworks should not alter their compliance posture based on the framework alone.

Source: https://www.cnbc.com/2026/03/20/trump-ai-policy-framework.html


Tags: News | European Union · Date: March 26, 2026

European Parliament Votes 569–45 to Open AI Act Omnibus Trilogue; April 28 Agreement Targeted

The European Parliament formally adopted its negotiating position on the Digital Omnibus on AI by a 569–45 margin on March 26, clearing the path for trilogue negotiations with the Council of the EU and the European Commission. An initial political trilogue convened the same day; technical sessions began immediately after. The Cypriot Council Presidency has set a target of reaching agreement by April 28, ahead of the EU AI Act’s August 2 general application date.

The Omnibus proposal carries significant compliance implications. Both the Parliament and Council positions propose extending the application deadline for high-risk AI system obligations to December 2, 2027 for Annex III systems (including those used in biometrics, employment, and education) and August 2, 2028 for Annex I systems embedded in regulated products. A new November 2, 2026 deadline for machine-readable watermarking of AI-generated audio, image, video, and text content has been introduced and is not subject to the same extension. The Parliament’s position also introduces a new prohibited practice targeting AI systems capable of generating non-consensual explicit imagery of identifiable individuals, and expands the AI Office’s supervisory authority over general-purpose AI models.

Organizations subject to the EU AI Act must not treat the proposed deadline extensions as confirmed. The current August 2, 2026 deadlines remain in force until a final trilogue text is adopted and published. Compliance preparation should continue, with particular attention to the November 2026 watermarking obligation, which is not covered by the proposed extension.

Source: https://www.onetrust.com/blog/how-the-eu-digital-omnibus-reshapes-ai-act-timelines-and-governance-in-2026/


Tags: News | United States · Date: April 1, 2026

Commerce Department Opens 90-Day Proposal Window for Full-Stack AI Export Packages

The U.S. Department of Commerce began accepting proposals today from industry-led consortia to export full-stack AI technology packages under President Trump’s American AI Exports Program. Eligible packages must bundle AI-optimized hardware, data center storage, models, cybersecurity measures, and sector-specific applications. The proposal window runs 90 days under the AI Action Plan and Executive Order 14320.

Commerce is accepting two consortium types: pre-set groups offering standardized full-stack packages for ongoing global deployment, and on-demand consortia formed to address specific export opportunities with a targeted technology stack. Proposals are evaluated in consultation with the Secretaries of State and Energy and the Director of the Office of Science and Technology Policy. Approved consortia receive priority export control license reviews, access to federal credit programs, and dedicated government-to-government diplomatic advocacy through the Economic Diplomacy Action Group.

OSTP Director Michael Kratsios has characterized the initiative as a direct instrument for countering Chinese AI infrastructure exports to allied nations. Organizations with AI hardware, software, or services capabilities relevant to international deployment should assess whether consortium participation aligns with their export control obligations and national security compliance posture before submitting proposals.

Source: https://www.trade.gov/press-release/department-commerce-announces-new-american-ai-exports-program-phase


AI INDUSTRY


Tags: News | Security | Industry · Date: March 31, 2026

Anthropic Leaks Claude Code Source Code via npm Packaging Error; Concurrent Supply-Chain Attack Identified

Anthropic confirmed on March 31 that version 2.1.88 of its Claude Code npm package was published with a debug source map file pointing to a publicly accessible archive on the company’s Cloudflare R2 storage. The archive contained approximately 512,000 lines of TypeScript source code across nearly 1,900 files. Anthropic attributed the incident to human error in its release packaging process and stated that no customer data or credentials were involved. The affected package version was pulled from the npm registry within hours of public disclosure.

The leaked code represents Claude Code’s agentic harness: the orchestration layer that wraps the underlying Claude model and directs its use of tools, file management, and multi-agent workflows. Security researchers reviewing the exposed codebase identified architectural details including a three-layer memory management system, references to a background daemon mode feature, and multiple mentions of an unreleased model codenamed “Capybara.” A separate, unrelated supply-chain attack on the axios npm package occurred during the same window, distributing a remote access trojan to users who installed or updated Claude Code between 00:21 and 03:29 UTC on March 31. Organizations that updated during that period should audit their dependency lockfiles for axios versions 1.14.1 or 0.30.4 and rotate any secrets accessible from those environments.
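The lockfile audit described above can be scripted. The sketch below is a minimal illustration, assuming npm's package-lock.json v2/v3 format, which records every installed package under a top-level "packages" map; the function name and structure are illustrative, not part of any vendor tooling, and the flagged version strings are the ones reported in this incident.

```python
import json

# axios versions reported as compromised in the March 31 supply-chain attack.
COMPROMISED_AXIOS_VERSIONS = {"1.14.1", "0.30.4"}

def find_compromised_axios(lockfile_text: str) -> list[str]:
    """Return lockfile paths of axios entries pinned to a compromised version.

    Assumes npm's package-lock.json v2/v3 layout, where every installed
    package (including nested copies) appears under the "packages" map.
    """
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # axios entries end in "node_modules/axios", possibly nested deeper.
        if path.endswith("node_modules/axios") and meta.get("version") in COMPROMISED_AXIOS_VERSIONS:
            hits.append(f"{path}@{meta['version']}")
    return hits

# Example: a minimal lockfile fragment with one compromised entry
# and one clean nested copy.
sample = json.dumps({
    "lockfileVersion": 3,
    "packages": {
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/foo/node_modules/axios": {"version": "1.6.0"},
    },
})
print(find_compromised_axios(sample))  # ['node_modules/axios@1.14.1']
```

A hit from a scan like this should be treated as an indicator of exposure, not proof of compromise; secret rotation for the affected environment is still the prudent response.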

The incident follows a separate disclosure earlier in the week in which approximately 3,000 Anthropic internal files were inadvertently made publicly accessible through the company’s content management system. Taken together, these events illustrate that build pipeline configuration and release hygiene are material security controls for AI development organizations, not only for protecting intellectual property but also for maintaining the integrity of the software supply chain on which enterprise customers depend.

Source: https://www.cnbc.com/2026/03/31/anthropic-leak-claude-code-internal-source.html


Tags: News | United States | Industry · Date: March 20, 2026

Democrats Introduce GUARDRAILS Act to Preserve State AI Authority as White House Presses Federal Preemption

On March 20, Rep. Beyer and four Democratic colleagues introduced the Guaranteeing and Upholding Americans’ Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act, legislation that would repeal the Trump administration’s December 2025 executive order establishing a national AI policy framework and block any federal moratorium on state-level AI regulation. Senator Schatz is expected to introduce a companion bill in the Senate.

The bill reflects a broader Democratic position consolidated around four principles: preserving the authority of states such as California, Colorado, and Texas to enforce their existing AI laws; establishing stronger federal oversight mechanisms with meaningful enforcement authority; enacting workforce protections against AI-driven displacement; and creating baseline liability standards for harmful AI deployment. These positions differ substantively from the White House framework, which opposes new federal AI regulators, resists open-ended liability regimes, and frames state regulation as the primary source of compliance burden for the industry.

The practical compliance outlook remains unchanged for organizations operating under active state AI statutes. Neither the White House framework nor the GUARDRAILS Act is enacted law. California’s Transparency in Frontier AI Act, Texas’s Responsible AI Governance Act, Colorado’s AI Act (effective June 30, 2026), and related state laws remain valid and enforceable. Organizations should monitor the DOJ’s AI Litigation Task Force for any direct legal challenges to specific statutes as the more proximate near-term risk to state compliance frameworks.

Source: https://www.hklaw.com/en/insights/publications/2026/03/white-house-releases-a-national-policy-framework-for-artificial

About Assessed Intelligence

Assessed Intelligence is an advisory firm specializing in AI risk, cybersecurity, and continuous assurance. The firm serves organizations that require senior-level expertise without the overhead of full-time executive hires. Its proprietary ARISE Framework™ provides a seven-domain model for assessing and managing AI and cyber risk across the enterprise. Forged by Experience. Driven by Purpose. Built to Endure.

Media Contact

Info@assessedintelligence.com

assessedintelligence.com