Your Secure and Responsible Technology Partner

Across the Industry Brief – Issue 4

April 20, 2026 · Policy, Regulation & AI Industry Developments


POLICY & REGULATION


Tags: Guidance | Security | United States
Date: April 15, 2026

NIST scales back CVE enrichment in the National Vulnerability Database, reshaping patch prioritization for most organizations

The National Institute of Standards and Technology implemented a new CVE prioritization model for the National Vulnerability Database on April 15. Under the revised process, full enrichment is reserved for three categories: CVEs listed in the Cybersecurity and Infrastructure Security Agency’s Known Exploited Vulnerabilities catalog, CVEs affecting software used within the federal government, and CVEs for critical software as defined by Executive Order 14028. CVEs that fall outside these categories will still be listed but marked as lower priority. NIST attributed the change to a 263% surge in vulnerability submissions between 2020 and 2025, with first-quarter 2026 submissions running nearly a third higher than the same period a year earlier.

NIST also stated it will no longer automatically provide a separate severity score for CVEs already scored by the submitting CVE Numbering Authority, and will reanalyze previously enriched CVEs only when a material modification is brought to its attention. Backlogged CVEs published before March 1, 2026, have been moved into a “Not Scheduled” category and will be considered for enrichment only if they meet the new criteria. A user-requested review system will remain available for lower-priority CVEs. The practical effect is that NVD CVSS base scores will no longer be produced automatically for most newly published vulnerabilities, and vendor-assigned scores, which are not consistently accurate, will not be independently verified by NIST.

Organizations that cite NVD severity as the primary input to patch prioritization should treat this as a catalyst to formalize a layered vulnerability intelligence model. Prioritization policies should reference multiple signals, including CISA KEV listings, Exploit Prediction Scoring System data, vendor advisories, and asset-specific exposure context. Security and compliance leaders should confirm that regulatory frameworks they report against, including PCI DSS, HIPAA, and NIST SP 800-53, continue to be satisfied by the updated decision model. Organizations that operate on limited security resources should document the change and its impact on service-level commitments before the next audit cycle.
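To make the layered model concrete, the sketch below combines the signals named above (KEV membership, an EPSS score, a vendor-assigned CVSS score, and asset exposure) into a priority tier. The thresholds and tier names are illustrative assumptions, not a standard; organizations would tune them to their own risk appetite and feed the inputs from their own vulnerability intelligence sources.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VulnSignals:
    cve_id: str
    in_cisa_kev: bool            # listed in CISA's Known Exploited Vulnerabilities catalog
    epss_score: float            # EPSS estimated probability of exploitation (0.0-1.0)
    vendor_cvss: Optional[float] # CNA/vendor-assigned CVSS base score, if any
    internet_facing: bool        # asset-specific exposure context

def priority_tier(v: VulnSignals) -> str:
    """Combine multiple signals rather than relying on an NVD base score alone.
    Thresholds here are illustrative and should be tuned locally."""
    if v.in_cisa_kev:
        return "P1"  # known exploited: patch on the KEV remediation timeline
    if v.epss_score >= 0.5 and v.internet_facing:
        return "P1"  # likely exploitation against an exposed asset
    if v.epss_score >= 0.1 or (v.vendor_cvss or 0.0) >= 9.0:
        return "P2"
    if (v.vendor_cvss or 0.0) >= 7.0:
        return "P3"
    return "P4"  # track; request NVD review via the user-requested path if needed

print(priority_tier(VulnSignals("CVE-2026-0001", False, 0.72, 8.1, True)))  # P1
```

Because the function never consults an NVD-assigned score, it continues to produce a defensible ordering even for CVEs that NVD leaves unenriched, which is the documentation auditors are likely to ask for.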

Source: https://www.computerweekly.com/news/366641916/Surging-CVE-disclosures-force-NIST-to-shake-up-workflows


Tags: News | United States
Date: April 10, 2026

xAI sues Colorado to block enforcement of the Colorado Artificial Intelligence Act

xAI filed a federal lawsuit on April 10 in the United States District Court for the District of Colorado seeking to block enforcement of Senate Bill 24-205, the Colorado Artificial Intelligence Act. The complaint names Colorado Attorney General Phil Weiser as defendant and asks the court to declare the statute unconstitutional and to issue a preliminary injunction before the law’s June 30, 2026 effective date. xAI argues that the Act is unconstitutionally vague, invites arbitrary enforcement, and violates the First Amendment by compelling its Grok model to promote what the complaint describes as the state’s ideological views rather than an objective pursuit of truth. The filing follows the Colorado legislature’s August 2025 enactment of a five-month delay to the Act’s original February 1 effective date.

The Colorado AI Act imposes duties on developers and deployers of high-risk AI systems used in consequential decisions involving education, employment, financial services, government services, healthcare, housing, insurance, and legal services. Covered actors must exercise reasonable care to protect consumers from algorithmic discrimination, conduct impact assessments, maintain a risk management program, and disclose to consumers when AI drives a consequential decision. The xAI filing arrives while the Colorado legislature considers a third round of amendments to the Act in its current session, which is scheduled to end May 13, and against the backdrop of a December 2025 federal executive order that directs the Commerce Department to review state AI laws and consider restricting federal broadband funding to states with laws the administration views as overly burdensome. The Colorado Attorney General’s Office declined to comment on the litigation.

Organizations that develop or deploy AI systems affecting Colorado residents should continue compliance preparation on the assumption that the Act takes effect on June 30. No preliminary injunction has been issued, and the substantive obligations remain the operative legal standard. Compliance programs should be structured to survive multiple outcomes: continued state enforcement if the suit fails, amended obligations if the legislature acts before May 13, and rapid adaptation to a federal framework if preemption advances. Monitoring the docket, the Colorado Attorney General’s rulemaking, and any parallel state litigation should be a standing item on the governance calendar through the summer.

Source: https://coloradosun.com/2026/04/10/elon-musk-colorado-ai-law-federal-court-lawsuit/


Tags: Analysis | United States
Date: April 8, 2026

White House National Policy Framework for AI draws legal analysis as preemption pressure builds on state regimes

Legal commentators published detailed analyses on April 8 of the National Policy Framework for Artificial Intelligence released by the White House on March 20. The Framework is not itself legally binding; it is a set of legislative recommendations directed at Congress across seven policy areas. Those areas include protections for children, community safeguards around AI infrastructure, intellectual property rights for individuals depicted in AI-generated replicas, prevention of federally directed censorship of AI content, enabling innovation through regulatory sandboxes, workforce and education programs, and preemption of state AI laws that impose undue burdens. The Framework also recommends against creating a new federal AI rulemaking body, calling instead for governance through existing agencies with subject-matter expertise.

The preemption recommendation carves out limited areas where states would retain authority, including generally applicable laws, zoning, and state use of AI in public services. Outside those carve-outs, the Framework recommends that states not be permitted to regulate AI development, penalize AI developers for third-party unlawful conduct involving their models, or burden AI-enabled activities that would otherwise be lawful. The Framework’s practical impact depends on congressional action, which has historically been slow on comprehensive technology legislation. In the interim, the policy direction gives litigants a federal position to cite when challenging state AI statutes, as xAI has done in its suit against Colorado, and gives federal agencies latitude to reshape enforcement priorities under existing authorities. State laws including the Colorado AI Act, California’s AI transparency statutes, and active bills in Massachusetts and New York remain enforceable on their stated timelines.

Organizations should treat the Framework as a directional signal, not a compliance trigger. Compliance programs must continue to meet binding state and sectoral obligations on their actual effective dates. Legal, compliance, and policy teams should map the Framework’s seven policy areas against the organization’s current AI inventory to identify provisions that would alter risk posture if enacted, and should monitor congressional markup activity through the summer for any bill that materially advances the preemption recommendation.

Source: https://www.consumerfinancemonitor.com/2026/04/08/the-white-houses-national-policy-framework-for-artificial-intelligence-what-it-means-and-what-comes-next/


AI INDUSTRY


Tags: Alert | Industry | Security
Date: April 19, 2026

Vercel discloses breach originating from compromised Google Workspace OAuth grant tied to third-party AI tool

Vercel publicly disclosed on April 19 that an attacker gained unauthorized access to certain internal Vercel systems through a compromised Google Workspace OAuth application belonging to a third-party AI tool used by a Vercel employee. Vercel stated that the same OAuth app was used by hundreds of organizations and was subject to a broader compromise of which Vercel is one downstream victim. The attacker inherited access to the employee’s Vercel Google Workspace account through the compromised OAuth grant and pivoted from that account into Vercel environments. Vercel disclosed the specific OAuth client ID involved, engaged incident response experts, notified law enforcement, and is contacting the limited subset of customers it assesses to be directly impacted. In a separate public statement on April 19, Vercel CEO Guillermo Rauch identified the AI tool as Context.ai.

Vercel’s architecture stores all customer environment variables encrypted at rest, but customers can designate variables as either sensitive or non-sensitive. Sensitive variables cannot be read back through the dashboard or REST API once set; non-sensitive variables remain readable through those interfaces. The attacker accessed non-sensitive environment variables through enumeration. Rauch stated that sensitive variables were not accessed. A threat actor using the ShinyHunters handle on a hacking forum offered data claimed to be from Vercel for sale, though the actual ShinyHunters group denied involvement when contacted by security press, and the data-for-sale listing has not been independently verified. Vercel’s April 20 bulletin update added three further recommendations: review recent deployments for unexpected activity and delete any that are in doubt, set Deployment Protection to Standard at minimum, and rotate Deployment Protection tokens if configured.

Organizations that use Vercel should assume their integrations are in scope until Vercel confirms otherwise. Immediate actions should include rotating any secrets stored as non-sensitive environment variables and enabling the sensitive flag on replacements, auditing deployment history for unexpected activity, rotating Deployment Protection bypass tokens, and reviewing Google Workspace OAuth grants for the disclosed client ID. Beyond the immediate response, organizations should inventory all Google Workspace OAuth grants held against third-party AI tools and apply the same scrutiny to those grants that they apply to privileged access in core SaaS platforms. The proliferation of AI tool OAuth grants across employee accounts represents a growing initial-access surface; the structural answer is a default-deny posture on third-party app access, not incident-by-incident response.
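A minimal sketch of the first remediation step, identifying which environment variables are readable back through the API and therefore need rotation, might look like the following. The endpoint path (`/v9/projects/{project}/env`) and the `type` field values are assumptions based on Vercel’s public REST API and should be verified against current documentation before use.

```python
import json
import os
import urllib.request

# Assumed Vercel REST API shape; verify against current docs before use.
VERCEL_API = "https://api.vercel.com/v9/projects/{project}/env"

def readable_env_vars(envs: list) -> list:
    """Return keys of environment variables that are NOT marked sensitive
    (or system-managed) and so can be read back through the dashboard or
    REST API -- the category accessed in the Vercel incident. These are
    the candidates for rotation and re-adding with the sensitive flag."""
    return [e["key"] for e in envs if e.get("type") not in ("sensitive", "system")]

def fetch_envs(project: str, token: str) -> list:
    """Fetch a project's environment variable records (assumed response
    shape: {"envs": [{"key": ..., "type": ...}, ...]})."""
    req = urllib.request.Request(
        VERCEL_API.format(project=project),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("envs", [])

# Usage (requires a Vercel API token with project read access):
#   envs = fetch_envs("my-project", os.environ["VERCEL_TOKEN"])
#   for key in readable_env_vars(envs):
#       print("rotate and re-add as sensitive:", key)
```

The same inventory-then-flag pattern generalizes to the OAuth review step: enumerate grants, partition them by privilege, and treat anything readable or broadly scoped as compromised until rotated.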

Source: https://vercel.com/kb/bulletin/vercel-april-2026-security-incident


Tags: News | Industry
Date: April 8, 2026

Anthropic releases Mythos Preview under restricted access as AI vulnerability discovery capabilities accelerate

Anthropic announced on April 8 that it had developed a new model, Mythos Preview, which the company believes could reshape cybersecurity. The model was able to find high-severity vulnerabilities in every major operating system and web browser during testing, and demonstrated improved ability to devise methods of exploiting the vulnerabilities it identified. Anthropic is not releasing the model to the general public. Access is limited to approximately 50 select organizations as part of a collaboration the company calls Project Glasswing, with the stated intent of securing widely used critical software. Anthropic has said it intends to release other related models while keeping the most capable variant inside the restricted program.

The release comes as AI-assisted vulnerability discovery has improved substantially. Open-source maintainers report that the quality of AI-generated vulnerability reports shifted from a flood of false positives in 2024 and 2025 to largely valid findings in the first quarter of 2026. The cURL project has identified and patched more vulnerabilities in the first three months of 2026 than in either of the two prior years. Security researchers have used current-generation commercial models to identify critical vulnerabilities in long-standing open-source projects, including the Linux kernel. The restricted release of Mythos Preview reflects a judgment by Anthropic that the model’s combination of discovery and exploit-generation capability presents sufficient dual-use risk to warrant controlled distribution, even as comparable capability is likely to emerge in open-weight models within months.

Organizations evaluating AI-assisted vulnerability management, secure code review, or security operations should engage directly with Anthropic on qualification for Project Glasswing if their software footprint is in scope. Governance teams should document how restricted-access AI tools are procured, authorized, and supervised, and should update third-party risk assessments to address the specific conditions attached to such releases. Organizations should also assume that offensive capability of comparable strength will emerge outside any restricted-access envelope on a timeline measured in months, and should accelerate detection engineering for AI-accelerated vulnerability discovery and exploitation, along with patch velocity improvements that narrow the window between disclosure and remediation.

Source: https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-project-glasswing-ai-cybersecurity-mythos-preview