Author: Graeme Rudd, COO, Assessed Intelligence
We need to approach everything from tackling shadow AI to inefficient token utilization with the same ingenuity and creativity we’re seeing on the battlefield now. Governance is about building capability, and in today’s AI race, organizations that take GRC seriously will have a lasting competitive advantage.
AI Governance Is Not a Technology Problem
There is a governance crisis building across organizations of every size, and most do not see it coming until something breaks because of the ad hoc approach their organization has adopted. The Lovable incident is a recent, visible example. Apparently, the vulnerability had been sitting there for a while. Researchers stated they had reported it through proper disclosure channels, and the reports were closed without escalation.1 That was not a technology failure. It was an organizational failure, and Lovable’s own statement on the matter acknowledged as much.2

These types of incidents are happening everywhere, in different forms, at different scales, because organizations are deploying AI reactively. I consistently see organizations confuse technical knowledge and prowess with implementation capability; they are related but separate skill sets. In military parlance, most organizations assume that tactical capabilities can be translated into strategic ones. Many highly capable professionals understand the technical aspects of AI and need to be part of the conversation, but their skills do not necessarily translate to leading the broader implementation of AI.
The Ad Hoc Approach
What does ad hoc implementation actually look like? It looks like an organization that is using AI tools but has few or no written policies governing their use. No grounding strategy document. No clear rationale for why they adopted the tools they adopted, or what they are even adopting, because in many cases, employees are using things the organization never approved in the first place. It looks like developers building internal tools on platforms they found on their own, pasting business logic, error logs, and credentials into AI chat windows, because no one told them they should not. It looks like a CISO who finds out about a tool their team has been using for six months only after something goes wrong.
The deeper problem is not any single misconfiguration or missed patch. It is the absence of a unified framework that gets compliance, HR, legal, product, and security speaking the same language before deployment happens, not after. These conversations need to happen. They are not happening. What I see consistently, in organizations large and small, is highly competent people working in isolation from each other, each one making reasonable decisions within their own lane, and the gaps between those lanes becoming the exposure.
Learning from the Government
The conventional wisdom is that the government should learn from business about how to move fast. For AI adoption, I think that is exactly backward.
The military has been deploying dangerous, novel, frontier technology since humans first picked up clubs and rocks millennia ago, and militaries throughout history have wrestled with new capabilities as they were deployed in combat. The US military learned from those struggles and built frameworks specifically designed to handle the conditions we are all now facing with AI: a rapidly evolving capability, an unclear threat surface, high stakes, time pressure, and the need to make decisions before you have complete information. We recently watched the US military conduct an incredibly sophisticated operation to capture Nicolas Maduro, the former leader of Venezuela. That operation was both enabled and challenged by many new types of technology, AI among them. Despite those new and novel threats and challenges, the operation was a success, and it was a clear example of how well the US military can adapt to change. The lessons from that approach warrant examination, and two concepts in particular deserve far more attention in the enterprise right now.
The Task Force
The term originated in the US Navy, where it was used as early as World War II to assemble cross-functional groups of ships and personnel around a specific operational objective, then dissolve them when that objective was met. By the end of the war, the Navy had formed roughly 100 task forces. The underlying idea was simple: when you face a new challenge that does not fit neatly into your existing command structure, you build a dedicated, organized response to it rather than trying to absorb it into business as usual. Task forces have since been used for everything from counter-drug operations to complex intelligence problems. The common thread is a recognition that some challenges are too important and too different to be handled by committee or delegated to whoever has bandwidth.
The Fusion Cell
Fusion cells as a formal structure emerged from the early years of the Global War on Terror, when interoperability failures between defense agencies made clear that having the right information was not enough if the right people could not act on it together. The idea was to flatten authority, centralize the chain of command, and put the right subject matter experts in the same room with the authorities they needed to actually move. Not a standing committee. Not a steering group. A cell with a specific mission, clear parameters, and the ability to make decisions without routing everything through a bureaucratic process that would kill the response time.
What made these structures effective then, and what makes them relevant now, is that they are built for exactly the conditions AI adoption creates. The threat environment is constantly evolving. The technology itself is changing. Your own organization’s use of it is changing. And you need a way to ensure that everyone who has a stake in how this goes is operating from the same picture of reality, with clear lines on what can be done, by whom, and under what authority.
Looking to the Skies for Inspiration
The drone warfare analogy is worth sitting with for a moment. In Ukraine, drones that cost a few hundred dollars have been used to take out armored vehicles worth millions. Cheap drones, commercial electronics, AI, and satellite navigation have flattened the technology pyramid that used to favor well-funded militaries.3 Relying on expensive interceptors to destroy cheap drones is economically unsustainable. That asymmetry is forcing every level of the defense establishment, from combatant commanders to policymakers to the engineers building countermeasures, into a unified conversation they cannot afford to have in silos.4 The lesson for organizations deploying AI is the same: the risks are asymmetric, the threat surface is wider than most people realize, and the cost of a fragmented response will be higher than anyone budgeted for.
Bringing it back to Earth
The business world tends to look at how the government handles technology adoption and see bureaucracy, slowness, and inefficiency. In many contexts, that criticism is fair. But when the military decides to authorize and deploy a new high-risk technology, it does not let individual contributors make that call on their own, nor does it discover the exposure three months after deployment. It runs a structured process with clear ownership, formal authorization, documented risk acceptance, and ongoing monitoring. FedRAMP, the Authority to Operate process, and the NIST AI Risk Management Framework are not perfect, but they exist because someone learned the hard way what happens when you skip the structure.
Translating this for the enterprise does not require building a defense department. It requires acknowledging that AI is a high-risk technology, treating it accordingly, and giving the people responsible for governance the same standing as those who want to deploy it quickly.
In practical terms, that means standing up an AI task force before deployment, not after an incident. It means putting the right people in the room: not just the ones who are enthusiastic about the technology, but legal, security, compliance, HR, and line-of-business leadership who understand what the organization is trying to accomplish. It means producing a common operational picture: written policies, documented decisions, clear parameters on what agents and tools are authorized to do, and a process for handling exceptions that does not rely on individual judgment in the moment. And it means building a function, something closer to the fusion cell model, that is responsible for ongoing monitoring and threat assessment as the environment changes. Because it will change.
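To make the “clear parameters” point concrete, here is a minimal sketch of what that common operational picture might look like when it is encoded as policy rather than left to individual judgment. This is a hypothetical illustration in Python; the tool names, owners, risk tiers, and field names are invented for this example and are not drawn from any real product or framework.

    # Hypothetical sketch: a deny-by-default authorization registry for AI tools.
    # All tool names, owners, and data classes below are invented examples.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ToolAuthorization:
        tool: str                 # the approved tool or agent
        owner: str                # the accountable individual, not just a team
        risk_tier: str            # "low", "moderate", or "high"
        approved_data: set = field(default_factory=set)  # data classes the tool may touch
        review_due: date = date(2026, 12, 31)  # authorizations expire and get re-reviewed

    # The task force maintains this registry as part of the documented decision record.
    REGISTRY = {
        "code-assistant": ToolAuthorization(
            tool="code-assistant", owner="j.doe", risk_tier="moderate",
            approved_data={"public", "internal"}, review_due=date(2026, 10, 1),
        ),
    }

    def is_authorized(tool: str, data_class: str, today: date) -> bool:
        """Deny by default: unknown tools, unapproved data classes, and expired
        authorizations all fail closed and route to the exception process."""
        auth = REGISTRY.get(tool)
        if auth is None or today > auth.review_due:
            return False  # escalate to the fusion-cell function, not individual judgment
        return data_class in auth.approved_data

    # An unregistered chat tool handling credentials fails closed.
    assert not is_authorized("random-chat-tool", "credentials", date(2026, 5, 1))
    assert is_authorized("code-assistant", "internal", date(2026, 5, 1))

The design choice that matters here is the default: nothing is authorized until someone with the authority to accept the risk has said so in writing, which is the same mindset behind the Authority to Operate process described above.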
The organizations that will be in the best position three years from now are not necessarily the ones that moved fastest. They are the ones that organized around a plan and a team, updated that plan as they learned, and took the governance issues seriously upfront rather than treating them as a tax on moving fast. That discipline is not the enemy of progress. At this stage of AI adoption, it is the only thing that enables real progress.
Endnotes
1. Goel, Shubhangi, and Thomas Fuller. “Lovable’s Security Stumble Shows Risks of Using AI to Code.” Business Insider, 21 April 2026, https://www.businessinsider.com/lovable-security-access-vibe-coding-projects-risk-2026-4. Accessed 29 April 2026.
2. Anton, Fabian. “Our response to the April 2026 incident.” Lovable, 22 April 2026, https://lovable.dev/blog/our-response-to-the-april-2026-incident. Accessed 29 April 2026.
3. Kremidas, Chris. “The new economics of warfare.” European Policy Centre (EPC), 9 March 2026, https://www.epc.eu/publication/the-new-economics-of-warfare/. Accessed 29 April 2026.
4. Odarchenko, Kateryna. “Ukraine and the Future of War.” CEPA, 21 April 2026, https://cepa.org/article/ukraine-and-the-future-of-war/. Accessed 29 April 2026.

