ARISE Framework™ Launch

The ARISE Framework provides a unified model to integrate governance, security, risk, and ethics directly into your operations. Built to align with recognized standards, including NIST and ISO, ARISE transforms compliance into measurable resilience.
California AB 2013 In Effect

The White House has launched its National Policy Framework for Artificial Intelligence, highlighting seven main legislative priorities for Congress. While the framework guides AI governance, it does not require private organizations to comply. It focuses on protecting children and intellectual property and on promoting innovation, stressing the importance of compliance planning for organizations working with vulnerable groups. As AI regulations evolve, organizations must understand these priorities while managing AI deployment. Learn how this framework could influence the future of AI governance and what it implies for your organization.
Designing Ethical Systems by Auditing Ethics

Ethics treated as a communications function fails when it is tested. This paper argues that ethics must be embedded as a design requirement and audited as such.
Applying Privacy-Enhancing Technologies to LLMs in Critical Infrastructure Contexts

LLMs deployed in critical infrastructure carry data exposure risks that standard security controls were not designed to address. This paper evaluates where privacy-enhancing technologies close that gap.
Securing AI Systems Through Transparency: A CIA Triad-Based Analysis

The CIA Triad has anchored information security for decades. Applying it to AI systems reveals where transparency obligations and security requirements create genuine tension.
Assessed Signal – December 2025

As AI becomes the focal point of discussions in both government and industry, organizations continue to adopt it for sound business reasons while failing to understand the risks and vulnerabilities that adoption entails. Anthropic doubles down on last month's report regarding the agentic AI attack. POTUS signs an executive order wresting control of AI regulation from the states and centralizing it in the executive branch.
Assessed Signal – November 2025

Organizations continue to adopt AI/ML for various tasks without considering the longer-term privacy and security challenges. Adversaries are finding creative new ways to turn existing AI against defenders. As more organizations embed AI into their workflows and information sources, they become increasingly exposed to malicious actors. Research on AI security is not keeping pace with adoption, so the gap between the art of the possible and actual exposure continues to widen.
Risks to Financial Inclusion by Anti-Money Laundering and Financial Counterterrorism Algorithms

AML and counterterrorism algorithms are built to detect illicit financial flows. This paper examines how those same systems exclude the populations they were built to protect.
PECB Publication – EU AI Act

Most organizations approaching the EU AI Act focus immediately on risk classification. That instinct is understandable, and premature. The Act's first enforceable obligation is not about the technology. It is about the people using it.
Evaluating Organizational Alignment with the NIST AI Risk Management Framework

Citing the NIST AI RMF in governance documentation is not the same as aligning with its substance. This paper provides a methodology for telling the difference.