Designing Ethical Systems by Auditing Ethics

Ethics treated as a communications function fails when it is tested. This paper argues that ethics must be embedded as a design requirement and audited as such.
Applying Privacy-Enhancing Technologies to LLMs in Critical Infrastructure Contexts

LLMs deployed in critical infrastructure carry data exposure risks that standard security controls were not designed to address. This paper evaluates where privacy-enhancing technologies close that gap.
Securing AI Systems Through Transparency: A CIA Triad-Based Analysis

The CIA Triad has anchored information security for decades. Applying it to AI systems reveals where transparency obligations and security requirements create genuine tension.
Assessed Signal – December 2025

As AI becomes the focal point of more discussions in both government and industry, organizations continue to adopt it for sound business reasons while failing to understand the risks and vulnerabilities that adoption entails. Anthropic doubles down on last month’s report on the agentic AI attack. POTUS signs an executive order wresting control of AI regulation from the states and centralizing it in the executive branch.
Assessed Signal – November 2025

Organizations continue to adopt AI/ML for various tasks without considering the longer-term privacy and security challenges. Adversaries are finding creative ways to turn existing AI against defenders. As more organizations fold AI into their workflows and information sources, they become increasingly exposed to malicious actors. Research on AI security is not keeping pace with adoption, so the gap between what attackers can do and what defenders are prepared for continues to widen.
Risks to Financial Inclusion by Anti-Money Laundering and Financial Counterterrorism Algorithms

AML and counterterrorism algorithms are built to detect illicit financial flows. This paper examines how those same systems exclude the populations they were built to protect.
PECB Publication – EU AI Act

Most organisations approaching the EU AI Act focus immediately on risk classification. That instinct is understandable, and premature. The Act’s first enforceable obligation is not about the technology. It is about the people using it.
Evaluating Organizational Alignment with the NIST AI Risk Management Framework

Citing the NIST AI RMF in governance documentation is not the same as aligning with its substance. This paper provides a methodology for telling the difference.
Evolving AI Risk Management: A Maturity Model Based on the NIST AI Risk Management Framework

The responsible AI community has produced principles faster than the private sector has put them into practice. This paper provides the first structured maturity model for measuring that gap.
Deploying Responsible AI

AI adoption is outpacing governance. This paper addresses what responsible deployment actually requires in practice: not aspirations, but structures.