Featured Publication — IEEE · 2025

Securing AI Systems Through Transparency: A CIA Triad-Based Analysis

The CIA Triad has anchored information security for decades. Applying it to AI systems reveals where transparency obligations and security requirements create genuine tension.

Confidentiality, Integrity, and Availability take on additional complexity in AI systems. Confidentiality must be balanced against the right to explanation. Integrity extends to training data and model behaviour over time. Availability intersects with the accountability obligations that transparency requirements create.

The authors examine how transparency mandates under the EU AI Act and GDPR interact with each pillar. Transparency about model architecture can create confidentiality risks. Explainability requirements can expose integrity vulnerabilities. Compliance overhead affects operational availability.

“Transparency is not the opposite of security. Applied with precision, it is the mechanism by which AI security can be verified and improved.”

The paper proposes a structured approach to transparency design that satisfies both security and accountability requirements, demonstrating that responsible AI and secure AI are not competing objectives.

Authors
Esther Y. Chung

Chief Privacy & Risk Officer, Assessed Intelligence

Laura Morgan

Chief Ethics Officer, Assessed Intelligence

Katie Grillaert

Chief Strategy Officer, Assessed Intelligence

Joshua Scarpino

CEO & Founder, Assessed Intelligence



Securing AI Systems Through Transparency: A CIA Triad-Based Analysis, published by IEEE, 2025.