Assessed Intelligence | Your Secure and Responsible Technology Partner
Featured Publication — IEEE · 2025

Applying Privacy-Enhancing Technologies to LLMs in Critical Infrastructure Contexts

LLMs deployed in critical infrastructure carry data exposure risks that standard security controls were not designed to address. This paper evaluates where privacy-enhancing technologies close that gap.

Large language models are being evaluated for deployment across critical infrastructure: power grid management, healthcare systems, and financial market operations. Models trained or fine-tuned on sensitive operational data create novel data exposure vectors that existing security frameworks do not fully address.

Privacy-enhancing technologies, including differential privacy, federated learning, and homomorphic encryption, reduce data exposure at the model level rather than solely at the perimeter. This paper examines how each PET class applies to LLM deployment in critical infrastructure, with attention to the performance trade-offs that accompany privacy guarantees.
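To make the "model level rather than the perimeter" distinction concrete, here is a minimal sketch of one of the PET building blocks named above, the Laplace mechanism for differential privacy. This is an illustration of the general technique only, not code from the paper; the function names `laplace_noise` and `dp_count` are ours.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields an epsilon-DP release: the output distribution is
    nearly unchanged whether or not any single record is present.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

The privacy guarantee attaches to the released statistic itself, so it holds even against an adversary who has breached the network perimeter, which is exactly the gap the paper argues perimeter-only controls leave open.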

“The question is not whether LLMs belong in critical infrastructure. It is whether the privacy architecture surrounding them is adequate for the stakes involved.”

The authors provide an evaluation framework that organisations can use to align PET selection with their threat model, regulatory obligations, and operational constraints.
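As a rough illustration of what aligning PET selection with constraints can look like in practice, the sketch below encodes a simplified decision rule. This is a hypothetical example of the general exercise, not the authors' framework; the `DeploymentProfile` fields and the mapping rules are assumptions made for illustration.

```python
from dataclasses import dataclass


@dataclass
class DeploymentProfile:
    """Hypothetical, simplified deployment constraints (illustration only)."""
    data_must_stay_on_site: bool   # training data cannot cross org boundaries
    needs_formal_guarantee: bool   # regulator requires a provable privacy bound
    latency_tolerant: bool         # can absorb heavy cryptographic overhead


def candidate_pets(profile: DeploymentProfile) -> list[str]:
    """Map constraints to candidate PET classes from the paper's taxonomy."""
    pets = []
    if profile.data_must_stay_on_site:
        pets.append("federated learning")       # train locally, share only updates
    if profile.needs_formal_guarantee:
        pets.append("differential privacy")     # quantifiable epsilon budget
    if profile.latency_tolerant:
        pets.append("homomorphic encryption")   # compute directly on ciphertext
    return pets
```

A real selection exercise would weigh the performance trade-offs the paper discusses, such as accuracy loss under differential privacy or the computational cost of homomorphic encryption, rather than treating the options as independent flags.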

Authors
Katie Grillaert

Chief Strategy Officer, Assessed Intelligence

Ed Vocke

Assessed Intelligence

Joshua Scarpino

CEO & Founder, Assessed Intelligence

Esther Y. Chung

Chief Privacy & Risk Officer, Assessed Intelligence

Tom Winston

Assessed Intelligence

Publication
IEEE
2025
Published Research

Read the Full Article

Applying Privacy-Enhancing Technologies to LLMs in Critical Infrastructure Contexts, published in IEEE, 2025.