An Explainable Zero-Trust Identity Framework for Secure and Accountable Industrial and Agentic AI Ecosystems

Authors

  • Dr. Elena Márquez, Department of Computer Science, University of Barcelona, Spain

Keywords:

Zero Trust, Explainable AI, Industrial Control Systems, SPIFFE

Abstract

Background: As industrial control systems (ICS), operational technology (OT) environments, and agentic AI systems grow increasingly interconnected, conventional perimeter-based security models prove insufficient. Attack surfaces expand through IT/OT convergence, autonomous agents, and opaque machine-learned components, producing risks to availability, integrity, and identity assurance (Krotofil & Schmidt, 2018; Gao & Shaver, 2022). Additionally, the need for auditability and understandable decisions from AI-driven identity and access controls has become central to trust and regulatory compliance (Adadi & Berrada, 2018; Guidotti et al., 2018).

Objective: This paper develops a comprehensive, explainable zero-trust identity framework tailored for heterogeneous industrial, IIoT, and agentic AI ecosystems, addressing identity lifecycle management, conditional policy enforcement, signed auditability, explainable decisioning, and compatibility with emerging standards such as SPIFFE/SPIRE and modern enclave-based roots of trust. The design aims to reconcile the strict availability requirements of ICS with fine-grained, explainable identity controls that reduce false alarms and support operational continuity (Bhattacharya et al., 2019; Haque & Al-Sultan, 2020).

Methods: We synthesize cross-disciplinary literature on XAI, zero trust, credential lifecycle management, identity logging, and ICS security; analyze requirements derived from regulatory and operational guidance; and propose a layered, descriptive architecture combining cryptographically signed identities and logs, policy-driven conditional access, workload identity frameworks, and XAI modules for decision explanation (Guidotti et al., 2018; Reyes & Nakamoto, 2025; SPIFFE Working Group, 2024). We evaluate the framework qualitatively against threat scenarios and operational metrics widely discussed in the field (CISA, 2023; Conti et al., 2023).

Results: The proposed framework integrates: (1) ephemeral workload identities and mutual attestation through SPIFFE/SPIRE-style SVIDs (SPIFFE Working Group, 2024; SPIRE Project, 2024); (2) cryptographically signed, tamper-evident audit logs for identity events to enable non-repudiation and forensic fidelity (Reyes & Nakamoto, 2025); (3) contextual conditional access policies incorporating device posture, intent signals, and environmental constraints (Microsoft, 2024; Okta, 2024); and (4) XAI modules that produce human-interpretable rationales for access decisions and anomaly detections to support operators and regulators (Adadi & Berrada, 2018; Guidotti et al., 2018). We describe how the architecture mitigates common ICS threats while maintaining availability; a minimal sketch of components (2)–(4) follows this paragraph.
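To make components (2)–(4) concrete, the sketch below shows one way they could fit together: a toy contextual policy check that returns a human-readable rationale alongside its decision, feeding a hash-chained, HMAC-signed audit log. This is an illustrative sketch, not the paper's implementation; all identifiers (evaluate_access, append_audit_entry, AUDIT_LOG_KEY, the spiffe://plant.example ID) are hypothetical, and a production system would derive keys from an enclave-backed root of trust rather than a hard-coded constant.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key for illustration only; a real deployment would
# obtain this from an HSM or enclave-backed root of trust.
AUDIT_LOG_KEY = b"demo-key-not-for-production"

def evaluate_access(identity: dict, context: dict) -> tuple[bool, list[str]]:
    """Toy contextual policy check that returns a decision plus the
    human-readable reasons an XAI layer would surface to operators."""
    reasons = []
    allowed = True
    if not identity.get("svid_valid"):
        allowed = False
        reasons.append("workload SVID missing or expired")
    if context.get("device_posture") != "compliant":
        allowed = False
        reasons.append(f"device posture is '{context.get('device_posture')}'")
    if identity.get("role") == "maintenance" and not context.get("maintenance_window"):
        allowed = False
        reasons.append("maintenance role used outside an approved window")
    if allowed:
        reasons.append("all credential, posture, and context checks passed")
    return allowed, reasons

def append_audit_entry(log: list[dict], event: dict) -> dict:
    """Append a hash-chained, HMAC-signed entry: each record commits to the
    previous record's hash, so altering any earlier entry breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"ts": time.time(), "prev_hash": prev_hash, "event": event}
    serialized = json.dumps(body, sort_keys=True).encode()
    entry = {
        **body,
        "entry_hash": hashlib.sha256(serialized).hexdigest(),
        "signature": hmac.new(AUDIT_LOG_KEY, serialized, hashlib.sha256).hexdigest(),
    }
    log.append(entry)
    return entry

if __name__ == "__main__":
    log: list[dict] = []
    identity = {"spiffe_id": "spiffe://plant.example/workload/hmi",
                "svid_valid": True, "role": "operator"}
    context = {"device_posture": "compliant", "maintenance_window": True}
    allowed, reasons = evaluate_access(identity, context)
    append_audit_entry(log, {"id": identity["spiffe_id"],
                             "allowed": allowed, "reasons": reasons})
    print("decision:", allowed)
    print("rationale:", "; ".join(reasons))
```

The hash chaining is what makes the log tamper-evident: because every entry commits to its predecessor's hash before being signed, modifying or deleting any record invalidates every later signature, which is the non-repudiation property the framework relies on for forensic fidelity.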

Conclusions: An explainable zero-trust identity approach can substantially raise the bar against identity-based attacks in ICS and agentic AI settings while providing the transparency necessary for operational decision-making and compliance. Practical adoption will require careful integration with legacy systems, attention to audit-log scale, and policies that avoid overwhelming operators with false positives (Bhattacharya et al., 2019; Elastic, 2024; Haque & Al-Sultan, 2020). The paper outlines a research agenda for empirical validation, standards alignment, and human-factors studies to refine XAI explanations for security operations (NSA, 2025).

Published

2025-10-31

How to Cite

Márquez, E. (2025). An Explainable Zero-Trust Identity Framework for Secure and Accountable Industrial and Agentic AI Ecosystems. SciQuest Research Database, 5(10), 87–96. https://sciencebring.org/index.php/sqrd/article/view/5
