CSA AI Controls Matrix: A Sneak Preview
If you've been around the cloud security arena over the past decade, you're likely familiar with the Cloud Controls Matrix (CCM) from the Cloud Security Alliance (CSA), a cybersecurity control framework for cloud computing. It covers general cloud security domains such as infrastructure security, identity management, and encryption and key management. Given the rapid rise in the adoption of artificial intelligence (AI) technologies, driven largely by large language models (LLMs) and generative AI, CSA launched an initiative focused specifically on AI controls in Q4 2023. This project, the AI Controls Matrix, recently completed peer review and is expected to be publicly available in June 2025.
The AI Controls Matrix (AICM for short) is designed to help organisations develop, implement, and use AI technologies securely. It builds on the CCM but extends it to cover AI-specific risks. The first revision will contain 242 controls across 18 security domains, covering everything from model security to governance and compliance. The AICM works alongside frameworks such as the NIST AI RMF, ISO/IEC 42001:2023, and the EU AI Act, but provides specific, actionable guidance for organisations, along with concrete steps for implementation and auditing. It will focus on key AI security controls for model scanning, adversarial attack analysis, model poisoning mitigation, data poisoning prevention and detection, identity and access management, and more.
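To make the cross-framework mapping idea concrete, here is a minimal, purely illustrative sketch of how an organisation might track controls and their mappings to external frameworks. Every control ID, domain name, and clause reference below is invented for illustration; none of it comes from the actual AICM specification.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: field names, control IDs, and framework
# clause references are invented, not taken from the AICM.
@dataclass
class Control:
    control_id: str                  # e.g. a domain-prefixed identifier
    domain: str                      # one of the security domains
    title: str
    mappings: dict = field(default_factory=dict)  # framework name -> clause/ID

registry: list[Control] = []

def add_control(control: Control) -> None:
    registry.append(control)

def controls_by_domain(domain: str) -> list[Control]:
    """Filter the registry by security domain."""
    return [c for c in registry if c.domain == domain]

add_control(Control(
    control_id="MS-01",              # illustrative, not a real AICM ID
    domain="Model Security",
    title="Scan models for known vulnerabilities before deployment",
    mappings={"NIST AI RMF": "Measure", "ISO/IEC 42001:2023": "Annex A"},
))

print(len(controls_by_domain("Model Security")))  # 1
```

A structure like this makes it straightforward to answer audit questions such as "which of our controls satisfy a given ISO/IEC 42001 clause?" by inverting the mappings.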
Keeping pace with the frequent changes in the AI industry is no easy feat (agentic AI, anyone?), and the AICM will certainly need periodic revisions to stay current. Still, it is a fantastic initiative to define terminology, establish a common taxonomy, distil subject-matter expertise into actionable controls, and honestly balance complete coverage against launching in a timely manner.