Working Group

AI Controls Framework

The CSA AI Controls Framework Working Group's goal is to define a framework of control objectives that supports organizations in the secure and responsible development, management, and use of AI technologies. Aligned with the NIST Cybersecurity Framework, it establishes a robust, flexible, and multi-layered framework that assists in evaluating risks and defining controls related to Generative AI (GenAI). The control objectives cover cybersecurity, as well as safety, privacy, transparency, accountability, and explainability insofar as they relate to cybersecurity.

Working Group Leadership

Marina Bregkou

Principal Research Analyst, Associate VP, CSA

Daniele Catteddu

Chief Technology Officer, CSA

Daniele Catteddu is an information security and risk management practitioner, technology expert, and privacy evangelist with over 15 years of experience. He has worked in several senior roles in both the private and public sectors. He is a member of various national and international security expert groups and committees on cybersecurity and privacy, a keynote speaker at several conferences, and the author of numerous studies and papers on risk management, ...

Working Group Co-Chairs

Ken Huang

CEO & Chief AI Officer, DistributedApps.ai

Ken Huang is an acclaimed author of 8 books on AI and Web3. He is Co-Chair of the AI Organizational Responsibility and AI Controls Framework Working Groups at the Cloud Security Alliance. Additionally, Huang serves as Chief AI Officer of DistributedApps.ai, which provides training and consulting services for Generative AI security.

Huang has also contributed extensively to key initiatives in the space. He is a core contribut...

Betina Tagle, PhD

Dr. Betina Tagle's doctoral research, conducted under the Design Science Research (DSR) methodology, built an AI agent on existing computing infrastructure as an inexpensive tool for identifying insider threats; she presented and published this work at international conferences. She also published a Design Science Research guide for university students to promote the use of DSR within cybersecurity. She is a university adjunct focused on courses for ...

Faisal Khan

Model Security Engineering Lead, Protect AI

Faisal Khan leads model security engineering at Protect AI, where he develops innovative cybersecurity solutions for AI and ML systems. As a founding engineer, he has been instrumental in defining product strategy and building the technical expertise that drives the company's success.

Prior to Protect AI, Faisal worked as a research software engineer at Argonne National Laboratory. There, he pioneered secure exper...

Previous Co-Chairs

Siah Burke

Marco Capotondi

Agency for National Cybersecurity, Italy

Marco Capotondi is an engineer specializing in applied AI, with a focus on AI Security and AI applied to Autonomous Systems. He holds a Bachelor's degree in Physics and a Master's degree in AI Engineering, and earned a doctoral degree through research on Bayesian Learning techniques applied to Autonomous Systems, on which he has published many papers. His current focus is helping the community define and manage risks associated with Artificial Intelligen...

Alessandro Greco

Who can join?

Anyone can join a working group, whether you have years of experience or just want to participate as a fly on the wall.

What is the time commitment?

The time commitment for this group varies depending on the project. You can spend 15 minutes helping review a publication that's nearly finished, or help author a publication from start to finish.

Virtual Meetings

Attend our next meeting. You can just listen in to decide if this group is a good fit for you, or you can choose to actively participate. During these calls we discuss current projects, as well as share ideas for new projects. This is a good way to meet the other members of the group. You can view all research meetings here.

Open Peer Reviews

Peer reviews allow security professionals from around the world to provide feedback on CSA research before it is published.

Learn how to participate in a peer review here.

SSCF-CAIQ

Open Until: 12/25/2025

The Cloud Security Alliance (CSA), in collaboration with the SaaS Security Capability Framework (SSCF) Working Group, is pl...

SSCF Implementation Guidelines

Open Until: 12/25/2025

The Cloud Security Alliance (CSA), in collaboration with the SaaS Security Capability Framework (SSCF) Working Group, is pl...

DLP and DSPM in Healthcare: AI-Enhanced Security and Privacy

Open Until: 12/26/2025

Healthcare organizations face growing risks of data exposure as they adopt cloud platforms, AI tools, and connected technol...

AICM to AIUC-1 Mapping

Open Until: 12/28/2025

This document is an addendum to the 'AICM' that contains a controls mapping between the CSA's AI Controls Matrix v1.0 and 'AI...

Premier AI Safety Ambassadors

Premier AI Safety Ambassadors play a leading role in promoting AI safety within their organization, advocating for responsible AI practices and promoting pragmatic solutions to manage AI risks. Contact [email protected] to learn how your organization could participate and take a seat at the forefront of AI safety best practices.
