Nicole Nichols is a Distinguished Engineer in Machine Learning Security at Palo Alto Networks. She previously held senior roles at Apple and Microsoft, and has contributed to both academic and industrial advances in adversarial machine learning and security. She has published at numerous ACM, IEEE, and CVPR workshops, and was a co-chair of the ICML ML4Cyber workshop. She holds a PhD in Electrical Engineering from the University of Washington.
AI agent systems, capable of complex planning and autonomous action in real-world environments, present profound and novel cybersecurity challenges. Current cybersecurity paradigms are too brittle to address the unique vulnerabilities stemming from dynamic generative agents with opaque interpretability, new protocols connecting tools and data, and the unpredictable dynamics of multi-agent interactions. Prior work has identified a range of security gaps in AI agents; however, it is essential to move beyond reiterating concerns and toward a collaborative, action-oriented agenda to mitigate these risks. Schmidt Sciences, RAND, and Palo Alto Networks convened an international group of leading industrial and academic researchers to contextualize the fragmented, cross-domain expertise and insights needed to produce solutions that reflect the full landscape of interconnected challenges that uniquely arise in the setting of LLM-driven AI agents. This report distills the collective insights from this gathering and contributes: 1) a flexible definition of the functional properties of AI agents, 2) a description of how these properties create novel implications for security, and 3) an open roadmap toward interconnected, comprehensive solutions.
Eugene Bagdasarian is an Assistant Professor at the University of Massachusetts Amherst and a Researcher at Google. His work focuses on studying attack vectors in AI systems deployed in real-world settings and proposing new designs that mitigate these attacks. Previously, he received a Distinguished Paper Award at USENIX Security and an Apple AI/ML PhD Fellowship.
New AI agents integrate with complex systems and users’ data, thus opening new attack vectors. Worse, security designs struggle with the versatility of agents: booking a trip requires different controls than responding to an email. In this talk, I propose to ground agentic privacy and security in the theory of Contextual Integrity, which defines privacy as appropriate information flows under contextual norms. We use language models to infer the current trusted context and synthesize restrictions on tools and data, then develop a policy engine to deterministically enforce them, helping to isolate attacks that abuse agentic capabilities and data access. While promising, this design raises new questions: from establishing trusted context and improving policy generation to collecting social norms and resolving context ambiguity.
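The sketch below is a minimal, hypothetical illustration of this idea, not the system described in the talk. It assumes illustrative helpers (infer_context, synthesize_policy, PolicyEngine) that map a request to a trusted context, derive restrictions on tools and information flows, and then deterministically check each tool call against them; in the proposed design, the inference and policy-synthesis steps would be performed by language models.

```python
# Hypothetical sketch of a contextual-integrity-style policy engine.
# All names here are illustrative and not taken from the talk or any real system.
from dataclasses import dataclass


@dataclass(frozen=True)
class Context:
    task: str                 # e.g. "book_trip" or "reply_email"
    data_subjects: tuple      # whose data the agent may touch


@dataclass(frozen=True)
class Policy:
    allowed_tools: frozenset       # tools appropriate in this context
    allowed_recipients: frozenset  # where information may flow


def infer_context(user_request: str) -> Context:
    # In the proposed design an LLM would infer the trusted context;
    # here it is a stub keyed on the request text.
    if "book" in user_request.lower():
        return Context(task="book_trip", data_subjects=("user",))
    return Context(task="reply_email", data_subjects=("user",))


def synthesize_policy(ctx: Context) -> Policy:
    # An LLM could synthesize these restrictions; here they are fixed per task.
    if ctx.task == "book_trip":
        return Policy(frozenset({"flight_search", "calendar_read"}),
                      frozenset({"airline_api"}))
    return Policy(frozenset({"email_read", "email_send"}),
                  frozenset({"original_sender"}))


class PolicyEngine:
    """Deterministically enforces the synthesized policy on every tool call."""

    def __init__(self, policy: Policy):
        self.policy = policy

    def check(self, tool: str, recipient: str) -> bool:
        return (tool in self.policy.allowed_tools
                and recipient in self.policy.allowed_recipients)


# Example: a prompt-injected attempt to exfiltrate email contents during a
# trip-booking task is blocked, because that flow is outside the trusted context.
engine = PolicyEngine(synthesize_policy(infer_context("Please book my trip to Taipei")))
print(engine.check("flight_search", "airline_api"))    # True  - appropriate flow
print(engine.check("email_send", "attacker.example"))  # False - blocked
```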
Katherine is a researcher at OpenAI. Her work has provided essential empirical evidence and measurement for grounding discussions around concerns that language models infringe copyright, and around how language models can respect an individual's right to privacy and control of their data. Additionally, she has developed large language models (T5), developed methods for reducing memorization, and studied the impact of data curation on model development. Her work has been highly awarded at venues such as NeurIPS, ICML, ICLR, and USENIX.
Abstract To Be Announced
The following times are in Taiwan Standard Time (TST), UTC/GMT +8 hours.
Recent years have seen a dramatic increase in applications of Artificial Intelligence (AI), Machine Learning (ML), and data mining to security and privacy problems. The analytic tools and intelligent behavior provided by these techniques make AI and ML increasingly important for autonomous real-time analysis and decision making in domains with a wealth of data or that require quick reactions to constantly changing situations. The use of learning methods in security-sensitive domains, in which adversaries may attempt to mislead or evade intelligent machines, creates new frontiers for security research. The recent widespread adoption of "deep learning" techniques, whose security properties are difficult to reason about directly, has only added to the importance of this research. In addition, data mining and machine learning techniques create a wealth of privacy issues, due to the abundance and accessibility of data. The AISec workshop provides a venue for presenting and discussing new developments in the intersection of security and privacy with AI and ML.
Topics of interest include (but are not limited to):
Theoretical topics related to security
Security applications
Security-related AI problems
We invite the following types of papers:
Papers that do not follow these guidelines will be desk-rejected. Submissions must be in English and properly anonymized. Papers should be at most 10 pages in double-column ACM format, excluding the bibliography and well-marked appendices, and at most 12 pages overall. Papers must be prepared in LaTeX and strictly follow the ACM format; this format is also required for the camera-ready version. Please follow the main CCS formatting instructions (except with the page limits described above). In particular, we recommend using the sigconf template, which can be downloaded from https://www.acm.org/publications/proceedings-template . The authors can specify the paper type in the submission form. Accepted papers will be published by the ACM Digital Library and/or ACM Press. Committee members are not required to read the appendices, so the paper should be intelligible without them.
Submission link: https://aisec25.hotcrp.com .
All accepted submissions will be presented at the workshop as posters. Accepted papers will be selected for presentation as spotlights based on their review scores and novelty. Nonetheless, all accepted papers should be considered of equal importance and will be included in the ACM workshop proceedings.
One author of each accepted paper is required to attend the workshop and present the paper for it to be included in the proceedings.
Important notice: Please note that traveling to Taiwan may require a visa. Depending on the participants' nationalities, the visa application process may need to be initiated early to avoid last-minute travel disruptions. Please check the CCS visa instructions at https://www.sigsac.org/ccs/CCS2025/visa/ .
For any questions, please contact one of the workshop organizers at [email protected]
Thanks to those who contacted us to help with the reviews!