Tech leaders join forces to secure AI
- July 24, 2024
- Steve Rogerson
Some of the biggest technology companies, including Google, Intel, Microsoft and Nvidia, have formed the Coalition for Secure AI (CoSAI) to create secure-by-design AI systems.
CoSAI’s founding premier sponsors are Google, IBM, Intel, Microsoft, Nvidia and PayPal. Additional founding sponsors include Amazon, Anthropic, Cisco, Chainguard, Cohere, GenLab, OpenAI and Wiz.
CoSAI was announced last week at the Aspen Security Forum (www.aspensecurityforum.org) in Colorado. Hosted by global standards body Oasis Open (www.oasis-open.org), CoSAI is an open-source initiative designed to give all practitioners and developers the guidance and tools they need to create secure-by-design AI systems. CoSAI will foster a collaborative ecosystem for sharing open-source methodologies, standardised frameworks and tools.
It brings together a diverse range of stakeholders, including industry leaders, academics and other experts, to address the fragmented landscape of AI security.
CoSAI’s scope includes securely building, integrating, deploying and operating AI systems, focusing on mitigating risks such as model theft, data poisoning, prompt injection, scaled abuse and inference attacks. The project aims to develop security measures that address AI systems’ classical and unique risks.
Artificial intelligence (AI) is rapidly transforming the world and holds immense potential to solve complex problems. To ensure trust in AI and drive responsible development, it is critical to develop and share methodologies that keep security at the forefront, identify and mitigate potential vulnerabilities in AI systems, and lead to the creation of systems that are secure by design.
Currently, securing AI and AI applications and services is a fragmented endeavour. Developers grapple with a patchwork of guidelines and standards that are often inconsistent and siloed. Without clear best practices and standardised approaches, assessing and mitigating both AI-specific and more general risks is a significant challenge even for the most experienced organisations.
CoSAI says it is poised to make significant strides in establishing standard practices that enhance AI security and build trust among stakeholders globally.
“CoSAI’s establishment was rooted in the necessity of democratising the knowledge and advancements essential for the secure integration and deployment of AI,” said David LaBianca of Google, CoSAI governing board co-chair. “With the help of Oasis Open, we’re looking forward to continuing this work and collaboration among leading companies, experts and academia.”
Omar Santos of Cisco, the other CoSAI governing board co-chair, added: “We are committed to collaborating with organisations at the forefront of responsible and secure AI technology. Our goal is to eliminate redundancy and amplify our collective impact through key partnerships that focus on critical topics. At CoSAI, we will harness our combined expertise and resources to fast-track the development of robust AI security standards and practices that will benefit the entire industry.”
To start, CoSAI will form three workstreams, with plans to add more over time:
- Software supply chain security for AI systems: enhancing composition and provenance tracking to secure AI applications.
- Preparing defenders for a changing cyber-security landscape: addressing investments and integration challenges in AI and classical systems.
- AI security governance: developing best practices and risk assessment frameworks for AI security.
Everyone is welcome to contribute technically as part of the CoSAI (www.coalitionforsecureai.org) open-source community. The CoSAI charter can be found at: www.coalitionforsecureai.org/wp-content/uploads/2024/07/CoSAI-Charter-FINAL.pdf.