With artificial intelligence (AI) rapidly transforming our world, developers and adopters face the challenge of securing AI technology while navigating guidelines and standards that are often inconsistent and siloed. As developers work through these challenges, it’s critical to develop and share practices that keep security at the forefront. The future of security requires collective action, and AI is no exception.
At Intel, we’re no stranger to driving new technology adoption across every industry. We see the opportunities and share in the challenges our customers and partners face. And we know the importance of rapidly developing standards and best practices to simplify the use of new innovations. Security is an essential element across our entire product portfolio, and we look forward to bringing our industry-leading security assurance expertise to help the industry improve the security of new AI solutions.
At Intel Vision 2024, CEO Pat Gelsinger outlined a strategy for open, scalable AI systems, including hardware, software, frameworks and tools. Taking another step forward in that journey, Intel has joined the Coalition for Secure AI (CoSAI) as a founding member alongside Google, IBM and other organizations. CoSAI, hosted by the open source and standards body OASIS Open, is an initiative designed to give all practitioners and developers the guidance and tools they need to create AI systems that are secure by design.
This is a crucial collaborative effort for the industry, bringing together a diverse global group of leaders from companies, academia and other relevant fields to develop and share holistic approaches, best practices, tools and methodologies for secure AI development and deployment. Initially, CoSAI’s contributors will collaborate on three key work streams:
- Software supply chain security for AI systems: enhancing composition and provenance tracking to secure AI applications (a brief sketch of this idea follows the list).
- Preparing defenders for a changing cybersecurity landscape: addressing investments and integration challenges in AI and classical systems.
- AI security governance: developing best practices and risk assessment frameworks for AI security.
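To make the first work stream more concrete: composition and provenance tracking generally means recording what went into a model, such as datasets, base checkpoints and training code, and being able to verify those inputs later. The Python sketch below is a minimal, hypothetical illustration of that idea; the `ModelProvenance` structure, its field names and the manifest format are invented for this example and do not represent a CoSAI or OPEA specification.

```python
# Minimal, hypothetical sketch of provenance tracking for a model artifact.
# The manifest format and field names are illustrative assumptions, not a
# CoSAI or OPEA specification.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash a file so its identity can be verified later."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


@dataclass
class ModelProvenance:
    model_name: str
    model_sha256: str
    base_model: str                       # upstream checkpoint this model was fine-tuned from
    training_datasets: list[str] = field(default_factory=list)
    training_code_commit: str = ""        # git commit of the training pipeline


def write_manifest(model_path: Path, provenance: ModelProvenance, out: Path) -> None:
    """Record the model's hash and inputs in a JSON manifest alongside the artifact."""
    provenance.model_sha256 = sha256_of(model_path)
    out.write_text(json.dumps(asdict(provenance), indent=2))


def verify_manifest(model_path: Path, manifest_path: Path) -> bool:
    """Check that the artifact on disk still matches its recorded provenance."""
    recorded = json.loads(manifest_path.read_text())
    return recorded["model_sha256"] == sha256_of(model_path)
```

At ecosystem scale, the same idea shows up as software bills of materials and signed provenance attestations rather than ad hoc JSON manifests.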
As part of Intel’s commitment to advancing AI technology responsibly, we will continue to collaborate with industry partners on innovative approaches to address security, transparency and trust. CoSAI also complements the recent introduction of the Linux Foundation AI & Data’s latest Sandbox Project: the Open Platform for Enterprise AI (OPEA). In addition to CoSAI, Intel is a founding member of OPEA, which is designed to help accelerate secure, cost-effective generative AI (GenAI) deployments for businesses by driving interoperability across a diverse and heterogeneous ecosystem, starting with retrieval-augmented generation (RAG).
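For context on the pattern OPEA starts with: retrieval-augmented generation pairs a generative model with a document store, retrieving relevant passages for each query and passing them to the model as context so answers stay grounded in enterprise data. The sketch below is a minimal, hypothetical illustration; the `embed`, `search` and `generate` callables stand in for whatever embedding model, vector index and LLM a deployment actually uses, and nothing here reflects OPEA's actual interfaces.

```python
# Minimal, hypothetical RAG loop. The embed/search/generate callables are
# placeholders for a real embedding model, vector index and LLM; this does
# not represent OPEA's actual components or APIs.
from typing import Callable, Sequence


def rag_answer(
    question: str,
    embed: Callable[[str], Sequence[float]],              # text -> embedding vector
    search: Callable[[Sequence[float], int], list[str]],  # vector, k -> top-k passages
    generate: Callable[[str], str],                       # prompt -> model completion
    k: int = 4,
) -> str:
    """Retrieve the k most relevant passages, then ask the model to answer from them."""
    query_vector = embed(question)
    passages = search(query_vector, k)
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```

Keeping each of these pieces behind simple, swappable interfaces is where interoperability across a diverse and heterogeneous ecosystem matters most.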
As the market responds to the insatiable demand for AI, technology vendors must remain committed to open solutions that provide choice while also driving security standards that best protect users. And at Intel, we will continue to deliver and improve the product security needed to help secure the development and deployment of AI.
More: Read the full news release. | For more information, visit the CoSAI website.
Dhinesh Manoharan is vice president and general manager of Security for AI & Security Research at Intel Corporation.