Responsible AI Principles
Intel has long recognized the importance of the ethical and human rights implications associated with the development of technology. This is especially true with the development of AI technology, for which we remain committed to evolving best methods, principles, and tools to ensure responsible practices in our product use, development, and deployment. With a foundation built on Intel’s long-standing Code of Conduct and Global Human Rights Principles, we are approaching Responsible AI through a comprehensive strategy centered around people, processes, systems, data, and algorithms, with the aim of lowering risks while optimizing benefits for our society.
Intel has identified the following seven areas of ethical inquiry, which we have integrated into Intel's AI product lifecycle processes. These principles serve as a strong foundation for considering the risks associated with AI products and projects and provide a north star that we put into action through our Ethical Impact Assessment process. We will continue to improve our approach based on learnings and to evolve Intel's orientation toward the responsible use, design, and development of AI capabilities.
Respect Human Rights
Development, use, and deployment of AI should proactively contemplate and respect the human rights of all relevant rightsholders. Intel approaches this in alignment with our Global Human Rights Principles and Approach and with relevant international frameworks such as the United Nations Guiding Principles on Business and Human Rights and the Organization for Economic Cooperation and Development Guidelines for Multinational Enterprises on Responsible Business Conduct.
Enable Human Oversight
AI systems should support the needs of various stakeholders so that humans can exercise control throughout the system's lifecycle. Developers should seek to train AI systems with the appropriate feedback and oversight, while users and interested parties should be able to assess the outputs of these systems and intervene when needed.
Enable Transparency and Explainability
To promote responsible development and use throughout the AI value chain, developers should strive to provide comprehensive information, including recommended uses, potential harms, how AI systems were trained and tested, the training sets used, and the results of bias testing. AI systems and their supporting materials should provide all stakeholders (e.g., downstream developers, users) with the best possible explanations of system behavior and access to resources to further address their concerns.
Advance Security, Safety, and Reliability
Intel prioritizes security, safety, resistance to tampering, and reliability in the development of AI products. We strive to limit the application of Intel AI products to their intended use. Intel utilizes “security by design” development principles consistent with our Security First Pledge and Cybersecurity Public Policy in addition to the “safety by design” development principles consistent with our commitment to Product Quality and Reliability.
Design for Privacy
AI applications utilize large amounts of data, so respecting and safeguarding privacy and data rights throughout the lifecycle must be prioritized. Consistent with Intel's Privacy Notice, Intel supports privacy rights by designing our technology with those rights in mind, including being transparent about the need for any personal data collection, allowing user choice and control, and designing, developing, and deploying our products with appropriate guardrails to protect personal data.
Promote Equity and Inclusion
Whether building stand-alone products or working with customers and partners to bring new AI capabilities into the world, Intel is committed to building inclusivity into every step in the value chain. Intel strives to look at all aspects of AI with an inclusion lens, from the diverse backgrounds of developers in accordance with our Diversity and Inclusion policy, to datasets, to models, to intended and unintended uses. We take action to mitigate potential biases and communicate with stakeholders to make it easier for them to do the same.
Protect the Environment
AI can consume significant amounts of energy and require substantial use of materials. Intel strives to develop, deploy, and use AI consistent with Intel’s environmental stewardship commitments by considering the decarbonization and efficiency of AI solutions throughout their lifecycle, and focusing on hardware and software development that accelerates the transition towards a low-carbon, low-waste future.