Responsible AI
Enabling ethical and equitable AI requires a comprehensive approach that spans people, processes, systems, data, and algorithms. Intel designs AI to lower risks and maximize benefits for society.
Our Perspective
Intel has long recognized the ethical and human rights implications associated with the development of technology. This is especially true of AI, for which we remain committed to evolving the best methods, principles, and tools that ensure responsible practices in our product development and deployment.
Responsible AI Pillars
Intel is committed to advancing AI technology responsibly. We do this by applying rigorous, multidisciplinary review processes throughout the development lifecycle, building diverse development teams to reduce bias, and collaborating with industry partners to mitigate potentially harmful uses of AI. We implement leading processes founded on international standards and industry best practices. AI has come a long way, but there is still much more to discover as the technology evolves, and we are continuously finding ways to use it to drive positive change and better mitigate risk. We continue to collaborate with academia and industry partners to advance research in this area while evolving our platforms to make responsible AI solutions computationally tractable and efficient.
Internal and External Governance
Internally, our multidisciplinary advisory councils review development activities through the lens of seven principles: respect human rights; enable human oversight; enable transparency and explainability; advance security, safety, and reliability; design for privacy; promote equity and inclusion; and protect the environment. Externally, consistent with Intel’s Global Human Rights Principles, when we become aware of a concern that Intel products are being used in a way that violates these Principles, Intel will take appropriate action to mitigate that abuse, up to and including restricting or ceasing Intel’s business with the partner, until and unless we have high confidence that Intel’s products are not being used to violate human rights.
Read more about Intel’s Corporate Social Responsibility Efforts
Research and Collaboration
We collaborate with academic partners around the world to conduct research in privacy, security, human/AI collaboration, trust in media, AI sustainability, explainability, and transparency. Additionally, we continue to spearhead publications, collaborations, and partnerships across the industry to better understand the implications of AI and help solve global challenges.
Read more about the Private AI Collaborative Research Institute
Products and Solutions
We develop platforms and solutions to make responsible AI computationally tractable. We create software tools to ease the burden of responsible AI development and explore algorithmic approaches to improve privacy, security, and transparency and to reduce bias. This work is informed by ethnographic research that helps us understand pain points and address them appropriately. A brief, illustrative sketch of the kind of privacy-preserving computation these tools enable follows the links below.
Read more about the Intel® Homomorphic Encryption Toolkit
Read more about Intel® Trust Authority
Read more about Intel’s Real-Time Deepfake Detector
Read more about Intel’s approach to Generative AI
Read more about Intel® Explainable AI Tools
Read more about the CAM-Visualizer Toolkit
Read more about the OpenVINO™ Automatic Model Manifest Add-On Toolkit
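To make the privacy-preserving idea concrete, here is a minimal, purely illustrative sketch of additively homomorphic encryption, the property that lets a service compute on encrypted data without decrypting it. It uses a toy Paillier-style scheme in plain Python with deliberately small parameters; it is not the Intel® Homomorphic Encryption Toolkit or its API, and it is not secure for real use.

# A toy, additively homomorphic Paillier-style scheme (illustration only; not secure,
# and not the Intel HE Toolkit). Multiplying two ciphertexts yields an encryption of
# the sum of their plaintexts, so encrypted values can be aggregated without decryption.
import math
import random

p, q = 101, 113                 # tiny primes for readability; real keys use ~2048-bit moduli
n = p * q                       # public modulus
n_sq = n * n
g = n + 1                       # conventional generator choice
lam = math.lcm(p - 1, q - 1)    # private key part: lambda

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)   # private key part: mu (modular inverse)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n_sq)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n_sq) == 42   # homomorphic addition: 20 + 22 recovered from ciphertexts

Production schemes rely on far larger parameters and careful optimization, which is why platform support matters for making such computation tractable at scale.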
Inclusive AI
We understand the need for equity, inclusion, and cultural sensitivity in the development and deployment of AI, and we strive to ensure that the teams working on these technologies are diverse and inclusive. For example, through Intel’s digital readiness programs, we engage students to raise awareness of responsible AI, AI ethics principles, and methods for developing responsible AI solutions. The AI technology domain should be developed and informed by diverse populations, perspectives, voices, and experiences. That is why we actively engage with community colleges, which offer a chance to democratize AI technology: in the U.S. higher education system, these schools attract students from a broad variety of backgrounds and areas of expertise. We continually seek new ways of engaging with people from all walks of life who are affected by new technologies like AI.
Initiatives
We believe that technology should enrich the life of every person on Earth. AI can create global change, provide us with powerful tools, and enable a responsible, inclusive, and sustainable future. We are harnessing the power of AI to tackle critical global challenges like pandemics, natural disasters, and global public health. We are developing AI capabilities and solutions to amplify human potential, enhance inclusion and improve accessibility for people with disabilities.
Enhancing Accessibility
For many individuals with disabilities, independence and autonomy can be hard to achieve. AI is helping to change that by enabling products that offer alternative solutions to everyday barriers.
Expanding Access to Education
Intel is dedicated to responding to the global AI skill gap with programs like AI for Youth and AI for Future Workforce, preparing students for the digital revolution.
Improving Safety
From enabling automated vehicles to drive safely to reducing child exploitation, AI technology is helping to make society safer.
Creating Environmental Solutions
Using AI technology, researchers can better understand how our environment works and develop solutions to build a better future.
Advancing Healthcare
AI is now commonly used in healthcare and the life sciences, from improving patient care to advancing preventive disease research.
Understanding and Reducing Impact of Radiation
In collaboration with NASA and Mayo Clinic, Intel is using federated learning to better understand the effects of cosmic radiation and improve astronaut health.
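Federated learning makes this possible by keeping each institution’s data where it resides and sharing only model updates with a coordinator. The snippet below is a minimal conceptual sketch of the federated-averaging step on synthetic data; the linear model, data, and training loop are illustrative assumptions, not the actual pipeline used in the collaboration.

# A conceptual sketch of federated averaging (FedAvg): each site trains on its own
# private data and shares only model weights; raw records never leave the site.
# The linear model and synthetic data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_site_data(n_samples):
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    return X, y

sites = [make_site_data(200) for _ in range(3)]   # e.g., three institutions' local datasets

def local_update(w, X, y, lr=0.05, epochs=20):
    # Gradient descent on the site's local least-squares loss.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):                               # federation rounds
    local_weights = [local_update(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_weights, axis=0)     # coordinator averages the site models

print("learned weights:", w_global)               # approaches [2.0, -1.0] without pooling raw data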