Responsible AI
Intel has long recognized the ethical and human-rights implications of developing technology. This is especially true of AI, and we remain committed to evolving the best methods, principles, and tools to ensure responsible practices in our product development and deployment.
Responsible AI Research
Intel Labs has been conducting research in Responsible AI and collaborating with academia to advance the state of the art in privacy, security, human/AI collaboration, fairness and robustness, trusted media, and sustainability. Below is a sample of our publications and engagements.
Security & Privacy
Object Sensing and Cognition for Adversarial Robustness (OSCAR)
Developed in collaboration with Georgia Tech, with support from the DARPA GARD program.
ShapeShifter: Robust Physical Adversarial Attack on Faster R-CNN Object Detector
Written in collaboration with Georgia Tech.
An Open-Source Framework For Federated Learning
Open Federated Learning (OpenFL) is an easy-to-learn, flexible federated learning tool for data scientists.
The Federated Tumor Segmentation (FeTS) Initiative
The largest international federation of healthcare institutions.
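The idea underlying both OpenFL and FeTS is that institutions train locally and share only model updates, never raw patient data. A minimal federated-averaging (FedAvg) sketch illustrates the aggregation step; this is an illustrative toy model, not OpenFL's actual API, and all function names here are hypothetical.

```python
# Illustrative sketch of federated averaging (FedAvg), the aggregation
# idea behind federated learning frameworks such as OpenFL. Each site
# trains on its private data; only model weights are shared and averaged.
# Not OpenFL's actual API -- names below are hypothetical.

def local_update(w, data, lr=0.1):
    """One gradient step on a 1-D least-squares model y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(client_weights, client_sizes):
    """Average client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two "hospitals" with private data drawn from y = 2x
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0)]

global_w = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(global_w, site_a), local_update(global_w, site_b)]
    global_w = federated_average(updates, [len(site_a), len(site_b)])

print(round(global_w, 2))  # converges toward the shared optimum w = 2.0
```

Only `global_w` and the local updates cross institutional boundaries; each site's `(x, y)` pairs stay local, which is the privacy property these initiatives rely on.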
Fairness & Transparency
Mitigating Sampling Bias and Improving Robustness in Active Learning
Human in the Loop Learning workshop at the International Conference on Machine Learning (ICML 2021).
Limits and Possibilities for “Ethical AI” in Open Source: A Study of Deepfakes
Exploring transparency and accountability in the open source community.
Uncertainty as a Form of Transparency: Measuring, Using, and Communicating Uncertainty
AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES 2021).
Human/AI Collaboration
Few-shot Prompting Towards Controllable Response Generation (arXiv preprint, June 2022)
Written in collaboration with NTU, Taiwan.
Human in the Loop Approaches in Multi-modal Conversational Task Guidance System Development
Search in Conversational AI (SCAI) workshop at COLING 2022.
CueBot: Cue-Controlled Response Generation for Assistive Interaction Usages
Ninth Workshop on Speech and Language Processing for Assistive Technologies (SLPAT-2022), ACL 2022.
Semi-supervised Interactive Intent Labeling
Workshop on Data Science with Human-in-the-loop: Language Advances (DaSH-LA), NAACL 2021.
Trusted Media
FakeCatcher: Detection of Synthetic Portrait Videos Using Biological Signals
IEEE Transactions on Pattern Analysis and Machine Intelligence (2020).
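The biological signal behind this line of work is remote photoplethysmography (rPPG): a real heartbeat produces faint periodic color changes in facial skin that synthetic videos tend to lack. A toy sketch of the core idea on a synthetic signal, assuming nothing about FakeCatcher's actual pipeline:

```python
import numpy as np

# Illustrative rPPG sketch (not FakeCatcher's actual pipeline): recover
# a heartbeat-like frequency from a per-frame skin-color signal. Here the
# signal is synthetic; in practice it would be, e.g., the mean green-channel
# intensity of a facial skin region in each video frame.
fps = 30
t = np.arange(300) / fps   # 10 seconds of frames
heart_rate_hz = 1.2        # 72 bpm, synthetic ground truth

rng = np.random.default_rng(0)
signal = (0.5
          + 0.01 * np.sin(2 * np.pi * heart_rate_hz * t)  # faint pulse
          + 0.002 * rng.standard_normal(t.size))          # sensor noise

# Find the dominant frequency within a plausible heart-rate band (0.7-4 Hz)
freqs = np.fft.rfftfreq(t.size, d=1 / fps)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
band = (freqs >= 0.7) & (freqs <= 4.0)
estimated_hz = freqs[band][np.argmax(spectrum[band])]
print(round(estimated_hz * 60))  # estimated beats per minute: 72
```

A detector can then ask whether such a physiologically plausible, spatially consistent pulse is present at all, which is the intuition the papers below build on.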
How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals
2020 IEEE International Joint Conference on Biometrics (IJCB).
Where Do Deep Fakes Look? Synthetic Face Detection via Gaze Tracking
ACM Symposium on Eye Tracking Research and Applications, ETRA 2021.
Adversarial Deepfake Generation for Detector Misclassification
Tenth Women In Computer Vision Workshop (WiCV) at CVPR 2022.