AI Policy @🤗: Comments on U.S. National AI Research Resource Interim Report
In late June 2022, Hugging Face submitted a response to the White House Office of Science and Technology Policy and National Science Foundation’s Request for Information on a roadmap for implementing the National Artificial Intelligence Research Resource (NAIRR) Task Force’s interim report findings. As a platform working to democratize machine learning by empowering people from all backgrounds to contribute to AI, we strongly support NAIRR’s efforts.
In our response, we encourage the Task Force to:
Appoint Technical and Ethical Experts as Advisors
- Technical experts with a track record of ethical innovation should be prioritized as advisors; they can advise NAIRR not only on what is technically feasible, implementable, and necessary for AI systems, but also on how to avoid exacerbating harmful biases and other risks, such as the malicious use of AI systems. Dr. Margaret Mitchell, one of the most prominent technical experts and ethics practitioners in the AI field and Hugging Face’s Chief Ethics Scientist, is a natural example of such an external advisor.
Set Resource (Model and Data) Documentation Standards
- NAIRR-provided standards and templates for system and dataset documentation would improve accessibility and function as a checklist, and this standardization should ensure readability across audiences and backgrounds. Model Cards are a widely adopted documentation structure that can serve as a strong template for AI models; a rough sketch of the idea follows below.
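As a minimal sketch (not NAIRR’s or Hugging Face’s actual template), the example below shows how a fixed set of required documentation sections, borrowing the familiar Model Card section names, can double as a checklist that a resource must complete before release. All section contents and the helper function are hypothetical and for illustration only.

```python
# Sketch: a standardized documentation template that acts as a checklist.
# Section names follow the widely adopted Model Card structure; the example
# content below is made up for illustration.

REQUIRED_SECTIONS = [
    "Model Description",
    "Intended Uses & Limitations",
    "Training Data",
    "Evaluation Results",
    "Bias, Risks, and Limitations",
]


def render_model_card(title: str, sections: dict) -> str:
    """Render a model card, refusing to proceed if any required section is empty."""
    missing = [name for name in REQUIRED_SECTIONS if not sections.get(name, "").strip()]
    if missing:
        raise ValueError(f"Model card is missing required sections: {missing}")
    body = "\n\n".join(f"## {name}\n\n{sections[name]}" for name in REQUIRED_SECTIONS)
    return f"# {title}\n\n{body}\n"


if __name__ == "__main__":
    card = render_model_card(
        "Example Sentiment Model",
        {
            "Model Description": "A small English sentiment classifier (hypothetical).",
            "Intended Uses & Limitations": "Research use; not for high-stakes decisions.",
            "Training Data": "Publicly available English movie reviews.",
            "Evaluation Results": "Reported on a held-out test split.",
            "Bias, Risks, and Limitations": "English-only; may reflect dataset biases.",
        },
    )
    print(card)
```

The design point is simply that a shared, mandatory structure makes gaps visible: a resource without a “Bias, Risks, and Limitations” section cannot be published at all.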
Make ML Accessible to Interdisciplinary, Non-Technical Experts
- NAIRR should provide educational resources as well as easily understandable interfaces and low- or no-code tools that let all relevant experts conduct complex tasks, such as training an AI model. For example, Hugging Face’s AutoTrain empowers anyone, regardless of technical skill, to train, evaluate, and deploy a natural language processing (NLP) model; a brief sketch of what a low-code interface looks like is given below.
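To give a concrete sense of the “low-code” end of this spectrum, here is a minimal sketch using the `transformers` pipeline API, which hides model selection, tokenization, and inference behind a single call. The task and input sentence are illustrative; AutoTrain itself goes further and requires no code at all.

```python
# Minimal sketch of a low-code ML interface: the `transformers` pipeline API
# wraps model download, tokenization, and inference in one call.
from transformers import pipeline

# Loads a default pretrained sentiment-analysis model on first use.
classifier = pipeline("sentiment-analysis")

# Output is a list of {'label': ..., 'score': ...} dicts, one per input.
print(classifier("Public research infrastructure should be easy to use."))
```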
Monitor Open-Source and Open-Science Resources for High Misuse and Malicious Use Potential
- Harm must be defined by NAIRR and its advisors and continually updated, and should at minimum encompass egregious and harmful biases, political disinformation, and hate speech. NAIRR should also invest in legal expertise to craft Responsible AI Licenses so that it can take action should an actor misuse its resources.
Empower Diverse Researcher Perspectives via Accessible Tooling and Resources
- Tooling and resources must be available and accessible to different disciplines, as well as to the many languages and perspectives needed to drive responsible innovation. At minimum, this means providing resources in multiple languages, starting with the most spoken languages in the U.S. The BigScience Research Workshop, a community of over 1,000 researchers from different disciplines hosted by Hugging Face and the French government, is a good example of empowering perspectives from over 60 countries to build one of the most powerful open-source multilingual language models.
Our memo goes into further detail on each recommendation. We are eager to see more resources that make AI broadly accessible in a responsible manner.