NIST Launches Artificial Intelligence Safety Institute Consortium
From Nvidia:
NVIDIA is now part of the new U.S. Artificial Intelligence Safety Institute Consortium, which aims to advance safe, secure, and trustworthy AI. The institute will create the tools and standards needed for safe AI development. NVIDIA has a history of working with governments and researchers on AI safety: the company endorsed the Biden Administration’s voluntary AI safety commitments and invested $30 million in the National Artificial Intelligence Research Resource pilot program.
The consortium, which includes more than 200 leading AI creators, researchers, and organizations, will focus on knowledge sharing and applied research to accelerate trustworthy AI innovation. NVIDIA will participate in working groups, contributing its computing resources and its best practices for implementing AI risk-management frameworks and AI model transparency.
NVIDIA also advances AI safety through NeMo Guardrails, its open-source software for keeping language model responses accurate, appropriate, on topic, and secure.
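To give a flavor of how this works: guardrails in NeMo Guardrails are defined declaratively, in a YAML file that selects the underlying model and Colang files that describe allowed dialog flows. The sketch below is a minimal, hypothetical configuration (the model choice, file names, and example phrases are illustrative assumptions, not from the article) that keeps a bot on topic:

```yaml
# config.yml — selects the underlying LLM (model name is an assumption)
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
```

```colang
# rails.co — a simple topical rail in Colang, NeMo Guardrails' dialog language.
# The example user phrases train the rail to recognize off-topic requests.
define user ask off topic
  "What do you think about politics?"
  "Can you give me stock tips?"

define bot refuse off topic
  "I can only help with questions about our products."

define flow off topic
  user ask off topic
  bot refuse off topic
```

At runtime an application would load this configuration (e.g. with `RailsConfig.from_path` and `LLMRails` from the `nemoguardrails` Python package) so that each response is checked against the defined flows before it reaches the user.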
Read more: NIST Launches Artificial Intelligence Safety Institute Consortium