More than 200 top AI stakeholders have joined a consortium committed to the responsible development and deployment of artificial intelligence.
Secretary of Commerce Gina Raimondo announced the creation of the US AI Safety Institute Consortium (AISIC), which is set to bring AI creators and users, academics, government and industry researchers, and civil society organizations together.
Overseen by the US AI Safety Institute (USAISI), the AISIC will play a vital role in implementing the actions outlined in President Biden’s Executive Order.
Top tech firms agree to develop responsible AI
Raimondo commented: “The US government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence.”
At the time of the announcement on February 8, the consortium included 228 companies and organizations that have already invested heavily in AI technologies. They include giants like Google, Microsoft, Apple, Amazon, and Meta, as well as many other popular service providers like Canva, Workday, and Adobe.
The consortium will address key aspects such as red-teaming, capability evaluations, risk management, safety and security, and watermarking AI-generated content.
Raimondo continued: “…by working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”
White House Deputy Chief of Staff Bruce Reed highlighted the urgency of aligning efforts across government, industry, and research: "To keep pace with AI, we have to move fast…"
The AISIC will also collaborate with state and local governments, non-profits, and organizations from "like-minded nations" to promote the safe and responsible development of AI globally.