BABL AI joins US consortium to promote safer AI systems
BABL AI, a provider of AI auditing and algorithmic governance services, has announced that it has joined the US AI Safety Institute Consortium (AISIC). Created by the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), the Consortium aims to advance the development of trustworthy and safe AI systems.
The AISIC is the world's largest collective of AI developers, users, researchers and other affected groups. It brings together a diverse set of participants, from Fortune 500 companies and academic teams to non-profit organisations and U.S. Government agencies, all of whom share a commitment to the research needed to develop safe and dependable AI.
As a vital element of the NIST-led U.S. Artificial Intelligence Safety Institute, the Consortium will work closely with stakeholders to bring leading-edge research and testing to the wider AI safety community. Its chief mission is to ensure that AI systems align with societal norms and values whilst prioritising public safety.
Dr Shea Brown, CEO of BABL AI, stated, "The inauguration of the AISIC marks a significant step in the evolution of strong and evidence-based approaches to managing AI risk, and we take pride in the fact that BABL AI is able to contribute to this endeavour."
Dr Brown further highlighted, "As an organisation that audits AI and algorithmic systems for bias, safety, ethical risk, and effective governance, it is our belief that the Institute's mission of developing a system for evaluating these technologies aligns with our own mission to promote human flourishing in the age of AI."
Gina Raimondo, Secretary of Commerce, emphasised the crucial role of the U.S. government in establishing standards and tools to manage the risks and opportunities presented by artificial intelligence. She underscored President Biden's directive to prioritise two key objectives, establishing safety standards and safeguarding the innovation ecosystem, and described the U.S. AI Safety Institute Consortium as a strategic initiative aimed at achieving these goals.
Raimondo said, "The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence. President Biden directed us to pull every lever to accomplish two key goals: set safety standards and protect our innovation ecosystem. That's precisely what the U.S. AI Safety Institute Consortium is set up to help us do."
Since 2018, BABL AI has been auditing and certifying AI systems, consulting on responsible AI best practices and offering online education on related topics. BABL AI's overall mission is to ensure that all algorithms are developed, deployed, and governed in ways that prioritise human flourishing.
BABL AI is one of more than 200 leading AI stakeholders participating in the AISIC. This diverse group comprises companies, academic institutions, civil society organisations, and government agencies, all committed to promoting the development and deployment of trustworthy AI.