In an ambitious move towards responsible Artificial Intelligence (AI), tech giant Google has joined forces with the University of Cambridge in a multi-year research partnership. Announced on January 2nd, the collaboration aims to explore how AI technology can create beneficial societal impacts, including potential measures to combat climate change.
Under the landmark agreement, Google will team up with the University's Centre for Human-Inspired Artificial Intelligence (CHIA), focusing on key AI research initiatives in shared areas of interest alongside the equally important questions of AI ethics and safety.
According to Google, the research projects will largely revolve around responsible AI, human-centred robotics, human-machine interaction, healthcare, economic sustainability, and climate change, all underpinned by a pledge to develop the technology with vigilance and accountability.
Matt Brittin, President of Google EMEA, spoke favourably of the joint venture, underlining the importance of this collaboration in shaping the responsible development and adoption of AI technology. He said, "By collaborating with one of our world-leading British academic institutions, we can enable AI research that is bold, responsible and designed to meet the needs of people across the country."
The partnership was met with praise from industry peers. Margo Waldorf, CEO of Change Awards, applauded the effort. She said, "The safety and ethics of using AI are critical subjects across the industry, so it is great to see Google and the University of Cambridge pioneer the research in the responsible use of the technology." Waldorf added that the partnership is leading the way in discussions about ethics and safety, and in promoting the beneficial use of technology for all of humanity.
Tom Dunning, CEO and Founder of Ad Signal, offered his view on the partnership's significance. Arguing that the principal risk of AI lies in unchecked adoption by businesses eager to tap into the latest trends, Dunning emphasised the need to carefully assess whether AI is the appropriate tool. He noted that the materials required to train models and power AI infrastructure are carbon-heavy, and that running AI escalates cooling needs and network traffic.
Dunning felt strongly that industry and academia must be the frontrunners in ensuring the responsible development and adoption of AI. He said, "Organisations such as Google and the University of Cambridge have the capacity and responsibility to shift the market towards less carbon-intensive solutions, while also reducing the carbon output of AI." Through this partnership, Dunning expressed his hope that the industry will soon take a new approach to AI, giving more careful consideration to its environmental impact.