CA Technologies is participating in scientific research to discover how internet of things (IoT) applications can use deep learning to imitate human decisions.
The research will also explore ways of ensuring that AI-based decisions are not producing biased results.
This three-year research project is named ALOHA (adaptive and secure deep learning on heterogeneous architectures), is funded by the European Union as part of the Horizon 2020 research and innovation programme, and is coordinated by the University of Cagliari in Italy.
“The future of all technologies will include AI and deep learning in some way,” says CA Technologies chief technology officer Otto Berkes.
“The expansion of complex, multi-layered IoT systems brings both security and software development challenges that AI and autonomous computing are uniquely positioned to address.”
“ALOHA aims to better understand how applications running on IoT devices with growing computational power can learn from experience and react autonomously to what happens in a surrounding environment,” says CA Technologies strategic research vice president Victor Muntés.
“We will bring our security expertise to avoid data poisoning risks that could lead to bias in AI-based decisions, while our agile expertise will help to efficiently embed the use of deep learning in the software development process.”
Until now, deep learning AI algorithmic processing has largely been limited to expensive, high-performance servers.
ALOHA will study the use of these deep learning algorithms on small, low-power consumption devices such as video cameras, sensors and mobile devices.
This will enable them to learn, recognise and classify images, videos, sounds and sequences quickly and with high precision.
The ability of small devices to make smart decisions thanks to deep learning applications will be extremely useful in situations where human expertise is not available. For example, an IoT application could automatically provide a diagnosis for a medical CT scan image in a remote location.
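Running deep learning models on low-power devices typically depends on shrinking them first. As a rough illustration of one common technique, the sketch below shows symmetric 8-bit weight quantization, which cuts storage to a quarter of 32-bit floats at a small cost in precision. This is a generic illustration, not code from the ALOHA project; all names are made up.

```python
def quantize_int8(weights):
    """Map float weights to int8-range values plus a scale factor (symmetric quantization)."""
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the quantized representation."""
    return [q * scale for q in quantized]

weights = [0.7, -1.27, 0.05, 0.33]   # toy model weights
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
print(q)  # [70, -127, 5, 33]
```

Each quantized value fits in one byte instead of four, and the round-trip error stays below one scale step, which is why image and audio classifiers can run acceptably on cameras, sensors and phones.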
The ALOHA research into preventing data poisoning could also help address AI bias in IoT applications and other contexts, avoiding situations such as a chatbot making racist remarks or a translation application producing sexist output.
CA Technologies will be responsible for the development and security of the underlying deep learning platform, focusing its research on the following areas:
Security - CA research will include the development of new tools that can analyse data and detect bias. These tools will be extended to detect data poisoning risks and suggest mitigation actions.
Agility - CA will explore how agile methodologies can be applied to the deep learning arena to align strategy and execution, track and manage delivery in a predictable cadence, and leverage key data to measure performance.
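One very simple signal that bias-detection tooling of the kind described above might surface is a skewed positive-label rate across groups in the training data. The following minimal sketch computes that rate per group; it is an illustrative assumption about what such a tool could check, not CA's actual tooling.

```python
from collections import Counter

def positive_rate_by_group(records):
    """Fraction of positive labels per group -- a crude indicator of label bias.

    records: iterable of (group, label) pairs, with label in {0, 1}.
    """
    totals, positives = Counter(), Counter()
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical training set: group "b" receives far fewer positive labels.
data = [("a", 1)] * 80 + [("a", 0)] * 20 + [("b", 1)] * 30 + [("b", 0)] * 70
rates = positive_rate_by_group(data)
print(rates)  # {'a': 0.8, 'b': 0.3}
```

A large gap between groups would not prove poisoning or bias on its own, but it flags data worth auditing before a model trained on it makes autonomous decisions.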