Research reveals the state of AI in Australian businesses
Mon, 27th May 2019

While most Australian businesses agree with the principles of the Australian Government-funded discussion paper, 'Artificial Intelligence: Australia's Ethics Framework', little is being done to implement ethical AI.

LivePerson, a NASDAQ-listed global tech company that specialises in asynchronous communication, AI and automation, surveyed almost 600 Australian IT, customer experience and digital decision-makers. The research revealed that just two in five Australian businesses have AI standards or guidelines in place.

The largest share of businesses surveyed (40 per cent) are looking to the Australian Government for leadership on AI regulation and the enforcement of standards.

It is widely recognised that AI has the potential to increase Australians' well-being, lift the economy and improve society.

The findings reflect this, with two-thirds of digital leaders surveyed actively incorporating AI into their businesses to drive positive outcomes for their organisations, employees and customers.

In fact, Australian businesses achieving broad AI usage are predicting 12 per cent growth in revenue, or an average increase in revenue of $1,875,000, over the next 12 months.

Further, among Australian businesses broadly using AI, many say the technology has had a positive impact on customer satisfaction (65 per cent), customer retention (61 per cent), employee satisfaction (59 per cent) and employee productivity (69 per cent).

Australian businesses' top concerns with AI

At the same time, the powerful outcomes that can be delivered by AI have given rise to new ethical considerations about the technology's potential impact.

Clear concerns emerged in the research about AI's potential to negatively affect society, particularly when it comes to privacy and personal information. The most common concerns held by Australian businesses are the potential for AI technologies to fall into the wrong hands (85 per cent), loss of privacy (85 per cent) and unauthorised access to data (84 per cent).

Australian businesses surveyed view the core principles of AI set out in the Ethics Framework as important, including privacy protection, regulatory and legal compliance, and the requirement that AI must not be designed to harm or deceive people. Ninety-one per cent of respondents stated these principles are 'somewhat' or 'extremely' important to them.

Attitudes toward AI not reflected in practice

While Australian organisations are concerned about the impact of AI on society, the research suggests Australian businesses could be doing more to minimise the potential risks.

Steps businesses are taking to mitigate the risk of negative outcomes primarily include: conducting risk assessments (36 per cent); monitoring industry standards (31 per cent); conducting ethics training for employees (31 per cent); developing best practice guidelines (30 per cent); and conducting impact assessments (30 per cent).

Australian businesses look to the Australian Government for AI leadership

Sentiment is mixed among Australian digital leaders on whether accountability for AI should lie with those developing it (46 per cent) or those deploying it (46 per cent).

Within their organisations, respondents reported that company leadership, including the C-suite (34 per cent) and boards of directors (34 per cent), is most likely to hold ultimate accountability for decisions made by AI systems.

From a regulatory perspective, two in five businesses believe an Australian government body should be responsible for setting and enforcing AI ethics. Interestingly, however, only half (49 per cent) of businesses think it's 'extremely important' that AI systems comply with all relevant international and Australian laws and regulations.