Stop Robot Abuse: AI businesses have to start thinking about ethics
Robot dog builder Boston Dynamics’ Stop Robot Abuse site is a parody — not a PETA campaign as many believe — but it did spark a global conversation about how humans treat robots and, eventually, truly artificially intelligent entities.
But right now on the ASX that deeper conversation could be lacking: the main focus is on using AI to replace mundane human tasks — not on the deeper ethical issues of “bias and supremacy”.
The former is the bias people bring when they develop algorithms or analyse results, whereas the latter relates to philosophical fears of handing over too much information to AI before having a solid understanding of what it can ultimately do, says a recent report from consultancy KPMG.
KPMG head of law Stuart Fuller tells Stockhead that even businesses using AI for rote tasks need to be aware of the broader ethics.
“Datasets need data and data has a particular source, so how do you ensure that is as unbiased as possible, so that you don’t have the unconscious bias of a developer,” he says.
“You need to ensure you have the right mix of what the machine does and what the human oversight of that machine does.”
He says all businesses using machine learning or elements of AI, whether for automated drones or processing energy bills, have an obligation to consider the deeper ethics.
“Some can’t see the forest for the trees — we know that it can provide huge advantages for businesses, but we also need to be looking at the impact AI can have on the wider society.”
>>Scroll down for a table of stocks with exposure to AI
Energy bill monitor BidEnergy (ASX:BID) uses machine learning in an automated system that monitors a business’s energy spending. The software manages both invoices and meter data and offers that data up for auction where energy retailers bid for companies’ energy spend.
“Someone sitting at a desk and analysing bills is a fairly monotonous task for a human to do,” says BidEnergy managing director Guy Maine.
He directed Stockhead to the company’s CTO for comment on the ethical side of its algorithms; the CTO did not respond before publication.
Mr Maine sees AI as a way to free humans to do more interesting things.
“[Bill analysis] is the perfect role for a robotic workforce, meaning employees can focus on other things rather than repetitive, monotonous tasks,” he said.
“How much automation and how intelligent we want these things to be, that’s a hot potato and a big leap forward from energy bills, where our robotic process automation is making things easier and more efficient.”
The KPMG report puts the onus on business leaders to drive the demand and integration of ethical AI, via good governance and processes.
It’s a point Dr Ralph Highnam, chief of Volpara Health Technologies (ASX:VHT), is increasingly aware of.
“We are ISO 27001 certified, which signifies our commitment to keeping patient data safe. We also only send de-identified data to the cloud for processing; only on-site can anyone actually re-identify data,” Dr Highnam said.
“It leads you to think about and engage in information security right from the start of any product development. What risks are there to patient info? And how can we mitigate them?”
Volpara uses machine learning in its software in order to more accurately diagnose breast cancer from breast density measurements.
“Doctors are mostly concerned about getting good results in a way and form that they can trust, and they don’t generally fear the machines taking their job,” he told Stockhead.
“We don’t hear much in the way of concerns from patients and doctors in terms of privacy, as long as the results are reliable and can be trusted.”
The ethics concerns people have raised with Volpara are around the unauthorised use of data for marketing purposes, not how authorised data is used to build new algorithms.

Its agreements with clinics in the US and the EU already specify what the company can and cannot do with the data it collects.
The KPMG report touches on the fact that we don’t know much about how these systems will evolve and what, therefore, they will be able to do with our data — some of it very personal.
“AI means so many different things to so many different people, and so the definition is pretty imprecise,” KPMG’s Mr Fuller explains.
“It can range from chat bots to machine learning algorithms to the AI that is yet to come. And with so many different views on the subject what is needed is an agreed upon, common approach across industry, business and governments to set the right frameworks and make sure we get it right.”
He says there are two key risks that will develop if businesses continue to miss the forest for the trees.
“Firstly issues of bias and supremacy can create insufficient public trust that the technology can be used for positive purposes, which will put businesses back behind the eight ball,” he says.
“And secondly if businesses don’t give the broader ethics attention, then it risks government coming in and imposing legislation on the industry.
“Businesses’ level of knowledge of the technology would be greater than the government’s, so the time is now for business to play a role in influencing the policy.”
Discussions around the future of AI shouldn’t be “limited by fear and apprehension” because the benefits of the technology far outstrip the risks, the KPMG report argues.
“AI is set to enhance process efficiency in workplaces and make way for deeper intellectual engagement,” the report says.
The broader discussion, it says, should be about a global best practice for AI models that makes ethical considerations a core focus while still allowing Australian AI to be commercially competitive.
Mr Fuller says there are many examples of the technology creating fair, unbiased, transparent offerings across a wide array of sectors, such as the healthcare industry.
Volpara’s Dr Highnam says AI and machine learning are “not here to replace, but here to help” — and they’re getting smarter.
“An issue we ran into in breast screening was occasionally a pacemaker would show up in the imaging and that would throw off the algorithm,” he says.
“But we now have enough images with pacemakers, and so we have been able to train the machines to identify when a pacemaker has appeared and so it is no longer thrown off when doing the calculations.
“Machine learning is really coming into its own.”