Governments must face the challenges to our ethical, economic and legal frameworks already posed by artificial intelligence if we’re to fully realise the technology’s benefits.
This was the conclusion of the “Re-engineering Industries with Artificial Intelligence & the Social Contract” luncheon panel of experts assembled by the ACS (Australian Computer Society) in association with IJCAI (the International Joint Conference on Artificial Intelligence) in Melbourne.
ACS President Anthony Wong warned that the regulatory frameworks have not been updated to account for robots and artificial intelligence producing works of value or causing harm and damage to individuals, businesses and the community.
“If a robot or artificial intelligence autonomously creates some work like a painting, music or book, some of which has already happened, who owns that? A robot is a machine; it’s not a legal entity,” said Mr Wong.
“If a robot or intelligence kills someone – we had a Tesla car on autopilot plough into a truck because the sensors couldn’t pick up the truck – who’s responsible? The manufacturer, the person controlling it or the robot or artificial intelligence?”
Fellow panellist Marita Cheng, founder and CEO of aubot, RoboGals Global and 2012 Young Australian of the Year, said governments need to look at the implications of robots producing goods that displace large numbers of workers.
“That’s why people like Bill Gates are talking about taxing robots and artificial intelligence, so that this money can be distributed throughout societies.”
Ms Cheng also pointed to the universal basic income experiments in the Netherlands, Finland and Canada as one strategy to mitigate this.
Professor Liz Bacon, Deputy Pro-Vice-Chancellor at the University of Greenwich in London, noted that universities appear to be broadly very good at teaching the tools and techniques of AI, but fall short when it comes to ethical issues and the potential impact on society.
“We need students to debate these issues and understand how a career in AI may change in their lifetime,” said Professor Bacon.
“If you’ve got a student studying a course in AI today, they’re going to be in the workforce for at least the next 40 years. Indeed they could even author the program that ends up becoming their boss or their co-worker.”
However, panellists also expressed scepticism towards the worst doomsday scenarios about mass job displacement with no job creation.
“There’s a big problem at the moment with the media. They blur the distinction between AI, autonomy, automation and algorithms,” said Mike Hinchey, President of IFIP (International Federation for Information Processing).
Mr Hinchey said he doubts we will reach the frequently cited threshold of artificial general intelligence, certainly not in our lifetime.
“It’s not very helpful having people like Elon Musk and Stephen Hawking with those big reputations scaring people that robots will take over the world.”
Indeed, all panellists expressed a sense of optimism about the opportunities and benefits that artificial intelligence will bring, if we get the right frameworks in place.
“Personally, I feel I live in a very exciting time. I’m not driven by fear, I’m driven by opportunities,” said Mr Wong.