If humans are ever to fully trust driverless cars to make decisions for them, these autonomous systems need to be able to understand how humans problem-solve in ways that sometimes defy machine logic.
Professor Peter Bruza from QUT’s School of Information Systems will lead a team on a two-year challenge to develop and test quantum theory-based models that better explain and predict human decisions on trust.
His Contextual models of information fusion project has just received US$241,000 in funding from the Tokyo-based Asian Office of Aerospace Research and Development.
- Humans and autonomous systems have completely different decision-making processes
- By machine standards, humans are irrational – our decision making regularly defies laws of probability
- Humans and robots must be able to successfully collaborate
- Explaining human decision-making rationale is the missing link to developing greater human trust in shared decision making with machines
- QUT team to identify quantum cognition models for autonomous systems, surveying thousands of people via online platforms like Amazon’s Mechanical Turk to study their decision-making rationale
“In the future, we’re not going to trust our driverless car if we ask it to park while we run to a meeting and later find it in a tow-away zone,” Professor Bruza said.
“If it explained first that it would park in that zone if the hours permitted, we’re more likely to trust and rely on its decision-making capability.”
Professor Bruza said humans and robots may need to collaboratively make decisions under extreme and uncertain conditions, such as on the battlefield or in the wake of a disaster.
“The classic Schrödinger’s cat thought experiment is a good example of the differences between how humans and machines think. An autonomous system has no hope of knowing how any given human will decide whether the cat is alive or dead,” he said.
“The process of combining the plethora of sources that need to be processed in order to make a decision is known as ‘information fusion’.
“As humans, we are often comfortable with a decision if we think all sources combined are collectively trustworthy.
“However, our decision-making can defy the laws of probability used by machines to make decisions and therefore be considered irrational.
“According to probability standards, the order in which information is received doesn’t matter. The decision is the same whether receiving information source A before B, or the other way around.
“Humans don’t always think that way. The order in which we receive information, the inferences we draw, and the context in which we make a decision can all sway our thinking.”
Professor Bruza said quantum cognition models provided a better account of human thinking than traditional probabilistic models.
“Quantum cognition explains context – the interference a first judgement can have on subsequent judgements,” he said.
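The contrast Professor Bruza describes can be illustrated with a small toy model (a minimal sketch, not the project's actual models): classical Bayesian updating of independent evidence gives the same answer regardless of the order the evidence arrives in, while a quantum-like model that represents judgements as non-commuting projections of a belief-state vector produces exactly the kind of order effect described.

```python
import numpy as np

# Classical Bayesian updating: with independent evidence, order is irrelevant.
def update(p, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.5
p_ab = update(update(prior, 3.0), 0.4)   # see evidence A, then evidence B
p_ba = update(update(prior, 0.4), 3.0)   # see evidence B, then evidence A
print(p_ab, p_ba)                        # identical posteriors

# Quantum-like model: each "yes" judgement projects the belief state onto a
# direction. Projectors onto non-orthogonal directions do not commute, so the
# probability of answering yes-then-yes depends on the question order.
def projector(theta):
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])               # initial belief state (unit vector)
A, B = projector(0.0), projector(np.pi / 4)

def p_yes_then_yes(first, second):
    """Probability of 'yes' to the first judgement, then 'yes' to the second."""
    return np.linalg.norm(second @ first @ psi) ** 2

print(p_yes_then_yes(A, B), p_yes_then_yes(B, A))  # 0.5 vs 0.25: order matters
```

The first judgement changes the belief state (the projection), which interferes with the second judgement — the "context" effect quantum cognition models capture and classical probability cannot.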
“Machines must understand what a human would do in context and explain that rationale before taking action.
“Trust in our machines will quickly erode if we can’t rely on them to make suggestions, recommendations or decisions we deem useful, whether it’s in relation to online shopping or autonomous vehicles.
“Amazon is already spending a lot on improving recommendation algorithms because even the slightest increase in accuracy translates to huge gains in profit.”