Saturday, 20 October 2018
Financial Times/Richard Waters: Artificial intelligence: when humans coexist with robots
The Big Read
Artificial intelligence: when humans coexist with robots
Despite fears of a machine takeover, brainpower will still be necessary. Such ‘hybrid systems’ require careful design
Richard Waters in San Francisco October 9, 2018
The human race is not on the scrapheap after all. Or at least not yet. There has been no shortage of predictions in recent years about how advances in artificial intelligence and robotics will see humans replaced in all kinds of jobs.
But most AI experts see a less drastic outcome. In this version of the future, people will still have a role working alongside smart systems: either the technology will not be good enough to take over completely, or the decisions will have human consequences too important to hand over entirely to a machine.
This hybrid decision-making should produce better results than either working alone, according to David Mindell, a professor at Massachusetts Institute of Technology and author of Our Robots, Ourselves. There’s just one problem: when humans and semi-intelligent systems try to work together, things do not always turn out well.
A catastrophic demonstration took place on the streets of Tempe, Arizona, this year. An Uber test car equipped with the company’s latest self-driving technology struck and killed a person crossing the road. As in almost all of today’s autonomous cars, a back-up driver was on board to step in if the software failed. But an analysis by local police concluded that the driver was distracted at the time — and may have been watching a TV show on a smartphone.
The Uber vehicle relied on a degree of autonomy that is due to be launched more widely next year. The so-called Level 3 system is designed to drive itself in most situations but to hand control back to a human when it encounters situations it cannot handle.
Investigators examine an Uber self-driving vehicle that was involved in a road traffic accident in which a woman was killed in Tempe, Arizona, earlier this year © Reuters
A system that is meant to drive itself but can suddenly hand control back puts unrealistic demands on humans, say critics. “If you’re only needed for a minute a day, it won’t work,” says Stefan Heck, chief executive of Nauto, a US start-up whose technology is used to prevent professional drivers from becoming distracted. “It’s neither fish nor fowl.”
The failure points to a predicament with the adoption of AI that reaches well beyond driverless cars. Without careful design, the intelligent systems making their way into the world could provoke a backlash against the technology.
Once people come to understand how limited today’s machine learning systems are, the exaggerated hopes they have aroused will evaporate quickly, warns Roger Schank, an AI expert who specialises in the psychology of learning. The result, he predicts, will be a new “AI winter” — a reference to the period in the late 1980s when disappointment over the progress of the technology led to a retreat from the field.
Preventing that will require more realistic expectations of the new autonomous systems, as well as careful design to make sure they mesh with the human world. But the technology itself presents a serious barrier.
“The way AI works, and the way it fails, are foreign to us,” says Illah Nourbakhsh, a professor of robotics at Carnegie Mellon University. “Does the AI make us feel more involved — or is it like dealing with an alien species?”
A facial recognition system on show last year in Washington DC. Such systems could pick suspects from a crowd but a human may be needed to weed out false positives © AFP
The semi-driverless car is a particularly stark example of a near-autonomous system that relies on close co-operation with people. But as AI advances, hybrid systems such as these are creeping into many different situations.
Machine learning — the type of AI behind the most dramatic recent progress in the field — is an advanced form of pattern recognition. It has already proved itself superior to people at tasks such as identifying the images in photographs or recognising speech.
But it is less effective when it has to make judgments that go beyond the specific data on which it has been trained. In the real world, people often make decisions about situations they have not previously faced.
The problem lies in systems that can match data but not understand its significance. “They are powerful things, but they don’t have a sense of the world,” says Vishal Sikka, a former top SAP and Infosys executive who specialises in AI.
The new forms of human/machine co-operation are taking root in three main ways. First, there are scenarios where humans act as a back-up for the robots, taking over when the machines reach the limits of their abilities. Many work processes are being redesigned in this way — such as automated call centres, where language-understanding systems try to handle callers’ queries, only defaulting to a human operator when the technology is confused.
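The pattern is simple to express in code. The sketch below is purely illustrative: the classify_intent function and the 0.8 threshold are invented for the example, standing in for whatever language-understanding model and cut-off a real call centre would use.

```python
# Illustrative sketch of the call-centre fallback pattern: the machine
# handles what it is confident about and defers the rest to a person.
# classify_intent is a hypothetical stand-in for a trained model.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, tuned per deployment

def classify_intent(query: str) -> tuple[str, float]:
    """Return a predicted intent and a confidence score in [0, 1]."""
    # Placeholder logic; a real system would query a trained model.
    if "opening hours" in query.lower():
        return ("store_hours", 0.95)
    return ("unknown", 0.30)

def handle_call(query: str) -> str:
    intent, confidence = classify_intent(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Automated reply for intent '{intent}'"
    # The technology is "confused": default to a human operator.
    return "Routing caller to a human operator"

print(handle_call("What are your opening hours?"))  # handled by the machine
print(handle_call("My order arrived damaged"))      # escalated to a person
```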
The Uber crash was an extreme example of what can go wrong. Research from Stanford University has shown that it takes at least six seconds for a human driver to recover their awareness and take back control, says Mr Heck. But even when there is enough time for human attention to be restored, the person stepping into a situation may see things differently from the machine, making the handover far from seamless.
“We need to work on a shared meaning between software systems and people — this is a very difficult problem,” says Mr Sikka. The use of language highlights the difficulty. Humans can convey meaning with few words because a shared understanding of context between speaker and listener invests those words with meaning, he adds. Computer scientists have not yet worked out how to create that shared understanding in machines.
A second type of human/machine co-operation is designed to make sure that a sensitive task always depends on a person — even in situations where an automated system has done all the preparatory work and would be quite capable of completing the task itself.
Military drones, where human “pilots”, often based thousands of miles away, are called on to make the decision to fire at a target, are one example. Facial recognition systems — used to help immigration officers identify suspect travellers — are another. Both show how AI can make humans far more effective without robbing them of control, says Mr Heck.
One criticism of semi-autonomous weapons such as drones, however, is that there are no technical barriers to turning them into fully autonomous systems. Current procedures and safeguards can quickly be changed.
Drone operators prepare to launch an unmanned plane as part of US operations against Islamic State in Iraq and Syria © Getty
According to Stuart Russell, an AI professor at the University of California, Berkeley, it would be a short and easy step in a national emergency to remove the human drone operator from the loop, precipitating an era of robot weapons that make their own decisions about when to kill people. “You can’t say the technology itself can only be used in a defensive way and under human control. It’s not true,” he says.
A final type of “human in the loop” system involves AI that is not capable of handling a task entirely on its own but is used as an aid to human decision-making. Algorithms that crunch data and make recommendations, or tell people which step to take next, are creeping into everyday life.
The algorithms, though, are only as good as the data they are trained on — and they are not good at dealing with new situations. People who rely on these systems are often required to take their outputs on faith.
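That brittleness is easy to reproduce in miniature. In the toy sketch below, which is illustrative rather than drawn from any real system, a nearest-centroid classifier fitted to two familiar clusters of data still returns a confident-looking answer for an input unlike anything it was trained on, with no warning that it is out of its depth.

```python
import numpy as np

# Fit a toy nearest-centroid classifier to two made-up clusters.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
class_b = rng.normal(loc=5.0, scale=1.0, size=(100, 2))
centroids = {"A": class_a.mean(axis=0), "B": class_b.mean(axis=0)}

def recommend(x):
    # Always returns the nearest label, however far away x is: the
    # model has no notion of "I have never seen anything like this".
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

print(recommend(np.array([0.2, -0.1])))  # in-distribution: a sensible answer
print(recommend(np.array([1e6, 1e6])))   # wildly novel input: still answers
```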
Mr Schank points to the role of algorithms in baseball. Analysing the strengths and weaknesses of each batter has led to new ways of setting the field that baseball traditionalists would balk at. The outcome of these computer-aided decisions may well end up being worse than those based on purely human analysis, he says.
A bug in the app used by Uber drivers in San Francisco sent them to an airport cargo site rather than the passenger terminal. “Sometimes people will blindly follow the machine; other times people will say: ‘Hang on, that doesn’t look right.’ It’s like a lot of other technologies: people will adapt,” says Tim O’Reilly, a technology author.
These may be relatively harmless cases where little damage is done from being led astray by the machine. But what happens when the stakes are higher?
IBM made medical diagnostics one of the main goals for Watson, the system first created to win a TV game show and then repurposed to become what it calls a more general “cognitive” system.
Such systems are designed to leave the ultimate decision with an expert. IBM maintains that humans will always have the final say. But how easy would it be for a doctor to override a recommendation being offered by a computer that, by definition, has analysed more comparable situations and crunched more data than they have?
Rejecting the technology might be even harder if it has insurance or other financial consequences. “Doctors are put in a position where they feel subservient to the system,” says Mr Nourbakhsh. “Simply saying they’ll still make the decisions doesn’t make it so.”
Similar worries surfaced in the 1980s, when the field of AI was dominated by “expert systems” designed to guide their human users through a “decision tree” to reach the correct answer in any situation. It turned out to be too hard to anticipate all the unforeseen factors that complicate real-world decisions.
Artificial intelligence may be able to clean up scans and spot anomalies faster than the human eye, but algorithms can be fallible
But the latest AI, based on machine learning, looks set to become far more widely adopted, and it may be harder to second-guess. Thanks to their success in narrow fields such as image recognition, expectations for these systems have been soaring. Their creators have been more than happy to feed the hype.
“We’re getting out-of-control marketing departments,” says Mr Schank. He singles out IBM in particular, arguing that the company heavily over-promised when it came to Watson — a criticism frequently heard in AI circles.
Dario Gil, chief operating officer of IBM’s research effort, defends the decision to launch a big initiative around Watson nearly eight years ago, arguing that no other tech companies were according such a central role to AI at the time. But, he adds: “We were not clear enough about the difference between general . . . and specific [AI].”
Assessing the quality of an AI system’s recommendations raises other challenges. Non-experts may feel reluctant to second-guess a machine whose workings they do not understand.
It is not a new dilemma. More than 30 years ago, a software glitch in a radiation therapy machine called the Therac-25 led to some patients being given massive overdoses. Technicians had no way of identifying the flaw, and the machine stayed in use far longer than it should have as a result, says Mr Nourbakhsh.
The technology used in the most advanced machine learning systems, known as neural networks, presents additional challenges. These networks are modelled on a theory about how the human brain operates, passing data through layers of artificial neurons until an identifiable pattern emerges. Unlike the logic circuits employed in a traditional software program, there is no way of tracing this process to identify exactly why a computer comes up with a particular answer. This is a big hindrance to the adoption of neural networks.
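A toy illustration of why, using made-up weights rather than anything learned from real data: the network’s answer is nothing but repeated matrix arithmetic over those weights, so there is no branch or rule anywhere to point to as “the reason” for an output.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer network with arbitrary weights. In a trained system
# these matrices would be learned, not written by hand.
W1 = rng.normal(size=(4, 8))  # input layer -> hidden layer
W2 = rng.normal(size=(8, 2))  # hidden layer -> output layer

def forward(x):
    hidden = np.tanh(x @ W1)   # a layer of artificial neurons
    return np.tanh(hidden @ W2)

x = np.array([0.5, -1.2, 0.3, 0.9])
print(forward(x))
# The output is the joint effect of every weight above; unlike a
# traditional program, no single traceable step explains it.
```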
“That’s the odd irony of AI — the best systems happen to be the ones that are least explainable today,” says Mr Nourbakhsh.
Some experts, however, say headway is being made and that it will not be long before machine learning systems are able to point to the factors that led them to a particular decision. “It’s not impossible — you can look inside and see what signals it’s picking up,” says Mr Heck.
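One simple way to “look inside”, sketched here with finite differences rather than any particular interpretability toolkit, is to measure how sensitive the output is to each input; the inputs with the largest effect are the signals the system is picking up.

```python
import numpy as np

# Rebuild the same toy two-layer network as in the earlier sketch.
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

def forward(x):
    return np.tanh(np.tanh(x @ W1) @ W2)

def saliency(x, eps=1e-5):
    # Finite-difference estimate of d(output[0]) / d(input[i]): a crude
    # stand-in for the gradient-based attribution used in research.
    base = forward(x)[0]
    return [(forward(x + eps * np.eye(len(x))[i])[0] - base) / eps
            for i in range(len(x))]

x = np.array([0.5, -1.2, 0.3, 0.9])
print(saliency(x))  # larger magnitude = more influential input signal
```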
Like many working in the field, he expresses optimism that humans and machines, working together, will achieve far more than either could have done alone. But there are some serious design challenges to solve before that rosy future arrives.
AI still has ‘a long way to go’ in dealing with people
There has long been a sure-fire way to get people and machines to work more effectively together: make the humans themselves act more like the machines.
Since the earliest days of mass manufacturing, it has been easier to fit people into the carefully structured world of automated processes than it has been to unleash the systems to work in the messy human world.
What applied to early motor vehicle assembly lines is equally relevant in the age of artificial intelligence. But the approach has its limits. People often accept — and act on — the output of such systems without questioning them, says Roger Schank, an expert in the psychology of learning.
Pointing to the risk that doctors will blindly follow the recommendations of intelligent diagnostics systems, even when they are wrong, he adds: “There are always going to be doctors who are robotic. This problem has existed forever.”
Breaking this cycle and replacing it with a more creative relationship between man and machine will take new forms of AI, some computer science experts say. The systems need a wider understanding of the world in order to fit their recommendations into a more human context, says Vishal Sikka, the former chief executive of Infosys.
Mr Schank says that computer scientists should stop studying machine intelligence and turn their attention instead to the human variety. He points out that some of the founders of the field of AI were also psychologists. Only a better understanding of how humans learn from their experiences and apply their knowledge to new situations will bring the necessary breakthrough, he says.
David Mindell, a Massachusetts Institute of Technology professor who has written about the challenges of getting humans and robots to interact effectively, puts it most succinctly: “The computer science world still has a long way to go before it has a clue about how to deal with people.”
Copyright The Financial Times Limited 2018. All rights reserved.