Will the promise of artificial intelligence really outweigh the risks?
Global risks | Article | December 18, 2017
There is a lot of hype around artificial intelligence, but can it live up to its promise, and how do businesses manage the risks?
Investment in artificial intelligence (AI) and the hype surrounding the technology are rising rapidly, but can the technology truly live up to its billing, and how can businesses manage the attendant risks?
Forbes magazine recently claimed that, “More progress has been achieved on artificial intelligence in the past five years than in the past five decades.”
“I believe we are seeing the acceleration of possibilities and the acceleration of risk [in AI],” Ulrich Homann, a Distinguished Architect in the Cloud and Enterprise business at Microsoft, told guests at Zurich Insurance Group’s annual Global Risk Managers Summit in Edinburgh in September 2017. “Both are something we have to understand and really start to harness, both for benefits and for risk management.”
The opportunities certainly seem boundless, as automation opens the door to self-driving vehicles, digital health care, robotic companions, and a host of as-yet-unimagined applications and possibilities.
PwC projects that AI could contribute as much as USD 15.7 trillion to the global economy in 2030, more than the combined output of China and India today. That figure comprises a USD 6.6 trillion increase in productivity and a USD 9.1 trillion rise in consumption.
“Our research also shows that 45% of total economic gains by 2030 will come from product enhancements, stimulating consumer demand,” PwC said in the report. “This is because AI will drive greater product variety, with increased personalization, attractiveness and affordability over time.”
Jobs at risk
Those gains are likely to come at a price, however. According to a report published by the McKinsey Global Institute in January 2017, “Given currently demonstrated technologies, very few occupations—less than 5 percent—are candidates for full automation. However, almost every occupation has partial automation potential, as a proportion of its activities could be automated. We estimate that about half of all the activities people are paid to do in the world’s workforce could potentially be automated by adapting currently demonstrated technologies. That amounts to almost USD 15 trillion in wages.”
The Global Risks Report 2017, published by the World Economic Forum in collaboration with Zurich Insurance Group and other stakeholders, ranked unemployment and underemployment as the most important interconnected risk, and artificial intelligence and robotics as the emerging technology with the greatest potential for negative consequences over the coming decade.
Those findings were reconfirmed this year by the World Economic Forum’s proprietary Executive Opinion Survey (EOS) of 12,411 executives across 136 countries, conducted between February and June 2017, which once again ranked unemployment and underemployment as the most severe interconnected risk, while respondents in North America, East Asia and the Pacific ranked cyber risk as the greatest potential threat to their operations.
Doomsday machines
Nor are jobs the only area of concern. Tesla CEO Elon Musk has warned that AI poses “the biggest existential threat” to humanity and has said he believes that stronger international regulatory oversight is needed “to make sure that we don’t do something very foolish.” He is on record as saying, “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”
Unsurprisingly, many technologists believe that Musk’s views are overblown and extreme. But while we may not be at immediate risk from the kind of machine overlords seen in science fiction movies, there is a real risk that the pace of adoption will outstrip the capabilities of the technology.
Recently, doctors in the UK have been raising questions over mass public trials of digital health advisory services powered by AI and machine learning. Their concern is that many of the claims made on behalf of the technology have not been supported by clinical trials. Dr. Margaret McCartney, a Glasgow-based GP and author, wrote in the British Medical Journal, “New technology should be treated like any other medical intervention capable of benefit and harm: it should be tested in high quality trials capable of finding unintended harm as well as benefit.”
The issue is particularly pressing because the country’s National Health Service budget is under pressure as the government tries to rein in public spending to reduce the national debt at a time when medical costs are rising. Simon Stevens, the Chief Executive of NHS England, recently warned that, “We are under-funding our health services by GBP 20 billion to GBP 30 billion a year.”
Programmed to fail
Another risk is that, even armed with machine learning, some systems will potentially have unforeseen risk factors baked in. In a recent paper published by the University of Edinburgh, Nick Oliver, Thomas Calvard and Kristina Potocnik applied the concept of organizational limits to the “paradox of almost totally safe systems”, the idea that systems that are safe under most conditions could be peculiarly vulnerable under unusual ones. This is an idea that is particularly applicable to AI systems that operate independently in real time, far faster than humans can effectively monitor them.
Using the example of the loss of Air France flight 447, the paper argues that, “The same measures that make a system safe and predictable may introduce restrictions on cognition, which over time, inhibit or erode the disturbance-handling capability of the actors involved. We also note limits to cognition in system design processes that make it difficult to foresee complex interactions.”
“For organization science researchers, AF447 is a salutary reminder of how our capacity as humans to create highly complex systems is not always matched by our ability to organize and control them in the face of most conceivable conditions, let alone inconceivable ones,” the report concludes. “As organizations and systems grow in scale and complexity, the issue of how we develop our organizations—and ourselves as actors—to handle unexpected and extreme events grows ever more pressing.”
System risks
A further layer of risk stems from the fact that AI systems will rely heavily on connected devices through the Internet of Things and will draw on vast pools of personal data, bringing with them significant connectivity and privacy risks. Additional risk factors include bias, unintended interdependencies and black-box decision making.
There are also ethical concerns where machines make decisions that have a direct impact on human lives. In a modern equivalent of the Trolley Problem, for example, automated cars may one day have to decide whether to put the life of a passenger or a pedestrian at risk, with consequent liabilities.
Mitigating these risks is particularly challenging because the technology is so new and developing so rapidly that it is hard to predict all the potential implications and consequences, and even harder for regulators to keep up with the pace of technical change.
Governments clearly have a role to play in driving discussion around AI and paving the way for uniform, globally applicable legislation that protects human rights without limiting the potential of the technology. Companies, however, also have a role to play. Microsoft, for example, is calling for a “Digital Geneva Convention”, an agreement that sets international safety and ethical standards for development.
As the Edinburgh University study demonstrates, companies also need to ensure that they maintain their ability to respond to unforeseen circumstances once systems have been automated. This can only truly be achieved by implementing risk management protocols that:
- Take a holistic approach to risk that integrates risk management across the entire enterprise from the board room and C-suite to the factory floor and sales room
- Ensure that expertise relating to all automated functions is retained in-house
- Follow minimum guidelines for cyber resilience, such as those published by the World Economic Forum with the support of Zurich Insurance Group