Smart Technology & Ethics
How would you feel if your Autonomous Vehicle decided it was best for you to die?
The June 23, 2016 edition of Science magazine, a prestigious journal of general science that in recent years has also covered the science of cities, carries an article entitled “The social dilemma of autonomous vehicles”. It explores, in the context of Autonomous Vehicles, the familiar ethical debate about a decision to save the lives of several people, probably strangers, by taking an action that will cost the life of an otherwise safe person, possibly a member of one’s own family. The article describes research on how individuals respond to such challenges depending on the size of the at-risk group, the individual’s relationship to those people, and the individual’s relationship to the alternate victim.
Such dramatic events are rare in individual lives, but with vehicle populations of tens or hundreds of millions they will not be infrequent. Humans deal with them in various, often heroic, ways. In the future, such real-time decision-making will often be in the hands of a smart system, such as an Autonomous Vehicle. Depending on the ethical model built into the vehicle, it might decide that the best outcome is for the passenger or the driver to be killed. The article reports on surveys conducted to explore both attitudes toward how such vehicles should behave and respondents’ propensity to buy vehicles with different ethical policies.
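A purely utilitarian policy of the kind these surveys probe can be sketched in a few lines. The scenario, names, and harm counts below are hypothetical illustrations, not the article’s model; the point is only that such a policy, applied mechanically, can select the passenger’s death.

```python
# Hypothetical sketch of a utilitarian crash-decision policy:
# choose the action with the fewest expected deaths, regardless
# of who dies. All scenarios and numbers are illustrative.

def utilitarian_choice(actions):
    """Pick the action minimising expected deaths."""
    return min(actions, key=lambda a: a["expected_deaths"])

actions = [
    {"name": "swerve (kill passenger)", "expected_deaths": 1},
    {"name": "continue (kill pedestrians)", "expected_deaths": 5},
]

best = utilitarian_choice(actions)
print(best["name"])  # the policy sacrifices the passenger
```

Note that nothing in this rule knows who the passenger is; that is exactly the rigidity the survey respondents were reacting to.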
While such life-and-death scenarios are rare, smart systems for managing traffic, water supplies, building temperatures, and so forth are increasingly common and, in lesser ways, they embed implicit ethical models in their decision-making. I began my work in smart cities almost ten years ago, starting from the premise that by applying machine analytics and modeling (we did not then call it Artificial Intelligence) to the operation of city infrastructure and systems, we could help the city work “better” in a utilitarian fashion, causing the least harm and doing the most good.
Here’s the problem with that. When confronted on the BBC programme The Brains Trust with a question of the form “What would be the best…?”, Professor Jacob Bronowski would immediately respond: “First you have to tell me what ‘best’ means.” Our first scenario of this type, devised by Michael Kehoe and Perry Hartswick, considered how to allocate stored energy, both electricity and chilled water, at the City of Masdar in Abu Dhabi during an extended sandstorm. Our suggestion for the “best” policy was to prioritise air conditioning and electrical power over transportation and desalination of water.
But there may always be exceptions. Suppose that a citizen of such a city had a heart attack and needed to be transported urgently to a hospital. Then at least some part of the transportation system would have to be operated. In that first scenario, at least, we did not think about such exceptions. And this seems to be a key difference between deciding ethical problems using human judgment and embedding those decisions in machine logic. The generality of human intelligence allows us in most cases to find, sometimes instantaneously, if not the “best” decision, at least a “fair” decision without having to pre-define all possible exceptions. But we may fear the rigidity of machine intelligence in reaching such decisions; John Thomas has written on this in his book, “Turing’s Nightmares”, and on his blog. And the list of exceptions may be very long and complex; suppose the person at risk were a pregnant woman, or an escaped murderer, or a senior politician or business person… In the vehicle scenario above, even human intelligence will not be able to deal effectively with every possible exception, and hence some outcomes will be unfair.
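The rigidity being described can be made concrete with a small sketch of rule-based energy allocation, loosely modelled on the Masdar scenario above. The priorities, the exception table, and all names and numbers here are hypothetical; the fragility is the point, since every exception must be pre-defined before it can be honoured.

```python
# Hypothetical sketch of priority-based allocation of a limited
# energy budget during a sandstorm. Systems are served strictly
# in priority order; a pre-defined exception can promote a system
# to the top. Unlisted events are simply ignored: the machine has
# no general intelligence with which to improvise a "fair" outcome.

DEFAULT_PRIORITY = ["air_conditioning", "electrical_power",
                    "transportation", "desalination"]

EXCEPTIONS = {
    # event -> system that must be promoted to top priority
    "medical_emergency": "transportation",
}

def allocate(budget, demands, active_events=()):
    """Grant each system its demand, in priority order, until the budget runs out."""
    order = list(DEFAULT_PRIORITY)
    for event in active_events:
        system = EXCEPTIONS.get(event)
        if system in order:          # promote the excepted system
            order.remove(system)
            order.insert(0, system)
    allocation, remaining = {}, budget
    for system in order:
        grant = min(demands.get(system, 0), remaining)
        allocation[system] = grant
        remaining -= grant
    return allocation

# With the emergency declared, transportation is served first:
print(allocate(100, {"air_conditioning": 60, "transportation": 30},
               active_events=["medical_emergency"]))
```

A pregnant woman, an escaped murderer, or a senior politician would each require another entry in `EXCEPTIONS`, which is precisely the unbounded enumeration problem the paragraph above describes.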
Such an issue arose during the recent Meeting of the Minds Design Forum, in which one speaker proposed to designate for-fee priority lanes for people who wanted to drive more quickly on congested roads. Objections were raised on grounds that this would disadvantage those who could not afford to pay the priority fee and thus increase inequality.
As humans we recognise that life is often not fair over large ranges of spatial, temporal, and moral scales. At the trivial end of such scales we have evolved protocols that generally enable us to resolve such unfairness. As a foreigner now living many years in the United States, I have always been astonished that hyper-competitive Americans behave so politely at four-way road junctions, calmly waiting their turn based on time of arrival and resolving simultaneous-arrival conflicts by resort to “priorité à droite”. Yet we fear the Kafkaesque horrors of unyielding bureaucracies and AI.
Even as humans, our search for the “best” decision may focus too much on avoiding harm and thereby miss some of the dimensions of doing good. I am reminded of a workshop I ran in the spring of 2012 to consider how “smart” might benefit the resilience of cities. One of our speakers was the Chief Risk Officer of Swiss Re in the United States. He began by describing the horrors of tenement living in American cities such as Baltimore, Chicago, and San Francisco and in similar cities throughout the world in the 19th and early 20th centuries. He noted the many tens of thousands of deaths that occurred each year in the massive fires that repeatedly broke out in such wooden buildings.
And then in the late 19th century, a miracle material was introduced into building construction. A material that is naturally occurring, that has been mined for several thousand years, and that has the valuable property of being completely fire-resistant. That material is, of course, asbestos. In the latter part of the 20th century it became notorious as a major cause of asbestosis, a respiratory disease that leads ultimately to an unpleasant death, and its earlier use became the subject of great acrimony.

Yet the Chief Risk Officer’s point was: “Yes, asbestosis was a terrible outcome from the use of asbestos in buildings. But consider also the tens of thousands of people who did not die terrible deaths by fire as a result. Would it have been ‘better’ at that time to decide not to use asbestos?”
Technologies change society, as I am fond of reminding this community. We introduce them for their ability to solve specific problems and make our lives “better”, and often only later do we discover the new problems they have created for us. These emergent problems are often very important, and in many cases they raise ethical questions that engineers and planners are ill-equipped to resolve. As we begin to embed ethical decisions, such as those of Autonomous Vehicles, into our engineering, we must seek out the voices of those trained in ethics.