Smart Technology & Ethics
How would you feel if your Autonomous Vehicle decided it was best for you to die?
The June 23, 2016 edition of Science magazine, a prestigious general-science journal that in recent years has also covered the science of cities, carries an article entitled “The social dilemma of autonomous vehicles”. The article explores, in the context of Autonomous Vehicles, the familiar ethical dilemma of saving the lives of several people, probably strangers, by taking an action that will cost the life of an otherwise safe person, possibly a member of one’s own family. It describes research on how individuals respond to such challenges depending on the size of the at-risk group, the individual’s relationship to those people, and the individual’s relationship to the alternate victim.
Such dramatic events are rare in individual lives, but with vehicle populations in the tens or hundreds of millions they will not be infrequent. Humans deal with them in various, often heroic, ways. In the future, such real-time decision-making will often be in the hands of a smart system, such as an Autonomous Vehicle. Depending on the ethical model built into the vehicle, it might decide that the best outcome is for the passenger or the driver to be killed. The article reports on surveys conducted to explore both attitudes toward how such vehicles should behave and respondents’ propensity to buy vehicles with different ethical policies.
While such life-and-death scenarios are rare, smart systems for managing traffic, water supplies, building temperatures, and so forth are increasingly common, and, in lesser ways, they embed implicit ethical models in their decision-making. I began my work in smart cities almost ten years ago, starting from the premise that, by applying machine analytics and modeling (we did not then call it Artificial Intelligence) to the operation of city infrastructure and systems, we could help the city work “better” in a utilitarian fashion: causing the least harm and doing the most good.
Here’s the problem with that. When confronted on the BBC programme The Brains Trust with a question of the form “What would be the best…?”, Professor Jacob Bronowski would immediately respond: “First you have to tell me what ‘best’ means.” Our first scenario of this type, devised by Michael Kehoe and Perry Hartswick, considered how to allocate stored energy, both electricity and chilled water, in the city of Masdar in Abu Dhabi during an extended sandstorm. Our suggestion for the “best” policy was to prioritise air conditioning and electrical power over transportation and desalination of water.
But there may always be exceptions. Suppose a citizen of such a city had a heart attack and needed urgent transport to a hospital. Then at least some part of the transportation system would have to be operated. In that first scenario, at least, we did not think about such exceptions. And this seems to be a key difference between deciding ethical problems by human judgment and embedding those decisions in machine logic. The generality of human intelligence allows us, in most cases and sometimes instantaneously, to find if not the “best” decision then at least a “fair” one, without having to pre-define every possible exception. But we may fear the rigidity of machine intelligence in reaching such decisions; John Thomas has written on this in his book “Turing’s Nightmares” and on his blog. And the list of exceptions may be very long and complex: suppose the person at risk were a pregnant woman, or an escaped murderer, or a senior politician or business person… In the vehicle scenario above, even human intelligence cannot deal effectively with every possible exception, and hence some outcomes will be unfair.
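The kind of priority policy described above, together with an exception override of the sort the heart-attack case demands, can be sketched in a few lines. This is a minimal illustration, not anything we built for Masdar; all of the system names, priority values, and the one-unit-per-system simplification are assumptions made for the example.

```python
# Hypothetical sketch of a priority-based allocation policy with an
# exception override, loosely modeled on the sandstorm scenario above.
# All names and priority values are illustrative assumptions.

DEFAULT_PRIORITY = {
    "air_conditioning": 1,   # highest priority during the storm
    "electrical_power": 2,
    "transportation": 3,
    "desalination": 4,
}

def allocate(available_units, exceptions=()):
    """Serve systems in priority order until stored energy runs out.

    `exceptions` lists systems that must be served first regardless of
    their default rank -- e.g. transportation for a medical emergency.
    """
    order = sorted(
        DEFAULT_PRIORITY,
        key=lambda s: (s not in exceptions, DEFAULT_PRIORITY[s]),
    )
    served = []
    for system in order:
        if available_units <= 0:
            break
        served.append(system)
        available_units -= 1  # one unit per system, for simplicity
    return served

# With two units and no exceptions, the top two defaults are served:
print(allocate(2))  # ['air_conditioning', 'electrical_power']
# A medical emergency promotes transportation ahead of the defaults:
print(allocate(2, exceptions=("transportation",)))
# ['transportation', 'air_conditioning']
```

The point of the sketch is exactly the essay’s point: the policy is only as fair as its list of exceptions, and that list must be written down in advance.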
Such an issue arose during the recent Meeting of the Minds Design Forum, in which one speaker proposed designating for-fee priority lanes for people who wanted to drive more quickly on congested roads. Objections were raised on the grounds that this would disadvantage those who could not afford to pay the priority fee and thus increase inequality.
As humans we recognise that life is often not fair over large ranges of spatial, temporal, and moral scales. At the trivial end of such scales we have evolved protocols that generally enable us to resolve such unfairness. As a foreigner who has now lived many years in the United States, I have always been astonished that hyper-competitive Americans behave so politely at four-way road junctions, calmly waiting their turn based on time of arrival and resolving simultaneous-arrival conflicts by resort to “priorité à droite”. And so we fear the Kafkaesque horrors of unyielding bureaucracies and AI.
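The four-way-stop protocol just described is simple enough to write down: first come, first served, with simultaneous arrivals broken by yielding to the vehicle on one’s right. The sketch below is an illustration only; the compass labels and the right-of relation are assumptions for the example.

```python
# Hypothetical sketch of the four-way-stop protocol described above:
# first-come, first-served by arrival time, with simultaneous arrivals
# resolved by "priorité à droite" (yield to the vehicle on your right).
# Approach directions and the RIGHT_OF relation are illustrative.

RIGHT_OF = {"N": "W", "W": "S", "S": "E", "E": "N"}  # approach -> approach on its right

def next_to_go(vehicles):
    """Pick which vehicle proceeds. `vehicles` maps approach -> arrival time."""
    earliest = min(vehicles.values())
    tied = [d for d, t in vehicles.items() if t == earliest]
    if len(tied) == 1:
        return tied[0]                 # clear first arrival goes
    for d in tied:
        if RIGHT_OF[d] not in tied:    # no tied vehicle on d's right
            return d
    return sorted(tied)[0]             # all four at once: fall back to convention

print(next_to_go({"N": 1.0, "E": 2.5}))           # 'N' arrived first
print(next_to_go({"N": 1.0, "W": 1.0}))           # tie: 'W' is on N's right, so W goes
```

Note that even this tiny protocol needs a fallback rule for the case where all four vehicles arrive at once, when “yield to the right” cycles forever: the pre-defined-exceptions problem in miniature.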
Even as humans, our search for the “best” decision may focus too much on avoiding harm and thereby miss some of the dimensions of doing good. I am reminded of a workshop I ran in the spring of 2012 to consider how “smart” might benefit the resilience of cities. One of our speakers was the Chief Risk Officer of Swiss Re in the United States. He began by describing the horrors of tenement living in American cities such as Baltimore, Chicago, and San Francisco and in similar cities throughout the world in the 19th and early 20th centuries. He noted the many tens of thousands of deaths that occurred each year in the massive fires that repeatedly broke out in such wooden buildings.
And then in the late 19th century, a miracle material was introduced into building construction. A material that is naturally occurring, that has been mined for several thousand years, and that has the valuable property of being completely fire-resistant. That material is, of course, asbestos. In the latter part of the 20th century it became notorious as a major cause of asbestosis, a respiratory disease that ultimately leads to an unpleasant death, and its earlier use became the subject of great acrimony.
Yet the Chief Risk Officer’s point was: “Yes, asbestosis was a terrible outcome from the use of asbestos in buildings. But consider also the tens of thousands of people who did not die terrible deaths by fire as a result. Would it have been “better” at that time to decide not to use asbestos?”
Technologies change society, as I am fond of reminding this community. We introduce them for their ability to solve specific problems and make our lives “better”, and often only later do we discover the new problems they have created for us. These emergent problems are often very important, and in many cases they raise ethical questions that engineers and planners are ill-equipped to resolve. As we begin to embed ethical decisions, such as those of Autonomous Vehicles, into our engineering, we must seek out the voices of those who are trained in ethics.