Smart Cities and Artificial Intelligence: Balancing Opportunity and Risk
Stephen Hawking recently commented that artificial intelligence (AI) would be “either the best thing or the worst thing ever to happen to humanity”. He was referring to the opportunity that AI offers to improve humanity’s situation, set alongside the risks that it also presents. These same competing possibilities apply no less when AI is considered in the context of smart cities and the planet’s growing urbanization. With smart cities, though, this is not just some abstract balance: there is a genuine choice of path to be made as smart cities and AI evolve together. This article explores that choice.
Accentuate the positive…
Smart cities are hard to define. They have different objectives for their “smarts”, relating variously to efficiency, service levels, economic vitality, social equity, the environment, livability, or some combination of these. One generalized definition might be that smart cities represent the intersection of the Internet of Things (IoT) and analytics – and increasingly, AI – with public infrastructure, public services and city life, in pursuit of whichever of the above objectives the city may have adopted.
Like smart cities, AI can be hard to pin down. It spans many techniques, including some that computing has delivered for 20 years or more. Once again one is forced into a generalization: AI may come from existing models and tools, but it increasingly comes from computing technologies that integrate vast amounts of structured and unstructured data, identify patterns, reason about what they have identified, learn and improve their conclusions with experience, and in some cases interact with humans in natural language.
However, even at this generalized level, the potential for AI and smart cities should be clear. According to Gartner, cities are the fastest-growing area of the IoT, with 3.3 billion devices due to be connected by 2018. Something has to analyze the data those devices generate: it is increasingly clear that AI will be that something, enabling multiple systems to be optimized together, detecting emergent patterns, and providing wholly new capabilities in ways that traditional analytics tools cannot.
Two examples illustrate the potential power of AI in cities:
- Autonomous vehicles are clearly approaching the threshold of mass adoption, enabled by AI in several forms. Autonomous vehicles promise to provide radical solutions to traffic and its resulting pollution by enabling much more efficient use of road space, while significantly improving safety and enabling mobility for those who cannot drive.
- My employer, IBM, has created an AI-powered tool, “Green Horizons”, to enable high-resolution predictions of air pollution, based on weather (wind, humidity and temperature), topography, traffic and industrial activity. The tool, deployed first in Beijing and now in Johannesburg and Delhi, enables “hotspots” in particulates, smog and other pollutants to be predicted on a 1 km² grid, 24 hours ahead, in time to allow curtailment or adjustment of polluting activities. Like all AI systems, it learns as it goes, so its predictions get better over time. It will also enable urban design solutions to be evaluated; and a similar tool is used to predict output from groups of wind turbines within a wind farm.
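The mechanics of this kind of grid-based hotspot prediction can be sketched in miniature. The feature names, weights, and threshold below are illustrative assumptions of my own, not IBM’s actual Green Horizons model, which learns its relationships from data rather than using hand-set coefficients.

```python
# Toy sketch of grid-based pollution "hotspot" prediction.
# All feature names, weights, and the threshold are illustrative
# assumptions, not the real Green Horizons implementation.

def predict_pm25(cell):
    """Crude linear score for a 1 km^2 grid cell, 24 hours ahead."""
    # Hypothetical learned weights: wind disperses particulates,
    # while traffic and industrial activity raise the forecast.
    return (80.0
            - 6.0 * cell["wind_speed_ms"]      # dispersal
            + 0.4 * cell["traffic_index"]      # vehicle emissions
            + 0.3 * cell["industrial_index"])  # industrial activity

def hotspots(grid, threshold=100.0):
    """Return cells whose forecast exceeds a regulatory threshold."""
    return [cid for cid, cell in grid.items()
            if predict_pm25(cell) > threshold]

grid = {
    "A1": {"wind_speed_ms": 1.0, "traffic_index": 90, "industrial_index": 60},
    "A2": {"wind_speed_ms": 6.0, "traffic_index": 40, "industrial_index": 10},
}
print(hotspots(grid))  # only the calm, busy cell A1 is flagged
```

The value of the real system lies in doing this at city scale with learned, non-linear relationships; the sketch only shows why per-cell features plus a threshold yield an actionable list of hotspots.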
And so on! Clearly, there is no one-size-fits-all with AI and smart cities: the applications that work best will depend on what each city is trying to achieve. A smart city focused on efficiency, for example, might use AI for asset optimization; one focused on sustainability or neighborhood empowerment might encourage the use of AI to balance microgrids and community energy schemes with the main grid; and a smart city focused on livability might use AI for crime prediction and prevention.
On this more positive path, therefore, AI is absolutely a force for good. But now let’s look at the other, more negative path – and how to avoid taking it.
…eliminate the negative
For all the opportunities that AI offers to improve city functioning and city life, there are at least five traps it might fall into when applied to smart cities. These could discredit both AI and smart cities, unless solutions are implemented as AI tools are deployed.
- The first trap comes from the propensity of smart cities for integrating different systems within the city’s “system-of-systems”. For example, using water pumping for demand response integrates water and energy; using cellphone location signals to manage traffic integrates communications and roads; and so on. Ordinarily this type of integration delivers additional value for the city that would not be possible if each system remained separate. But it can also bring risk by expanding the impact of failures: failure in one system can propagate to another in a cascading failure, or “failure chain”. Sometimes, as with the major energy blackout in the US Northeast and parts of Canada in 2003, many of the systems involved may be operating as designed – it’s just that the existence and nature of the interconnections may not be known.
Add AI to this picture – a tool whose raison d’etre is often enabling integration of data and systems in wholly new ways to deliver yet further value – and in so doing we may be adding to the possibility of emergent cascading failures, by accidentally creating single points of failure with wider impact. If those connections are embedded deep in the logic of an AI tool, they may not be apparent until too late.
The solution, however, is readily identifiable: systematic analysis of the interconnections, spanning all relevant systems and their owners. This might use a methodology such as Failure Modes, Effects and Criticality Analysis (FMECA), which the military has used for years on complex hardware like aircraft carriers – which rival a city in their technological complexity and interconnectedness, if not their size. When a failure chain is exposed, it can either be mitigated in advance of a failure, or at least planned for.
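To make the FMECA idea concrete, a city might catalogue cross-system interconnections as failure modes and rank them by a risk score, as in the classic risk priority number (severity × occurrence × detectability). The interconnections and 1–10 ratings below are invented for illustration, not a real city inventory.

```python
# Minimal FMECA-style criticality ranking for cross-system
# interconnections. Failure modes and 1-10 ratings are invented
# for illustration, not drawn from a real city's analysis.

failure_modes = [
    # (interconnection, failure mode, severity, occurrence, detectability)
    ("energy->water", "pump demand-response outage",   8, 3, 4),
    ("comms->roads",  "cell-location feed dropout",    5, 4, 2),
    ("energy->comms", "substation trip kills towers",  9, 2, 7),
]

def rpn(severity, occurrence, detectability):
    """Risk Priority Number: higher means mitigate first."""
    return severity * occurrence * detectability

# Rank the failure chains so the worst interconnections are
# mitigated, or at least planned for, before a failure occurs.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[2:]), reverse=True)
for link, mode, s, o, d in ranked:
    print(f"{rpn(s, o, d):4d}  {link:14s} {mode}")
```

Even this toy ranking shows the point of the exercise: a hard-to-detect chain (energy→comms here) can outrank a more frequent but visible one, which is exactly the kind of hidden interconnection the 2003 blackout exposed.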
- The second trap is an adjunct of the first – the possibility of emergent privacy risks that may also come from integrating data and systems in novel ways. Two data files held separately on different systems may be innocuous in themselves, and yet have privacy implications when combined, for example if they allow a citizen’s health status to be inferred. Alternatively, they may be held in appropriate conditions (for example under HIPAA regulations) on their “home” systems, but become exposed when integrated in an AI tool that does not have the same protections. As with cascading failures, to the extent that the power of AI encourages system and data integration, it may also exacerbate the risk of privacy failure. However, a solution may be to regard privacy breaches as failure modes that can be identified and mitigated or planned for in advance, like any other failure.
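The inference risk is easy to demonstrate. In the sketch below, two fabricated datasets – each harmless on its own – are joined on shared quasi-identifiers (ZIP code and birth year), and the combination reveals a specific rider’s likely health status. All records and field names are invented for illustration.

```python
# Sketch of an emergent privacy risk: two datasets that are
# innocuous alone can reveal health status when joined on
# quasi-identifiers. All records are fabricated for illustration.

transit_trips = [  # transit system: trips by card holder
    {"zip": "10027", "birth_year": 1969, "stop": "Hospital East"},
    {"zip": "10027", "birth_year": 1984, "stop": "Main St"},
]
clinic_visits = [  # separately held, "de-identified" visit log
    {"zip": "10027", "birth_year": 1969, "dept": "oncology"},
]

def linkable(a, b, keys=("zip", "birth_year")):
    """True when two records share all quasi-identifier values."""
    return all(a[k] == b[k] for k in keys)

# Joining the two files infers a specific rider's health status,
# even though neither file contains it on its own.
inferred = [(t["stop"], v["dept"])
            for t in transit_trips for v in clinic_visits
            if linkable(t, v)]
print(inferred)
```

Treating this join as a “failure mode” in a FMECA-style analysis means asking, before an AI tool is deployed, which quasi-identifiers its integrated datasets share and what could be inferred across them.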
- The third trap is that an AI-enabled tool might be found to be biased or to discriminate in some way against some segment of the city’s population – the inhabitants of a neighborhood, or specific ethnic groups or genders, say – simply as an accidental result of the assumptions embodied in its creation. For example, suppose a tool designed to improve nutrition recommends pork to someone whose religion forbids eating it. Though regrettable, that might be manageable in its impact; but suppose an AI crime-prediction tool was trained (as AI tools need to be) with data drawn disproportionately from a neighborhood with a high proportion of a given ethnic minority, where, therefore, criminals are inherently more likely to be from that minority. Without corrective action, the tool might then be predisposed to suggest that anyone of that minority, wherever they were encountered, had a higher risk of being a criminal. That would potentially be catastrophic in its adverse impact.
The solution here is algorithmic transparency. It must be clear why an AI tool has produced a given conclusion or recommendation. With AI, however, that may be easier said than done – with many of the machine and deep learning techniques now becoming available that use different types of neural networks, for example, it can be very hard to tease out exactly how the tool learned what it did. Perhaps the best news here is that applications of AI in other areas have the same issue, so there is a shared interest in solving it. It is difficult to see, for example, how AI could be permitted for safety-critical applications like driving a car or flying a plane without enabling the required level of algorithmic transparency. It is also difficult to see how potential cascading failures, as described above, could be detected without this transparency.
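Short of full transparency into a model’s internals, a city can at least audit a tool’s outputs for the kind of bias described above. One common check is a disparate-impact ratio: compare the rates at which different groups are flagged. The predictions and group labels below are fabricated for illustration, not output from any real crime-prediction tool.

```python
# Sketch of a simple output-level fairness audit for a
# hypothetical prediction tool. All data is fabricated.

predictions = [  # (group, flagged_as_high_risk)
    ("group_a", True),  ("group_a", True),
    ("group_a", False), ("group_a", False),
    ("group_b", True),  ("group_b", False),
    ("group_b", False), ("group_b", False),
]

def flag_rate(group):
    """Fraction of a group's members the tool flags as high risk."""
    flags = [flagged for g, flagged in predictions if g == group]
    return sum(flags) / len(flags)

# Disparate-impact ratio: values far from 1.0 suggest the tool
# treats the groups differently and warrants investigation of
# its training data and assumptions.
ratio = flag_rate("group_b") / flag_rate("group_a")
print(round(ratio, 2))
```

An audit like this does not explain *why* the tool behaves as it does – that still requires transparency into the model and its training data – but it can flag that a skewed training set has produced skewed predictions before the tool is deployed citywide.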
- The fourth trap is that AI in smart cities realizes the fears of those who believe that it will destroy jobs – and that it is used to replace city workers rather than to enable them to be more productive. Clearly, AI is highly likely to change the way in which the city’s work is done, especially in combination with the automation potential of the IoT. But cities that want to avoid antagonizing their workforces (both blue- and white-collar) will do well to define policies in advance for how the productivity released by AI is going to be redeployed.
- The fifth and final trap is alienation. One of the criticisms of smart cities is that they are technocratic to the point of disenfranchising the population and being anti-democratic or even repressive (read Adam Greenfield’s book, “Against the Smart City”). And truthfully, while I think the argument is willfully over-blown, some implementations of smart cities have deserved the criticism because they have focused more-or-less exclusively on the “top down”, government-to-citizen, aspects of city management – the “pay this”, “apply here”, “don’t do that” functions. Applying a technology like AI, of which people may already be suspicious, to merely enabling the “top down” to happen better, faster and cheaper, is a recipe for increasing those criticisms and undermining the legitimacy of would-be smart cities that try it.
In part, this will be a case of AI taking the blame for the political climate that led to the top down approach in the first place, but fairly or not it could stymie the adoption of AI and smart city technologies generally. As I have argued in other blogs, smart cities need to adopt what I called a “U-shaped model”, where the top down applications of technology (one side of the “U”) are balanced by “side to side”, citizen-to-citizen applications (the bottom of the “U”); and “bottom up”, citizen-to-government feedback and communication (the other side of the “U”).
The same may be said of AI. It needs to be applied to enabling citizen-to-citizen activity (perhaps, for example, a neighborhood tool for assessing the suitability of soil in vacant lots for urban gardening and identifying what might best grow there; or enabling energy trading between energy generators and energy users on neighborhood microgrids). AI also needs to be applied to bottom up applications, for example in correlating citizen feedback with crime or sickness patterns.