Smart Cities and Artificial Intelligence: Balancing Opportunity and Risk
Stephen Hawking recently commented that artificial intelligence (AI) would be “either the best thing or the worst thing ever to happen to humanity”. He was referring to the opportunity that AI offers to improve mankind’s situation, set alongside the risks that it also presents. These same competing possibilities apply no less when AI is considered in the context of smart cities and the planet’s growing urbanization. With smart cities, though, this is not just some abstract balance: there is a genuine choice of path to be made as smart cities and AI evolve together. This article explores the choice.
Accentuate the positive…
Smart cities are hard to define. They have different objectives for their “smarts”, relating variously to efficiency, service levels, economic vitality, social equity, the environment, livability, or some combination of these. One generalized definition might be that smart cities represent the intersection of the Internet of Things (IOT) and analytics – and increasingly, AI – with public infrastructure, public services and city life, in pursuit of whichever of the above objectives the city may have adopted.
Like smart cities, AI can be hard to pin down. It spans many techniques including some things that computing has achieved for 20 years or more. Once again one is forced into a generalization: AI may come from existing models and tools but it increasingly comes from computing technologies that integrate vast amounts of structured and unstructured data, identify patterns, reason about what they have identified, learn and improve their conclusions with experience, and in some cases interact with humans in natural language.
However, even at this generalized level, the potential for AI and smart cities should be clear. According to Gartner, cities are the fastest growing area of the IOT, with 3.3 billion devices due to be connected by 2018. Something has to analyze that data: it is increasingly clear that AI will be that something, enabling multiple systems to be optimized together, detecting emergent patterns, and providing wholly new capabilities in ways that traditional analytics tools cannot.
Two examples illustrate the potential power of AI in cities:
- Autonomous vehicles are clearly approaching the threshold of mass adoption, enabled by AI in several forms. Autonomous vehicles promise to provide radical solutions to traffic and its resulting pollution by enabling much more efficient use of road space, while significantly improving safety and enabling mobility for those who cannot drive.
- My employer, IBM, has created an AI-powered tool, “Green Horizons”, to enable high-resolution predictions of air pollution, based on weather (wind, humidity and temperature), topography, traffic and industrial activity. The tool, deployed first in Beijing and now in Johannesburg and Delhi, enables “hotspots” of particulates, smog and other pollutants to be predicted on a 1 km² grid, 24 hours ahead – in time to allow curtailment or adjustment of polluting activities. Like all AI systems it learns as it goes, so its predictions improve over time. It will also enable urban design solutions to be evaluated; and a similar tool is used to predict output from groups of turbines within a wind farm.
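To make the idea of grid-based hotspot prediction concrete, here is a deliberately tiny sketch – not IBM’s Green Horizons, whose models are far richer – of predicting next-day particulate levels per grid cell from a few weather and activity features. Every feature name, weight, and the 75 µg/m³ threshold is invented for illustration:

```python
# Toy sketch of per-cell pollution prediction (all numbers are invented).
# A real system would learn these weights from historical data rather than
# hard-code them.

def predict_pm25(cell):
    """Toy linear model: calm wind and heavy traffic raise predicted PM2.5."""
    base = cell["pm25_today"]
    wind_effect = -4.0 * cell["wind_speed"]       # stronger wind disperses particulates
    traffic_effect = 0.5 * cell["traffic_index"]  # more traffic adds emissions
    industry_effect = 8.0 * cell["industry_on"]   # nearby industrial activity
    return max(0.0, base + wind_effect + traffic_effect + industry_effect)

def hotspots(grid, threshold=75.0):
    """Return the grid cells whose predicted PM2.5 exceeds the threshold."""
    return [cid for cid, cell in grid.items() if predict_pm25(cell) > threshold]

grid = {
    "cell_0_0": {"pm25_today": 60, "wind_speed": 1.0, "traffic_index": 40, "industry_on": 1},
    "cell_0_1": {"pm25_today": 30, "wind_speed": 6.0, "traffic_index": 10, "industry_on": 0},
}
print(hotspots(grid))  # → ['cell_0_0']: the calm, busy, industrial cell is flagged
```

The point of the sketch is the shape of the problem – per-cell features in, per-cell forecast out, with a threshold turning forecasts into actionable “hotspot” alerts – not the model itself.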
And so on! Clearly, there is no one-size-fits-all with AI and smart cities, and the applications that work best will depend on what each city is trying to achieve. A smart city focused on efficiency, for example, might use AI for asset optimization; one focused on sustainability or neighborhood empowerment might encourage the use of AI to balance microgrids and community energy schemes with the main grid; a smart city focused on livability might use AI for crime prediction and prevention.
On this more positive path, therefore, AI is absolutely a force for good. But now let’s look at the other, more negative path – and how to avoid taking it.
…eliminate the negative
For all the opportunities that AI offers to improve city functioning and city life, there are at least five traps that it might fall into when applied to smart cities. These could discredit both AI and smart cities unless solutions are implemented as AI tools are deployed.
- The first trap comes from the propensity of smart cities for integrating different systems within the city’s “system-of-systems”. For example, using water pumping for demand response integrates water and energy; using cellphone location signals to manage traffic integrates communications and roads; and so on. Ordinarily this type of integration delivers additional value for the city that would not be possible if each system remained separate. But it can also bring risk, by expanding the impact of failures: failure in one system can propagate to another in a cascading failure, or “failure chain”. Sometimes, as with the major energy blackout in the US North East and parts of Canada in 2003, many of the systems involved may be operating as designed – it’s just that the existence and nature of the interconnections may not be known.
Add AI to this picture – a tool whose raison d’être is often enabling integration of data and systems in wholly new ways to deliver yet further value – and we may be adding to the possibility of emergent cascading failures, by accidentally creating single points of failure with wider impact. If those connections are embedded deep in the logic of an AI tool, they may not be apparent until too late.
The solution, however, is readily identifiable: systematic analysis of the interconnections, spanning all relevant systems and their owners. This might use a methodology such as Failure Modes, Effects and Criticality Analysis (FMECA), which the military has used for some years for complex hardware like aircraft carriers – which rival a city in their technological complexity and interconnectedness, even if not their size. When the existence of a failure chain is exposed, it can either be mitigated in advance of a failure or at least planned for.
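The core of such an interconnection analysis can be sketched very simply: model the system-of-systems as a directed graph of “a failure here can propagate there” links, then trace the full failure chain from any single initial failure. The systems and links below are hypothetical examples, not a real city’s dependency map:

```python
# Sketch of a FMECA-style interconnection check over hypothetical city systems.
# Edge A -> B means "a failure in A can propagate to B".

DEPENDS = {
    "power_grid": ["water_pumping", "traffic_signals"],
    "water_pumping": ["district_heating"],
    "comms_network": ["traffic_signals"],
}

def failure_chain(start):
    """Return every system reachable from a failure in `start` (graph traversal)."""
    seen, frontier = {start}, [start]
    while frontier:
        system = frontier.pop()
        for downstream in DEPENDS.get(system, []):
            if downstream not in seen:
                seen.add(downstream)
                frontier.append(downstream)
    return sorted(seen - {start})

print(failure_chain("power_grid"))  # → ['district_heating', 'traffic_signals', 'water_pumping']
```

In practice the hard work is eliciting the edges – especially those buried inside an AI tool’s logic – but once they are written down, finding every downstream impact is a routine reachability computation like this one.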
- The second trap is an adjunct of the first – the possibility of emergent privacy risks that may also come from integrating data and systems in novel ways. Two data files held separately on different systems may be innocuous in themselves, and yet have privacy implications when combined, for example if they allow a citizen’s health status to be inferred. Alternatively, they may be held in appropriate conditions (for example under HIPAA regulations) on their “home” systems, but become exposed when integrated in an AI tool that does not have the same protections. As with cascading failures, to the extent that the power of AI encourages system and data integration, it may also exacerbate the risk of privacy failure. However, as a solution, it may be possible to regard privacy breaches as failure modes that can be identified and mitigated or planned for in advance, like any other failure.
- The third trap is that AI-enabled tools might be found to be biased, discriminating in some way against a segment of the city’s population – the inhabitants of a neighborhood, or particular ethnic groups or genders, say – simply as an accidental result of the assumptions embodied in their creation. For example, suppose a tool designed to improve nutrition recommends pork to someone whose religion forbids eating it. Though regrettable, that might be manageable in its impact. But suppose an AI crime-prediction tool was trained (as AI tools need to be) with data drawn disproportionately from a neighborhood with a high proportion of a given ethnic minority, where, therefore, recorded offenders are inherently more likely to be from that minority. Without corrective action, the tool might then be predisposed to suggest that anyone of that minority, wherever encountered, had a higher risk of being a criminal. That would be potentially catastrophic in its adverse impact.
The solution here is algorithmic transparency. It must be clear why an AI tool has produced a given conclusion or recommendation. With AI, however, that may be easier to say than do – with many of the machine and deep learning techniques now becoming available that use different types of neural networks, for example, it can be very hard to tease out exactly how the tool learned what it did. Perhaps the best news here is that applications of AI in other areas have the same issue, so there is a shared interest in solving it. It is difficult to see, for example, how AI could be permitted for safety critical applications like driving a car or flying a plane without enabling the required level of algorithmic transparency. It is also difficult to see how potential cascading failures, as described above, could be detected without this transparency.
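One modest, concrete piece of transparency is auditable even when the model’s internals are not: compare the tool’s positive-prediction rate across groups. The sketch below uses invented predictions and the common “four-fifths” disparate-impact rule of thumb as its threshold; neither is taken from any real deployment:

```python
# Sketch of a simple bias audit on a model's outputs (data is invented).
# This inspects predictions, not the model, so it works even for opaque models.

def flag_rate(predictions, group):
    """Fraction of a group's members that the tool flagged as high-risk."""
    rows = [p for p in predictions if p["group"] == group]
    return sum(p["flagged"] for p in rows) / len(rows)

preds = [  # hypothetical output of a crime-prediction tool
    {"group": "A", "flagged": 1}, {"group": "A", "flagged": 1},
    {"group": "A", "flagged": 1}, {"group": "A", "flagged": 0},
    {"group": "B", "flagged": 1}, {"group": "B", "flagged": 0},
    {"group": "B", "flagged": 0}, {"group": "B", "flagged": 0},
]

rate_a, rate_b = flag_rate(preds, "A"), flag_rate(preds, "B")
print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}")
if min(rate_a, rate_b) / max(rate_a, rate_b) < 0.8:  # "four-fifths" rule of thumb
    print("possible disparate impact: audit the training data before deployment")
```

An audit like this does not explain *why* the model behaves as it does – that harder problem is what full algorithmic transparency addresses – but it can catch the skewed-training-data scenario described above before the tool is deployed.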
- The fourth trap is that AI in smart cities realizes the fears of those who believe it will destroy jobs – that it is used to replace city workers rather than enabling them to be more productive. Clearly, AI is highly likely to change the way the city’s work is done, especially in combination with the automation potential of the IOT. But cities that want to avoid antagonizing their workforces (both blue- and white-collar) will do well to define policies in advance for how the productivity released by AI is to be redeployed.
- The fifth and final trap is alienation. One of the criticisms of smart cities is that they are technocratic to the point of disenfranchising the population and being anti-democratic or even repressive (read Adam Greenfield’s book, “Against the Smart City”). And truthfully, while I think the argument is willfully over-blown, some implementations of smart cities have deserved the criticism because they have focused more-or-less exclusively on the “top down”, government-to-citizen, aspects of city management – the “pay this”, “apply here”, “don’t do that” functions. Applying a technology like AI, of which people may already be suspicious, to merely enabling the “top down” to happen better, faster and cheaper, is a recipe for increasing those criticisms and undermining the legitimacy of would-be smart cities that try it.
In part, this will be a case of AI taking the blame for the political climate that led to the top down approach in the first place, but fairly or not it could stymie the adoption of AI and smart city technologies generally. As I have argued in other blogs, smart cities need to adopt what I called a “U-shaped model”, where the top down applications of technology (one side of the “U”) are balanced by “side to side”, citizen-to-citizen applications (the bottom of the “U”); and “bottom up”, citizen-to-government feedback and communication (the other side of the “U”).
The same may be said of AI. It needs to be applied to enabling citizen-to-citizen activity (perhaps, for example, a neighborhood tool for assessing the suitability of soil in vacant lots for urban gardening and identifying what might best grow there; or enabling energy trading between energy generators and energy users on neighborhood microgrids). AI also needs to be applied to bottom up applications, for example in correlating citizen feedback with crime or sickness patterns.
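The “bottom up” example – correlating citizen feedback with sickness patterns – reduces, at its simplest, to a correlation computation across neighborhoods. The figures below are invented, and a real tool would use far richer data and methods, but the sketch shows the shape of the analysis:

```python
# Sketch of a "bottom up" analysis: correlate citizen complaint counts with
# clinic visit counts across neighborhoods. All numbers are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length number lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

mold_complaints = [3, 7, 2, 9, 4]   # citizen reports, one count per neighborhood
asthma_visits   = [5, 11, 4, 14, 6] # clinic visits, same neighborhoods

r = pearson(mold_complaints, asthma_visits)
print(f"correlation: {r:.3f}")  # a strong positive value suggests where to inspect first
```

Correlation is not causation, of course – the value of the AI layer is in going beyond this, combining many feedback channels and controlling for confounders – but even this much turns citizen reports into a signal a city can act on.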