The 7 Forces of Artificial Intelligence in Cities
Much attention has been given to the algorithmic salvation promised by Artificial Intelligence and how it will transform our world and the way we live. Nowhere is that potential more evident than in our cities, where AI could shape life in multiple ways: autonomous vehicles, smarter use of utilities, and even the distribution of social services. So far, the primary focus has been on research and on developing ever more sophisticated algorithms and applications. But frankly, it is becoming increasingly difficult to see how yet another superior game-playing AI system beating a human world champion will help create safer and more sustainable cities.
That said, AI has enormous potential to improve the lives of the billions of people living in cities and facing a multitude of challenges. A blind focus on the technological issues, however, is not sufficient. We are already starting to see a moderation of the technocentric view of algorithmic salvation in New York City, the first city in the world to appoint a chief algorithm officer.
Working for New York City’s Department of Information Technology and Telecommunications, I have been peripherally involved in the work surrounding the task force whose recommendations led to the appointment of a Chief Algorithm Officer. I believe we can take that work even further by developing a holistic framework cities can use when creating people-focused AI systems.
The Seven Forces of AI
It’s one thing to design and implement a technological solution that seemingly solves a problem, and another for it to succeed in the real world where people live and act. I believe we need to consider seven forces that critically affect the success of AI-based technologies.
How we respond to and interact with artificially intelligent systems is not straightforward and logical. You might expect, for example, that better, faster, and more efficient solutions are always good. That is not always the case. Elsewhere, I have called this the optimization paradox: sometimes making a system more efficient does not make it more successful.
This is true when a solution violates our stochastic liberty: in some situations we prefer the freedom of chance to the tyranny of efficiency. Traffic monitoring systems are a case in point. They could be made incredibly efficient, autonomously dispensing tickets for the slightest traffic infraction. Genetic testing is another example; some people do not want that information forced upon them. They want the freedom of chance.
These are just a few examples of how human nature intersects with AI systems. The wider field of behavioral economics has unearthed a plethora of surprising biases and behavior patterns that will also interact with AI technology. If we don’t understand how human nature interacts with AI, our understanding will remain incomplete.
Understanding the logic embedded in how an AI makes decisions is critical to the perceived fairness of the system. Say we develop a system that decides how much credit people should have, and the husband gets a credit limit of $10,000 while the wife gets $100. A real-life version of this scenario played out in society’s reaction to the Apple credit card sex discrimination debacle. Transparency is essential to the success of an AI solution.
There is always the chance that political realities will interfere with an AI solution. One example is the COMPAS system, used to rate the probability of recidivism. Criticism that it was biased spurred a political backlash that halted its use in many counties.
Political reality runs on its own logic, and the dynamics are never straightforward. While it is still early days for AI in cities, the dynamic is already visible with other new technologies. A current case in point is 5G, the network technology that will increase cell phone bandwidth significantly. Huawei holds the majority of 5G patents and can deliver the cheapest, fully sufficient solution, yet it is barred from implementing the system in parts of the world. This is not for technical reasons but because of political realities. Given today’s political and business climate, the same dynamics could very likely apply to future urban AI solutions.
The ethical responsibility of AI systems is a topic of intense debate: how to eliminate bias in algorithmic systems, and who designs and creates the logic behind the algorithms. Who decides how an autonomous vehicle should act when a child runs out in front of it? Should it swerve right, potentially killing a couple of senior citizens on the sidewalk; swerve left, potentially killing the occupants of an approaching vehicle; or continue and hit the child? There is no right answer, and we will never fully agree on the decision, because moral choices are not universal.
The technical capabilities of AI systems are another factor of great importance. Great advances have been made, and continue to be made, in both hardware and software. Universities and research institutions dedicate significant resources to discovering new algorithms and optimizing existing ones; ever deeper and faster neural networks are produced. Private companies invest heavily in building AI into their solutions and in developing new ones. The technical boundaries are pushed every day, making AI faster and more precise. We see this pattern with any new technology. Think of Moore’s law for processing power: it states that processing speeds will double every two years, and the same trend seems to hold for memory capacity. We should expect similar optimizations for AI.
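To get a feel for what a Moore’s-law-style doubling implies, a quick sketch (illustrative only; the two-year doubling period is the rule of thumb cited above, not a guaranteed trajectory for AI):

```python
def projected_factor(years: int, doubling_period: int = 2) -> int:
    """Growth factor after `years`, assuming capacity doubles every `doubling_period` years."""
    return 2 ** (years // doubling_period)

# A two-year doubling compounds quickly:
print(projected_factor(10))  # 32x over a decade
print(projected_factor(20))  # 1024x over two decades
```

This compounding is why capabilities that look impractical today can become routine within a single infrastructure planning cycle.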
Los Angeles has installed an Intelligent Transportation System consisting of road surface sensors and traffic cameras that can warn about congestion and signal malfunction. Other examples of how AI is used in a city context include: spotting potholes and cracks in the pavement, identifying garbage lying on the ground, license plate reader technology to find stolen cars, and predictive policing in order to reduce crime.
Most technology systems don’t exist in isolation. At the very least they need to be supported, upgraded, and maintained by someone, which means one or more organizations will be invested in the system’s operation. A system will also often have technical interfaces with other systems that allow inputs and outputs. In short, an ecology defines the dynamics of the system. If we deploy an AI solution, we also need to understand who can implement it and who can support and upgrade it. Some vendors have rich partner networks; others do not. We also need to understand how to connect it to other systems.
For example, a decision-optimization system needs data and therefore requires a technical interface to receive it. If no one can help implement and upgrade the system, or if we cannot interact with it, its utility is limited. The ecosystem is just as important to the function and value of an AI system as the capabilities of the system itself.
Most AI systems are complex and notoriously difficult to handle. Complex systems are characterized by non-linearity and are therefore often unpredictable.
Traffic regulation is an urban AI system that could exhibit complex behavior. If all traffic lights were connected to an AI determining the optimal sequence of green and red across the city, new dynamics would arise as people took the changes into account and altered their driving patterns.
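The feedback loop described above can be sketched in a toy model. Everything here is a hypothetical illustration (the travel-time formula, the driver counts, and the switching rate are invented numbers): drivers choose between two routes based on which was faster yesterday, so an "optimized" timing on route A shifts demand onto it and the congestion moves rather than disappears.

```python
def simulate(days: int, drivers: int = 1000, shift_rate: float = 0.5) -> list[int]:
    """Toy route-choice feedback loop: returns the daily count of drivers on route A."""
    on_a = drivers // 2  # start with demand split evenly
    history = []
    for _ in range(days):
        # Travel time grows with load; route A has the better signal timing (lower base).
        time_a = 10 + 0.02 * on_a
        time_b = 12 + 0.02 * (drivers - on_a)
        # A fraction of drivers switch toward whichever route was faster.
        if time_a < time_b:
            on_a += int(shift_rate * (drivers - on_a))
        else:
            on_a -= int(shift_rate * on_a)
        history.append(on_a)
    return history

print(simulate(6))  # demand oscillates between the routes instead of settling
```

Even this crude model never reaches a stable split: each adjustment overshoots, which is the kind of non-linear, self-referential dynamic that makes city-scale AI systems hard to predict from their design alone.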
AI solutions enter into, and become part of, complex political and social systems, and may also introduce completely new system dynamics. This was the case with the 2010 “Flash Crash,” when algorithmic trading systems decided to sell, triggering a trillion-dollar stock market crash. This has become so normal that quant quakes are now discussed as a recurring stock market phenomenon.
Building AI for The People
There are seven primary forces determining the success of AI, of which technology is just one. Cities must realize that AI is not the quick technological fix that vendors sell; not everything will be improved by more algorithms and technical prowess. To harness AI’s immense potential, we need a more holistic approach to implementing it in cities, one that considers each of the seven forces when cities plan for the use of AI.
The framework from the NYC algorithmic task force is a good starting point. For each particular AI solution, all seven forces should be analyzed and used for risk assessment. Policies and governance structures that take the seven forces into account will have to involve multiple agencies; this cannot be treated as just a technology issue. Perhaps all procurement of AI should go through an AI governance board that considers the holistic impact of the system across the seven identified forces.
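One way such a governance board might operationalize the analysis is a simple screening rubric across the seven forces. This is a hypothetical sketch, not an established NYC instrument: the 1–5 scale, the threshold, and the `assess` helper are all illustrative assumptions; only the force names come from this article.

```python
# The seven forces discussed above; each proposed AI system gets a risk
# score per force (1 = low risk, 5 = high risk) during procurement review.
FORCES = [
    "human nature", "transparency", "political realities", "ethics",
    "technical capabilities", "ecosystem", "complexity",
]

def assess(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the forces whose risk score exceeds the threshold, in canonical order."""
    missing = set(FORCES) - scores.keys()
    if missing:
        raise ValueError(f"unscored forces: {sorted(missing)}")
    return [force for force in FORCES if scores[force] > threshold]

# Example proposal: mostly low risk, but opaque decision logic and
# hard-to-predict system dynamics get flagged for deeper review.
proposal = dict.fromkeys(FORCES, 2) | {"transparency": 5, "complexity": 4}
print(assess(proposal))  # ['transparency', 'complexity']
```

The point of the sketch is the shape of the process: every force must be scored before a system clears review, so no proposal can pass on technical capability alone.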
By building holistic AI solutions that work for urban residents, we can create smarter, safer, more sustainable, and more equitable cities. This will require focus from city governments, and the courage to formulate and implement adequate policies.