The Limits of Data
My friend J. is dying. Or perhaps not. We don’t know. Two months ago J. was a healthy and active retiree. He looked after his grandchildren and took care of a large garden. Then one day he fell at home and was admitted to hospital. Suddenly this fit, former Navy pilot was weak. He slept almost the entire day. He had no appetite for food. It was a mystery.
So began a continuing regimen of testing. By now J. must have been tested for every known medical condition except rabies. Endless blood tests, two MRI scans, and a miniature camera that he swallowed and that passed through his entire alimentary canal. We must have more medical data on J. than on almost any other person in history. He has made some progress towards recovery, but we still do not know the cause of his weakness.
The human body is a complex system of systems at spatial scales from cells to the whole body. Until thirty or forty years ago, we had little understanding either of how these systems work in principle or of how they are functioning in a specific case. For centuries doctors had treated symptoms – fever, chills, fatigue, physical injuries – with increasingly sophisticated drugs and procedures, and often with surprising success considering the lack of understanding on which these treatments were based.
Beginning in the late 19th century clinical biology became an analytical science, rapidly diversified into a myriad specialties, and produced powerful new diagnostic tools, drugs, and other treatments. In some areas, such as the cellular level, we have detailed – though still incomplete – knowledge of what different kinds of cells do and how they achieve this at the molecular level. But at the level of an actual, whole body, such as my friend J., we still cannot explain cause and effect as these systems interact.
More data does not help without more understanding.
When the idea of smart cities was born, some ten to fifteen years ago, engineers, including me, saw it primarily as a control system problem with the goal of improving efficiency, specifically the sustainability of the city. Indeed, the source of much of the early technology was the process industry, which was a pioneer in applying intelligent control to chemical plants, oil refineries, and power stations. Such plants superficially resemble cities: spatial scales from meters to kilometers, temporal scales from seconds to days, similar scales of energy and material inputs, and thousands of sensing and control points.
It would be impossible to operate a process plant without smart controls. Such plants manage the reactions of precisely balanced mixtures of raw materials under carefully controlled temperatures and pressures, sometimes in the presence of catalysts that dramatically speed up a reaction. Further, such plants are not running a single process, but are in fact chains or networks of processes, in which the secondary outputs from one process become primary inputs into another process, thereby improving efficiency. Beginning in the 1970s, distributed real-time process control systems were developed to keep these processes under close control in response to changes in the quality of raw materials, the decline of catalysts, and variations in ambient temperature, air pressure, air humidity, and solar heat gain or loss.
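The feedback loops at the heart of such control systems can be sketched in a few lines. The following is my own toy illustration – not any actual plant’s code – of a PID (proportional–integral–derivative) controller, the workhorse algorithm that a distributed control system runs at thousands of points, here holding a vessel’s temperature at a setpoint despite heat loss to the environment. The gains and the simple thermal model are invented for the example.

```python
# Toy sketch of a PID feedback loop, the basic building block of process
# control. All parameters and the plant model below are illustrative only.

def pid_step(setpoint, measured, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
    """One control step; `state` carries the integral and the last error."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    # Controller output, e.g. a heating-valve position, clamped to 0..100%.
    output = kp * error + ki * state["integral"] + kd * derivative
    return max(0.0, min(100.0, output))

# A crude plant model: heating from the valve, loss towards 20 °C ambient.
temp = 20.0  # current temperature, °C
state = {"integral": 0.0, "last_error": 0.0}
for _ in range(200):
    valve = pid_step(setpoint=80.0, measured=temp, state=state)
    temp += 0.1 * valve - 0.1 * (temp - 20.0)

print(f"final temperature: {temp:.1f} °C")
```

In a real plant the same loop structure repeats across hundreds of interacting variables, with supervisory layers re-tuning setpoints as feedstock quality, catalyst condition, and ambient conditions drift – which is precisely what made these systems seem transferable to cities.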
So it seemed quite natural to extend such sophisticated control systems to the management of cities. The ability to collect vast amounts of data – even in those pre-smartphone days – about what goes on in cities and to apply analytics to past, present, and future states of the city seemed to offer significant opportunities for improving efficiency and resilience. Moreover, unlike tightly-integrated process plants, cities seemed to decompose naturally into relatively independent sub-systems: transportation, building management, water supply, electricity supply, waste management, and so forth. Smart meters for electricity, gas, and water were being installed. GPS devices were being embedded in vehicles and mobile telephones. Building controls were gaining intelligence. Cities were a major source of Big Data. With all this information available, what could go wrong?
Indeed this approach has shown modest success. Total energy and water consumption can be reduced by 10-15%. Peak demands can be smoothed even more effectively. Adaptive tolls can reduce inner city congestion. Bus arrival times can be predicted and communicated to passengers. Leaks in water mains can be located. Crime can be managed, if not reduced. Valuable as these and other achievements are, they are not yet the spectacular results for which we hoped.
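One of those achievements – locating leaks in water mains – gives a flavour of how data is typically used. A common heuristic is to monitor the minimum night-time flow in each district metered area (DMA): legitimate demand is lowest at night, so a sustained rise over the historic baseline suggests a leak. The sketch below is my own illustration with invented figures, not any utility’s actual system.

```python
# Toy sketch of leak screening by minimum night-time flow. The DMA names,
# baselines, and threshold are invented for illustration.

def flag_leaks(night_flows, baseline, threshold=1.25):
    """Flag DMAs whose latest minimum night flow exceeds baseline * threshold."""
    return {dma: flow for dma, flow in night_flows.items()
            if flow > baseline[dma] * threshold}

baseline    = {"DMA-01": 4.0, "DMA-02": 6.5, "DMA-03": 3.2}  # m³/h, historic
night_flows = {"DMA-01": 4.1, "DMA-02": 9.8, "DMA-03": 3.0}  # m³/h, last night

print(flag_leaks(night_flows, baseline))  # → {'DMA-02': 9.8}
```

Useful as such screening is, note what it does and does not do: it detects a statistical anomaly, but it embodies no theory of why the network behaves as it does – which is the broader limitation this essay is concerned with.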
Reflecting on this, I feel that we are in the position of the doctors trying to help my friend J. Like them, we have vastly more information available about the patient, but we still have only limited understanding of how these systems of systems actually function. We have data, but we lack theories that provide understanding across multiple scales.
Further, a process plant is only a partial model for a city. For all their complexity, the networks of reactions are deterministic within well-established process windows. Cities too have predictable behaviours over certain temporal and spatial scales, but these macro-behaviours are emergent and not determined by any physical laws. These emergent behaviours result from the natural and technological infrastructures of the city and from the myriad decisions of embedded intelligent beings – people – on how to exploit the city’s systems and sub-systems. Moreover, these people have individual views of how the city’s sub-systems should be used. These conscious or unconscious decisions constitute a natural control system for the city that is far more powerful than our technology.
To be sure, many of the smart city solutions attempt to intervene within that natural control system. For example, providing user feedback on electricity or water consumption can – at least for a time – influence consumption. But like my friend’s doctors, we have poor knowledge of how those systems of systems interact below the level of major organs: how they attempt to re-establish physiological normalcy, how big their process windows are, beyond what limits they will tip over into a new state, how reversible such states are, and how, and to what degree, they respond to external physical or psychological stimuli.
Treating the symptom is a medical practice dating back centuries if not millennia. In many cases it can be remarkably effective and, in combination with chemical and bio-chemical science, it has produced dramatic improvements in human life-expectancy. To achieve the next level of impact on how cities work, we need to go beyond nudging the symptoms and to understand the life of the city as an ecosystem.
Today a multitude of researchers and clinicians all over the world are studying these systems within myriad hyper-specialties. Yet it is hard, perhaps impossible, to integrate this medical Tower of Babel into an overall theoretical framework for the entire body, owing to differences of scale, terminology, and methodology.
The study of cities suffers from a similar diversity of specialties. Ecologists, environmentalists, geographers, architects, planners, engineers, economists, sociologists, anthropologists, political scientists, and still others all produce profound work concerning the city, yet we have no way to see the wood for the trees. As in medicine, the study of cities lacks an overall theoretical framework. As Richard Saul Wurman observed to me some years ago: “We cannot even agree on the definition of a city.”
Yet out of this rather pessimistic view of cities and technology gleams a ray of hope. While I am deeply skeptical of the more grandiose claims of Artificial Intelligence (AI), I am strongly in favour of Augmented Intelligence or Intelligence Amplification (IA) – the collaboration between machine intelligence and human specialists. Soon after IBM’s Watson achieved success in the Jeopardy! quiz game, the project was applied to seek understanding from clinical and biological data in order to design optimal treatment plans for individual patients. Similar data-intensive approaches have been applied to trying to understand diseases in terms of genetic patterns.
These experiments proved much harder than IBM anticipated. Analysis of Big Data can provide important clues – finding needles in haystacks – but it seems to require humans to assemble those needles into theoretical frameworks. Machine intelligence is likely also to be the vehicle for integrating the human intelligence from the many disciplines that study cities.
So while process control may have been only partially successful in applying machine intelligence to cities, I remain confident that through Augmented Intelligence we will develop an overall theory of cities that will provide far deeper insights into how technology can help cities achieve their goals.