The Limits of Data
My friend J. is dying. Or perhaps not. We don’t know. Two months ago J. was a healthy and active retiree. He looked after his grandchildren and took care of a large garden. Then one day he fell at home and was admitted to hospital. Suddenly this fit, former Navy pilot was weak. He slept almost the entire day. He had no appetite for food. It was a mystery.
So began a continuing regime of testing. By now J. must have been tested for every known medical condition except rabies. Endless blood tests, two MRI scans, and a miniature camera that he swallowed and that passed through his entire alimentary canal. We must have more medical data on J. than almost any other person in history. He has made some progress towards recovery, but we still do not know the cause of his weakness.
The human body is a complex system of systems at spatial scales from cells to the whole body. Until thirty or forty years ago, we had little understanding of how these systems work in principle, or of how they are working in a specific case. For centuries doctors had treated symptoms – fever, chills, fatigue, physical injuries – with increasingly sophisticated drugs and procedures and often with surprising success considering the lack of understanding on which these treatments were based.
Beginning in the late 19th century clinical biology became an analytical science, rapidly diversified into a myriad specialties, and produced powerful new diagnostic tools, drugs, and other treatments. In some areas, such as the cellular level, we have detailed – though still incomplete – knowledge of what different kinds of cells do and how they achieve this at the molecular level. But at the level of an actual, whole body, such as my friend J., we still cannot explain cause and effect as these systems interact.
More data does not help without more understanding.
When the idea of smart cities was born, some ten to fifteen years ago, engineers, including me, saw it primarily as a control system problem with the goal of improving efficiency, specifically the sustainability of the city. Indeed, the source of much of the early technology was the process industry, which was a pioneer in applying intelligent control to chemical plants, oil refineries, and power stations. Such plants superficially resemble cities: spatial scales from meters to kilometers, temporal scales from seconds to days, similar scales of energy and material inputs, and thousands of sensing and control points.
It would be impossible to operate a process plant without smart controls. Such plants manage the reactions of precisely balanced mixtures of raw materials under carefully controlled temperatures and pressures, sometimes in the presence of catalysts that dramatically speed up a reaction. Further, such plants are not running a single process, but are in fact chains or networks of processes, in which the secondary outputs from one process become primary inputs into another process, thereby improving efficiency. Beginning in the 1970s, distributed real-time process control systems were developed to keep these processes under close control in response to changes in the quality of raw materials, the decline of catalysts, and variations in ambient temperature, air pressure, air humidity, and solar heat gain or loss.
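The kind of closed-loop regulation described above can be sketched in a few lines. This is an illustrative toy, not any real plant controller: a proportional controller nudges a process variable (here, a reactor temperature) back toward its setpoint while heat leaks away to the ambient environment. All the numbers are assumptions chosen for the example.

```python
# Toy sketch of a feedback control loop of the kind used in distributed
# process control: a proportional controller counteracts heat loss to
# the environment, pulling temperature toward a setpoint. All constants
# are illustrative, not taken from any real plant.

def simulate(setpoint=350.0, ambient=20.0, steps=200, gain=0.3):
    """Drive temperature toward `setpoint` despite passive heat loss."""
    temp = ambient                        # start at ambient temperature
    for _ in range(steps):
        error = setpoint - temp           # deviation from target
        heater = gain * error             # proportional control action
        loss = 0.05 * (temp - ambient)    # passive heat loss to ambient
        temp += heater - loss             # update the process state
    return temp

final = simulate()
# A proportional-only controller settles near, but below, the setpoint —
# the classic steady-state offset that motivates integral control.
```

Real distributed control systems layer integral and derivative terms, cascaded loops, and supervisory optimization on top of this basic feedback idea, but the principle – measure, compare to setpoint, correct – is the same.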
So it seemed quite natural to extend such sophisticated control systems to the management of cities. The ability to collect vast amounts of data – even in those pre-smartphone days – about what goes on in cities and to apply analytics to past, present, and future states of the city seemed to offer significant opportunities for improving efficiency and resilience. Moreover, unlike tightly-integrated process plants, cities seemed to decompose naturally into relatively independent sub-systems: transportation, building management, water supply, electricity supply, waste management, and so forth. Smart meters for electricity, gas, and water were being installed. GPS devices were being embedded in vehicles and mobile telephones. Building controls were gaining intelligence. Cities were a major source for Big Data. With all this information available, what could go wrong?
Indeed this approach has shown modest success. Total energy and water consumption can be reduced by 10-15%. Peak demands can be smoothed even more effectively. Adaptive tolls can reduce inner city congestion. Bus arrival times can be predicted and communicated to passengers. Leaks in water mains can be located. Crime can be managed, if not reduced. Valuable as these and other achievements are, they are not yet the spectacular results for which we hoped.
Reflecting on this, I feel that we are in the position of the doctors trying to help my friend J. Like them, we have vastly more information available about the patient, but we still have only limited understanding of how these systems of systems actually function. We have data, but we lack theories that provide understanding across multiple scales.
Further, a process plant is only a partial model for a city. For all their complexity, the networks of reactions are deterministic within well-established process windows. Cities too have predictable behaviours over certain temporal and spatial scales, but these macro-behaviours are emergent and not determined by any physical laws. These emergent behaviours result from the natural and technological infrastructures of the city and from the myriad decisions of embedded intelligent beings – people – on how to exploit the city’s systems and sub-systems. Moreover, these people have individual views of how the city’s sub-systems should be used. These conscious or unconscious decisions constitute a natural control system for the city that is far more powerful than our technology.
To be sure, many smart city solutions attempt to intervene within that natural control system. For example, providing user feedback on electricity or water consumption can – at least for a time – influence consumption. But like my friend’s doctors, we have poor knowledge of how those systems of systems interact below the level of major organs – how they attempt to re-establish physiological normalcy, how big their process windows are, beyond what limits they will tip over into a new state, how reversible such states are, and how and to what they will respond to external physical or psychological stimuli.
Treating the symptom is a medical practice dating back centuries if not millennia. In many cases it can be remarkably effective and, in combination with chemical and bio-chemical science, it has produced dramatic improvements in human life-expectancy. To achieve the next level of impact on how cities work, we need to go beyond nudging the symptoms and to understand the life of the city as an ecosystem.
Today a multitude of researchers and clinicians all over the world is studying these systems through myriad hyper-specialties. Yet it is hard, perhaps impossible, to integrate this medical Tower of Babel into an overall theoretical framework for the entire body, owing to differences of scale, of terminology, and of methodology.
The study of cities suffers from a similar diversity of specialties. Ecologists, environmentalists, geographers, architects, planners, engineers, economists, sociologists, anthropologists, political scientists, and still others all produce profound work concerning the city, yet we have no way to see the wood for the trees. As in medicine, the study of cities lacks an overall theoretical framework. As Richard Saul Wurman observed to me some years ago: “We cannot even agree on the definition of a city.”
Yet out of this rather pessimistic view of cities and technology gleams a ray of hope. While I am deeply skeptical of the more grandiose claims of Artificial Intelligence (AI), I am strongly in favour of Augmented Intelligence or Intelligence Amplification (IA) – the collaboration between machine intelligence and human specialists. Soon after IBM’s Watson achieved success in the Jeopardy! quiz game, the project was applied to seek understanding from clinical and biological data in order to design optimal treatment plans for individual patients. Similar data-intensive approaches have been applied to trying to understand diseases in terms of genetic patterns.
These experiments proved much harder than IBM anticipated. Analysis of Big Data can provide important clues – finding needles in haystacks – but it seems to require humans to assemble those needles into theoretical frameworks. Machine intelligence is likely also to be the vehicle for integrating the human intelligence from the many disciplines that study cities.
So while process control may have been only partially successful in applying machine intelligence to cities, I remain confident that through Augmented Intelligence we will develop an overall theory of cities that will provide far deeper insights into how technology can help cities achieve their goals.