Becoming a “smart city”
Like many metropolitan centers around the world, Berlin aspires to be a “smart city.” Making a city smart usually involves constructing a dense net of sensors, often embedded in and around more traditional infrastructures throughout the urban environment, such as transportation systems, electrical grids, and water systems. The process also requires the city to solicit the distributed input of its inhabitants through active technological means, such as smartphone apps. Finally, the city employs high-end computing and learning algorithms to analyze the resulting data, with the goal of optimizing urban technical, social, and political processes. Yet, perhaps counterintuitively, a smart city is not synonymous with a utopian—or even a specific—form of the city, which would then remain stable for the foreseeable future. In this sense, the smart city is quite unlike utopian cities as they were imagined in the past, when it was presumed that a specific form—such as Le Corbusier’s “Radiant City” or the concentric circles of Ebenezer Howard’s garden cities—would enable a specific goal, such as integration of humans into natural processes, or economic growth, or an increase in collective happiness, or democratic political participation. Rather, a city is “smart” when it achieves the capacity to adjust to any new and unexpected threats and possibilities that may emerge from the city’s ecological, political, social, and economic environments (a capacity that is generally referred to in planning documents with the term “resilience”). In short, a smart city is a site of perpetual learning, and a city is smart when it achieves the capacity to engage in perpetual learning.
The inhabitants of a smart city will thus also necessarily become perpetual learners. As the smart city constantly adapts, the people who live in it will also have to adjust. However, the smart city’s smartness is not supposed to be imposed upon its urban inhabitants from above; rather, this smartness is supposed to result from the combination of the inhabitants’ unique individual perspectives and choices. Smartness presumes that these acts of combination cannot be accomplished by humans alone, but require the assistance of computing processes—and, more specifically, of algorithms that teach the smart city (and its inhabitants) new ways to learn. Very much like “the market” of neoliberal economic theory, smartness optimizes processes by combining multiple perspectives in a way that cannot be achieved by any group of human planners. For some of its advocates, the ability of smartness to automate the combination of an enormous number of individual perspectives makes it possible to imagine that one could perhaps replace politics—that messy realm of self-interest, which often only seems fully open to a select few—with technological processes that could actually achieve what democracy only promises.
In our recently published genealogy of smartness, The Smartness Mandate, we document the way in which ideologies and practices of smartness and artificial intelligence change human habitation, politics, and economics. Our central concern is the uncanny resonance between theories of the neoliberal market and the practices of smartness. We argue that this is no coincidence. In the 1950s and ’60s, some of the same developments in computer science, psychology, and evolutionary theory inspired both neoliberal theorists such as F. A. Hayek and architects of computer-learning processes such as Frank Rosenblatt (and Rosenblatt in turn drew on Hayek). Both neoliberalism and smartness also share a similar commitment to “epistemological modesty,” in the sense that they share a common assumption: Since no single individual or group of individuals can predict what the future will bring, one must rely on structures and mechanisms that perpetually adjust and “learn.” For neoliberals, this structure is “the market”; for advocates of smartness, it is smart technologies and processes. The relationship between the neoliberal market and smartness can sometimes be more than mere analogy or resonance, since smart technologies often rely on prices as a way of assigning “weights” in learning algorithms. These weights enable an algorithm to adjust its predictions over time, and this is an essential part of machine learning.
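For readers unfamiliar with what “weights” do in a learning algorithm, the mechanism can be sketched in a few lines of code. The following is a minimal, purely illustrative example in the spirit of Rosenblatt’s perceptron (not a reconstruction of any system discussed in the book, and the toy task is our own invention): each prediction error nudges the weights, so that the algorithm’s predictions improve over repeated exposure to the data.

```python
# Illustrative sketch of a perceptron-style learning rule (after Rosenblatt).
# All names and data here are hypothetical; the point is only to show how
# "weights" let an algorithm adjust its predictions over time.

def predict(weights, bias, inputs):
    """Return 1 if the weighted sum of the inputs crosses the threshold, else 0."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0

def train(samples, labels, learning_rate=0.1, epochs=20):
    """Adjust the weights after each prediction error (the perceptron rule)."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in zip(samples, labels):
            error = label - predict(weights, bias, inputs)
            # Each mistake nudges the weights toward a better prediction.
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# A toy, linearly separable task: learn the logical AND of two inputs.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
weights, bias = train(samples, labels)
```

Nothing about this tiny loop is “intelligent” in any rich sense; it simply converts a stream of errors into incremental adjustments, which is what allows the analogy between weight updates and price signals to be drawn at all.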
Although we note these similarities between neoliberalism and smartness partly to dampen a somewhat naïve enthusiasm concerning smartness’s capacity to solve all of our current and future problems, it is just as important not to view smartness as simply a form of neoliberalism in disguise. (This seems to be the approach taken by Naomi Klein, for example.) As we note in our book, contemporary neoliberalism and smartness are similar in part because many of the former’s key theorists crossed paths with important figures from the fields of artificial intelligence, big data, and machine learning. Still, the genealogies of neoliberalism and smartness are not identical, and our book searches the genealogy of smartness for points of virtual connection between what would become the technologies of smartness, on the one hand, and projects that are less focused on individual freedom (the neoliberal project) and more interested in collective freedom, on the other.
Smartness vs. omniscience
While it is easy (and correct) to point out that the language of smartness already seeks to set the terms of engagement by implying that the alternative to smartness is stupidity—and who, after all, would want to live in a dumb city?—the real opposite of smartness is not stupidity, but omniscience. The language of smartness suggests that we can either aspire to omniscience—which, if attainable, would indeed allow us to take all contingencies into account beforehand—or we can recognize that omniscience is impossible for mere mortals, and instead aim for smartness, which means perpetual learning in the light of changing circumstances. Thus, despite its heavy reliance on cutting-edge technologies, smartness can be opposed to technocratic visions of social change. Smartness contests, for example, the technocratic distinction between experts and non-experts in favor of the claim that everyone has knowledge to contribute. In this sense, it is also an appeal to include previously marginalized voices and it demands that everyone, including those who are privileged, become perpetual learners. From this perspective, it is hard to object to the basic idea of smartness, even if some of its current implementations may be considered problematic.
The importance of smartness—as ideology, as an ever-changing set of technologies and techniques, but also as a possible focal point for hope—becomes especially evident when viewed from the perspective of our current ecological crisis. This crisis includes global warming, the increasing dominance of one-crop agriculture, the global spread of microplastics, and a plethora of other global dangers. It seems clear to us that humanity has arrived at this point as a result of capitalism, and it seems equally evident that capitalism itself cannot fix this problem, no matter how many innovative new forms of market its advocates may come up with (e.g., carbon offset markets; new forms of insurance for endangered coastal areas; etc.). However, precisely because it is not identical to neoliberalism, smartness retains its potential within this context. For example, the concept and principles of smartness enabled Suzanne Simard’s innovative work in botany on the networked “intelligence” of forests, which in turn has the potential to help us rethink the roles of technology in current efforts to make cities and other processes smart. Another helpful contribution is Winona LaDuke and Deborah Cowen’s notion of “alimentary infrastructure,” which shifts the understanding of smart energy infrastructures away from the market-based principles of contemporary smart electrical grids and towards indigenous calls for environmental custodianship and sovereignty.
The appeal of smartness
Yet rethinking smartness (in part by activating dormant or forgotten elements from its genealogy) will also require us to reconsider what we might call the “appeal” of smartness. In our book, we describe smartness as a mandate in order to emphasize that many of its current advocates present smartness less as an intrinsically desirable or hopeful set of techniques than as something that we must embrace if we wish to avoid disaster. This makes smartness very difficult to think outside the event horizon of crisis and disaster. Smartness is often presented, even if only implicitly, as our most effective form of protection against the future. The future is thus seen not through the lens of hope, but rather as the source of new disasters. From this perspective, smartness enables a form of learning that is neither cumulative nor progressive, but rather simply allows us to hold our place against, rather than be swept away by, the lashing storms with which the future will perpetually assail us. Thus, the “mandate” to employ smart technologies everywhere seems to emanate from nature itself. Smartness appears to be nature’s command in the sense that it is seen as the only means by which we can save ourselves from ecological disaster. Smartness also seems to come from nature in the sense that many of its technical means for algorithmic “learning” are explicitly modeled on, and thus seem like simply the technological adoption and intensification of, evolutionary processes of life that enable species to adapt to ever-changing environmental conditions.
We hope that our book can help in the process of forging a different kind of appeal for smartness. It may be worth reconsidering some apparently old-fashioned concepts such as “progress,” if only because this shifts the valence of the future from threat to promise. And although smartness establishes a problematic link between technological and natural evolutionary processes, we consider it less useful simply to discard this link. Instead, we should aim to rethink the ways in which current machine “learning” links models of evolution to models of learning. As suggested by our example of Simard’s work above, some of these new models may emerge within the biological sciences, while others may emerge from the earth sciences (e.g., geologist Peter K. Haff’s concept of the technosphere, which sees human technology as simply the latest “sphere” within a process of evolution of earth systems that previously resulted in, for example, the atmosphere and biosphere). These examples can help us to rethink the concept of learning—and, by extension, what machine learning can potentially mean.
Robert Mitchell is Professor of English at Duke University, spending 2022/23 as a visiting scholar at ZfL; Orit Halpern is Lighthouse Professor and Chair of Digital Cultures and Societal Change at Technische Universität Dresden. Together they wrote The Smartness Mandate (Cambridge, MA 2023).
SUGGESTED CITATION: Orit Halpern/Robert Mitchell: Rethinking Smartness, in: ZfL Blog, 7.2.2023, [https://www.zflprojekte.de/zfl-blog/2023/02/07/orit-halpern-robert-mitchell-rethinking-smartness/].