At this juncture in history, it simply isn’t possible to understand the ways in which we know and use the world around us without having some sense for the way the smartphone works, and the various infrastructures it depends on.
The things we used to fix cherished memory—the dogeared, well-worried-over Kodachromes of lovers, children, schoolmates and pets that once populated the world’s plastic wallet inserts—were for the most part digitized at some point along the way, and long ago migrated to the lockscreens of our phones.
All of the conventions and arrangements that constitute our sense of the everyday now no longer evolve at any speed we’d generally associate with social mores, but at the far faster rate of digital innovation. We’re forced to accommodate some degree of change in the way we do things every time the newest version of a device, operating system or application is released.
It is, by any reckoning, a tremendously impressive technical accomplishment. Given everything it does, and all of the objects it replaces or renders unnecessary, it has to be regarded as a rather astonishing bargain. And given that it is, in principle, able to connect billions of human beings with one another and the species’ entire stock of collective knowledge, it is in some sense even a utopian one.
It is always dangerous to imagine futures that are anything like linear extrapolations from the present, but if the augurs can be relied upon, we balance on the cusp of an era in which every near- or fully adult person on Earth is instrumented and connected to the global network at all times.12 Though we’ve barely begun to reckon with what this implies for our psyches, our societies, or our ways of organizing the world, it is no exaggeration to say that this capability—and all the assumptions, habits, relations of power and blindspots bound up in it—is already foundational to the practice of the everyday.
Everyone with a smartphone has, by definition, a free, continuously zoomable, self-updating, high-resolution map of every part of the populated surface of the Earth that goes with them wherever they go, and this is in itself an epochal development.
We need to understand ourselves as nervous systems that are virtually continuous with the world beyond the walls, fused to it through the juncture of our smartphones. And what keeps us twitching at our screens, more even than the satisfaction of any practical need, is the continuously renewed opportunity to bathe in the primal rush of communion.
What links these wildly different circumstances is a vision of connected devices now being sold to us as the “internet of things,” in which a weave of networked perception wraps every space, every place, every thing and every body on Earth.
Like the smartphone, the internet of things isn’t a single technology, but an unruly assemblage of protocols, sensing regimes, capabilities and desires, all swept under a single rubric for the sake of disciplinary convenience.
The quest to instrument the body, monitor its behavior and derive actionable insight from these soundings is known as the “quantified self”; the drive to render interior, domestic spaces visible to the network, “the smart home”; and when this effort is extended to municipal scale, it is known as “the smart city.”
It may be the long-awaited breakthrough in wearables: both the enabler and the visible symbol of a lifestyle in which performance is continuously monitored and plumbed for its insights into further improvements.
I don’t think it’s unfair to say that at this moment in history, internet-of-things propositions are generally imagined, designed and architected by a group of people who have completely assimilated services like Uber, Airbnb and Venmo into their daily lives, at a time when Pew Research Center figures suggest that a very significant percentage of the population has never used (or even heard of) them.12 And all of their valuations get folded into the things they design.
If the ambition beneath the instrumentation of the body is a nominal self-mastery, and that of the home convenience, the ambition at the heart of the smart city is nothing other than control.
“we measure the things that are easy to measure … the things that are cheap to measure,”33 and this suggests that sensors, however widely deployed, will only ever yield a partial picture of the world.
At present, the internet of things is the most tangible material manifestation of a desire to measure and control the world around us.
The entire pretext on which it depends is a milieu of continuously shattered attention, of overloaded awareness, and of gaps between people just barely annealed with sensors, APIs and scripts.
Regimes, after all, do change, and closely held state secrets are spilled into the open air. Businesses fail, or are acquired, and whatever property belonged to them passes from their control.
As for the commonplace assertion that those who have nothing to hide have nothing to fear, consider the sentiment often attributed to Richelieu, and salient whatever its actual provenance: “If you give me six lines written by the hand of the most honest of men, I will find something in them with which to hang him.”
So, yes: the internet of things is a sprawling and complex domain of possibility, and it would be foolish to avoid investigating it energetically and in good faith. But we would be wise to approach that investigation with an unusually strong leavening of skepticism, and in particular to resist its attempts to gather data regarding ourselves, our whereabouts, our activities and our affiliations, whatever the blandishments of ease, convenience or self-mastery on offer.
They are interface techniques—modes of mediation, rather than anything more fundamental. The difference between the two is largely the degree to which digital graphics dominate the perceptual field.
As we’ve seen, the smartphone handset brings together in a single package several different sensing and presentation technologies, which can be recombined to produce distinctly different ways of engaging networked information.
Galloway suggests that the discourse of computational augmentation, whether consciously or otherwise, “position[s] everyday places and social interactions as somewhat lacking or in need of improvement.”
There is always the possibility that neither augmented nor virtual reality will amount to very much—that the nausea, disorientation and vertigo they occasion simply cannot be surmounted, or that after a brief period of consideration they are actively rejected by the mainstream audience.
Digital fabrication appears to be following the developmental curve we’re familiar with from the domain of digital computation, where the steady advance of Moore’s Law still yields up devices that are more capable, yet cheaper, with every passing year.
We know that economic forces, and requirements founded in the material conditions of production, shape the organization of human settlements at every scale. Local and precise control over the physical form of things therefore challenges the way we think about the spatial form and social life of cities.
This technique requires that we take precise three-dimensional scans of damaged artifacts, and then turn these measurements into specification files for the construction of missing parts. Perhaps in this way, we can restore to our lives something like the ethic of repair that was once common to virtually every culture on Earth, in the days before the material conditions of everyday life were founded on mass production, disposability and the consumer economy.
In any raw material sense, we already live in a post-scarcity world, even before any particularly elaborate digital fabrication capacity is brought on line. And yet we still seem to suffer from a pervasive sense of want and lack.
Not just new things, but new kinds of things—previously unsuspected articulations of matter, limited only by physics and desire. And even in a small way, the chance to live in an environment we’ve fashioned ourselves, using tools we ourselves have crafted. True to its roots, digital fabrication is helping us work out the shape of the future, one experiment at a time.
Perhaps we could think of Bitcoin in this sense as analogous to a trade pidgin or auxiliary language: the Esperanto of currencies.
The original Bitcoin specification has to be regarded as a dazzling display of intellectual bravura, still more so if actually pulled off by a single person. It rather elegantly proposed that well-understood cryptographic techniques could be used to resolve, all at once, a cluster of problems that had beset all the electronic cash schemes that came before.
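The central cryptographic technique the passage alludes to can be illustrated with a toy hash chain, in which each block commits to the digest of the block before it, so that tampering anywhere in the past breaks every subsequent link. This is a minimal sketch for illustration only, not the actual Bitcoin protocol (the field names and transaction strings here are invented; real Bitcoin also involves proof-of-work, Merkle trees and double hashing):

```python
import hashlib
import json

def block_hash(block):
    # Deterministically serialize the block and digest it with SHA-256,
    # the hash function Bitcoin is built on.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    # Each block records its transactions plus the digest of its
    # predecessor, chaining the history together.
    return {"transactions": transactions, "prev_hash": prev_hash}

def chain_is_valid(chain):
    # Walk the chain and confirm every stored prev_hash matches a
    # freshly recomputed digest of the block before it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
block1 = make_block(["alice -> bob: 10"], prev_hash=block_hash(genesis))
chain = [genesis, block1]

assert chain_is_valid(chain)
# Altering an earlier block invalidates everything downstream of it.
genesis["transactions"][0] = "coinbase -> mallory: 50"
assert not chain_is_valid(chain)
```

The point of the structure is that no participant needs to trust any other participant's copy of the ledger: anyone can recompute the digests and detect a rewrite.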
The notion that the governing body of a mint might take it upon themselves to choke off payments to parties that have fallen into disfavor for political or other reasons isn’t just a theoretical possibility, either. The effective 2010 blockade on contributions to WikiLeaks that was imposed by Bank of America, Visa, MasterCard, PayPal and Western Union is the most prominent example of this sort of thing, but it’s far from the only one.6
A scenario Szabo offered at a 2001 conference for hardcore technolibertarians is illustrative in this regard: smart contracts would solve “the problem of trust by being self-executing. For example, the key to a car sold on credit might only operate if the monthly payments have been made.”
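Szabo's car-key scenario can be sketched as a toy self-executing agreement. This is plain Python standing in for the idea, not any real smart-contract platform; the class and method names are invented for illustration:

```python
from datetime import date

class CreditCarContract:
    """Toy self-executing contract: the car key operates only while
    the buyer's monthly payments are current. Purely illustrative."""

    def __init__(self, monthly_payment, start):
        self.monthly_payment = monthly_payment
        self.start = start
        self.payments_made = 0

    def pay(self):
        # Record one monthly installment.
        self.payments_made += 1

    def months_elapsed(self, today):
        return (today.year - self.start.year) * 12 + (today.month - self.start.month)

    def key_operates(self, today):
        # The enforcement is intrinsic: the contract never sues a
        # delinquent buyer, it simply withholds the service.
        return self.payments_made >= self.months_elapsed(today)

contract = CreditCarContract(monthly_payment=300, start=date(2024, 1, 1))
contract.pay()  # January's payment is made
print(contract.key_operates(date(2024, 1, 15)))  # current: key works
print(contract.key_operates(date(2024, 3, 15)))  # two months behind: key refuses
```

What distinguishes this from an ordinary contract is precisely what the surrounding passage describes: the obligation is not recorded and later adjudicated, it is enforced mechanically at the moment of breach.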
the enforcement of contracts involves a great deal of managerial and bureaucratic overhead, and these costs mean that the contract mechanism is ordinarily reserved for situations of a certain heft.
The law, such as we have known it, is a purely extrinsic phenomenon.7 It cannot prevent actions from taking place; at most, it can only discourage us from choosing to undertake them. By contrast, what makes a smart contract is not simply that its obligations are recorded on the blockchain for all to see, but that they are exacted in Ether (or, more generically, whatever cryptocurrency is used by the environment in which the smart contract is running).
If the atomic unit of the Bitcoin blockchain is transactions, then, that of the Ethereum blockchain is contracts. This “simplest form of decentralized automation” is key to everything else Ethereum does or proposes to do. Armed with this mechanism, it is capable of binding previously unaffiliated peers in a meshwork of obligation, whether those peers are human, organizational or machinic.
We want to believe in the possibilities of a technology that claims to give people powerful new tools for collective action, unsupervised by the state.
Some on the left—accelerationists such as Alex Williams and Nick Srnicek and the proponents of Fully Automated Luxury Communism prominent among them—have argued that the ends of economic justice in our time are best served by maximum automation and the elimination of work.15 Thinkers of this stripe argue that the soonest possible supplantation of human labor by cybernetic means is something close to an absolute ethical imperative. In some ways, left accelerationism is just a contemporary gloss applied to the visions of total leisure that were developed by the generations immediately preceding, in a few distinct currents.
Trucking, farming and logistics are all seething sites of research into automation, and none will likely survive very long as distinctly human fields of endeavor.
As far as industry is concerned, though—and in this instance it really is their perspective that weighs heaviest and counts most—automation also means far less elaborate technologies, like the touchscreen ordering kiosks McDonald’s began introducing into its locations in the fall of 2014. In fact, automation means anything that reduces the need for human workers, whether it’s a picking-and-packing robot, a wearable biometric monitor, a mobile-phone app or the redesign of a business process.
We barely have words for what happens when an algorithm breaks down jobs into tasks that are simple enough that they don’t call for any particular expertise—just about anybody will suffice to perform them—and outsources them to a global network of individuals made precarious and therefore willing to work for very little.
A Japan that is rapidly shrinking and aging would rather invest in developing advanced (and often specifically humanoid) robotics than admit an immigrant labor force of any significant size. There are always choices, and this is the one that Japanese society has made—but the techniques and conventions that are developed as a consequence of this choice will find purchase far beyond its shores.
If we can judge fairly from the statistics we’re offered, or the things that CEOs say in unguarded moments, automation is already sweeping across the economy at its foundations, taking up entry-level jobs and popping them one by one like blisters in a strip of bubble wrap.
For the present purposes, it seems safe to conclude that between algorithmic management and regulation, and the more than usually exploitative relations that we can see resulting from it,47 hard times are coming for those who have nothing to offer the economy but their muscle, their heart or their sex.
In the advance of automation, there will be very little that is meaningful left for anyone to do. The point will be reiterated, made again for the folks who were texting or otherwise tuned out the first time around: jobs are going away. You Better Get Ready.
All too often work cost us our health, our dreams, our lives. But it also offered us a context in which we might organize our skills and talents, it gave us some measure of common cause with others who labored under similar conditions, across all bounds of space and time, and if nothing else it filled the hours of our days on Earth.
A simple way of defining data, then, might be facts about the world, and the people, places, things and phenomena that together comprise it, that we collect in order that they may be acted upon.
A commonplace of information science holds that data, information, knowledge and wisdom form a coherent continuum, and that we apply different procedures at every stage of that continuum to transform the facts we observe into insight and awareness. There are many versions of this model, but they all fundamentally assert that we measure the world to produce data, organize that data to produce meaningful, actionable information, synthesize that information with our prior experience of the world to produce knowledge, and then—in some unspecified and probably indescribable way—arrive at a state in which we are able to apply the things we know with the ineffable quality of balanced discernment we think of as wisdom.
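The first two steps of that continuum — measuring the world into data, and organizing data into actionable information — can be made concrete with a trivial sketch. The readings and the comfort threshold here are invented for illustration:

```python
from statistics import mean

# Data: raw facts measured from the world (hourly room temperatures, in °C).
readings = [18.2, 19.1, 22.4, 25.0, 24.3, 21.7]

# Information: the same facts organized so that they can be acted upon,
# e.g. by a thermostat or a building manager.
summary = {
    "mean": round(mean(readings), 1),
    "max": max(readings),
    "exceeds_comfort": any(t > 24.0 for t in readings),
}
print(summary)  # {'mean': 21.8, 'max': 25.0, 'exceeds_comfort': True}
```

The later stages of the model — knowledge, and then wisdom — resist this kind of mechanical treatment, which is exactly the gap the passage gestures at.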
Whatever data we measure and retain with our sensors, as with our bodily senses, is invariably a selection from the far broader array available to us; perception itself is always already a process of editing and curation.
Like any of us, an algorithm will ideally be equipped with the ability to learn from its experiences, generalize from what it’s encountered, and develop adaptive strategies in response. Over time, it will learn to recognize what distinguishes a good performance from an unacceptable one, and how to improve the odds of success next time out. It will refine its ability to detect what is salient in any given situation, and act on that insight. This process is called “machine learning.”
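The learning loop described above can be sketched with a minimal perceptron, one of the oldest machine-learning algorithms: it adjusts its internal weights whenever its prediction of “good” versus “unacceptable” turns out to be wrong, improving its odds on the next attempt. The examples and feature names here are invented for illustration:

```python
# Minimal perceptron: learns a linear rule from labeled examples.
# Features: (hours_rehearsed, errors_made); label 1 = good performance.
examples = [
    ((5.0, 1.0), 1),
    ((6.0, 0.0), 1),
    ((1.0, 4.0), 0),
    ((2.0, 5.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    # Score the example against the current weights; positive means "good."
    score = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if score > 0 else 0

# Repeatedly nudge the weights on every misclassified example: this is
# the "learning from experience" the passage describes, in miniature.
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)  # -1, 0, or +1
        weights[0] += error * x[0]
        weights[1] += error * x[1]
        bias += error

print([predict(x) for x, _ in examples])  # matches the labels: [1, 1, 0, 0]
```

Everything in contemporary machine learning is vastly more elaborate than this, but the shape of the loop — predict, compare against the outcome, adjust — is the same.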
Like any other sorting algorithm, the ones used in the determination of creditworthiness always direct our attention to a subset of the information that is available.
As groups of people, each acting for their own reasons, bring these discrete capabilities together and fuse them in instrumental ensembles, we finally and suddenly arrive at the place where we must have known we were headed all along: the edge of the human. We have hauled up at the shores of a general artificial intelligence, competent to take up the world as it is, derive meaning from its play of events, and intervene in its evolution, purposively and independently.
A recent project called Next Rembrandt set out to do just this, and in at least the coarsest sense, it succeeded in its aims.2 A team of engineers and data modelers sponsored by Microsoft and the Dutch bank ING plumbed the painter’s corpus “to extract the features that make Rembrandt Rembrandt,” deriving from them parameters governing every aspect of his work, from his choice of subject and lighting angle to the precise proportions of the “typical Rembrandt eye or nose or ear.” Having crunched the data, they arrived at their “conclusive subject”—“a Caucasian male with facial hair, between the ages of thirty and forty, wearing black clothes with a white collar and a hat, facing to the right”—and then used this data set projectively, to create a portrait of someone who never existed, in the unique style of a master three and a half centuries in the ground.
Whether most of us quite realize it or not, we already live in a time in which technical systems have learned at least some skills that have always been understood as indices of the deepest degree of spiritual attainment.
It’s surely banal to describe the coming decades as a time of great beauty and greater sadness, when all of human history might be described that way with just as much accuracy. And yet that feels like the most honest and useful way I have of characterizing the epoch I believe we’ve already entered, once it’s had time to emerge in its fullness.
The truly transformative circumstances will arise not from any one technology standing alone, but from multiple technical capabilities woven together in combination.
Information is the substance of the new mobility, as it is of the new healthcare, the new urbanism, the new warfare and so on, and this affords the enterprise that has mastered information-work a near-infinite series of pivots.
The names, the logos and the shareholders might be replaced by new ones, but the colonization of everyday life by information technology, the measurement and monetization of ordinary experience, and the cementing of existing power relations would all proceed apace.
The Stacks don’t care in the slightest if you’re trans or poly or vegan or a weed smoker. In fact, they encourage the maximum possible degree of differentiation in self-expression, and are delighted to serve all markets equally.
As individuals and as societies, we desperately need to acquire a more sophisticated understanding of how technologies work in the world, and who benefits most from the way they accomplish that work.
Whenever we get swept up in the self-reinforcing momentum and seductive logic of some new technology, we forget to ask what else it might be doing, how else it might be working, and who ultimately benefits most from its appearance.