While we have had many kinds of robots (and other computer-based assistants) for many decades, their abilities have remained limited. For example, if we weigh work tasks by how much we pay to get them done, we find that humans remain far more valuable than robots, because humans get paid far more in total than robots do.
However, robot abilities have been slowly improving over many decades, and they have been improving much faster than human abilities have. So while it may take centuries, we expect robots to eventually get better than humans at almost all tasks; almost all of the money paid to do tasks will then be paid to robots (or their owners). This would be a world “dominated” by robots, at least in the sense of who does most of the work and who makes most of the concrete decisions requiring a detailed understanding of context. Humans might perhaps continue to choose key abstract, symbolic, and high-level policies.
What can we do today to influence events in a future dominated by robots? I first consider the general case of how to influence the future, and then I focus in particular on two kinds of future robots.
Legacies Are Hard
When people talk about the distant future, they usually talk about what they want to happen in the future. Among the many future scenarios that they can imagine, which ones do they most prefer? They talk more about their basic values, less about the practical constraints that limit feasible scenarios, and they talk even less about how we today might influence future outcomes. But this last neglected topic, how to influence the future, seems crucial. Why think about what we want if we cannot change it?
Imagine standing in a river, a river that eventually reaches the ocean, miles downstream. Down there at the ocean, someone you care about is standing in that river. You want to do something to the river next to you to somehow influence this other person downstream. Ideally for the better, but to start you will settle for influencing them in any noticeable way.
This actually turns out to be quite hard. You might splash some water next to you, heat it with a torch, or put a new rock in the river. But rivers tend to be stable arrangements that swallow such disturbances, quickly reverting back to their undisturbed forms. Even building a dam may result in only a temporary change, one that is reversed when the dam fills to overflowing. Perhaps you could put a bottle in the river, a bottle strong enough not to smash on the rapids along the way. Or you might try to divert the river to a new path that does not intersect the ocean at the same place. But none of these are easy.
Trying to influence the distant future is a lot like trying to influence a river far downstream. Many aspects of our world are locally stable arrangements that swallow small disturbances. If you build a mound, rains may wash it away. And if you add one more sandwich shop to your city, one extra shop may soon go out of business, leaving the same number of shops as before.
Yes, since the world has many possible stable arrangements, you might hope to “tip” it into a new one, analogous to a different stable path for a river. For example, perhaps if enough customers experience more sandwich shops for a while, they will perceive a new fashion for sandwiches, and that fashion will allow more sandwich shops to exist. But fashion can be fickle, and perhaps a new food fashion will arise that displaces sandwiches. Also, it can be hard to map the stable arrangements, and rare to find oneself near a tipping point where a small effort can make a big difference.
The set of groups that ally with each other in politics can be somewhat stable, and so you might try to tip your political world to a new set of political coalitions. Similarly, you might join a social movement to lobby to give some values more priority in social and political contexts. But if such values respond to circumstances, they can also be part of stable arrangements, and so resist change. Yes, you may have seen changes in politically expressed values recently, but these may have resulted less from changes in truly fundamental values and more from temporary fashions and responses to changing circumstances.
Some kinds of things naturally accumulate. For example, many innovations, such as academic insights and technical or organizational design choices, are general and robust enough to retain their value outside of current social contexts. If such innovations are also big and simple enough, the world may collect a big pile of them. If so, you might influence the future by adding another such innovation to the pile. However, if someone else would soon have discovered a similar innovation if you had not done so, the distant future may not look much different as a result of your contribution.
The whole world economy accumulates, in the sense that it grows, mostly via innovation. You might try to help some parts of it grow faster relative to other parts, but if there is a natural balance between parts, restoring forces may reverse your changes. You might try to influence the overall rate of growth, such as by saving more, but that might just get the same future to happen a bit earlier. You might try to save resources and commit them to a future plan. This can be an especially attractive option when, as through most of history, investment rates of return have been higher than economic growth rates. In this case you can have a larger fractional influence on the future than you can on the present, at least when you can reliably influence how your future investment returns are spent.
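The compounding argument here can be made concrete with a small sketch (the function name and the illustrative rates are my own assumptions, not from the text): when investment returns exceed economic growth, saved resources come to command a growing fraction of the total economy, so a committed saver can buy a larger share of the future than of the present.

```python
# Illustrative sketch: savings compounding at return r versus an
# economy growing at rate g. When r > g, the saver's fractional
# share of the economy rises over time.

def share_of_economy(savings, economy, r, g, years):
    """Fraction of the economy the savings could buy after `years`."""
    return savings * (1 + r) ** years / (economy * (1 + g) ** years)

today = share_of_economy(1.0, 1000.0, r=0.05, g=0.02, years=0)
later = share_of_economy(1.0, 1000.0, r=0.05, g=0.02, years=100)
assert later > today  # a larger fractional influence on the future
```

The specific rates (5% returns, 2% growth) are placeholders; the point is only the direction of the inequality, which holds whenever r exceeds g.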
For example, you might save and try to live a long time personally. Or you might make a plan and teach your children and grandchildren to follow your plan. You might create and fund a long-lived organization committed to achieving specified ends. Or you might try to take control of a pre-existing long-lived institution, like a church or government, and get it to commit to your plan.
If you think many others prefer your planned outcomes, but each faces personal incentives to pursue other objectives, you might hope to coordinate with them via contracts or larger institutions. If you think most of the world agrees with you, you might even try to make a stronger world government, and get it to commit to your plan. But such commitments can be hard to arrange.
Some kinds of things, like rocks, buildings, or constitutions, tend to naturally last a long time. So by changing such a thing, you might hope to create longer-lasting changes. Some of our longest lasting things are the ways we coordinate with one another. For example, we coordinate to live near the same locations, to speak the same languages, and to share the same laws and governments. And because it is hard to change such things, changes in such things can last longer. But for that same reason, it can be rare to find yourself in a position to greatly influence such things.
For billions of years of the biosphere, by far the most common way to influence the distant future has been to work to have more children who grow up to have more kids of their own. This has been done via not dying, weakening rivals, collecting resources, showing off good abilities to attract mates, and raising kids. Similar behavior has also been the main human strategy for many thousands of years.
This overwhelming dominance of the usual biological strategies suggests that in fact they are relatively effective ways to influence the long-run future; there appear on average to be relatively weak restoring forces that swallow small disturbances of this sort. This makes sense if our complex world is constantly trying to coordinate to match whatever complexity currently exists in it. In this case, the more you can fill the world now with things like you, the more that the world will try to adjust soon to match things like you, making more room in the distant future for things like you.
It tends to be easier to destroy than to create. This tempts us to find ways to achieve long-term aims via destruction. Fortunately, destruction-based approaches are somewhat in conflict with make-more-stuff-like-you approaches. Yes, we do try to kill our rivals, and societies sometimes go to war, but overall unfocused general destruction rarely does much to help individuals create more descendants.
Now that we have reviewed some basic issues in how to influence the future, what can we say about influencing a robot future?
Machines have been displacing humans on job tasks for several centuries, and for seventy years many of these machines have been controlled by computers. While the raw abilities of these computers have improved at an exponential rate over many orders of magnitude, the rate at which human jobs have been displaced has remained modest and relatively constant. This is plausibly because human jobs vary enormously in the computing power required to do those jobs adequately. This suggests that the rate of future job displacement may remain mild and relatively constant even if computing power continues to improve exponentially over a great many more orders of magnitude.
“Artificial intelligence” (AI) is a field of computer research in which researchers attempt to teach computers to accomplish tasks that previously only humans could do. When individual AI researchers have gone out of their way to make public estimates of the overall future rate of progress in AI research, averaged over all of the subfields of AI, their median estimate has been that human-level abilities would be achieved in about thirty years. This thirty-year estimate has stayed constant for over five decades, and by now we can say that the first twenty years of such estimates were quite wrong. Researchers who have not gone out of their way to make public estimates, but instead responded to surveys, have given estimates about ten years later (Armstrong and Sotala, 2012; Grace, 2014).
However, AI experts are much less optimistic when asked about the topics on which they should know the most: recent progress in the AI subfield where they have the most expertise. I was a professional AI researcher for nine years (1984–93), and when I meet other experienced AI experts informally, I ask them how much progress they have seen in their specific AI subfield in the last twenty years. They typically say they have only seen five to ten percent of the progress required to achieve human-level abilities in their subfield. They have also typically seen no noticeable acceleration over this period (Hanson, 2012). At this rate of progress, it should take two to four centuries for the median AI subfield to reach human-level abilities. I am more inclined to trust this latter estimate, instead of the typical forward estimate of forty years, as it is based more directly on what these people should know best.
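The two-to-four-century figure follows from a simple linear extrapolation: if a subfield covered only five to ten percent of the distance to human-level ability in twenty years, then at a constant rate the full distance takes twenty years divided by that fraction. A minimal sketch (the helper name is mine):

```python
# Linear extrapolation: observed fraction of progress over a period
# implies total time = period / fraction, assuming a constant rate.

def years_to_human_level(progress_fraction, period_years=20):
    """Total years to human-level ability at a constant rate."""
    return period_years / progress_fraction

assert years_to_human_level(0.10) == 200  # 10% in 20 years -> 2 centuries
assert years_to_human_level(0.05) == 400  #  5% in 20 years -> 4 centuries
```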
Even if it takes many centuries, however, eventually robots may plausibly do pretty much all the jobs that need doing. At that point, the overall growth rate of the economy could be far higher; the economy might double roughly every month, instead of every fifteen years as it does today. At that point, human income would have to come from assets other than the ability to work. These assets could include stock, patents, and real estate. While asset values should double about as fast as the economy, humans without sufficient assets, insurance, or charity could starve.
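To see how large a jump a monthly doubling time would be, one can convert doubling times into annual growth multiples, as in this back-of-the-envelope sketch (the helper name is mine):

```python
# An economy that doubles every T years grows by a factor of
# 2 ** (1 / T) per year.

def annual_growth_multiple(doubling_time_years):
    return 2 ** (1 / doubling_time_years)

today = annual_growth_multiple(15.0)    # ~1.047, i.e. ~4.7% per year
robots = annual_growth_multiple(1 / 12) # 2**12 = 4096x per year
assert robots > today
```

The contrast is stark: doubling every fifteen years means growing a few percent per year, while doubling monthly means the economy multiplies thousands of times each year.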
A future world dominated by robots could in principle evolve gradually from a world dominated by humans. The basic nature, divisions, and distributions of cities, nations, industries, professions, and firms need not change greatly as machines slowly displace humans on jobs. That is, machines might fit into the social slots that humans had previously occupied. Of course at the very least, industries that previously raised and trained humans would be replaced by new industries that design, maintain, and manufacture robots.
However, there could also be much larger changes in the organization of a robot society if, as seems plausible, machines are different enough from humans in their relative costs or productivity so as to make substantially different arrangements more efficient. One reasonable way to guess at the costs, productivity, and larger-scale structures of a robot society is to look at the distribution of similar features in the software that we have created and observed for many decades. While it is possible that future software will look very different from historical software, in the absence of good reasons to expect particular kinds of changes, historical software may still be our best estimate of future software. So we might reasonably expect the structure of a robot society to look like the structure of our largest software systems, especially systems that are spread across many firms in many industries.
How might one try to influence such a robot future? One straightforward approach is to accumulate resources, and entrust them to appropriate organizations. For example, if you just wanted to influence the future in order to make yourself or your descendants comfortable and happy, you might try to live a long time and keep resources to yourself, or you might give resources to your descendants to spend as they please.
Perhaps you dislike the overall nature or structure that a robot society would likely have in a decentralized world with only weak global coordination. In this case, if you and enough others felt strongly, you might try to promote large-scale political institutions, and encourage them to adopt sufficiently strong regulations. With detailed enough monitoring for violations and strong enough penalties for those found guilty, regulations might force the changes you desire. If the profits that organizations could gain from more decentralized arrangements were large enough, however, global regulation might be required.

The structures of a future robot society may plausibly result from a gradual evolution over time from the structures in the most robot-like parts of our society today. In this case, one might hope to influence future structures via our choices today of structures in computer-intensive parts of our society. For example, if one preferred a future robot society to have relatively decentralized security mechanisms, then one might try to promote such a future by promoting the development and adoption of relatively decentralized security mechanisms today. And if one feared high levels of firm concentration in a particular industry of a future robot society, one might try to promote low levels of firm concentration in that industry today.
As we have been discussing, it is possible that a future world will be filled with robots similar to the kinds of robots that we have been building for many decades. However, it is also possible to, at least for a time, fill a future with a very different kind of robot: brain emulations.
Brain emulations, also known as “uploads” or “ems,” have been a staple of science fiction and tech futurism for decades. To make a brain emulation, one takes a particular human brain, scans it to record its particular cell features and connections, and then builds a computer model that processes signals according to those same features and connections. A good enough em has very close to the same overall input-output signal behavior as the original human. One might talk with it, and convince it to do useful jobs.
Like humans, ems would remember a past, be aware of a present, and anticipate a future. Ems could be happy or sad, eager or tired, fearful or hopeful, proud or ashamed, creative or derivative, compassionate or cold. Ems could learn, and could have friends, lovers, bosses, and colleagues. While em psychological features might differ from the human average, they would usually be near the range of human variation.
The three technologies required to create ems—computing, scanning, and cell modelling—all seem likely to be ready within roughly a century, well before the two to four centuries estimated above for ordinary robots to do almost all jobs. So ems could appear at a time when there is plenty of demand for human workers, and thus plenty of demand for ems to replace those human workers.
I recently published a book, The Age of Em: Work, Love, and Life when Robots Rule the Earth (Oxford University Press, 2016), giving a detailed description of a world dominated by ems, at least regarding its early form, the one that would appear soon after a transition to an em world. Let me now summarize some of that detail.
My analysis of the early em era paints a picture that is disturbing and alien to many. The population of ems would quickly explode toward trillions, driving em wages down to near em subsistence levels, em work hours up to fill most of their waking hours, and economic doubling times down to a month or less. Most ems would be copies of less than a thousand very smart, conscientious, and productive humans. Most ems would be near a subjective peak productivity age of fifty or more, and most would also be copies made to do a short-term task and then end when that task is done.
Ems would cram into a few tall cities packed densely with hot computer hardware. Ems would leisure in virtual reality, and most ems would work there as well. Em virtual reality would be of spectacular quality, and ems would have beautiful virtual bodies that never need feel hunger, cold, grime, pain, or sickness. Since the typical em would run roughly a thousand times faster than humans, their world would seem more stable to them than ours seems to us.

Ems would often spin off copies to do short-term tasks and then end when those tasks are done. After a subjective career lasting perhaps a century or two, em minds would become less flexible and no longer compete well with younger minds. Such ems could then retire to an indefinite life of leisure at a slower speed.

The ease of copying ems would make preparation easier. One em could conceive of a software or artistic design and vision, and then split into an army of ems who execute that vision. Big projects could more often be completed on time, if not on budget, by speeding up the ems who work on lagging parts. One em could be trained to do a job, with many copies then made of that trained em. Em labor markets would thus be more like our product markets today, dominated by a few main suppliers.

Ems would be more unequal than we are, both because em speeds could vary and because longer lifespans let unequal outcomes accumulate. Ems would split by speed into status castes, with faster ems having higher status. Em democracies would probably use speed-weighted voting, and em rulers would usually run faster than their subordinates, to more easily coordinate bigger organizations. Em organizations might also use new governance methods such as decision markets and combinatorial auctions.

Each em would feel strongly attached to its clan of copies, all descended from the same original human. Em clans might self-govern, and negotiate with other clans over the legal rules to apply to disputes between them.
Clans may give members continual advice based on the life experiences of similar clan members. To allow romantic relations when there is unequal demand for male vs. female em workers, the less demanded gender may run slower, and periodically speed up to meet with faster mates. Fast ems with physical robotic bodies would have proportionally smaller bodies. A typical thousand-times-human-speed em would stand two millimeters tall. To such an em, the Earth would seem far larger. Most long-distance physical travel would be via “beam me up” electronic travel, done with care to avoid mind theft.
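The proportional-body rule above is a simple linear scaling: a body run at k times human speed is made k times smaller, so that its physical timescales match its sped-up mind. A quick check (the 1.8-meter human height is my assumption):

```python
# Proportional scaling: an em running `speedup` times human speed
# gets a body `speedup` times smaller than a human body.

HUMAN_HEIGHT_M = 1.8  # assumed typical human height, in meters

def em_body_height_m(speedup):
    return HUMAN_HEIGHT_M / speedup

# A thousand-times-speed em stands about two millimeters tall.
assert abs(em_body_height_m(1000) - 0.0018) < 1e-9
```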
Em cities would likely be inhospitable to ordinary humans, who, controlling most of the rest of the Earth, would mostly live comfortable retirements on their em-economy investments. While ems could easily buy the rest of the Earth, they would not care enough to bother, beyond ensuring energy, raw materials, and cooling for em cities. Just as we rarely kill our retirees and take their stuff, there is reasonable hope that ems might leave retired humanity in peace.
Over the longer run, the main risk to both humans and nature is probably em civilization instabilities such as wars or revolutions. Ems running a thousand times faster than humans might fit a few millennia of history into a few objective years. As slow em retirees would face similar risks, they would be natural allies helping humans promote stability in the em civilization.
Em Robot Legacies
Above we quickly discussed some of the main ways to try to influence a general robot future. How does this situation change for em-based robots?
The most obvious difference is that since each em results from scanning a particular human, particular humans can hope to have great influence over the individual ems that result from scanning them. The parents and grandparents of such humans can also hope to have a related influence. These sorts of influences are quite similar to those resulting from the typical and so-far-very-effective biosphere strategy of promoting future creatures who share many of your details.
Another big difference is that as ems are very human-like, ems can fit much more directly and easily into the various social slots in the previous human society. There is less reason to expect big immediate changes in the basic nature, divisions, and distributions of cities, nations, industries, professions, and firms when ems arrive. Because of this, the investments made today into influencing such social institutions can more plausibly last into the em era. Of course social arrangements and institutions are likely to change over time with ems, just as they would have changed over time if humans had still dominated the Earth.
We can expect that, during the em era, em robots would continue to develop the abilities of traditional non-em-based robots. Eventually such robots might become more capable than ems in pretty much all jobs. That could plausibly mark the end of the em era. It is less obvious that traditional robots would eventually displace ems than that such robots would eventually displace humans, because ems have more ways to improve over time than do humans. Even so, displacement of ems by traditional robots seems a scenario worth considering.
Compared to a scenario where humans are directly replaced by traditional robots, a scenario where ems first replace humans and then are replaced in turn by traditional robots seems to allow humans today to have a larger impact on the distant future. This is because the first scenario contains a more jarring transition in which existing social arrangements are more likely to be replaced wholesale with arrangements more suitable to traditional robots. In contrast, the second scenario holds more possibilities for more gradual change that inherits more structure from today’s arrangements.
As the em economy advances, it is likely to become gradually more successful at finding useful, larger modifications of em brains. But the search would mainly be for modifications that make the modified ems even more productive at existing em jobs and social relations. And since most modifications are likely to be small, em minds, jobs, and other social relations would gradually co-evolve into new arrangements. Larger em brain modifications would probably be accompanied by better abilities to usefully separate the parts of em brains, and better theories of how those parts work. This would probably encourage developers of traditional robots to include more subsystems resembling em brain parts, which would fit more easily and naturally into the em economy.
The net effect is that a transition from ems to traditional robots would plausibly be less jarring and more incremental, including and continuing more elements of em minds and the larger arrangements of the em society. And since an em society would also continue more of the arrangements of the prior human society, humans today and their arrangements would face a more incremental path of future change, allowing more ways for people today to influence the future, and creating a future more like the people and institutions of today.