
The Foresight Institute's Founding Vision


Dangers and Hopes

In the following chapters Drexler discusses possible dangers presented by "The Assembler Breakthrough" and strategies that could provide the benefits of the technology while avoiding the risks.

Engines of Destruction

Drexler begins the chapter on the threats presented by evolving technologies with two observations: (1) today's organisms are far from the limits set by physical law, and (2) our machines are evolving faster than we are and will likely surpass us within a few decades. Briefly citing several social problems that could arise, Drexler chooses to focus on the survival of life and liberty as the most fundamental. What follows, however, is based on the idea of assemblers (introduced above in chapter 4) capable of self-replication.

Within several years of EOC's publication, it was clear to Drexler and others that nanofactories (see above) would be the simplest and most efficient implementation of atomically precise manufacturing, that self-replicating assemblers were neither necessary nor desirable, and that deliberate misuse of advanced nanotechnology was a much greater danger than the type of accident described below. Chris Phoenix and Eric Drexler collected these arguments in a paper titled "Safe exponential manufacturing", published by the journal Nanotechnology in 2004. The full text is available as a PDF courtesy of the Center for Responsible Nanotechnology. The publication of this paper was reported in Foresight Update No. 54 on page 18, in an article titled "Nanotechnology pioneer calms fears of runaway replicators". The fact that deliberate misuse arising from malice is a much greater problem than "grey goo" created accidentally had been explicitly recognized 13 years earlier in the article "Accidents, Malice, and 'Gray Goo'" in Foresight Background 2. This distinction was recognized in an early statement of Foresight policy published in August 1991.

With the caveats noted in the previous paragraph, assemblers would be able to build all that ribosomes can build, and more. Assembler-based replicators would therefore be able to do all that living organisms can do, and more, spreading swiftly and reducing the biosphere to dust in a matter of days.

In the last few paragraphs of this section Drexler applies the logic of the superiority of the assembler to the ribosome to advanced AI systems. He notes that advanced AI systems will be able to out-think not just human individuals, but entire human societies and even humanity as a whole, and thus be able to displace us. He concludes, "… we need to find ways to live with thinking machines, to make them law-abiding citizens." Unlike the case with accidental release of replicating assemblers described in the previous paragraphs, however, this is a very real problem that has not been solved. There are no obvious ways to remove the potential threat from sufficiently advanced AI systems, or to ensure that such systems will remain friendly to humans.

In the next section of this chapter Drexler turns to the use of replicating assemblers and AI systems as weapons of power by sovereign states. He points out that states could use replicating assemblers to cheaply build vast quantities of conventional weapons, or to wage "germ" warfare with programmable, computer-controlled "germs". In the modern conception of advanced nanotechnology, that would be to use nanofactories to cheaply build vast quantities of conventional weapons or to build perverted cell repair machines to use as programmable, computer-controlled germ warfare agents. Similarly, "AI systems could serve as weapon designers, strategists, or fighters." The capabilities that both emerging technologies would confer on states to expand their military capabilities "by orders of magnitude in a brief time" would be greatly destabilizing to international order. Replicating technology thus joins nuclear war as a possible cause of human extinction.

Turning from threats to survival to threats to liberty, Drexler describes how replicators provide much more flexible weapons than do nuclear warheads, able not only to destroy, but also to "infiltrate, seize, change, and govern a territory or a world." States could use ubiquitous miniature surveillance devices equipped with AI to understand and analyze everything they hear and see. With modified cell repair machines governments could "cheaply tranquilize, lobotomize, or otherwise modify entire populations." Alternatively, advanced technology will eliminate the need for people as workers, soldiers, or bureaucrats, so totalitarian states could simply choose inexpensive genocide instead of control.

The effects of advanced nanotechnology on international security have been a topic of concern to Foresight since its founding in 1986. The Foresight Institute Public Policy page lists several papers on related topics. The role of nanotechnology in increasing surveillance by governments is addressed on the policy page and, following a brief overview of the issues, was the topic of a project Foresight pursued from 2007 to 2009: "Open Source Sensing and Data Control".

Given these potential threats to life and liberty, Drexler poses the question of what tools we can use to avoid enslavement and death, and instead achieve the utopian dream of a future of wonders that these technologies could bring.

The first tactic Drexler considers is how to make hardware more trustworthy through reliable components and systems. Although accidental grey goo is no longer a concern, reliability remains an underlying issue for advanced technology. Reliable products depend upon reliable components and reliable system design. Component failures are often due to material flaws. Atomically precise manufacturing will make atomically perfect components; however, cosmic ray damage will introduce random flaws that can cause failure of small components that are initially flawless. Nevertheless, designs that embody redundant components can continue to work even when components fail. Even greater reliability results from design diversity, in which critical functions are implemented using multiple devices made from different designs. Drexler gives the example of improving computer software by having several different programmers design programs to accomplish the same goal, and then having the programs run in parallel and vote on the answer. Redundancy entails costs, making systems bulkier, more expensive, and less efficient, but, as Drexler points out, nanotechnology will make systems smaller, cheaper, and more efficient, and thus better able to afford the costs of greater reliability through redundancy.
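The vote-among-diverse-designs idea Drexler describes can be sketched in a few lines. This is an illustrative toy, not anything from EOC itself: the `majority_vote` function and the three square-root "variants" (one deliberately flawed) are invented here to show how independent implementations outvote a single buggy design.

```python
from collections import Counter

def majority_vote(implementations, x):
    """Run several independently designed implementations of the same
    function and return the answer the majority agrees on. A single
    flawed design is outvoted, which is the reliability gain of
    design diversity that Drexler describes."""
    results = [impl(x) for impl in implementations]
    answer, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority among results: %r" % results)
    return answer

# Three hypothetical square-root routines; the third is deliberately buggy.
variants = [
    lambda x: round(x ** 0.5, 6),
    lambda x: round(pow(x, 0.5), 6),
    lambda x: round(x / 2, 6),        # flawed design
]

print(majority_vote(variants, 16))    # the two correct variants outvote the bug
```

Note the trade-off the text mentions: the system does three times the work of any single variant, but nanotechnology's cheaper, faster hardware would make that overhead affordable.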

Applying similar logic to the problem of reliable AI systems, Drexler claims that AI systems with many parts will have room for redundancy and design diversity, increasing reliability. He cites a 1981 paper from the MIT Artificial Intelligence Laboratory suggesting that AI researchers model their systems on the methods evolved by the scientific community to form, test, and discard hypotheses. He cites also an Agora system proposed by Mark Miller and himself in which many independent systems of software would interact in a market system. In this context Drexler points out that human institutions are evolved artificial systems that often solve problems that their individual members cannot. Like constitutional checks and balances in governments, we will use intelligent machines to check and balance each other.

From this general consideration of reliability of components and systems, Drexler turns to the task of identifying reliable institutions to guide democratic governments through the 'Assembler Breakthrough' (as Drexler phrases it), or in more current terminology, the advent of atomically precise manufacturing (APM) and artificial general intelligence (AGI). Writing in 1986 and presenting radical ideas that heretofore had drawn little or no attention, Drexler assumed that the 'leading force' in developing assemblers would be some organization controlled by one or more democratic governments. Citizens of democratic states are thus called on to make policy for the 'leading force'.

From June of 2000 through April of 2006, Foresight and the Institute for Molecular Manufacturing worked to develop Molecular Nanotechnology Guidelines for the safe development of atomically precise manufacturing, also known as molecular manufacturing, molecular nanotechnology, or atomically precise productive nanosystems.

Because of the focus in EOC on grey goo, Drexler's first policy priority is not to let a single replicating assembler "of the wrong kind" loose on the world. Since dangerous replicators will be simpler to design than systems to stop them, it will be necessary, Drexler argued, to contain nanotechnology while learning to control it. Suggestions for containment include isolation behind multiple walls, laboratories in space, mechanisms to count replication cycles (analogous to telomeres in eukaryotic cells), requirements for special synthetic 'vitamins' found only in laboratories, and designing replicators so that they cannot evolve. The latter should not be difficult: all modern organisms have evolved the capacity to evolve, and that capacity can simply be omitted from synthetic replicator designs.

Drexler concludes that it should be easy for the leading force to "make replicating assemblers useful, harmless, and stable." Further, making useful, safe assemblers available will reduce incentives for others to develop them independently. Drexler also introduces the concept of 'limited assemblers'—programmed to make a specific set of products and capable of only limited or no self-replication. Such assemblers could not be reprogrammed by their users for other purposes:

Machines built by limited assemblers will enable us to open space, heal the biosphere, and repair human cells. Limited assemblers can bring almost unlimited wealth to the people of the world.
—K. Eric Drexler, Engines of Creation

Presumably, the nanofactory equivalent of limited assemblers would be nanofactories that only make approved products because they only accept certified instruction sets.

To address the needs of scientists and engineers who will need freely programmable assemblers to conduct research and test designs, Drexler proposes 'sealed assembler laboratories'. A user sitting at a computer communicates through many layers of a thumb-sized device to direct nanomachinery in the center of the device to build molecular machinery to the user's specifications. Both the outermost and the innermost layers contain sensors to detect attempts to breach the security of the sealed assembler lab, either from the outside or from the inside. Any breach triggers an explosion within the thumb-sized device that instantly incinerates any device in the laboratory inside. Thus the device lets out information but not dangerous replicators or dangerous tools.

Sealed labs will let engineers build and test even dangerous devices in complete safety. Scientists and engineers will build and test new materials and new devices, perhaps early cell repair machines. After a public safety review of designs proposed as a result of experiments in sealed assembler labs, the design could be approved, and limited assemblers programmed to provide the new devices. Sealed labs will enable the whole of society to participate in developing nanotechnology. Note: Anyone doubting the ability of the public at large to contribute creative designs to emerging technology should check out a story we posted in 2012 in which gamers playing an online game developed from a protein folding program outperformed experts in redesigning a protein.

Drexler suggests this widespread participation could help to prepare for the time when some other group independently learns to build assemblers and applies them "to build something nasty." To delay that happening, Drexler proposes that, once limited assemblers and sealed assembler labs are available, information about the transition from bulk technology to assembler technology be destroyed. If the process were complex enough, perhaps no individual would know more than a small fraction of how it was done, making it difficult for another group to repeat the process independently. [This presupposes that the whole process had been done under extreme secrecy.]

Such tactics, however, would only delay the independent development of advanced nanotechnology, be it assemblers or nanofactories. Only a totalitarian government might be able to stop independent development indefinitely. Therefore, Drexler next considers how we might learn to live with a world in which untrustworthy replicators are present. He proposes 'active shields': nanomachines that function like white blood cells to fight all sorts of dangerous replicators, be they bacteria, viruses, or dangerous nanomachines. Reliable active shields would have to be able to cope with the entire range of threats that an intelligent enemy could devise. Perhaps successful designs could be evolved by opening design competitions played out in sealed assembler labs to a wide range of professionals, hobbyists, gamers, hackers, and automated engineering systems. If an arms race should develop between rival forces, one attacking with aggressive nanomachines, and one building active shields, the possession of automated engineering systems to accelerate design would be a great advantage. Drexler writes:

Nanotechnology and artificial intelligence could bring the ultimate tools of destruction, but they are not inherently destructive. With care, we can use them to build the ultimate tools of peace.
—K. Eric Drexler, Engines of Creation

For further discussion of limited assemblers and active shields, see "Regulating Nanotechnology Development", written by David Forrest in 1989.

Strategies and Survival

The next chapter, on strategies to deal with the above threats, opens with the observation that the technology race, driven by evolutionary pressures, is leading toward great dangers. The first strategy Drexler considers for dealing with those dangers is restraining research and development. He concludes this would be ineffective, or worse. Personal restraint merely means that, in a diverse world, others will proceed. Laws passed in democracies to suppress research will simply allow a more repressive regime to become the leading force. Drexler writes:

This deserves emphasis. Without some novel way to reform the world's oppressive states, simple research-suppression movements cannot have total success. Without a total success, a major success would mean disaster for the democracies. …
—K. Eric Drexler, Engines of Creation

The next strategy Drexler considers and rejects is to press for a verifiable, worldwide ban on nanotechnology development. Conceding that such a strategy might be useful to control nuclear weapons, he argues that nanotechnology and artificial intelligence are different for at least two reasons: (1) while nuclear weapons require certain isotopes of rare metals and are thus distinct from other activities, there is no natural dividing line that identifies small advances in biochemistry that could lead to nanotechnology, and likewise modern computer technology leads in small steps to artificial intelligence; (2) while nuclear weapons systems are large and fairly easy for inspectors to find, any small laboratory could be on the verge of a breakthrough in nanotechnology, and likewise any hacker could be on the verge of a breakthrough in artificial intelligence.

If global suppression-of-research agreements would not be practical, global suppression by force would be even worse. It would require one power to conquer and occupy other powers armed with nuclear weapons. Even if this were possible, who would trust such a power to police its own advances, much less to maintain thorough, unending vigilance over the entire world? Thus strategies for stopping research all seem doomed to fail. Instead, Drexler argues, we will need selective, targeted delay to postpone threats until we can prepare for them. We need to guide advance, not halt it.

Drexler elaborated his thoughts on how to deal with the potential risks of self-replicating atomically precise manufacturing (nanotechnology) and of genuine artificial intelligence (AI) in a "Dialog on Dangers" featuring a "Pro-Progress Advocate," a "Pro-Caution Advocate," and a "Moderator" in Foresight Background 3 published in 1988.

Fourteen years after Engines of Creation was published, computer scientist Bill Joy, who had been a speaker at the First Foresight Conference on Nanotechnology in 1989, suggested in an essay published in Wired in 2000 that nanotechnology was so dangerous we should choose to relinquish developing it. His essay and responses to it were reported in Foresight Update: "Oh, Joy! A Media Watch Special Report" and "Joy Joins Discussion of Foresight Guidelines at May Gathering". These issues were also extensively discussed on Foresight's blog Nanodot. The vigorous response to Joy's essay emphasized the arguments Drexler made in EOC, summarized above, that attempted global suppression of nanotechnology development would not succeed. A particularly thorough analysis by Robert A. Freitas Jr., published on Foresight's web site shortly after Joy's essay appeared ("Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations"), made it clear that vigorous monitoring would permit early detection of and effective countermeasures against grey goo, underscoring the argument that attention should be focused on preparing for the emergence of advanced nanotechnology.

Turning from suppression of research to the opposite strategy, Drexler warns that "an all-out, unilateral effort" would initiate a destabilizing arms race since it would be very difficult to keep an effort of the necessary scale secret, especially in a democracy. Upon learning of the effort to develop nanotechnology or AI, a powerful adversary might choose to attack with nuclear weapons while it still had the capability to do so. Because of the speed with which the development of advanced nanotechnology (self-replicating atomically precise manufacturing) or the development of genuine AI could alter a strategic military balance, a stable balance of power seems an unrealistic hope.

Drexler therefore recommends cooperation between the democracies and the Soviet block (EOC was published five years before the dissolution of the Soviet Union) as the only viable strategy. He suggests that working together on a shared goal will blur the adversarial nature of the relationship. Close cooperation would also enable each side to verify that the other was adhering to agreements by providing open access to the other's facilities. However, cooperation could still lead to a balance of power in which each side had an incentive to shoot first to eliminate the other. Further, there is the problem of other powers not included in the cooperative agreement, or of breakthroughs made by hobbyists, or of accidents.

These considerations lead Drexler to recommend a synthesis of strategies, based on the proposal that the democracies, leading the world in most areas of science and technology, are the 'leading force' he has described and should be able to maintain that lead. He further assumes that they will be able to use the strategies he has proposed (sealed assembler labs, limited assemblers, and maintaining secrecy about the road from bulk technology to advanced nanotechnology). Finally, he reasons that the time bought by these strategies will allow active shields to be developed to protect against nanotechnology threats. For Drexler, these thoughts define a goal. He suggests a two-part strategy to reach it.

Before describing Drexler's proposed two-part strategy, a brief pause is perhaps in order to consider how well the goal proposed in 1986 fits into the world of 2016. Given the changes in the world scene during the past 30 years, and the current state of science and technology relevant to the development of nanotechnology and AI, one might question whether this strategy still seems feasible. In particular, given the development of the Internet, is it at all reasonable to expect that these technologies will be developed by large, secret government programs? Given the trajectory of developing the enabling technologies during the past 30 years, should we expect the scenario of very rapid development of advanced technologies following a crucial breakthrough, described by Drexler, or more of the incremental development that we have already seen these past 30 years?

Returning to Drexler's two-part strategy, the first part concerns the cooperating democracies. It requires maintaining a lead in the development of nanotechnology and AI comfortable enough to permit proceeding with caution, developing trustworthy institutions to manage both the initial breakthroughs, and the development of active shields.

The second part applies to the behavior of the cooperating democracies toward "presently hostile powers"—maintaining the initiative while minimizing the threat that these powers perceive. He proposes that the institutions that the democracies must develop for the first part of the strategy must also be stable and trustworthy enough that even our opponents would have some measure of confidence in them. This would be accomplished by making those institutions as open as possible, and by offering a role for Soviet cooperation. Conceivably, this could lead to a public Soviet stake in our joint success. At the very least, we should avoid threatening any government's control over its own territory so that it has no incentive to attack.

Because active shields are the heart of Drexler's long-range plans for dealing with the dangers posed by the development of nanotechnology, he describes the nature of active shields in the context of the question of whether defense can be kept separate from war-making ability. Traditionally, defense has required weapons that are also useful for offense: warriors who defend walls against invaders can also be used to invade an enemy. Drexler puts forward the example of the "Star Wars" anti-missile defense system proposed by US President Reagan, and how it could, in principle, be made into a practical defense while being fundamentally incapable of offense. The key would be to automate the system rather than keep it under human control. It would be programmed to fire only at massive flights of missiles that look like an attempted first strike; it would not fire at targets other than missiles, and it would not fire on isolated rockets on peaceful space missions. Most importantly, it would not discriminate between sides: it would be programmed to destroy any apparent first strike. Such a system could be called an active shield because it would act by defending both sides while menacing neither. Active shields could thus diminish the arms race. Such systems could be built unilaterally by one side, with multilateral inspection allowing other sides to verify what the systems could and could not do, or they could be built jointly, with technology transfer limited to the minimum needed for cooperation and verification.

A foundational component of the above proposals is the central role of openness. Without necessarily imparting how-to-do knowledge, there must be enough openness to evaluate and verify essential features. An in-depth exploration of the usefulness of openness, applied specifically to the case of President Reagan's Strategic Defense Initiative, is supplied by the essay "The Weapon of Openness" written by Foresight Advisor Arthur Kantrowitz and presented in 1989 as Foresight Background No. 4.

In the last section of this chapter, having established that confronting the potential threats of advanced nanotechnology and genuine AI will require sophisticated and complex trustworthy institutions, Drexler considers how to "cope with the emergence of a concentration of power greater than any in history." He proposes that the beginnings of a solution can be found in current institutions like the free press, the research community, and activist networks. Success will require "that a growing community of people strive to develop, publicize, and implement workable solutions—and that they have a good and growing measure of success." Catalyzing the development of a growing community of people to prepare for powerful emerging technologies has been a prime goal of the Foresight Institute since its origin. The year following the publication of EOC, Foresight Background 0, Rev. 5 presented Drexler's elaboration of the importance of building groups to address these issues: "Postscript for Engines of Creation: The Need for Foresight Groups".

For institutions to be trustworthy and for a community to prepare for the emergence of powerful, transformative technologies, it becomes crucial to improve institutions for judging important technical facts. How can the institutions that we currently depend upon—the free press, the scientific community, and the courts—be improved?

Finding the Facts

The next chapter suggests how to improve the way society tries to understand technology. Due to lack of knowledge, the public generally leaves technical judgments to technical experts. Drexler cites several cases in which flawed judgments commonly result from the personal and institutional preferences and conveniences of bureaucrats and others. He notes that some authors consider it inevitable that complex technologies will become increasingly totalitarian because neither voters nor legislators can understand them. Drexler challenges these opinions, maintaining that a democratic framework can be found to use experts to clarify issues without "giving them control of our lives".

Currently technical deliberations are flawed by partisan feuding among experts. Opposing groups recruit and pay opposing experts, who claim credibility on the basis of who they are rather than how they work. Quiet experts are lost in a welter of opposing advertising, lobbying, and media campaigns. Established experts lose credibility as demagogues join the battle. The method that societies have evolved for judging facts about people suggests a method to judge facts about technology.

Courts use due process to judge facts about people: the sides publicly confront each other over specific allegations, and debate proceeds before an impartial jury, refereed by a judge who enforces the rules. Drexler proposes that due process should also be of use in judging technical facts. He notes that the scientific literature already uses a form of due process: specific scientific statements are argued back and forth in public refereed articles, with editors in the role of judges. Journals compete with each other for prestige, readership, and papers, and frequently reach eventual consensus. However, journals evolved to meet the limits of printing and the needs of academic science. Conferences and research networks also have their limitations. Further, journals often ignore technical questions of public policy importance or of limited intrinsic scientific interest. These institutions evolved to advance science, not to judge facts for policy makers.

Drexler proposes fact forums, in which each side would start by listing what it sees as the key facts. A referee would seek points of agreement through back-and-forth argumentation, cross-examination, and negotiation. A knowledgeable technical panel would then write opinions describing what seems to be known and what is still uncertain. The output of the fact forum would be limited to statements of fact, free of policy recommendations.

Experts who serve on the panel must not be directly involved in the dispute, but must be knowledgeable in related fields to be competent judges of the disputants' arguments. Drexler credits the fact forum concept to Foresight Advisor Arthur Kantrowitz (1913-2008), a member of the National Academy of Sciences, who formulated the concept of a "board of technical inquiry" based upon his experience as an advisor to NASA: his arguments recommending reaching the Moon by building several small rockets and assembling the components in orbit were never answered, because debate had been closed by those committed to building a new generation of giant rockets.

As Drexler notes in the following pages, fact forums, often called science courts, are resisted "because knowledge is power, and hence jealously guarded." Such resistance may delay implementation of fact forums by government agencies. However, since fact forums need no legal powers, they could gradually gain credibility through decentralized development by universities or other entities, outflanking "existing bureaucracies and entrenched interests." As fact forums based on due process gain credibility, the public will come to expect that any advocate with a good case will agree to defend that position in public. In turn, more efficient ways to identify facts with fewer distractions will allow public debate to focus on choosing a path forward through emerging transformative technologies to a world worth living in.

The Network of Knowledge

To further help society learn faster to prepare for the emergence of transformative technologies, the following chapter turns from fact forums to a technology-based "network of knowledge". The problem is that our current information systems make it difficult to find, file, organize, spread, or correct information. Therefore, shared knowledge is "relatively scarce, incorrect, and disorganized." To find a technological solution, Drexler begins with a 1945 proposal by Vannevar Bush—a desk-sized collection of microfilm and mechanisms that would display stored pages and let the user note relationships among them. With the advent of inexpensive computers, the dream was carried on by Theodore Nelson and his Xanadu hypertext system—a computer-based system of text linked in many directions, not just a one-dimensional chain from a document to its references. [This and related proposals originated in the early days of the personal computer, a decade before the advent of the World Wide Web.]

The core of the Xanadu system would be a computer network able to store both documents and links between documents, in which documents could be books and articles of any type, news releases, programs, music, or movies. Users would be able to link any part of any document to any part of any other document, and the link would show both in the originating document and in the target document, allowing a reader of either document to display the document at the other end of the link. Further, the system would track revisions as each document was modified, showing what changes were made when. Such a computer network would mature into a world electronic library. Hypertext will allow us to weave our knowledge into more coherent wholes to better represent reality. To make information gathering and organization more effective, Xanadu would let anyone publish and would pay authors whenever readers use their material, rewarding those who provide what others want. On a hypertext system, comments by readers and listeners will be easy to make and easy to find. Comments by readers will help other readers sort knowledge from garbage. Hypertext will allow readers to find documents that link to the one they are reading. "This means a breakthrough: it will subject ideas to more thorough criticism, making them evolve faster." Readers will be able to quickly and easily identify ideas that have been decisively refuted or retracted by their authors. Conversely, ideas that have survived all known criticisms will gain credibility.
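The distinguishing feature described above, links visible from both ends, can be sketched with a small data structure. This is a minimal illustration, not Xanadu's actual design; the class, method names, and document IDs are all invented here. The key point is that every link is recorded in two indexes, so a reader of the target document can discover who links to it.

```python
from collections import defaultdict

class HypertextStore:
    """Minimal sketch of bidirectional linking: a link from one document
    to another is visible from BOTH ends, unlike one-way web hyperlinks."""

    def __init__(self):
        self.outgoing = defaultdict(list)  # doc -> links it originates
        self.incoming = defaultdict(list)  # doc -> links that target it

    def add_link(self, src_doc, dst_doc, note=""):
        # Record the link in both indexes, so either endpoint can find it.
        link = (src_doc, dst_doc, note)
        self.outgoing[src_doc].append(link)
        self.incoming[dst_doc].append(link)

    def links_to(self, doc):
        """Every document that links to the one being read -- the feature
        the text calls a breakthrough for criticism and credibility."""
        return [src for src, _, _ in self.incoming[doc]]

store = HypertextStore()
store.add_link("critique.txt", "engines-ch14.txt", note="disputes a claim")
store.add_link("review.txt", "engines-ch14.txt", note="supports a claim")
print(store.links_to("engines-ch14.txt"))  # ['critique.txt', 'review.txt']
```

The ordinary web stores only the `outgoing` half, which is why back-links, and with them easy discovery of criticism and refutations, never materialized as EOC envisioned.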

Hypertext will represent knowledge in a more natural way than is possible now (that is, in 1986)—as an unbroken web of pieces of knowledge linked by association, as our memories are. It will become easier to compare competing worldviews, judge ideas faster and better, adjust our thinking to the coming revolutions in technologies, and thus strengthen our foresight. In particular, the absence of objections to an idea will readily indicate an absence of known holes in the proposal. Conversely, by making holes more visible, hypertext will encourage people to fill them. Hypertext will thus build warranted confidence by making flaws easier to expose.

Drexler points out that hypertext, like all powerful tools, could also be used to do harm. He acknowledges that hypertext could help governments keep track of citizens, but points out that it could also help citizens keep tabs on government databases. A major challenge will be to extend traditional guarantees of free speech to new media.

Written half a decade before the World Wide Web was proposed, and nearly a decade before it became widespread, EOC envisioned the gradual growth of hypertext into a world library. Drexler expressed confidence that hypertext would not only greatly increase the quantity of information and the ease of access to it, as the invention of the printing press had, but would also improve the quality of information available, contributing to the increase in intelligence necessary to survive the revolutions that atomically precise manufacturing and AI will bring. Drexler subsequently expanded on these ideas in an essay, "Hypertext Publishing and the Evolution of Knowledge".

The importance of hypertext publishing in preparing for transformative emerging technologies was central to Foresight's founding vision and its early efforts. A 1989 conference honoring Eric Drexler included a progress report on Project Xanadu and a presentation on an early hypermedia image access system. By 1997 it was clear that the newly popular World Wide Web was a partial answer to the need for a hypertext publishing system, and an article by Drexler in Foresight Update explained why the WWW was only half a solution and launched an effort to get the missing functions incorporated into web standards: "Call to Action: Foresight Web Enhancement Project". Foresight's web enhancement project ran for about two years and produced the CritLink public annotation server, which remains available for use.

In EOC Drexler envisioned hypertext publishing as an effective tool in arguing the feasibility of advanced nanotechnology enabling atomically precise manufacturing, and the necessity of preparing for the challenges this emerging technology (and also advanced artificial intelligence) will present. Arguing for the feasibility of advanced nanotechnology/molecular manufacturing did in fact occupy much of Foresight's efforts during its first two decades. Without the availability of a complete hypertext publishing system, Foresight has used the WWW and other media to conduct these debates. For example, a 1996 article in Scientific American presented a very negative view of a meeting on nanotechnology held by Foresight the previous November, provoking a vigorous response and debate on the web. Another debate on a similar topic—the policy of the US National Nanotechnology Initiative with respect to molecular manufacturing—between Eric Drexler and Nobel laureate chemist Richard Smalley, published in Chemical & Engineering News in 2003, was reported in Foresight Update the following month.

In other efforts to guide public policy in the US, in 1992 Eric Drexler testified before the US Senate committee chaired by then-Senator Al Gore on molecular nanotechnology as a technology for a sustainable world (written testimony prepared in advance, oral testimony). In 2003 Foresight and its sister organization, the Institute for Molecular Manufacturing, presented to the White House Office of Science and Technology Policy a white paper written by Neil Jacobstein, Ralph Merkle, and Robert Freitas titled "Balancing the National Nanotechnology Initiative's R&D Portfolio". Also in 2003, Foresight Co-Founder Christine Peterson testified before the Committee on Science of the U.S. House of Representatives as it considered the Nanotechnology Research and Development Act of 2003: "Molecular Manufacturing: Societal Implications of Advanced Nanotechnology".

Worlds Enough, and Time

In the final chapter of EOC Drexler considers what kind of world we might move toward if we develop molecular manufacturing/atomically precise manufacturing and automated engineering via artificial intelligence, and wisely use fact forums and hypertext publishing to avoid annihilation.

Wise use of these technologies could bring abundance for all. In a note to the main text, Drexler confronts the Malthusian caveat to the prospect of unlimited abundance, and proposes "Inheritance Day"—the one-time equal division of the resources of space. This would provide a fair and generous distribution of vast resources while ensuring that in the future those who responsibly limited reproduction did not lose out to those who chose unlimited exponential reproduction.

In looking for goals that seem appealing and feasible, Drexler proposes "an open future of liberty, diversity, and peace", with active shields to secure peace. Consequences for daily life could include a self-cleaning environment, growing any kind of food in the home without killing anything conscious and feeling, and enjoying full-immersion virtual reality. Some may choose bodily changes or enhancements beyond simple perfect health and indefinitely long lives with full youthful vigor. Large factories and bureaucracies could be replaced by small, self-sufficient communities. Large, complex worlds could be created not only on Earth, but also from the abundant resources of space—worlds in our solar system alone with a total land area a million times that of Earth. Diverse groups would be able to form and experiment with almost any society they wish, as long as their wishes do not include dominating everyone else.

Our problem today is thus not how to build utopias, but rather how to secure a chance to try. Drexler argues that the earlier we start preparing for the challenges that advances in nanotechnology and AI will bring, the greater the chances that good ideas, favorable public opinion, and wise policy will be available when the challenges arrive, perhaps with the speed of a crisis. This perspective informed Foresight's early interest in developing hypertext publishing technology and using the WWW to inform opinion on nanotechnology, as referenced above.

Drexler cites reasons why we might fail to achieve the future that he points to:

Despite the broad appeal of an open future, some people will oppose it. The power-hungry, the intolerant idealists, and a handful of sheer people-haters will find the prospect of freedom and diversity repugnant. The question is, will they shape public policy? Governments will inevitably subsidize, delay, classify, manage, bungle, or guide the coming breakthroughs. The cooperating democracies may make a fatal error, but if they do, it will likely be the result of public confusion about which policies will have which consequences.
—K. Eric Drexler, Engines of Creation

Drexler especially stresses that ideas like replicating assemblers (nanofactories that can make more nanofactories), true artificial intelligence, cell repair machines, fact forums, hypertext publishing, and active shields form a complex set of interrelated memes. Spread of some memes without connection to related memes can sow misunderstanding and conflict. Many of the possible confusions about which Drexler cautions can in fact be discerned in the history of the past 30 years. As of 2016 there is in fact no large US Government program to develop atomically precise manufacturing, although some recent encouraging developments can be seen here and here. Nevertheless, preparations are clearly a decade or two behind where Drexler hoped in 1986. A mature and widespread conversation on the various related facets of the emergence of transformative technologies remains to be established. Drexler's 1986 vision was Foresight's founding vision, and developing the conversation he envisioned, in all the aspects that must be tied together, remains Foresight's purpose. From Drexler's concluding remarks:

If we push in the right directions — learning, teaching, arguing, shifting directions, and pushing further — then we may yet steer the technology race toward a future with room enough for our dreams.
—K. Eric Drexler, Engines of Creation


Foresight materials on the Web are ©1986–2024 Foresight Institute. All rights reserved. Legal Notices.