
Unbounding the Future:
the Nanotechnology Revolution


Chapter 12

Safety, Accidents, and Abuse

Some truisms: Almost any technology is subject to use, misuse, abuse, and accidents. The more powerful a technology is when properly used, the worse it is likely to be when abused. Any powerful technology in human hands can be the subject of accidents. Nanotechnology and molecular manufacturing will be no exception. Indeed, if molecular manufacturing replaces modern industry, and if its nanotechnological products replace most modern technologies, then most future accidents will have to involve nanotechnology.

Another truism: In a diverse, competitive world, any reasonably-inexpensive technology with enormous commercial, medical, and military applications will almost surely be developed and used. It is hard to envision a scenario (short of the collapse of civilization) in which nanotechnology will not make its appearance; it seems inevitable. If so, then its problems, however tough, must be dealt with.

Like trucks, aircraft, biotechnology, rockets, computers, boots, and warm clothes, nanotechnology has the potential for both peaceful and aggressive uses. In peaceful uses (by definition), harm to people occurs either by accident or as an unintended consequence. In aggressive uses, harm is deliberate. In a peaceful context, the proper question to ask is, "Can fallible people of goodwill, pursuing normal human purposes, use nanotechnology in a way that reduces risk and harm to others?" In an aggressive, military context, the proper question is, "Can we somehow keep the peace?" Our answer to the first will be a clear yes, and to the second, an apprehensive maybe.

Throughout this discussion, we assume that most people will be alert in matters concerning their personal safety, and that some will be alert in matters concerning world safety. During the 1970s, people awakening to the new large-scale, long-term problems of technology often felt isolated and powerless. They naturally felt that technology was out of their control, in the hands of shortsighted and irresponsible groups. Today, there are still battles to be fought, but the tide has turned. When an obvious concern arises regarding a new technology, it is now much easier to get a hearing in the media, in the courts, and in the political arena. Improving these mechanisms for social vigilance and the political control of technology is an important challenge. Current mechanisms are imperfect, but they can still give a big push in the right directions.

Though we assume alertness, alertness can be a scarce resource. The total amount of concern and energy available for focusing on long-term problems is so limited that it must be used carefully, not squandered on problems that are trivial or illusory. Part of our aim in this chapter is to help sort out the issues raised by nanotechnology so that attention can be focused on problems that must be solved, but might not be.

The next few sections deal with accidents of conventional sorts, where safety benefits are obvious. Later sections discuss more novel problems, some tough enough that we have no good answers.

Safety in Ordinary Activities

As countries have grown richer, their people have lived longer despite pollution and automobile accidents. Greater wealth means safer roads, safer cars, safer homes, and safer workplaces. Throughout history, new technologies have brought new risks, including risks of death, injury, and harm to the environment, but prudent people have only accepted new technologies when they offered an improved mix of risks and benefits. Despite occasional dramatic mistakes, the historical record says that people have succeeded in choosing technologies that reduce their personal risks. This must be so, or we wouldn't be living longer.

Molecular manufacturing and its products should continue this trend, not as an automatic consequence, but as a result of continued vigilance, of people exercising care in picking and choosing which technologies they allow into their daily lives. Nanotechnology will give better control of production and products, and better control usually means greater safety. Nanotechnology will increase wealth, and safety is a form of wealth that people value. Public debate, product testing, and safety regulations are standard parts of this process.

Home Safety

In common home accidents, a dangerous product is wrongly applied, spilled, or consumed. Homes today are full of corrosive and toxic materials, for cleaning drains, dissolving stains, poisoning insects, and so forth. All too often, children drink them and die. With advanced technology, none of these tasks will require such harsh, crude chemicals. Cleaning could be performed by selective nanomachines instead of corrosive chemicals; insects could be controlled by devices like ecosystem protectors that know the difference between a cockroach and a person or a ladybug. There will doubtless be room for deadly accidents, but with care and hard work, it should be possible to ensure that nanotechnologies for the home are safer than what they replace, saving many lives.

It is, of course, possible to imagine safety nightmares: nanotechnology could be used to make products far more destructive than anything we've seen because it could be used to extend almost any ability further than we've seen. Such products presumably won't be commonplace: even today, nerve gas would make a potent pesticide, but it isn't sold for home use. Thinking realistically about hazards requires common sense.

Industrial Safety

We've already seen how post-breakthrough technologies can eliminate oil spills by eliminating oil consumption. A similar story could be told of almost any class of industrial accident today. But what about accidents—spills and the like—with the new technologies? Rather than trying to paint a picture of a future technology, of how it could fail and what the responses could be, it seems better to try a thought experiment. What could be done to deal with oil spills, if oil were still in use? This will show how nanotechnologies can be used to cope with accidents:

If there were a spill and oil reached the shore, advanced nanomechanisms could do an excellent job of separating oil from sand, removing oil from rocks, and cleaning crude oil from the feathers of birds and the feathery legs of barnacles. Oil contamination is a pollution problem, and nanotechnology will be a great aid in cleaning up pollution.

But why should the oil reach the shore? Economical production would make it easy to stockpile cleanup equipment near all the major shipping routes, along with fleets of helicopters to deliver it at the first distress call from a tanker. Oil cleanup equipment built with nanotechnology could surely do an excellent job of scooping oil from the water before it could reach the shore.

But why should the oil leave the tanker? Economical production of strong materials could make seamless hulls of fibrous materials far tougher than steel, with double, triple, or quadruple layers. Smart materials could even make punctures self-sealing. Hulls like this could be run into rocks at highway speeds without spilling oil.

But why should anyone be shipping crude oil across the sea? Even if oil were still being pumped (despite inexpensive solar energy and solar-derived fuels), efficient molecule processing systems could refine it into pure, fluid fuels at the wellhead, and inexpensive tunneling machines could provide routes for deeply buried pipelines.

Any one of these advances would shrink or eliminate today's problem with oil spills, and all of them are feasible. This example suggests a general pattern. If nanotechnology can provide so many different ways to avoid or deal with an oil spill—one of the largest and most environmentally destructive accidents caused by today's industry—it can probably do likewise for industrial accidents in general.

The most direct approach is the most basic: the elimination of anything resembling today's bulk industrial plants and processes. The shift from messy drilling activities and huge tankers to small-scale distributed systems based on solar cells is characteristic of the style in which nanotechnology can be used. The chemical industry today typically relies on plants full of large, pressurized tanks of chemicals. Not surprisingly, these occasionally spill, explode, or burn. With nanotechnology, chemical plants will be unnecessary because molecules can be transformed in smaller numbers, as needed and where needed, with no need for high temperatures, high pressures, or big tanks. This will not only avoid polluting by-products but also reduce the risk of accidents.

Medical Safety

Medicine can be safer too. Drugs often have side effects that can do permanent damage or kill. Nanomedicine will offer alternatives to flooding the body with a possibly toxic chemical. Often, one wants to affect just one target: just the stomach, or perhaps just the ulcer. An antibiotic or antiviral treatment should fight specific bacteria or viruses and not damage anything else. When medicine achieves the sophistication of immune machines and cell-surgery devices, this will become possible.

But what about medical accidents and side effects? Molecular manufacturing will make possible superior sensors to tell medical researchers of the effects of a new treatment, thereby improving testing. Better sensors will also help in monitoring any negative effects of a treatment on an individual patient. With care, only a few cells would be damaged and only small concentrations of toxic by-products would be produced before this was noticed and the treatment corrected.

The resources of nanotechnology-based medicine would then be available for dealing with the problem. With biostasis techniques available, even the worst medically induced illnesses could be put on hold while a treatment was developed. In short, serious medical mistakes could be made far rarer, and most mistakes could be corrected.

The conclusion that follows from these examples of oil spills, chemical plants, and the effects of medical treatments is straightforward. Today our comparative poverty and our comparative technological incompetence press us in the direction of building and using relatively dangerous and destructive devices, systems, and techniques. With greater wealth and technological competence, we will have the option of accomplishing what we do today (and more) with less risk and less environmental destruction: in short, being able to do more, and do it better.

With better-controlled technologies, and with an ample measure of foresight and concern, we will even be able to do a better job of recovering from mistakes. It won't happen automatically, but with normal care we can arrange for our future accidents to be smaller and less frequent than those in our past.

Extraordinary Accidents

The previous section discussed ordinary accidents that would occur during the use of nanotechnology by generally responsible, yet fallible, human beings. Nanotechnology also raises the specter, however, of what have been termed "extraordinary accidents": accidents involving runaway self-replicating machines. One can imagine building a device about the size of a bacterium but tougher and more nearly omnivorous. Such runaways might blow like pollen and reproduce like bacteria, eating any of a wide range of organic materials: an ecological disaster of unprecedented magnitude—indeed, one that could destroy the biosphere as we know it. This may be worth worrying about, but can this happen by accident?

How to Prepare a Big Mistake

The so-called "Star Trek scenario" (named after an episode of Star Trek: The Next Generation that featured runaway "nanites") is perhaps the most commonly imagined problem. In this scenario, someone first invests considerable engineering effort in designing and building devices almost exactly like the one just described: bacterial-sized, omnivorous, able to survive in a wide range of natural environments, able to build copies of themselves, and made with just a few built-in safeguards—perhaps a clock that shuts them off after a time, perhaps something else. Then, accidentally, the clock fails, or one of these dangerous replicators builds a copy with a defective clock, and away we go with an unprecedented ecological disaster.

This would be an extraordinary accident indeed. Note well, though, that this accident scenario starts with someone building a highly capable device that is almost disastrously dangerous, but held in check by a few safeguards. This would be like wiring your house with dynamite and relying on a safety catch to protect the trigger: a subsequent explosion could be called an accident, but the real problem isn't the failure of the safety mechanism; it's the decision to install the dynamite in the first place.

Do we need to build nanotechnological dynamite? It's worth considering just how little practical incentive there is for anything even resembling the dangerous replicator just described. (Note that our topic here is accidents; deliberate acts of aggression are another matter.)

How to Avoid It

With our present technology, which is simpler to build—a car that runs on gasoline, or one that forages for fuels in the forest? A foraging car would be very hard to design, cost more to manufacture, and have more parts to break down. The situation is similar with nanotechnology.

Ralph Merkle of Xerox Palo Alto Research Center discussed this issue at the First Foresight Conference on Nanotechnology. He explains, "It's both uneconomical and more difficult to design a self-replicating system that manufactures every part it needs from naturally occurring compounds. Bacteria do this, but in the process they have to synthesize all twenty amino acids and many other compounds, using elaborate enzyme systems tailored specifically for the purpose. For bacteria facing a hostile world, the ability to adapt and respond to a changing environment is worth almost any cost, for lacking this ability they would be wiped out.

"But in a factory setting, where adequate supplies of all the needed parts are provided, the ability to synthesize parts from scratch is not only unneeded, it consumes extra time and energy, and produces excess waste. Even if we could design artificial self-replicating systems as flexible as existing natural ones, an inflexible and rigid system is better adapted to the controlled factory setting in which it will find itself than a more complex, more adaptable, less efficient design."

What is more, the Desert Rose Industries scenario showed how an expandable factory setup could operate with no self-replicating machines at all: molecular manufacturing doesn't require them. If they are used for some purpose, they will most likely resemble automobiles in their finicky requirements. A self-replicating molecular machine built for industrial purposes (and made as simple as possible) would float in a container of specially selected chemicals. As with the automobile, the best chemicals to use will probably be chemicals not commonly found in nature, and it would be easy to make that a design rule: Never make a replicator that can use an abundant natural compound as fuel.

If we follow this rule, the idea of a replicator "escaping" and replicating in the wild will be as absurd as the notion of an automobile going feral and refueling itself from tree sap. Whether for replicators or cars, to design a machine that could operate in the wild would not be a matter of a flick of the draftsman's pen, but an intense, sustained research-and-development effort focused on that objective. Crashes and explosions occur in machinery by accident, but complex new capabilities don't.

A simple psychological error frequently occurs when someone first hears about nanotechnology, and hears mention of "molecular machines," and "replicators," and "nanocomputers," and "nanomachines that operate in nature." The error is this: The person makes a single new mental pigeonhole for "nanotechnology," throws everything into it, and stirs. After some mental fermentation, the result is the mythical nanomachine that does everything: it's a replicator, it's a supercomputer, it's a Land-Rover, it slices, it dices—and on reflection, this imaginary nanomachine sounds uncontrolled and dangerous. With enough effort, a do-it-all nanomachine could perhaps be built, but it sounds difficult and there's no good reason to try.

There are advantages to making systems of molecular machinery that can use inexpensive, abundant chemicals, and devices that can operate in nature, but these machines needn't be replicators. A facility like Desert Rose might be designed to use little but electric power from solar panels and molecules from the air, but a setup like this isn't going to slip away. Nanomachines built for cleaning up pollutants and other outdoor tasks could be manufactured in facilities run like Desert Rose and then spread or installed where they're needed.

Extraordinary accidents deserve attention, but with a little care they can be completely avoided. The incentive to build anything resembling a Star Trek-scenario replicator is negligible, even from a military perspective. Any effort toward building such a thing should be seen not as a use of nanotechnology, but as an abuse. Other abuses seem more likely, however, and are quite bad enough.

The Chief Danger: Abuse

The chief danger of nanotechnology isn't accidents, but abuse. The safety benefits of nanotechnology, when used with normal care, will free some of our attention to grapple with this far more difficult problem. As Lester Milbrath observes, "Nanotechnologies have such great power that they could be used for evil or environmentally destructive purposes as easily as they could be used for good and environmentally nourishing purposes. This great danger will require a level of political control far beyond that which most nations know how to exercise. We have a prodigious social learning task that we must face."

Thus far, we've focused on how increased abilities can serve constructive ends. Not surprisingly, the potential consequences—with the huge exception of social and economic disruption—are overwhelmingly positive. Inherently clean, well-controlled, inexpensive, superior technologies, when applied with care, can yield far better results than inherently dirty, messy, costly, inferior technologies. This should come as no surprise, but it is only half of the story. The other half is the application of those same superior technologies to destructive ends.

Readers feeling that all this may be too good to be true can breathe a sigh of relief. This problem looks tough.

Cooperative Controls

Molecular manufacturing will lead to more powerful technologies, but our current, crude technology already has world-smashing potential. We have lived with that potential for decades now. In the coming years, we will need to strengthen institutions for maintaining peaceful security.

If most of the political power in the world, and with it most of the police and military power, sees that the course of self-interest lies in peace and stability, then solutions seem possible. (The prospect of an arms race in nanotechnology is terrifying and to be avoided at almost any cost. As of this writing, the end of the Cold War offers a better hope of avoiding this nightmare.) James C. Bennett, a high-tech entrepreneur and public policy commentator affiliated with the Center for Constitutional Issues in Technology, explains the goal: "Advanced technologies, particularly as far-ranging a capability as nanotechnology, will create a strong demand for their regulation. The challenge will be to create sufficient controls to prevent the power-hungry from abusing the technologies, without either smothering development or creating an overbearing international regime."

In the coming decades, preventing major abuse of nanotechnology will take the form of regulation, arms control, and antiterrorist activities. In the field of arms control, nanotechnology should present strong motivation for international cooperation and for intimate mutual inspection in the form of joint research programs.

The sheer productive capabilities of molecular manufacturing will make it possible to move from a working weapons prototype to mass production in a matter of days. In a more exotic vein, dangerous nanomachines could be developed, including programmable "germs" (replicating or nonreplicating) for germ warfare. Either development could bring war. With peace looking so profitable and an arms race looking so dangerous, arms control through cooperative development should look attractive. This does not make it easy, or likely.

Terrorism is not an immediate concern. We have lived with nuclear weapons and nerve gas for decades now, and nerve gas, at least, is not difficult to make. As of this writing, no city has been obliterated by terrorists using these means, and no terrorist has even made a credible threat of this sort. The citizens of Hiroshima and Nagasaki, like the Kurds in Iraq, fell victim to nuclear and chemical weapons wielded by governments, not small groups. So long as nanotechnology is technologically more challenging than the simple chemistry of nerve gas, nanoterrorism should not be a primary concern.

To keep dangerous nanotechnologies unavailable, however, will require regulation. If anyone were free to build anything using molecular manufacturing, then someday as the technology base improves and designs become available for more and more nanodevices, someone, somewhere—if only out of sheer spite—would figure out how to combine those nanodevices to make a dangerous replicator and turn it loose. There will almost surely be warning signs, however: In the natural course of events, causes attract protesters before stone-throwers, and produce letter bombs before car bombs. Abuse of nanotechnology is likely to be visible long before it is devastating, and this at least gives some time to try to respond.

Regulatory Tactics

Abuse of this sort can be delayed, perhaps for a long time, by proper regulation. The goal here isn't to make regulations so tight that people will have to violate them to get anything done. This would encourage holdouts, underground work, and disrespect for the law. Instead, the goal is to draw boundaries loosely enough to cause little difficulty for legitimate work, while making dangerous activities very difficult indeed. This is a delicate balance to strike: those fearful of risk naturally try to apply crude and oppressive regulations, and companies naturally try to loosen and avoid regulation entirely. Nonetheless, the problem must be solved, and this seems the best direction to explore.

In one approach, nanomachines could be divided into two classes: experimental devices and approved products. Approved products could be made widely available through special-purpose molecular manufacturing systems. Thus, once an experimental device had passed regulatory inspection, it could become inexpensive and abundant. In this way, popular demands for a product could be satisfied without anyone needing to break safety rules.

Approved products could include devices like (but superior to) the full range of modern consumer products, ranging from personal supercomputers with 3-D color displays, through smart construction materials, to running shoes with truly amazing features. The main cost of such goods might be the royalty to the designer. In Engines of Creation (the first book to examine this topic), this strategy for producing and distributing approved products is called a "limited assembler system."

Note that both approved products and the limited assemblers that build them would lack the ability to make copies of themselves, to self-replicate. Ralph Merkle sees this ability as the one to keep an eye on: "Self-replicating systems can and should be appropriately regulated. There seems no need, however, to have any more than normal concerns for devices which cannot replicate. While we might, as with any device, need laws to ensure their appropriate use, they pose no extraordinary problems." For most products, normal medical, commercial, and environmental standards would apply; the regulatory bureaucracies are already in place.

There are great advantages to permitting nearly free experimentation in a new technology, allowing creative people to try ideas without seeking prior approval from a cumbersome committee. Surprisingly, this, too, seems compatible with safety.

In the world of nanotechnology, one cubic micron is a large space, with room enough for millions of components. For many purposes, a few cubic microns would amount to a large laboratory space. To a device on a micron scale, a centimeter is an enormous distance. Surrounding a micron-scale device with a centimeter-thick wall would be like surrounding a human being with a wall kilometers thick, and just as hard to penetrate. Further, a micron-scale device can be incinerated in an instant by something as small as a spark of static electricity. Based on observations like these, Engines of Creation outlined the idea of a sealed assembler lab, in which a researcher could build anything, even something deliberately designed to be dangerous, and yet be unable to get anything out of the microscopic sealed laboratory except for information.
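
As a rough check on that scale comparison, the arithmetic below uses illustrative round numbers (a device about one micron across, a wall one centimeter thick, a person about two meters tall); these particular figures are assumptions for the sketch, not values given in the text:

\[
\frac{1\ \mathrm{cm}}{1\ \mu\mathrm{m}} \;=\; \frac{10^{-2}\ \mathrm{m}}{10^{-6}\ \mathrm{m}} \;=\; 10^{4},
\qquad
10^{4} \times 2\ \mathrm{m} \;=\; 2 \times 10^{4}\ \mathrm{m} \;=\; 20\ \mathrm{km}.
\]

On this scaling, a centimeter of wall around a micron-scale device is indeed comparable to a wall tens of kilometers thick around a person.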

With a good communications network, a researcher or product developer in Texas could equally easily perform experiments in a remote Maine laboratory run with the security and secrecy of a Swiss bank. A lab would have a responsibility to its customers to keep proprietary work confidential, and a responsibility to regulatory authorities to ensure that nothing but information leaves the laboratory. Researchers could then perform any small-scale experiments they wish. Only approved products, of course, would be built outside the sealed laboratories. While this may not be the best pattern of regulation possible, it does show one way in which freedom of experimentation could be combined with strict regulation of use. By providing a clear separation between legitimate and illegitimate activity, it would help with the difficult problem of identifying and preventing research aimed at damaging ends.

A sensible policy will have to balance the risk of private abuse of technology against the risk of government abuse of technology and regulation. Low-cost manufacturing can make surveillance equipment less expensive. Increased surveillance can reduce some risks in society, but the watchers themselves often aren't very well watched. Placing bounds on surveillance is a challenge for today's citizens as well as tomorrow's, and lessons learned in the past can be applied in the future.

In the long run, it seems wise to assume that someone, somewhere, somehow, will escape the bounds of regulation and arms control and apply molecular-manufacturing capabilities to making novel weapons. If by then we have had several decades of peaceful, responsible, creative development of nanotechnology (or perhaps a few years of help from smart machines), then we may have developed both ecosystem protectors and sophisticated immune machines for medicine. There is good reason to think that distributed technologies of this sort could be adapted and extended to deal with the problem of protecting against novel nanoweaponry. Failure to do so could mean disaster. Nonetheless, building protective systems of this sort will be by far the greatest challenge of any we have discussed. The chief purpose of regulatory tactics like those we have described must be to buy time for those peaceful developments, to maximize the chances that this challenge can be met before time runs out.

(Any critic declaring this to be an optimistic book hereby stands charged with having failed to read and understand the above paragraph.)

Guide It, or Stop It?

Potential accidents richly deserve the attention they will get, and we have confidence that this attention will suffice to make nanotechnology a force for improved human and environmental safety. Abuse is the greater danger, and harder to deal with. When considering a proposed policy, the first question should be, "How will this affect the long-term likelihood of serious abuse?"

Guiding Means Making Many Choices

Guiding a technology is a complex task. It means grappling with myriad decisions regarding which applications are beneficial and which harmful in such complex areas as medicine, the economy, and the environment. It means making such happy choices as which of several good approaches to apply in cleaning up toxic-waste dumps and reversing the greenhouse effect. It also means making more difficult choices in planning ecosystem restoration and environmental modification.

These problems will confront us with a range of choices better than we have today, yet choices that throw values into conflict. Which is a better use for a particular piece of land—the slow restoration of a wilderness ecosystem, or development as a recreational park? Either may be far better than pavement, strip mines, and dumps, but the choices will be controversial.

Likewise, in medicine, we will have a choice of developing many different ways to cure cancers, many different ways to cure heart disease, many different ways to cure AIDS. But the technologies that can be used to rebuild damaged heart muscle could be extended to rebuild muscle and connective tissue structures elsewhere in the body, without the harmful side effects of steroid drugs. The range of choices open to people will be enormous, and again will be cause for great debate.

When a new medical technology is discussed today, a frequent comment is, "This procedure raises ethical questions." This is often taken as a signal to delay its use, neglecting such ethical questions as "Is withholding this lifesaving treatment while we ponder akin to murder?" When a choice raises ethical questions or throws values into conflict, it is time to make an ethical decision or to step aside and let others choose for themselves. Deciding to avoid whatever raised the question is itself a decision—and often ethically indefensible. New technologies will face us with uncomfortable decisions, but so does life itself.

Setting up rules for nanotechnology development will be challenging: finding ways to maximize research freedom while preventing serious abuse and making this stick worldwide is a social challenge of the first rank. Beyond this are decisions regarding rules for its application, and the challenge of maximizing freedom of choice and action while preventing serious abuse, again worldwide.

To guide nanotechnology means grappling with a set of decisions that could ultimately remake much of the world—for the better if we are reasonably wise, or for the worse if we are too blundering and incautious. To avoid this responsibility (if we could) would be tempting, yet given the environmental and human stakes, it would, perhaps, be a wrong of historic proportions.

Trying to Stop Means Losing Control

The simplest imaginable approach to "guiding" nanotechnology would be to stop it. The easiest trip to plan is the trip that goes nowhere.

This would have a certain appeal, if it were possible. Because of its enormous potential for abuse, nanotechnology has the potential of doing great harm. If we believe that human beings and human institutions are too incompetent to deal with nanotechnology—that they are too likely to turn it to aggressive military use, or too likely to make it freely available to madmen—then the option of stopping the development of nanotechnology may seem attractive indeed. But the ethical question that must guide human actions is not "Would it be better to stop?", but "Would attempts to stop make things better?"

One option is to push forward, emphasizing the need for caution but also the potential for good applications. The promise of medical, economic, and environmental applications, joined with the threat posed by a new arms race, provides a powerful motive for international cooperation. With positive goals and an inclusive stance, international cooperation is a promising strategy; it could provide a basis for guiding the development and application of nanotechnology.

Another option would be to emphasize the downside, to focus debate on potential abuses in support of a campaign to halt development. In following this strategy, an activist group would want to downplay the civilian applications of nanotechnology and emphasize its military applications. Horror stories of potential abuse (including abuses that regulation could easily prevent) would help to make the technology seem strange and dangerous.

This strategy might succeed in suppressing civilian research in many countries, though probably not all. Unfortunately, it would also guarantee funding for classified military research programs in laboratories around the world, even in the most morally honest countries, because of their then-inevitable fear of the consequences if someone else developed nanotechnology first. In a hostile public atmosphere, research would be pushed into secret programs, and in secrecy the prospects for broad international cooperation would disappear. Attempts to stop nanotechnology for fear of a new, unstable arms race would thus become self-fulfilling prophecies. Afterward, the advocates of this view could then say, "We warned you!" as the world slid toward a war they themselves had helped to prepare.

Attempting to stop technological development is a simple but dangerous idea. The greater its success, the greater the polarization it would cause between technology advocates and technology critics. A moderate success would push research out of the public universities and into corporate and military research labs. A greater success would push research out of the corporate laboratories and into heavily classified programs. A truly amazing success would end most of these, leaving the only remaining military programs in the hands of those states with thoroughly repressive governments or alien ideologies. This, presumably, is not how one would prefer nanotechnology to be developed.

The only genuine success would be a total success, and this would mean banning research not only in the United States, and Germany, and France, and the rest of Western Europe, and Japan, and the Soviet Union, and the People's Republic of China, and Taiwan, but in Korea, South Africa, Iran, Iraq, Israel, Brazil, Argentina, Vietnam, and the part of Colombia controlled by the Medellín Cartel. Later, as computers improve, as chemistry advances, as more and more proximal probe microscopes are built by high school students, total success would require banning kids from tinkering in suburban garages in Pittsburgh.

Competitive pressures are pushing technology toward thorough control of matter, and we have seen that this goal can be reached by many different paths. Preventing one area of research would not prevent the advance, nor would stopping work in one country. When the United States delays drug development through strong regulation by the FDA, drug companies simply switch research overseas, or non-U.S. companies pull ahead. Orbital-launch capability and nuclear weapons capability are other examples. Very seldom has one country given these abilities to another, yet at least eight nations are able to launch satellites to orbit independently, at least seven have detonated nuclear devices, and another two are suspected to be within easy reach of nuclear capability. India and Israel have built bombs and launched satellites, though neither is considered a great power or a leading force in world technology.

Where nanotechnology is concerned, many countries are capable of doing the required research, and more will be in the future. South Korea has both the needed educational levels and the ambition; visitors from the People's Republic of China ask about nanotechnology. A decision at the top directing the resources of a nation could get results almost anywhere. The United States is only gradually being shaken from its illusion that it rules the world of technology. This illusion is a poor basis for decisions and action.

Responsible Action

For all practical purposes, nanotechnology seems inevitable. With work, it can be made beneficial, but only if we exercise ordinary care in avoiding accidents and extraordinary care in preventing abuse.

It's hard to get people to take future technologies seriously. Present-day problems dominate discussions, and ideas about future possibilities take effort to judge. Because of this inertia, broad international regulation of nanotechnology won't be possible until nanotechnology already exists, until people begin to see its results. And then, for regulation to be most effective, researchers and governments in many countries will need to cooperate and be on speaking terms with the technology's critics.

What, then, is the socially responsible course of action, the approach most likely to avoid serious abuse of nanotechnology and most likely to deliver some of its potential benefits? It is, we believe, to point out potential dangers and abuses and how they can be avoided, but also to emphasize the civilian applications in medicine, the environment, and the economy. It is these benefits that provide grounds for advocating open civilian development programs, and for international cooperation that can provide a basis for effective international guidance.

To guide nanotechnology will not be simple. We will be confronted with a range of choices greater than we have faced before in history. It is only by grappling with those choices that we will be able to affect them for the better.

