CRN Science & Technology
Essays - 2006
"Theories pass through four stages of acceptance: 1) this is worthless nonsense; 2) this is an interesting, but perverse, point of view; 3) this is true, but quite unimportant; 4) I always said so."
— Geneticist J.B.S. Haldane,
on the stages a scientific theory goes through
Each issue of the C-R-Newsletter features a brief article explaining technical aspects of advanced nanotechnology. They are gathered in these archives for your review. If you have comments or questions, please contact us.
1. Powering Civilization Sustainably (January 2006)
2. Who remembers analog computers? (February 2006)
3. Trends in Medicine (March 2006)
4. Bottom-up Design (April 2006)
5. Types of Nanotechnology (May 2006)
6. History of the Nanofactory Concept (June 2006)
7. Inapplicable Intuitions (July 2006)
8. Military Implications of Molecular Manufacturing (August 2006)
9. New Opportunities for DNA Design (September 2006)
10. Nano-Products (October 2006)
11. Preventing Errors in Molecular Manufacturing (November 2006)
Preventing Errors in Molecular Manufacturing
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
What kind of system can perform a
billion manufacturing operations without an error?
Many people familiar with today's manufacturing technologies will assume that
such a thing is near-impossible. Today's manufacturing operations are doing very
well to get one error in a million products. To reduce the error to one in a
billion--to say nothing of one in a million billion, which Drexler talks about--seems ridiculously difficult. None of today's technologies could do it, despite many
decades of improvement. So how can molecular manufacturing theorists reasonably
expect to develop systems with such low error rates--much less, to develop them
on a schedule measured in years rather than decades?
There are physical systems today that have error rates even lower than molecular
manufacturing systems would require. A desktop computer executes more than a
billion instructions each second, and can operate for years without a single
error. Each instruction involves tens of millions of transistors, flipping on or
off with near-perfect reliability. If each transistor operation were an atom,
the computer would process about a gram of atoms each day--and they would all be in exactly the right place.
The existence of computers demonstrates that an engineered, real-world system,
containing millions of interacting components, can handle simple operations in
vast numbers with an insignificant error rate. The computer must continue
working flawlessly despite changes in temperature, line voltage, and
electromagnetic noise, and regardless of what program it is asked to run.
The computer can do this because it is digital. Digital values are
discrete--each signal in the computer is either on or off, never in-between. The
signals are generated by transistor circuits which have a non-linear response to
input signals; an input that is anywhere near the ideal "on" or "off" level will
produce an output that is closer to the ideal. Deviations from perfection,
rather than building up, are reduced as the signal propagates from one
transistor to another. Even without active error detection, correction, or
feedback, the non-linear behavior of transistors means that error rates can be
kept as low as desired: for the purposes of computer designers, the error rate
of any signal is effectively zero.
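The restoring behavior described above can be sketched numerically. The following Python snippet is an illustrative toy, not a transistor model: it uses a high-gain transfer function, clamped at the supply rails, to show how a degraded logic level is pulled back toward the ideal value at each stage rather than drifting further away.

```python
def restore(v, gain=4.0):
    # Idealized logic-restoring stage: high gain around the switching
    # threshold (0.5), clamped to the "supply rails" at 0.0 and 1.0.
    out = 0.5 + gain * (v - 0.5)
    return max(0.0, min(1.0, out))

# A badly degraded "on" signal (ideal = 1.0) recovers in a few stages.
v = 0.65
for _ in range(3):
    v = restore(v)
print(v)  # 1.0 -- the deviation has been absorbed, not accumulated
```

The same function snaps a degraded "off" signal (say, 0.35) back to 0.0; only an input sitting almost exactly on the threshold is ambiguous, which is why designers keep signals far from it.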
An error rate of zero means that the signals inside a computer are perfectly
characterized: each signal and computation is exactly predictable. This allows a
very powerful design technique to be used, called "levels of abstraction."
Error-free operations can be combined in intricate patterns and in large
numbers, with perfectly predictable results. The result of any sequence of
operations can be calculated with certainty and precision. Thousands of
transistors can be combined into number-processing circuits that do arithmetic
and other calculations. Thousands of those circuits can be combined in
general-purpose processor chips that execute simple instructions. Thousands of
those instructions can be combined into data-processing functions. And those
functions can be executed thousands or even billions of times, in any desired
sequence, to perform any calculation that humans can invent... performing
billions of billions of operations with reliably zero errors.
Modern manufacturing operations, for all their precision, are not digital. There
is no discrete line between a good and a bad part--just as it's impossible to
say exactly when someone who loses one hair at a time becomes bald. Worse, there
is no mechanism in manufacturing that naturally restores precision. Difficult
and complicated processes are required to construct a machine more precise than
the machines used to build it. To build a modern machine such as a
computer-controlled lathe requires so many different techniques--polymer
chemistry, semiconductor lithography, metallurgy and metal working, and
thousands of others--that the "real world" will inevitably create errors that
must be detected and corrected. And to top it off, machines suffer from
wear--their dimensions change as they are used.
Given the problems inherent in today's manufacturing methods and machine
designs, the idea of building a fully automated general-purpose manufacturing
system that could build a complete duplicate of itself... is ridiculous.
The ability to form covalent solids by placing individual molecules changes all
that. Fundamentally, covalent bonds are digital: two atoms are either bonded, or
they are held some distance apart by a repelling force. (Not all bond types are
fully covalent, but many useful bonds including carbon-carbon bonds are.) If a
covalent bond is stretched out of shape, it will return to its ideal
configuration all by itself, without any need for external error detection,
correction, or feedback.
If a covalent-bonding manufacturing system performs an operation with less than
one atom out of place, then the resulting product will have exactly zero atoms
out of place. Just like transistor signals in a digital computer, imperfections
fade away all by themselves. (In both cases, a bit of energy is used up in
making the imperfections disappear.) In digital systems, there is no physical
law that requires imperfections to accumulate into errors--not in digital
computer logic, and not in atomic fabrication systems.
Atomic fabrication operations, like transistor operations, can be characterized
with great reliability. Only a few transistor operations are a sufficient
toolkit with which to design a computer. A general-purpose molecular
manufacturing system may use a dozen or so different kinds of atoms, and perhaps
100 reactions between the atoms. That is a small enough number to study each
reaction in detail, and know how it works with as much precision as necessary.
Each reaction can proceed in a predictable way each and every time it is performed.
A sequence of completely predictable operations will itself have a completely
predictable outcome, regardless of the length of the sequence. If each one of a
sequence of a billion reactions is carried out as expected, then the final
product can be produced reliably.
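The arithmetic behind this claim is simple compounding: the chance that a product of n sequential operations is flawless is (1 - p)^n. A quick Python check (the two error rates are illustrative assumptions, echoing the figures earlier in the essay) contrasts a conventional per-operation error rate with a "digital" one:

```python
# Probability that a product of n sequential operations is flawless:
#   p_good = (1 - p_err) ** n
n_ops = 10**9

p_good_analog = (1 - 1e-6) ** n_ops    # one error per million operations
p_good_digital = (1 - 1e-15) ** n_ops  # one error per million billion

print(p_good_analog)   # underflows to 0.0 -- roughly exp(-1000)
print(p_good_digital)  # ~0.999999 -- almost every product is perfect
```

At one error per million operations, essentially no billion-operation product ever comes out right; at one error per million billion, nearly all of them do.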
Chemists who read this may be objecting that there's no such thing as a reaction
with 100% yield. Answering that objection in detail would require a separate
essay--but briefly, mechanical manipulation and control of reactants can in many
cases prevent unwanted reaction pathways as well as shifting the energetics so
far (hundreds of zJ/bond or kJ/mole) that the missed reaction rate is reduced by
many orders of magnitude.
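The size of that suppression follows from the Boltzmann factor: biasing the energetics of a reaction by an amount ΔE multiplies the rate of the disfavored pathway by roughly exp(-ΔE/kT). A quick Python estimate, taking 400 zJ as an assumed figure within the "hundreds of zJ" range mentioned above:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
delta_E = 400e-21    # energetic bias, J (400 zJ -- an assumed value)

suppression = math.exp(-delta_E / (k_B * T))
print(suppression)   # ~1e-42: dozens of orders of magnitude
```

A few hundred zJ of mechanically imposed bias is enough to push the missed-reaction rate far below one in a billion.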
At this point, it is necessary to consider the "real world." What factors, in practice, will reduce the predictability of mechanically guided molecular manufacturing?
One factor that doesn't have to be considered in a well-designed system of this
type is wear. Again, it would take a separate essay to discuss wear in detail,
but wear in a covalent solid requires breaking strong inter-atomic bonds, and a
well-designed system will never, in normal operation, exert enough force on any
single atom to cause its bonds to break. Likewise, machines built with the same
sequence of reliable operations will themselves be identical. Once a machine is
characterized, all of its siblings will be just as fully understood.
Mechanical vibration from outside the system is unlikely to be a problem. It is
a problem in today's nanotech tools because the tools are far bigger than the
manipulations or measurements those tools perform--big enough to have slow
vibrational periods and high momentum. Nanoscale tools, such as would be used in
a molecular manufacturing system, would have vibrational frequencies in the gigahertz range or higher, and momentum vanishingly small compared to restoring forces.
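The scaling argument can be made concrete. For geometrically similar structures, stiffness scales with size while mass scales with size cubed, so the resonant frequency f ∝ sqrt(k/m) scales inversely with size. A sketch in Python, where the macro-scale starting figures are illustrative assumptions rather than measured values:

```python
import math

def frequency_scale_factor(size_ratio):
    # stiffness k ~ size, mass m ~ size**3  =>  f ~ sqrt(k/m) ~ 1/size
    return math.sqrt(size_ratio / size_ratio**3)

f_macro = 1e4   # Hz: a centimetre-scale probe arm (assumed round number)
shrink = 1e-6   # shrink a million-fold, roughly cm-scale down to 10 nm

f_nano = f_macro * frequency_scale_factor(shrink)
print(f_nano)   # 1e10 Hz -- well into the gigahertz
```

A million-fold reduction in size turns a 10 kHz resonance into a 10 GHz one, far above the frequencies of ambient mechanical noise.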
It is possible that vibrations generated within the system, from previous
operations of the system or of neighboring systems, could be a problem. In
computers, transistor operations can cause voltage ripples that cause headaches
for designers, and are probably analogous. But these problems are practical, not fundamental.
Contaminant molecules should not be a problem in a well-designed system. The
ability to build covalent solids without error implies the ability to build
hermetically sealed enclosures. Feedstock molecules would have to be taken in
through the enclosures, but sorting mechanisms have been planned that should
reject any contaminants in the feedstock stream with extremely low error rates.
There are ways for a manufacturing system inside a sealed enclosure to build
another system of the same size or larger without breaking the seal. It would
take a third essay to discuss these topics in detail, but they have been considered, and none of the problems appears to be unaddressable in practice.
Despite everything written above, there will be some fraction of molecular
manufacturing systems that suffer from errors--if nothing else, background
levels of ionizing radiation will cause at least some bond breakage. In theory,
an imperfect machine could fabricate more imperfect machines, perpetuating and
perhaps exacerbating the error. However, in practice, this seems unlikely.
Whereas a perfect manufacturing machine could do a billion operations without
error, an imperfect machine would probably make at least one atom-placement
error fairly early in the fabrication sequence. That first error would leave an
atom out of its expected position on the surface of the workpiece. A flawed
workpiece surface would usually cause a cascade of further fabrication errors in
the same product, and long before a product could be completed, the process
would be hopelessly jammed. Thus, imperfect machines would quickly become inert,
before producing even one imperfect product.
The biggest source of unpredictability probably will be thermal noise, sometimes
referred to as Brownian motion. (Quantum uncertainty--the Heisenberg principle--is a similar but smaller source of unpredictability.) Thermal noise means that
the exact dimensions of a mechanical system will change unpredictably, too
rapidly to permit active compensation. In other words, the exact position of the
operating machinery cannot be known. The degree of variance depends on the
temperature of the system, as well as the stiffness of the mechanical design. If
the position varies too far, then a molecule-bonding operation may result in one
of several unpredictable outcomes. This is a practical problem, not a
fundamental limitation; in any given system, the variance is limited, and there
are a number of ways to reduce it. More research on this point is needed, but so
far, high-resolution computational chemistry experiments by Robert Freitas seem
to show that even without using some of the available tricks, difficult
molecule-placement operations can be carried out with high reliability at liquid
nitrogen temperatures and possibly at room temperature. If positional variance
can be reduced to the point where the molecule is placed in approximately the
right position, the digital nature of covalent bonding will do the rest.
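The magnitude of thermal positional noise can be estimated from the equipartition theorem: a positioning mechanism of stiffness k at temperature T has an RMS displacement of sqrt(k_B·T/k). The stiffness below is an assumed round number for illustration, not a figure from Freitas's simulations:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def rms_displacement(stiffness_N_per_m, temperature_K):
    # Equipartition: (1/2) k <x^2> = (1/2) k_B T
    return math.sqrt(k_B * temperature_K / stiffness_N_per_m)

stiffness = 10.0  # N/m -- a stiff nanoscale positioner (assumed)
print(rms_displacement(stiffness, 300))  # ~0.02 nm at room temperature
print(rms_displacement(stiffness, 77))   # ~0.01 nm in liquid nitrogen
```

Both figures are well below a typical 0.15 nm carbon-carbon bond length, which is the sense in which the positional variance only needs to be "small enough" for the digital nature of bonding to do the rest.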
This is a key point:
The mechanical unpredictability in
the system does not have to be reduced to zero, or even extremely close to zero,
in order to achieve extremely high levels of reliability in the product.
As long as each reaction trajectory leads closer to the right outcome than to
competing outcomes, the atoms will naturally be pulled into their proper
configuration each time--and by the time the next atoms are deposited, any
positional error will have dissipated into heat, leaving the physical bond
structure perfectly predictable for the next operation.
Molecular manufacturing requires an error rate that is extremely low by most
standards, but is quite permissive compared to the error rates necessary for
digital computers. Error rate is an extremely important topic, and
unfortunately, understanding of errors in mechanically guided chemistry is
susceptible to incorrect intuitions from chemistry, manufacturing, and even
physics (many physicists assume that entropy must increase without considering
that the system is not closed). It appears that the nature of covalent bonds
provides an automatic error-reducing mechanism that will make molecular
manufacturing closer in significant ways to computer logic than to today's
manufacturing or chemistry.
Three previous science essays have touched on related topics:
Who remembers analog computers? (February 2006)
Nanoscale Errors (September 2004)
The Bugbear of Entropy (May 2004)
Subsequent CRN science essays will cover topics that this essay raised but did not have space to cover.
Nano-Products (October 2006)
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
We are often asked, "How will nanofactory-built products be recycled?"
One of the advantages of molecular manufacturing is
that it will use very strong and high-performance materials. Most of them
probably will not be biodegradable. So what will save us from being buried in trash?
The first piece of good news is that nano-built products will use materials more
efficiently. Mechanical parts can be built mostly without defects, making them a
lot stronger than today's materials. Active components can be even more compact,
because scaling laws are advantageous to small machines: motors may have a
million times the power density, and computers may be a million times as
compact. So for equivalent functionality, nano-built products will use perhaps
100-1000 times less material. In fact, some products may be so light that they
have to be ballasted with water. (This would also make carbon-based products resistant to fire.)
The second good news is that carbon-based products will burn once any water
ballast is removed. Traditionally, incineration has been a dirty way to dispose
of trash; heavy metals, chlorine compounds, and other nasty stuff goes up the
smokestack and pollutes wherever the wind blows. Fortunately, one of the first
products of molecular manufacturing will be efficient molecular sorting systems.
It will be possible to remove the harmless and useful gases from the combustion
products--perhaps using them to build the next products--and send the rest back
to be re-burned.
The third good news is that fewer exotic materials and elements should be
needed. Today's products use a lot of different substances for different jobs.
Molecular manufacturing, by contrast, will be able to implement different
functions by building different molecular shapes out of a much smaller set of
materials. For example, carbon can be either an insulator or a conductor; shapes built of carbon can be either flexible or rigid; and carbon can be transparent (diamond) or opaque (graphite).
Finally, it may be possible to build many full-size products out of modular
building blocks: microscopic "nanoblocks" that might contain a billion atoms and provide flexible functionality. In
theory, rather than discarding and recycling a product, it could be pulled apart
into its constituent blocks, which could then be reassembled into a new product.
However, this may be impractical, since the nanoblocks would have to be
carefully cleaned in order to fit together precisely enough to make a reliable
product. But re-using rather than scrapping products is certainly a possibility
that's worth investigating further.
Not surprisingly, there is also some bad news. The first bad news is that carbon
is not the only possible material for molecular manufacturing. It is probably
the most flexible, but others have been proposed. For example, sapphire
(corundum, alumina) is a very strong crystal of aluminum oxide. It will not
burn, and alumina products probably will have to be scrapped into landfills if
their nanoblocks cannot be re-used. Of course, if we are still using industrial abrasives, old nano-built products might simply be crushed and used for that purpose.
The second bad news is that nano-built products will come in a range of sizes,
and some will be small enough that they will be easy to lose. Let me stress that
a nano-built product is not a grey goo robot, any more
than a toaster is. Tiny products may be sensors, computer nodes, or medical
devices, but they will have specialized functionality--not general-purpose
manufacturing capability. A lost product will likely be totally inert. But
enough tiny lost products could add up to an irritating dust.
The third bad news is that widespread use of personal
nanofactories will make it very easy and inexpensive to build stuff.
Although each product will use far less material than today's versions, we may
be using far more products.
Some readers may be wondering about "disassemblers" and whether they could be
used for recycling. Unfortunately, the "disassembler" described in Eric Drexler's Engines of Creation was a slow and energy-intensive research tool, not
an efficient way of taking apart large amounts of matter. It might be possible
to take apart old nano-products molecule by molecule, but it would probably be
less efficient than incinerating them.
Collecting products for disposal is an interesting problem. Large products
can be handled one at a time. Small and medium-sized products might be enough of
a hassle to keep track of that people will be tempted to use them and forget
them. For example, networked sensors with one-year batteries might be scattered
around, used for two months, and then forgotten--better models would have been
developed long before the battery would wear out. In such cases, the products
might need to be collected robotically. Any product big enough to have an RFID
antenna could be interrogated as to its age and when it was last used. Ideally, it would also tell who its owner had been, so the owner could be
billed, fined, or warned as appropriate.
This essay has described what could be. Environmentally friendly cleanup and
disposal schemes will not be difficult to implement in most cases. However, as
with so much else about molecular manufacturing, the availability of good
choices does not mean that the best options necessarily will be chosen. It is
likely that profligate manufacturing and bad design will lead to some amount of
nano-litter. But the world will be very fortunate if nano-litter turns out to be the biggest problem created by molecular manufacturing.
New Opportunities for DNA Design
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
DNA is a very versatile molecule, if you know how to use it. Of course, the
genetic material for all organisms (except some viruses) is made of DNA. But it
is also useful for building shapes and structures, and it is this use that is
most interesting to a nanotechnologist.
Readers familiar with DNA complementarity should skip this paragraph.
Non-technical readers should read my earlier
science essay on DNA folding. Briefly, DNA is made of four molecules,
abbreviated A, C, G, and T, in a long string (polymer). G and C attract each
other, as do A and T. A string with the sequence AACGC will tend to attach
itself to another string with the sequence GCGTT (the strings match
head-to-tail). Longer sequences attach more permanently. Heating up a mixture of
DNA makes the matched strings vibrate apart; slowly cooling a mixture makes the
strings reattach in (hopefully) their closest-matched configuration.
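The head-to-tail matching rule is easy to express in code. This short Python function is a standard reverse-complement computation (nothing specific to the essay) that reproduces the AACGC/GCGTT example above:

```python
def reverse_complement(seq):
    # G pairs with C and A pairs with T; strands bind head-to-tail,
    # so the partner sequence is the complement read in reverse.
    pair = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(pair[base] for base in reversed(seq))

print(reverse_complement("AACGC"))  # GCGTT, as in the example above
```

The relationship is symmetric: the reverse complement of GCGTT is AACGC again, which is why the two strands recognize each other from either side.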
Until recently, designing a shape out of DNA was a painstaking process of
planning sequences that would match in just the right way – and none of the
wrong ways. Over the years, a number of useful design patterns were developed,
including ways to attach four strands of DNA side by side for extra stiffness;
ways to make structures that would contract or twist when a third strand was
added to bridge two strands in the structure; and three-way junctions between
strands, useful for building geometric shapes. A new structure or technique
would make the news every year or so. In addition to design difficulties, it was
hard to make sufficiently long error-free strands to form useful shapes.
A few months ago, a
new technique was invented by Dr. Paul Rothemund. Instead of building all
the DNA artificially for his shapes, he realized that half of it could be
derived from a high-quality natural source with a fixed but random sequence, and
the other half could be divided into short, easily synthesized pieces –
“staples” – with sequences chosen to match whatever sequence the natural strand
happens to have at the place the staple needs to attach. Although the random
strand will tend to fold up on itself randomly to some extent, the use of large numbers of precisely-matching staples will pull it into the desired shape.
If a bit of extra DNA is appended to the end of a staple piece, the extra bit
will stick out from the shape. This extra DNA can be used to attach multiple
shapes together, or to grab on to a DNA-tagged molecule or particle. This
implies that DNA-shape structures can be built that include other molecules for
increased strength or stiffness, or useful features such as actuation.
Although the first shapes designed were planar, because planar shapes are easier
to scan with atomic force microscopes so as to verify what’s been built, the
stapling technique can also be used to pull the DNA strand into a
three-dimensional shape. So this implies that with a rather small design effort
(at least by the standards of a year ago), 3D structures built of DNA can be
constructed, with “Velcro” hanging off of them to attach them to other DNA
structures, and with other molecules either attached to the surface or embedded
in the interior.
The staple strands are short and easy to synthesize (and don’t need to be
purified), so the cost is quite low. According to page 81 of
Rothemund’s notes [PDF],
a single staple costs about $7.00 – for 80 nmol, or roughly 50 quadrillion molecules.
Enough different staples to make a DNA shape cost about $1,500 to synthesize.
The backbone strand costs about $12.50 per trillion molecules. Now, those
trillion molecules only add up to 4 micrograms. Building a human-scale product
out of that material would be far too costly. But a computer chip with only 100
million transistors costs a lot more than $12.50.
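The cost figures in this paragraph can be cross-checked with a few lines of arithmetic. All inputs are taken from the text; the molecule count and per-gram price are derived illustrations:

```python
AVOGADRO = 6.022e23

# One staple order: $7.00 buys 80 nmol.
molecules_per_staple_order = 80e-9 * AVOGADRO  # ~4.8e16 molecules

# $1,500 of staples per shape at $7.00 each implies roughly this many
# distinct staple sequences per design:
staples_per_shape = 1500 / 7.00                # ~214 staples

# Backbone: $12.50 per trillion molecules, which weigh about 4 micrograms.
backbone_cost_per_gram = 12.50 / 4e-6          # ~$3 million per gram

print(molecules_per_staple_order, staples_per_shape, backbone_cost_per_gram)
```

At millions of dollars per gram, bulk material is out of the question, but a structure with transistor-chip complexity needs only a vanishing mass of DNA.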
The goal that’s the focus of this essay is combining a lot of these molecular
“bricks” to build engineered heterogeneous structures with huge numbers of
atoms. In other words, rather than creating simple tilings of a few bricks, the goal is to stick them together in arbitrary computer-controlled patterns: constructs in which every brick can be different and independently designed.
I was hoping that nano-manipulation robotics had advanced to the point where the
molecular shapes could be attached to large handles that would be grabbed and
positioned by a robot, making the brick go exactly where it’s wanted relative to
the growing construct, but I’m told that the state of the art probably isn’t
there yet. Just one of the many problems is that if you can’t sense the molecule
as you’re positioning it, you don’t know if temperature shifts have caused the
handle to expand or contract. However, there may be another way to do it.
An atomic force microscope (AFM) uses a small tip. With focused ion beam (FIB)
nano-machining, the tip can be hollowed out so as to form a pocket suitable for
a brick to nestle in. By depositing DNA Velcro with different sequences at
different places in the pocket (which could probably be done by coating the
whole tip, burning away a patch with the FIB, then depositing a different
sequence), it should be possible to orient the brick relative to the tip. (If
the brick has two kinds of Velcro on each face, and the tip only has one kind deposited, the brick will stick less strongly to the tip than to its target location.)
Now, the tip can be used for ordinary microscopy, except that instead of having
a silicon point, it will have a corner of the DNA brick. It should still be
usable to scan the construct, hopefully with enough resolution to tell where the
tip is relative to the construct. This would solve the nano-positioning problem.
I said above that the brick would have DNA Velcro sticking out all over. For
convenience, it may be desirable to have a lot of bricks of identical design,
floating around the construct – as long as they would not get stuck in places
they’re not wanted. This would allow the microscope tip to pick up a brick from
solution, then deposit it, then pick up another right away, without having to
move away to a separate “inkwell.” But why don’t the bricks stick to the
construct and each other, and if they don’t, then how can the tip deposit them,
and why do they stay where they’re put?
To make the bricks attach only when and where they’re put requires three
conditions. First, the Velcro should be designed to be sticky, so that the
bricks will stay firmly in place once attached. Second, the Velcro should be
capped with other DNA strands so that the bricks will not attach by accident.
Third, the capping strands should be designed so that physically pushing the
brick against a surface will weaken the bond between Velcro and cap, allowing
the Velcro to get free and bind to the target surface. For example, if the cap
strands stick stiffly out away from the block (perhaps by being bound to two
Velcro strands at once), then mechanical pressure will weaken the connection.
Mechanical pressure, of course, can be applied by an AFM. Scan with low force,
and when the brick is in the right place, press down with the microscope. Wait
for the cap strands to float away and the Velcro to pair up, and the brick is
deposited. With multiple Velcro strands between each brick, the chance of them
all coming loose at once and allowing the brick to be re-capped can be made minuscule, especially since the effective concentration of Velcro strands would be far higher than the concentration of cap strands. But before the brick was pushed into place, the chance of all the cap strands coming loose at once also would have been minuscule. (For any physicists reading this, thermodynamic
equilibrium between bound and free bricks still applies, but the transition rate
can be made even slower than the above concentration argument implies, since the
use of mechanical force allows an extremely high energy barrier. If the
mechanical force applied is 100 pN over 5 nm, that is 500 zJ, approximately the
dissociation energy of a C-C bond.)
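The parenthetical energy estimate checks out directly, since work is just force times distance:

```python
force = 100e-12   # N  (100 pN of mechanical force, from the text)
distance = 5e-9   # m  (applied over 5 nm)

work_J = force * distance
work_zJ = work_J * 1e21
print(work_zJ)    # 500 zJ
```

For comparison, a typical carbon-carbon bond dissociation energy (about 348 kJ/mol) works out to roughly 580 zJ per bond, so the mechanically supplied energy barrier is indeed of bond-strength magnitude.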
So, it seems that with lots of R&D (but without a whole lot of DNA design), it
might be possible to stick DNA bricks (plus attached molecules) together in
arbitrary patterns, using an AFM. But an AFM is going to be pretty slow. It
would be nice to make the work go faster by doing it in parallel. My
NIAC project suggests a
way to do that.
The plan is to build an array of “towers” or “needles” out of DNA bricks. In the
tip of each one, put a brick-binding cavity. Use an AFM to build the first one
in the middle of a flat surface. Then use that to build a second and third
needle on an opposing surface. (One of the surfaces would be attached to a nano-positioner.)
Use those two towers to build a fourth, fifth, sixth, and seventh on the first
surface. The number of towers could grow exponentially.
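Under the pattern just described (one tower builds two, those two build four, and so on), the tower count grows geometrically. A sketch, with the simplifying assumption that each newly built generation of towers constructs two new towers apiece on the opposing surface:

```python
def total_towers(cycles):
    # Towers on one surface build towers on the opposing surface;
    # assume each tower in the newest generation builds two more.
    active, total = 1, 1
    for _ in range(cycles):
        built = 2 * active
        total += built
        active = built
    return total

print(total_towers(1), total_towers(2), total_towers(20))
# 3 towers, then 7 -- and over two million after twenty cycles
```

The exact branching factor matters less than the exponential form: a modest number of build cycles yields a massively parallel array.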
By the time this is working, there may be molecules available that can act as
fast, independently addressable, nanoscale actuators. Build a mechanism into
each tower that lets it extend or retract – just a nanometer or so. Now, when
the towers are used to build something, the user can select which bricks to
place and which ones to hold back. This means that different towers, all
attached to the same surface and moved by the same nano-positioner, can be doing
different things. Now, instead of an exponential number of identical designs, it
has become possible to build an exponential number of different designs, or to
work on many areas of a large heterogeneous design in parallel.
A cubic micron is not large by human standards, but it is bigger than most
bacteria. There would be about 35,000 DNA bricks in a cubic micron. If a brick
could be placed every fifteen seconds, then it would take a week to build a
cubic micron out of bricks. This seems a little fast for a single AFM that has
to bind bricks from solution, find a position, and then push the brick into
place, even if all steps were fully automated – but it might be doable with a
parallel array (either an array of DNA needles, or a multi-tip AFM). If every
brick were different, it would cost about $50 million for the staples, but of
course not every brick will be different. For 1,000 different bricks, it would
cost only about $1.5 million.
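The build-time and cost figures in this paragraph follow from straightforward arithmetic, reproduced here as a check (all inputs come from the text):

```python
bricks_per_cubic_micron = 35_000
seconds_per_brick = 15

build_days = bricks_per_cubic_micron * seconds_per_brick / 86_400
print(build_days)  # ~6 days: about a week per cubic micron

staple_cost_per_design = 1_500  # dollars of staples per distinct brick
print(35_000 * staple_cost_per_design)  # $52.5M if every brick is unique
print(1_000 * staple_cost_per_design)   # $1.5M for 1,000 distinct designs
```

The week-long build time assumes one brick every fifteen seconds, which is why a parallel array of needles or tips is needed to make the schedule plausible.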
We will want the system to deposit any of a number of brick types in any
location. How to select one of numerous types? The simplest way is to make all
bricks bind to the same tip, then flush them through one type at a time. This is
slow and wasteful. Better to include several tips in one machine, and then flush
through a mixture of bricks that will each bind to only one tip. The best
answer, once really high-function bricks are available and you’re using
DNA-built tips instead of hollowed-out AFM tips, is to make the tips
reconfigurable by using fast internal actuators to present various combinations
of DNA strands for binding of various differently-tagged bricks.
I started by suggesting that a scanning probe microscope be used to build the
first construct. Self-assembly also could be used to build small constructs, if
you can generate enough distinct blocks. But you may not have to build hundreds
of different bricks to make them join in arbitrary patterns. Instead, build
identical bricks, and cap the Velcro strands with a second-level “Velcro
staple.” Start with a generic brick coated with Velcro – optionally, put a
different Velcro sequence on each side. Stir that together with strands that are
complementary to the Velcro at one end and contain a recognition pattern on the
other end. Now, with one generic brick and six custom-made Velcro staples, you
have a brick with a completely unique recognition pattern on each side. Do that
for a number of bricks, and you can make them bind together any way you want.
One possible problem with this is that DNA binding operations usually need to be
“annealed” – heated to a temperature where the DNA falls apart, then cooled
slowly. This implies that the Velcro-staple approach would need three different
temperature ranges: one to form the shapes, one to attach the staples, and one
to let the shapes join together.
The Velcro-staple idea might even be tested today, using only the basic
DNA-shape technology, with one low-cost shape and a few dozen very-low-cost
staples. Plus, of course, whatever analysis tools you’ll need to convince you
that you’re making what you think you’re making.
There is a major issue involved here that I have not yet touched on. Although
the DNA staple technique makes a high percentage of good shapes, it also makes a
lot of broken or incomplete shapes. How can the usable shapes be sorted from the
broken shapes? Incomplete shapes may be sorted out by chromatography. Broken
shapes might possibly be rejected by adding a fluorescence pair and using a cell
sorter to reject shapes that did not fluoresce properly. Another possibility, if
using a scanning probe microscope (as opposed to the “blind” multi-needle
approach), is to detect the overall shape of the brick by deconvolving it against
a known surface feature, and if an unwanted shape is found, heat the tip to make it release the defective shape.
This is just a sketch of some preliminary ideas. But it does go to show that the
new DNA staple technology makes things seem plausible that would not have been
thinkable before it was developed.
Military Implications of Molecular Manufacturing
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
(Originally published in the July 2006 issue of
NanoNews-Now -- reprinted by permission)
This article will survey the technology of molecular manufacturing, the basic capabilities of
its products, some possible weapon systems, some tactical and strategic
considerations, and some possible effects of molecular manufacturing on the
broader context of societies and nations. However, all of this discussion must
take place in the context of the underlying fact that the effects and outcome of
molecular manufacturing will be almost inconceivable, and certainly not
susceptible to shallow or linear analysis.
Take a minute and try to imagine a modern battlefield without electricity. No radar or
radios; no satellites; no computers; no night vision, or even flashlights; no
airplanes, and few ground vehicles of any kind. Imagination is not sufficient to
generate this picture—it simply doesn't make sense to talk of a modern military without electricity. Molecular
manufacturing will have a similarly profound effect on near-future military operations. Electricity
is a general-purpose energy technology, useful for applications from motors to
data processing. A few inventions, ramified and combined—the storage battery,
transistor, electromagnet, and a few others—are powerful enough to be necessary
components of almost all modern military equipment and activities.
If it is
impossible to conceive of a modern military without electricity—a technology
that exists, and the use of which we can study—it will be even less feasible to
try to imagine a military with molecular manufacturing.
Molecular manufacturing will be the world's first general-purpose manufacturing
technology. Its products will be many times more plentiful, more intricate, and
higher performance than any existing product. They will be built faster and less
expensively, speeding research and development. They will cover a far greater
range of size, energy, and distance than today's weapons systems. As
increasingly powerful weapons make the battlefield untenable for human soldiers,
computers vastly more powerful and compact than today's will enable far higher
degrees of automation and remote operation. Kilogram-scale manufacturing
systems, building directly from the latest blueprints in minutes, will utterly
transform supply, logistics, and deployment.
Radioactivity and X-rays were discovered within months of each other. Within a few years, X-rays
had inspired stories about military uses of “death rays.” Decades later, Madame
Curie gave speeches on the wonderful anti-cancer properties of radium. It would
have been difficult or impossible to predict that a few decades after that,
X-rays would be a ubiquitous medical technology, and nuclear radiation would be
the basis of the world's most horrific weapons. While reading the rest of this
article, keep in mind that the implications of various molecular manufacturing
products and capabilities will be at least as unpredictable and far-reaching.
Technical Basis of Molecular Manufacturing
At its foundation, molecular manufacturing works by doing a few precise fabrication
operations, very rapidly, at the molecular level, under computer control. It can
thus be viewed as a combination of mechanical engineering and chemistry, with
some additional help from rapid prototyping, automated assembly, and related
fields of research.
Atoms and inter-atomic bonds are completely precise: every atom of a type is identical to
every other, and there are only a few types. Barring an identifiable error in
fabrication, two molecules manufactured according to the same blueprint will be
identical in structure and shape (with transient variations of predictable scale
due to thermal noise and other known physical effects). This consistency will
allow fully automated fabrication. Computer controlled addition of molecular
fragments, creating a few well-characterized bond types in a multitude of
selected locations, will enable a vast range of components to be built with
extremely high reliability. Building with reliable components, higher levels of
structure can retain the same predictability and engineerability.
A fundamental “scaling law” of physics is that small systems operate faster than
large systems. Moving at moderate speed over tiny distances, a nanoscale
fabrication system could perform many millions of operations per second,
creating products of its own mass and complexity in hours or even minutes. Along
with faster operation comes higher power density, again proportional to the
shrinkage: nanoscale machines might be a million times more compact than today's
technology. Computers would shrink even more profoundly, and non-electronic
computing technologies already analyzed could dissipate little enough power to
make the shrinkage feasible. Although individual nanoscale machines would have small
capacity, massive arrays could work together; it appears that gram-scale
computer and motor systems, and ton-scale manufacturing systems, preserving
nanoscale performance levels, can be built without running afoul of scaling laws
or other architectural constraints including cooling. Thus, products will be
buildable in a wide range of sizes.
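The claim that a nanoscale fabricator could copy itself in hours or minutes follows from simple arithmetic. The sketch below uses assumed order-of-magnitude figures; only the operation rate ("many millions of operations per second") comes from the text.

```python
# Order-of-magnitude estimate of self-replication time for a nanoscale
# fabricator. All figures are illustrative assumptions.

atoms_per_device = 1e9        # assumed atom count of the fabricator
atoms_per_operation = 2       # assumed atoms placed per deposition
operations_per_second = 1e6   # conservative end of "many millions"

seconds_to_copy = atoms_per_device / (atoms_per_operation * operations_per_second)
# A few hundred seconds under these assumptions -- minutes, not days.
```

Faster deposition rates or fewer atoms per device shorten the time proportionally, which is why the text's "hours or even minutes" is robust to the exact numbers.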
A complete list of advantages and capabilities of molecularly manufactured products, much
less an analysis of the physical basis of the advantages, would be beyond the
scope of this paper. But several additional advantages should be noted.
Precisely fabricated covalent materials will be much stronger than materials
formed by today's imprecise manufacturing processes. Precise, well-designed,
covalently structured bearings should suffer neither from wear nor from static
friction (stiction). Carbon can be an excellent conductor, an excellent
insulator, or a semiconductor, allowing a wide range of electrical and
electronic devices to be built in-place by a molecular manufacturing system.
Development of Molecular Manufacturing
Although its capabilities will be far-reaching, the development of molecular manufacturing
may require a surprisingly small effort. A finite, and possibly small, number of
deposition reactions may suffice to build molecular structures with programmable
shape—and therefore, diverse and engineerable function. High-level architectures
for integrated kilogram-scale arrays of nanoscale manufacturing systems have
already been worked out in some detail. Current-day tools are already able to
remove and deposit atoms from selected locations in covalent solids. Engineering
of protein and other biopolymers is another pathway to molecularly precise
fabrication of engineered nanosystem components. Analysis tools, both physical
and theoretical, are developing rapidly.
As a general
rule, nanoscale research and development capabilities are advancing in
proportion to Moore's Law—even faster in some cases. Conceptual barriers to
developing molecular manufacturing systems are also falling rapidly. It seems
likely that within a few years, a program to develop a nanofactory will be
launched; some observers believe that one or more covert programs may already
have been launched. It also seems likely that, within a few years of the first
success, the cost of developing an independent capability will have dropped to
the point where relatively small groups can tackle the project. Without
stringent and widespread restrictions on technology, it most likely will not be
possible to prevent the development of multiple molecular manufacturing systems
with diverse owners.
Products of Molecular Manufacturing
All of the exploratory engineering in the field to date has pointed to the same set of
conclusions about molecular manufacturing-built products:
Manufacturing systems can build more manufacturing systems.
Nanoscale products can be extremely compact.
Human-scale products can be extremely inexpensive and lightweight.
Large products can be astonishingly powerful.
If a self-contained manufacturing system can be its own product, then manufacturing
systems can be inexpensive—even non-scarce. Product cost can approach the cost
of the feedstock and energy required to make it (plus licensing and regulatory
overhead). Although molecular manufacturing systems will be extremely portable,
most products will not include one—it will be more efficient to manufacture at a
dedicated facility with installed feedstock, energy, and cooling supplies.
The feature size of nanosystems will probably be about 1 nanometer (nm), implying a million
features in a bacteria-sized object, a billion features per cubic micron, or a
trillion features in the volume of a ten-micron human cell. A million features
is enough to implement a simple CPU, along with sensors, actuators, power
supply, and supporting structure. Thus, the smallest robots may be
bacteria-sized, with all the scaling law advantages that implies, and a medical
system (or weapon system based thereon) could be able to interact with cells and
even sub-cellular structures on an equal footing. (See
Nanomedicine Vol. I:
Basic Capabilities for further exploration.)
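The feature-count figures can be checked directly; the arithmetic below uses only the ~1 nm feature size stated above.

```python
# Feature counts implied by ~1 nm features.

feature_nm = 1.0
nm_per_um = 1000.0

features_per_cubic_micron = (nm_per_um / feature_nm) ** 3   # 1e9
cell_volume_um3 = 10.0 ** 3        # a 10-micron human cell, ~1000 um^3
features_per_cell = features_per_cubic_micron * cell_volume_um3  # 1e12
```

So a billion features per cubic micron and a trillion in a ten-micron cell, matching the figures in the text.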
As a general
rule of thumb, human-scale products may be expected to be 100-1000 times lighter
than today's versions. Covalent carbon-based materials such as buckytubes should
be at least 100 times stronger than steel, and materials could be used more
efficiently with more elegant construction techniques. Active components will
shrink even more. (Of course, inconveniently light products could be ballasted.)
Large nanofactories could build very large products, from spacecraft to particle
accelerators. Large products, like smaller ones, could benefit from stronger
materials and from active systems that are quite compact. Nanofactories should
scale to at least ton-per-hour production rates for integrated products, though
this might require massive cooling capacity depending on the sophistication of
the nanofactory design.
Possible Weapons Systems
Some systems may not be actual weapons, but computer platforms for sensing and
surveillance. Such platforms could be micron-scale. The power requirement of a
1-MIPS computer might be on the order of 10-100 pW; at that rate, a cubic micron
of fuel might last for 100-1000 seconds. The computer itself would occupy
approximately one cubic micron.
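The endurance figure is consistent with ordinary chemical fuels. A rough check, assuming a hydrocarbon-like energy density (the fuel properties are my assumption, not the essay's):

```python
# Endurance of a micron-scale sensor platform on onboard fuel.
# Energy density is an assumed hydrocarbon-like value.

fuel_energy_density_j_per_m3 = 4e10   # ~40 MJ/kg at ~1000 kg/m^3 (assumed)
cubic_micron_m3 = 1e-18

fuel_energy_j = fuel_energy_density_j_per_m3 * cubic_micron_m3  # 4e-8 J

runtime_at_100pw = fuel_energy_j / 100e-12   # 400 s
runtime_at_10pw = fuel_energy_j / 10e-12     # 4000 s
```

At the 10-100 pW draw quoted above, a cubic micron of such fuel lasts on the order of hundreds to thousands of seconds, in line with the text's 100-1000 second estimate.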
Small devices could deliver fatal quantities of toxins to unprotected humans. The
smallest ballistic projectiles (bullets) could contain supercomputers, sensors,
and avionics sufficient to guide them to targets with great accuracy. Flying
devices could be quite small. It should be noted that small devices could
benefit from a process of automated design tuning; milligram-scale devices could
be built by the millions, with slight variations in each design, and the best
designs used as the basis for the next “generation” of improvements; this could
enable, for example, UAVs in the laminar regime to be developed without a full
understanding of the relevant physics. The possibility of rapid design is far
more general than this, and will be discussed below.
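The generation-based tuning described above is essentially an evolutionary search: build many variants, score them, and seed the next generation from the best. A toy sketch follows; the fitness function is a stand-in for measured flight performance, and all parameters are invented.

```python
import random

# Toy evolutionary design loop: variants are lists of design parameters;
# the "fitness" here is a hypothetical merit function standing in for
# physical testing of milligram-scale devices.

def fitness(design):
    # Assumed merit function: best when every parameter equals 0.5.
    return -sum((x - 0.5) ** 2 for x in design)

def next_generation(population, keep=10, size=100, noise=0.05):
    """Keep the best designs, then fill the next generation with
    slightly varied copies of the survivors."""
    survivors = sorted(population, key=fitness, reverse=True)[:keep]
    return [
        [x + random.gauss(0, noise) for x in random.choice(survivors)]
        for _ in range(size)
    ]

random.seed(0)
population = [[random.random() for _ in range(4)] for _ in range(100)]
for _ in range(20):
    population = next_generation(population)
best = max(population, key=fitness)
```

No model of the underlying physics appears anywhere in the loop; selection alone drives the designs toward the optimum, which is the point the text makes about developing laminar-regime UAVs without a full aerodynamic theory.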
The distinctions between bullets, missiles, aircraft, and spacecraft would blur. With lightweight
motors and inexpensive manufacturing, a vehicle could contain a number of
different disposable propulsion systems for different flight regimes. A
“briefcase to orbit” system appears feasible, though such a small device might
have to fly slowly to conserve fuel until it reached the upper atmosphere. It
might be feasible to use 1 kg of airframe (largely discarded) and 20 kg of fuel
(not counting oxidizer) to place 1 kg into orbit; some of the fuel would be used
to gather and liquefy oxygen in the upper atmosphere for the rocket portion of
its flight. (Engineering studies have not yet been done for such a device, and
it might require somewhat more fuel than stated here.)
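The mass budget can be sanity-checked with the Tsiolkovsky rocket equation. The delta-v and comparison figures below are standard reference values, not numbers from the essay.

```python
import math

# Tsiolkovsky check of the 1 kg airframe + 20 kg fuel -> 1 kg payload budget.

delta_v = 9000.0          # m/s, rough total delta-v to low Earth orbit
dry_mass = 1.0 + 1.0      # kg: payload plus airframe (treated as carried up)
wet_mass = dry_mass + 20.0

mass_ratio = wet_mass / dry_mass               # 11
required_ve = delta_v / math.log(mass_ratio)   # ~3750 m/s effective exhaust velocity
# Comparable to the best chemical propulsion (~4400 m/s for LOX/LH2),
# so the budget is aggressive but not physically absurd.
```

Discarding the airframe en route, as the text suggests, would relax the required exhaust velocity further.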
Nanoscale construction could produce novel energy-absorbing materials involving
high-friction mechanical slippage under high stress via micro- or nano-scale
mechanical components. In effect, every molecule would be a shock absorber, and
the material could probably absorb mechanical energy until it was destroyed by the accumulated heat.
New kinds of
weapons might be developed more quickly with rapid inexpensive fabrication. Many
classes of device will be buildable monolithically. For example, a new type of
aircraft or even spacecraft might be tested an order of magnitude more rapidly
and inexpensively, reducing the cost of failure and allowing further
acceleration in schedule and more aggressive experimentation. Although materials
and molecular structures would not encompass today's full range of manufactured
substances, they could encompass many of the properties of those substances,
especially mechanical and electrical properties. This may enable construction of
weapons such as battlefield lasers, rail guns, and even more exotic devices.
Passive armor certainly could not stop attacks from a rapid series of impacts by
precisely targeted projectiles. However, armor could get a lot smarter,
detecting incoming attacks and rapidly shifting to interpose material at the
right point. There may be a continuum from self-reconfiguring armor, to armor
that detaches parts of itself to hurl in the path of incoming attacks, to armor
that consists of a detached cloud of semi-independent counterweapons.
A new class
of weapon for wide-area destruction is kinetic impact from space. Small
impactors would be slowed by the atmosphere, but medium-to-large asteroids,
redirected onto a collision course, could destroy many square miles. The attack
would be detectable far in advance, but asteroid deflection and destruction
technology is not sufficiently advanced at this time to say whether a defender
with comparable space capacity could avoid being struck, especially if the
asteroid was defended by the attacker. Another class of space impactor is
lightweight solar sails accelerated to a respectable fraction of light speed by
passage near the sun. These could require massive amounts of inert shielding to
stop; it is not clear whether the atmosphere would perform this function.
A hypothetical device often associated with molecular manufacturing is a small,
uncontrolled, exponentially self-replicating system. However, a self-replicating
system would not make a very good weapon. In popular conception, such a system
could be built to use a wide range of feedstocks, deriving energy from oxidizing
part of the material (or from ambient light), and converting the rest into
duplicate systems. In practice, such flexibility would be quite difficult to
achieve; however, a system using a few readily available chemicals and bypassing
the rest might be able to replicate itself—though even the simplest such system
would be extremely difficult to design. Although unrestrained replication of
inorganic systems poses a theoretical risk of widespread biosphere destruction
through competition for resources—the so-called “grey goo” threat—it seems
unlikely that anyone would bother to develop grey goo as a weapon, even a
doomsday deterrent. It would be more difficult to guide than a biological
weapon. It would be slower than a device designed simply to disrupt the physical
structure of its target. And it would be susceptible to detection and cleanup by
the defender. A complete analysis of attack and defense is impossible at this point. It is not known
whether sensor systems will be able to effectively detect and repel an
encroachment by small, stealthy robotic systems; it should be noted that the
smallest such systems might be smaller than a wavelength of visible light,
making detection at a distance problematic. It is unknown whether armor will be
able to stop the variety of penetrating objects and forces that could be
directed at it. Semi-automated R&D may or may not produce new designs so quickly
that the side with the better software will have an overwhelming advantage. The
energy cost of construction has only been roughly estimated, and is uncertain
within at least two orders of magnitude; active systems, including airframes for
nano-built weapons, will probably be cost-effective in any case, but passive or
static systems including armor may or may not be worth deploying.
A few conclusions appear relatively certain. Unprotected humans, whether civilian or soldier, will
be utterly vulnerable to nano-built weapons. In a scenario of interpenetrating
forces, where a widespread physical perimeter cannot be established, humans on
both sides can be killed at will unless protected at great expense and
inconvenience. Even relatively primitive weapons such as hummingbird-sized
flying guns with human target recognition and poisoned bullets could make an
area unsurvivable without countermeasures; the weight of each gun platform would
be well under one gram. Given the potential for both remote and semi-autonomous
operation of advanced robotics and weapons, a force with a developed molecular
manufacturing capability should have no need to field soldiers; this implies
that battlefield death rates will be low to zero for such forces.
One concern commonly raised in discussions of nanotech weapons is the creation of new
diseases. Molecular manufacturing seems likely to reduce the danger of this.
Diseases act slowly and spread slowly. A sufficiently capable bio-sensor and
diagnostic infrastructure should allow a very effective and responsive
quarantine to be implemented. Rapid testing of newly manufactured treatment
methods, perhaps combined with metabolism-slowing techniques to allow additional
R&D time, could minimize disease even in infected persons.
Despite the amazing power and flexibility of molecular manufactured devices, a lesson from
World War I should not be forgotten: Dirt makes a surprisingly effective shield.
It is possible that a worthwhile defensive tactic would be to embed an item to
be protected deeply in earth or water. Without active defenses, which would also
be hampered by the embedding material, this would be at best a delaying tactic.
Information is likely to be a key determiner of military success. If, as seems likely,
unexpected offense with unexpected weapons can overwhelm defense, then rapid
detection and analysis of an attacker's weapons will be very important.
Information-gathering systems are likely to survive more by stealth than by
force, leading to a “spy vs. spy” game. To the extent that this involves
destruction of opposing spy-bots, it is similar to the problem of defending
against small-scale weapons. Note that except for the very smallest systems, the
high functional density of molecularly constructed devices will frequently allow
both weapon and sensor technology to be piggybacked on platforms primarily
intended for other purposes.
It seems likely that, with the possible exception of a few small, fiercely defended
volumes, a robotic battleground would consist of interpenetrated forces rather
than defensive lines (or defensive walls). This implies that any non-active
matter could be destroyed with little difficulty unless shielded heavily enough
to outlast the battle.
As noted above, a major strategy is to avoid putting soldiers on the battlefield via the
use of autonomous or remotely operated weapons. Unfortunately, this implies that
an enemy wanting to damage human resources will have to attack either civilian
populations or people in leadership positions. To further darken the picture,
civilian populations will be almost impossible to protect from a determined
attack without maintaining a near-hermetic seal around their physical location.
Since the resource cost of such a shield increases as the shield grows (and the
vulnerability and unreliability probably also increase), this implies that
civilians under threat will face severe physical restrictions from their own
defenders. A substantial variety of attack mechanisms will be available, including kinetic
impact, cutting, sonic shock and pressure, plasma, electromagnetic beam,
electromagnetic jamming and EMP, heat, toxic or destructive chemicals, and
perhaps more exotic technologies such as particle beam and relativistic
projectile. A variety of defensive techniques will be available, including
camouflage, small size, physical avoidance of attack, interposing of sacrificial
mass, jamming or hacking of enemy sensors and computers, and preemptive strike.
Many of these offensive and defensive techniques will be available to devices
across a wide range of sizes. As explored above, development of new weapon
systems may be quite rapid, especially if automated or semi-automated design is
used. In addition to the variety of physical modes of attack and defense, the cyber sphere will
become an increasingly important and complex battleground, as weapon systems
increasingly depend on networking and computer control. It remains to be seen
whether a major electronic security breach might destroy one side's military
capacity, but with increasing system complexity, such an occurrence cannot be
ruled out. Depending on what is being defended, it may or may not be possible to prepare an efficient
defense for all possible modes of attack. If defense is not possible, then the
available choices would seem to be either preemptive strike or avoidance of
conflict. Defense of civilians, as stated above, is likely to be difficult to
impossible. Conflict may be avoided by deterrence only in certain cases—where
the opponent has a comparable amount to lose. In asymmetric situations, where
opponents may have very different resources and may value them very differently,
deterrence may not work at all. Conflict may also be avoided by reducing the
sources of tension. Military
activity does not take place in isolation. It is frequently motivated by
non-military politics, though warlords can fight simply to improve their
military position. Molecular manufacturing will be able to revolutionize
economic infrastructures, creating material abundance and security that may
reduce the desire for war—if it is distributed wisely.
It seems likely that an all-out war between molecular manufacturing powers would be highly
destructive of humans and of natural resources; the objects of protection would
be destroyed long before the war-fighting ability of the enemy. In contrast, a
war between molecular manufacturing and a conventionally armed power would
probably be rapid and decisive. The same is true against a nuclear power that
was prevented from using its nuclear weapons, either by politics or by
anti-missile technologies. Even if nuclear weapons were used, the
decentralization allowed by self-contained exponentially manufacturing
nanofactories would allow survival, continued prosecution of the war, and rapid
rebuilding. The line between policing and military action is increasingly blurred. Civilians are
becoming very effective at attacking soldiers. Meanwhile, soldiers are
increasingly expected to treat civilians under occupation as citizens (albeit
second-class citizens) rather than enemy. At least in the US, paramilitary
organizations (both governmental and commercial) are being deployed in internal
civilian settings, such as the use of SWAT teams in some crime situations, and
Blackwater in post-Katrina New Orleans.
Many molecular manufactured weapon systems will be usable for policing. Several
factors will make the systems desirable for police activity: a wide range of
weapon effects and intensities to choose from; less risk to police as
telepresence is employed; maintaining parity with increasingly armed criminals;
and increased deterrence due to increased information-gathering and
surveillance. This means that even without military conflict, a variety of
military-type systems will be not only developed, but also deployed and used.
It is tempting to think that the absence of nuclear war after six decades of nuclear
weapons implies that we know how to handle insanely destructive weapons.
However, a number of factors will make molecular manufacturing arms races less
stable than the nuclear arms race—and it should be remembered that on several
different occasions, a single fortuitous person or event has prevented a nuclear
attack. Nuclear weapons are hard to design, hard to build, require easily
monitored testing, do indiscriminate and lasting damage, do not rapidly become
obsolete, have almost no peaceful use, and are universally abhorred. Molecular
manufactured weapons will be easy to build, will in many cases allow easily
concealable testing, will be relatively easy to control and deactivate, and
would become obsolete very rapidly; almost every design is dual-use, and
peaceful and non-lethal (police) use will be common. Nuclear weapons are easier
to stockpile than to use; molecular manufactured weapons will be the opposite.
Interpenetrating arrays of multi-scale complex weapons cannot be stable for
long. Sooner or later, and probably sooner, a perceived attack will be answered
by an actual attack. Whether this mushrooms out of control into a full-scale
conflict will depend on the programming of the weapon systems. As long as it is
only inanimate hardware at stake, probing attacks and small-scale accidental
attacks may be tolerated.
Given the amount of damage that a hostile power armed with molecular manufacturing
products could do to the civilian sector, it seems likely that hostile actors
will be tolerated only as a last resort, and even apparently non-hostile but
untrustworthy actors will be highly undesirable. As mentioned above, an
asymmetry in values may prevent deterrence from working. An asymmetry in force,
such as between a molecular manufacturing and a pre-MM power, may tempt a
preemptive strike to prevent molecular manufacturing proliferation. Likewise, a
substantial but decreasing lead in military capability may lead to a preemptive
strike. It is unclear whether in general a well-planned surprise attack would
lead to rapid and/or inexpensive victory; this may not become clear until
offensive and defensive systems are actually developed.
The most stable situation appears to be that in which a single power deploys sufficient sensors
and weapons to prevent any other power from developing molecular manufacturing.
This would probably require substantial oppression of civilians and crippling of
industrial and scientific capacity. The government in power would have
near-absolute control, being threatened only by internal factors; near-absolute
power, combined with an ongoing need for oppression, would likely lead to abuse.
Advance recognition of the dangers of arms race, preemptive strike, and war might
inspire widespread desire to avoid such an outcome. This would require an
unprecedented degree of trust and accountability, worldwide. Current government
paradigms are probably not compatible with allowing foreign powers such intimate
access to their secrets; however, in the absence of this degree of openness,
spying and hostile inspections will only raise tension and reduce trust. One
possible solution is for governments to allow their own citizens to observe
them, and then allow the information gained by such distributed and
non-combative (and thus presumably more trustworthy) observation to be made
available to foreign powers.
Molecular manufacturing will introduce a wide diversity of new weapon systems and modes of
warfighting. In the absence of actual systems to test, it is difficult if not
impossible to know key facts about offensive and defensive capability, and how
the balance between offense and defense may change over time. Incentives for
devastating war are unknown, but potentially large—the current geopolitical
context may favor a strategy of preemptive strike.
Detailed information about molecular manufacturing's capabilities will probably be
lacking until a nanofactory is developed. At that point, once an exponential
manufacturing capacity exists that can make virtually unlimited quantities of
high-performance products, sudden development of unfamiliar and powerful weapon
systems appears likely. It is impossible, from today's knowledge, to predict
what a molecular manufacturing-enabled war will be like—but it is possible to
predict that it would be most destructive to our most precious resources.
Given these facts and observations, an immediate and urgent search for
alternatives to arms races and armed conflict is imperative.
Inapplicable Intuitions
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
Experts in a
field develop intuitions about the way things work. For example, a biochemist
will develop intuitions about the complexity of interactions between
biomolecules. When faced with a new idea, a scientist will first evaluate it in
light of existing intuitions.
Interest in molecular manufacturing is rapidly growing, and many scientists may be
encountering the ideas for the first time. Because molecular manufacturing cuts
across a number of fields -- physics, chemistry, mechanical engineering,
software, and more -- and because it uses a rather novel approach to building
stuff, almost any scientist will find something in the proposal that violates
one or more intuitions. It is worth examining some of these intuitions. Notice
that each intuition is true, although in a limited context, and molecular
manufacturing avoids that context.
In addition to personally developed intuitions, scientists new to molecular manufacturing
may run across objections formerly raised by others in different fields. In
general, these objections were the result of similarly misplaced intuitions. The
intent here is not to re-fight old battles, but simply to explain what the
battles were about.
Here in a
nutshell is the molecular manufacturing plan: Build a system that does a billion
chemical reactions, one after the other, on the same molecule, with very high
reliability, to make perfect molecular products. The system does chemical
reactions by holding molecules and moving them into place through a vacuum, to
transfer atoms to the product, adding a few atoms at a time to build molecular
shapes. Use that system to build nanoscale machine components, and assemble the
components into nanoscale machines. Control a bunch of these machines to build
more machine components, one deposition at a time; then combine those machine
components into large products. This will need huge numbers of machines, arrayed
in a factory. Use an initial small factory to make another bigger factory,
repeating enough times to grow to kilogram scale. Use the resulting big factory
to make products from downloaded blueprints.
As we will
see, nearly every phrase in this description may evoke skepticism from someone;
however, all of these objections, and many others, have been addressed. The
technical foundation for the modern approach to molecular manufacturing was laid
with the 1992 publication of Nanosystems. After so
many years, any objection that comes readily to mind has probably been thought
of before. We encourage those who are just encountering the ideas of MM to work
through the initial skepticism and misunderstanding that comes from
unfamiliarity, recognizing that a large number of scientists have been unable to
identify any showstoppers. Although the theory has not yet reached the point of
being proved by the existence of a nanofactory, it has reached the point where a
conversation that assumes most of it is correct will be more productive than a
conversation that assumes it's fatally flawed.
What follows is an imagined conversation between an MM researcher (MMer) and a room full of
scientists who are new to the ideas.
OK, we're going to build a system that does a billion chemical reactions, one
after the other, on the same molecule, with very high reliability.
Chemist:
Wait a minute. 99% is an excellent yield, but 99% times 99% times 99%... a
billion times is a big fat ZERO. You would reliably get zero molecules of product.
A chemist is used to reactions between molecules that bump into each other
randomly. In molecular manufacturing, the molecules would be held in place,
and only allowed to react at chosen locations. Yield could be many "nines"
better than 99%.
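The chemist's arithmetic is easy to check. A quick sketch (Python, using the numbers from the exchange above) shows both why 99% per-step yield fails and how many "nines" a billion-step process actually needs:

```python
import math

# Sequential yield compounds multiplicatively: after a billion steps at
# 99% per-step yield, essentially nothing survives.
steps = 1_000_000_000
overall = 0.99 ** steps
print(overall)                      # underflows to 0.0

# To keep the overall yield at 99% across a billion steps, each step
# must succeed with probability 0.99**(1/steps) -- about eleven nines.
per_step = 0.99 ** (1 / steps)
allowed_error = -math.log(0.99) / steps
print(per_step)                     # ~0.99999999999
print(allowed_error)                # ~1.005e-11 allowed errors per operation
```

The allowed per-step error rate, about one in a hundred billion, is exactly the regime the "many nines" answer refers to.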
So we take a system that does chemical reactions by holding molecules and
moving them into place through a vacuum...
Wait. You're going to hold the molecules in a vacuum and make them react as
you want? Chemistry's more complex than that; you need more control, and you
may even need water to help out with really complex reactions.
Yes, chemistry is complex when you have lots of potentially reactive molecules
bumping around. But if the motion of the molecules is constrained, then the
set of potential reaction products is also constrained. Also, there are new
kinds of freedom that traditional chemistry doesn't have, including freedom to
select from nearly identical reaction sites, and freedom to keep very reactive
molecules from touching anything until you're ready. And by the way, even
enzymes evolved for water don't necessarily need water -- this has been known
since the mid-80's.
So we move the molecules into place to transfer atoms...
Atoms are more reactive than that.
MM wouldn't be grabbing individual unbound atoms -- it would transfer
molecular fragments from a "tool" molecule to a "workpiece" molecule, in
reactions that work according to standard chemistry laws.
We add a few atoms at a time to build molecular shapes...
Proteins make molecular shapes, and they are very, very hard to design.
Natural proteins are indeed hard to understand. They have to fold into shape
under the influence of a large number of weak forces. But even with proteins,
desired shapes have been engineered. DNA, another biomolecule, is a lot easier
to design shapes with. And MM plans to build three-dimensional shapes
directly, not build long stringy molecules that have to fold up to make a
shape.
Then we're going to use that system to build nanoscale machine components...
Micro-mechanical system researcher:
Wait a minute! We've tried building machine components, and friction kills
them. The smaller you make them, the worse it gets.
The micromachines were built with a fabrication technique that left the
surfaces rough. Friction and wear between rough surfaces are in fact worse as
machines get smaller. But if the surfaces are atomically precise and smooth,
and the atoms are spaced differently on the two surfaces, they can have
extremely low friction and wear. This has been verified experimentally with
nested carbon nanotubes and with graphite sheets; it's called superlubricity.
Assemble the components into nanoscale machines...
Why not use machines inspired by nature? Biology does a great job and has lots
of designs we could adapt.
This isn't an argument against the feasibility of MM. If biology-based designs
work even better than mechanical designs and are more convenient to develop,
then MM could use them. The main advantage of biology is that a technical
toolkit to work with biomolecules has already been developed. However, there
are several fundamental reasons why biomachines, as good as they are, aren't
nearly as good as what MM expects to build. (For example, any machine immersed
in water must move slowly to avoid excessive drag.) And mechanical designs
will almost certainly be easier to understand and engineer than biological ones.
So we take a bunch of these machines and control them...
How can you hope to control them? It's very, very hard to get information to
the nanoscale.
MM intends to build nanoscale data-processing systems as well as machines. And
MM also proposes to build large and multi-scale systems that can get info to
the nanoscale without requiring external nanoscale equipment to do so.
We control the machines to build more machine components, one deposition at a
time...
That'll take forever to build anything!
It would indeed take almost forever for a large scanning probe microscope to
build its own mass of product. But as the size of the tool decreases, the time
required to build its own mass shrinks as the fourth power of the size. Shrink
by 10X, decrease the time by 10,000X. By the time you get down to a
100-nanometer scanning probe microscope, the scaling laws of volume and
operation frequency suggest it should be able to build its own mass in a very
short time.
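The fourth-power scaling claim can be sanity-checked directly: if the time for a tool to build its own mass goes as the fourth power of its linear size, the ratios follow from one line of arithmetic (a minimal sketch; the sizes are the ones quoted above):

```python
# If self-build time scales as the fourth power of linear size,
# shrinking the tool pays off very quickly.
def time_ratio(scale_factor: float) -> float:
    """Relative self-build time after scaling linear size by scale_factor."""
    return scale_factor ** 4

# Shrink by 10X: time drops by 10,000X.
print(time_ratio(0.1))

# From a 1 cm probe down to a 100 nm probe is 1e5 in size,
# hence 1e20 in relative build time.
print(time_ratio(100e-9 / 1e-2))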
Then we'll combine those machine components into large products...
You plan to build large products with nanoscale systems? It'll take billions
of years!
MM won't be using just a few nanoscale systems; it'll be using huge numbers of
them, working together under the control of nanocomputers. Each workstation
will build one tiny sub-part.
So we take huge numbers of machines, arrayed in a factory...
Whoa, how do you plan to put together this factory? Self-assembly isn't nearly
powerful enough.
Use a factory, with robotic component-handling etc., to make a factory. Use a
small factory to make a bigger factory. (The first tiny sub-micron factory
would be made painstakingly in the lab.)
So we take this factory and make another bigger factory...
Wait, how can you have a dumb machine making something more complex than
itself? Only life can do things like that.
The complexity of the manufacturing system is the physical system plus the
software that drives it. The physical manufacturing system need not be
more physically complex than the thing it makes, as long as the software makes
up the difference. And the software can be as complex as human brains can make
it.
We take this big factory and make a product...
Engineer: How are you going to design a product with zillions of parts?
The product will not have zillions of different parts. It will have to
be engineered in a hierarchical approach, with well-characterized re-usable
structures at all levels. Software engineers design computer programs along
these lines; the technique is called "levels of abstraction."
Download a blueprint to the factory to make a product...
The factory would need amazingly advanced software to run zillions of
operations to build zillions of parts.
Just as the product would contain zillions of parts, but only relatively few
distinct parts, so the nanofactory would contain relatively few different
types of machines to be controlled. The blueprint file format could be
designed to be divided into hierarchical patterns and sub-patterns.
Distributing the file fragments to the correct processors, and processing the
instructions to drive the workstations, would be straightforward operations.
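The hierarchical-blueprint idea can be sketched in a few lines of code. The pattern names and counts below are hypothetical, chosen only to show how a handful of distinct pattern types expands into millions of low-level operations:

```python
# Toy blueprint: a pattern is a list of sub-pattern names; "deposit" is
# the single primitive operation. (Hypothetical format, for illustration.)
BLUEPRINT = {
    "block":   ["deposit"] * 8,
    "strut":   ["block"] * 100,
    "panel":   ["strut"] * 100,
    "product": ["panel"] * 100,
}

def count_ops(pattern: str) -> int:
    """Recursively expand a pattern and count primitive depositions."""
    if pattern == "deposit":
        return 1
    return sum(count_ops(sub) for sub in BLUEPRINT[pattern])

# Four distinct pattern types expand to eight million primitive
# operations; each level of the hierarchy sees only its own sub-patterns.
print(count_ops("product"))  # 8 * 100 * 100 * 100 = 8000000
```

Distributing such a file is just a matter of handing each processor the sub-pattern for its own level, which is why the essay calls the operation straightforward.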
And so on.
As you can see, each objection brought by intuition from within a specific field
has an answer that comes from the interdisciplinary approach of molecular
manufacturing theory. We are not, of course, asking anyone to take it on faith
that molecular manufacturing will work as planned. We are only asking newcomers
to the ideas to refrain from snap judgments that it can't work for some
apparently obvious reason.
History of the Nanofactory Concept
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
When CRN talks about molecular
manufacturing, we usually focus on one particular implementation: a
nanofactory. A nanofactory is basically a box with a whole lot of molecular
manufacturing machines inside; feedstock and energy go in, and products come
out. But why do we focus on nanofactories? Where did the idea come from? I'll
tackle the second question first.
Richard Feynman is often credited as a founder of nanotechnology, though the
word would not exist until decades after his now famous talk, “There's
Plenty of Room at the Bottom,” in 1959. In that talk, Feynman proposed that
machines could build smaller machines until the smallest of them was working
with atomic precision, and indeed “maneuvering things atom by atom.” Materials
could be built under direct control: “Put the atoms down where the chemist says,
and so you make the substance.” Along the way to this goal, he said, “I want to
build a billion tiny factories, models of each other, which are manufacturing
simultaneously...” However, these factories would have been on the border
between microtech and nanotech, with individual machines larger than 100
nanometers. Atom manipulation would come “ultimately---in the great future.”
In the 1980's, Eric Drexler introduced most of the ideas of molecular
manufacturing (then called simply “nanotechnology”). However, instead of using
machines to make smaller machines, Drexler's plan started directly with
molecules engineered to have mechanical functionality. Build a number of
intricate molecules, he said, join them together into a programmable robotic
system, and that system could be used to perform more molecule-building and
assembly.
Both Feynman and Drexler recognized that small machines can't do much
individually. Feynman planned to have his manufacturing process make multiple
copies of each tiny machine in parallel, growing the number exponentially with
each stage of shrinkage. Drexler, starting from nanoscale machines, planned to
design his machine so that it could build a complete duplicate. The first
machine would build two, then they would build four, then eight, and so on. This
is actually an easier problem in many ways than designing a factory to build
smaller machines than those in the factory.
Drexler was working from a biological model, in which cells build more cells.
Rather than designing a factory, Drexler pictured vast numbers of
self-contained, independent robotic fabrication systems. The systems,
“assemblers,” were intended to cooperate to build large products. In his 1986
Engines of Creation, Drexler described a vat of assemblers, floating in
fluid, building a rocket engine.
By 1992, when he published Nanosystems, Drexler's
plans had evolved somewhat. Instead of vast quantities of free-floating
assemblers, each with its own manufacturing system, control system, power
system, shell, and chemical input system, he planned to fasten down vast numbers
of manufacturing devices into a framework. Instead of cooperating to attach
molecules to an external product, each manufacturing workstation would build a
tiny fragment of the product. These fragments would then be combined into larger
and larger components, using a system much like a tree of assembly lines feeding
larger assembly lines.
Drexler's nanofactory proposal in Nanosystems was to be refined several times.
In Drexler's proposal, the assembly lines occupied a three-dimensional branching
structure. This structure is more complex
than it looks, because some of the smaller lines must be bent aside in order
to avoid the larger ones. In Ralph Merkle's 1997 refinement, the assembly
lines occupied a simpler stacked
configuration. The price of this is constraining the allowable dimensions of
sub-parts. Essentially, Merkle's system works best if the product is easily
divisible into cubes and sub-cubes.
In my 2003 paper “Design
of a Primitive Nanofactory”, I continued to use a convergent assembly
approach, accepting the limitations of dividing a product into sub-cubes.
Another limitation that should be noted with convergent assembly is that the
product must be small enough to fit in the assembly line: significantly smaller
than the factory. The paper includes an entire chapter on product design, much
of which is guided by the problems inherent in building diverse products out of
small dense rigid multi-scale cubes. Basically, the plan was to build the
product folded up, and then unfold it after completion and removal from the
nanofactory. My design, as well as Drexler's and Merkle's, required large
internal factory volumes for handling the product in various stages of
assembly.
A few months after my Primitive Nanofactory paper was published, John Burch and
Eric Drexler unveiled their newest nanofactory concept. Instead of many levels
of converging assembly lines, the Burch/Drexler factory design deposits tiny
blocks directly onto a planar surface of a product under construction. Although
this requires many thousands of deposition operations at each position to build
each centimeter of product, the process is not actually slow, because the
smaller the blocks are, the faster each one can be placed. (Although the
physical layout of my nanofactory is now obsolete, most of the calculations in
my paper are still useful.)
Instead of requiring the product to be divisible into sub-cubes at numerous size
scales, the Burch/Drexler architecture requires only that the product be made of
aggregated tiny components—which would be necessary in any case for anything
constructed by molecular manufacturing workstations. Instead of requiring a
large internal volume for product handling, the factory only needs enough
internal volume to handle the tiny components; the growing product can be
attached to an external surface of the factory.
Focus on the Factory
So, that is how the nanofactory concept has evolved. Why does CRN use it as the
basis for talking about molecular manufacturing? The answer is that a
nanofactory will be a general-purpose manufacturing technology. Although it
could not build every product that could possibly be built by molecular
manufacturing, it will be able to build a very wide range of very powerful
products. At the same time, a personal nanofactory would be perhaps the most
user-friendly way to package molecular manufacturing. Technologies that are
user-friendly, assuming they are adequate, tend to be used more widely than more
powerful but less convenient alternatives. Although there may come a time when
computer-aided design processes run into the limits of the nanofactory approach,
it seems unlikely that humans using current design techniques would be able even
to fully map, let alone explore, the range of possible designs.
A nanofactory is easy to conceptualize. At the highest level, it's a
computer-controlled box that makes stuff, sort of like a 3D inkjet printer. Add
in a couple of key facts, and its importance becomes clear:
- It can make more nanofactories.
- Its products will be extremely powerful.
- Rapid programmable manufacture implies rapid prototyping
and rapid design.
It is difficult to see how “diamondoid mechanosynthesis of
multi-scale nanosystem-based products” can revolutionize the world. It is much
easier to imagine a nanofactory being flown in to a disaster area, used to
produce more nanofactories and feedstock factories, and then all of them
producing water filters, tents, and whatever else is needed, in any quantity
desired—within just a few days.
Nanotechnology today is largely the province of the laboratory, where most
people cannot participate. But a personal nanofactory could be made easy enough
for untrained people to use, even to the point of making new product designs.
This advantage comes with a cost: the simpler the design software, the more
limited the range of products. But molecularly constructed products will be so
intricate and high-performance that a certain amount of tradeoff will be quite
acceptable for most applications. If a design has an array of a thousand tiny
motors where one hundred would suffice, that probably would not even be
noticed.
A final advantage of conceptualizing the means of production as a human-scale
box is that it helps to separate the production system from the product. In the
pre-nanofactory days of molecular manufacturing discussion, when tiny assemblers
were the presumed manufacturing system, a lot of people came to assume that
every product would include assemblers—and thus be prone to a variety of risks,
such as making more of itself without limit. The nanofactory concept makes it
much clearer that products of molecular manufacturing will not have any spooky
self-replicating attributes, and the manufacturing apparatus itself—the
nanofactory—may be about as dangerous as a printer.
Types of Nanotechnology
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
Now that nanotechnology has been in the public eye for twenty years, and
well-funded for half a decade, it's worth a quick look at just what it is—and
how it got that way.
When the word “nanotechnology” was introduced to the public by Eric Drexler's
1986 book Engines of Creation, it meant something very specific: small
precise machines built out of molecules, which could build more molecular
machines and products—large, high-performance products. This goal or aspect of
nanotechnology now goes by several names, including molecular nanotechnology,
molecular manufacturing, and productive nanosystems. The reason for this
renaming is that “nanotechnology” has become a broad and inclusive term, but
it's still important to distinguish molecular manufacturing from all the other
types. I'll talk about molecular manufacturing, and why it is unique and
important, after surveying some of the other types of nanotechnology.
With the funding of the U.S. National Nanotechnology Initiative (NNI), there has
been a strong financial incentive to define nanotechnology so that one's own
research counts—but not so broadly that everyone's research counts. There has
been a less focused, but still real, incentive to define the goals of
nanotechnology aggressively, to justify major funding, but not too aggressively,
lest it sound scary or implausible.
With all the different research fields applying the above rules to a wide
variety of research, it is not surprising that there's no single hard-edged
definition of nanotechnology that everyone can agree on. Perhaps the most
commonly quoted definition of nanotechnology is the one
used by the
NNI: “Nanotechnology is the understanding and control of matter at
dimensions of roughly 1 to 100 nanometers, where unique phenomena enable novel
applications.” I don't know how they decided on the size scale; thinking
cynically, it might have had something to do with the fact that computer chips
were just about to gain features smaller than 100 nanometers, so they were
guaranteed at least one early success.
Nanotechnology can be even broader than that. A rough rule of thumb is: if it's
too small to see with an ordinary light microscope, it's likely to be considered
nanotechnology. Without using special physics
tricks, light can't be used to see anything smaller than half a wavelength
of light, which is a few hundred nanometers (I can't be more precise because
light comes in different colors with different wavelengths). Because some optics
technology uses structures smaller than light (such as photonic crystals) to
manipulate light, you will sometimes see optics researchers describe their work
as nanotechnology. However, because these structures tend to be larger than the
official 100-nm cutoff, many nanotechnologists will reject this usage.
Another point of contention is how unique the “unique phenomena enabl[ing] novel
applications” have to be. For example, some nanotechnology simply uses
ordinary materials like clay, in smaller chunks, in fairly ordinary ways.
They can get new material properties; they are using nanoscale materials; they
are studying them with new techniques; but is it really nanotechnology, or is it
just materials science? It might as well be called nanotech, seems to be the
consensus. It's providing early successes for the field and it’s putting “nano”
into consumers' hands in a beneficial, non-threatening way.
Another kind of nanotechnology involves building increasingly large and
intricate molecules. Some of these molecules can be very useful: for example, it
appears possible to combine a cancer-cell-recognizer, a toxic drug, and a
component that shows up in MRI scans, into a single molecule that kills cancer
cells while showing you where they were and leaving the rest of the body
untouched. This is a little bit different from traditional chemistry in that the
chemist isn't trying to create a new molecule with a single function, but rather
to join together several different functions into one connected package.
Some new nanomaterials have genuinely new properties. For example, small mineral
particles can be transparent to visible light, which makes them useful in
sunscreen. Even smaller particles can glow in useful colors, forming more-stable
markers for biomedical research. For related reasons, small particles can be
useful additions to computer circuits, lending their quantum effects to make
smaller and better transistors.
We should talk about semiconductors (computer chips), a major application of
nanotechnology. Feature sizes on mainstream silicon chips are well below 100
nanometers now. This obviously is a great success for nanotechnology (as defined
by the NNI). From one point of view, semiconductor makers are continuing to do
what they have always done: make chips smaller and faster using silicon-based
transistors. From another point of view, as sizes shrink, their task is rapidly
getting harder, and they are inventing new technology every day just to keep up
with expectations. There are more unusual computer-chip designs underway as
well, most of which use nanotechnology of one form or another, from quantum-dot
transistors to sub-wavelength optics (plasmonics) to holographic storage to
buckytube-based mechanical switches.
Which brings us to buckytubes. Buckytubes are remarkable molecules that were
discovered not long ago. They are tiny strips of graphite, rolled up with the
sides fastened together to form a seamless tube. They are very strong, very
stiff, and can be quite long in proportion to their width;
four-centimeter long buckytubes have been reported, which is more than ten
million times the width of the tube. Some buckytubes are world-class conductors
and electron emitters. They may be useful in a wide variety of applications.
And what about those quantum effects? According to the NNI, “At the nanoscale,
the physical, chemical, and biological properties of materials differ in
fundamental and valuable ways from the properties of individual atoms and
molecules or bulk matter.” Materials are of course made up of atoms, which
contain electrons, and it is the interaction of electrons that gives materials
most of their properties. In very small chunks of material, the electrons
interact differently, which can create new material properties. Nanoparticles
can be more chemically active; as mentioned above, they can fluoresce; they can
even participate in weird physics such as quantum computing. But, as the above
overview should make clear, a lot of “nanotechnology” does not make use of
these effects.
Molecular manufacturing (MM) is a fairly mundane branch of nanotech, or it would
be if not for the political controversy that has swirled around it. The idea is
simple: Use nanoscale machines as construction tools, joining molecular
fragments into more machines. Every biological cell contains molecular machines
that do exactly that. There are, however, a few reasons why molecular
manufacturing has been highly controversial.
Much of the controversy stems from the fact that MM proposes to use engineered
devices to build duplicate devices. Although biology can do this, intuition
suggests that such self-duplication requires some special spark of complexity or
something even more numinous: surely a simple engineered machine can't be so
lifelike! This ultimate spark of vitalism is fading as we learn how machinelike
cellular molecules actually are, and as increasingly detailed plans make it
clear that hardware does not have to be very complex in order to make duplicate
hardware. (Even the software doesn't have to be very complex, just intricate and
well-designed. This has been known by computer scientists for many decades, but
the paradigm has taken a while to shift in the wider world.)
There is another problem with self-replication: in some forms, it may be
dangerous. In 1986, Eric Drexler warned that tiny engineered self-replicators
could outcompete natural life, turning the biosphere into boring copies of
themselves: “grey goo.” This formed a cornerstone of Bill Joy's essay “Why The
Future Doesn't Need Us,” which hit just as the NNI was ramping up. No
nanoscientist wanted to be associated with a poorly-understood technology that
might destroy the world, and the easiest thing was to assert that MM was simply
impossible. (Modern MM designs do not use small self-replicators; in fact, such
replicators have been obsolete since Drexler's 1992 technical book Nanosystems.)
A third source of controversy is that MM plans to use diamond as its major
building material, not bio-based polymers like protein and DNA. (Some pathways
to this capability, including the pathway favored by Drexler, go through a
biopolymer stage.) Although there is a wide variety of reactions that can form
diamond and graphite, living organisms do not build with these materials, so
there is no existence proof that such structures can be built using
point-by-point computer-controlled molecular deposition.
If diamond-like structures can be built by molecular manufacturing techniques,
they should have
astonishingly high performance characteristics. To those who study MM, its
projected high performance indicates that researchers should work toward this
goal with a focused intensity not seen since the Manhattan Project. To those who
have not studied MM, talk of motors a million times more powerful than today's
merely seems fanciful, a reason (or an excuse) to discount the entire field.
At least as problematic as the extreme technical claims are the concerns about
the extreme implications of molecular manufacturing. It is rare that a
technology comes along which revolutionizes society in a decade or so, and even
more rare that such things are correctly predicted in advance. It is very
tempting to dismiss claims of unstable arms races,
wholesale destruction of existing jobs, and widespread personal capacity for
mass destruction, as improbable.
However, all the skepticism in the world won't change the laws of physics. In
more than two decades (almost five, if you count from
Feynman's visionary speech), no one has found a reason why MM, even
diamond-based MM, shouldn't work. In fact, the more work that's done, the less
complex it appears. Predicting social responses to technology is even more
difficult than predicting technology itself, but it seems beyond plausibility
that such a powerful capability won't have at least some disruptive
effects—perhaps fatally disruptive, unless we can understand the potential and
find ways to bypass the worst pitfalls.
In the near future, nanotechnology in the broad sense will continue to develop
dozens of interesting technologies and capabilities, leading to hundreds of
improved capabilities and applications. Meanwhile, molecular manufacturing will
continue to move closer, despite the (rapidly fading) opposition to the idea.
Sometime in the next few years, someone will have the vision to fund a targeted
study of molecular manufacturing's potential; less than a decade after that,
general-purpose nanoscale manufacturing will be a reality that the world will
have to deal with. Molecular manufacturing will build virtually unlimited
quantities of new products as rapidly as the software can be designed—and it
should be noted that most of today's physical products are far less complex than
today's software. Molecular manufacturing will both enable and eclipse large
areas of nanotechnology, further accelerating the achievements of the field. We
are in for some interesting times.
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
At first encounter, the idea of designing products with
100,000,000,000,000,000,000,000 atoms, each in an engineered position, and each
one placed without error, may seem ridiculous. But the goal is not as
implausible as it sounds. Today's personal computers do that number of
transistor operations every few weeks. The operations are done without error,
and each one was engineered—though not directly. There are two reasons why
computers can do this: digital operations and
levels of abstraction. I've talked about both of these in
previous essays, but it bears repeating: at the lowest level of operations,
personal computers do a mole of engineered, reliable transistor operations every
few weeks, and the techniques used to accomplish this can be applied to
molecular manufacturing.
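The "mole of transistor operations every few weeks" claim is simple arithmetic to check. The transistor count and clock rate below are assumed round figures for a mid-2000s desktop CPU, not sourced measurements:

```python
# Back-of-envelope: transistor operations performed by a desktop CPU
# in a few weeks. (Assumed round figures, not measurements.)
transistors = 1e8              # ~100 million transistors on the chip
clock_hz = 3e9                 # ~3 GHz clock
seconds = 3 * 7 * 24 * 3600    # three weeks

ops = transistors * clock_hz * seconds
print(f"{ops:.2e}")            # 5.44e+23 -- on the order of a mole (6.02e23)
```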
Computers can be so precise and reliable because they are based on digital
operations. A digital operation uses discrete values: either a 1 or a 0. A value
of 0.95 will be corrected to 1, and a value of 0.05 will be corrected to 0. This
correction happens naturally with every transistor operation. Transistors can do
this correction because they are nonlinear: there is a large region of input
where the output is very close to 1, and another large region of input where the
output is very close to 0. A little bit of energy is used to overcome entropy at
each step. Rather than letting inaccuracies accumulate into errors, they are
fixed immediately. Thermal noise and quantum effects are corrected before they
compound into errors.
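A toy model makes the point concrete: with a restoring nonlinearity applied at every stage, injected noise never accumulates, while the same noise destroys an unrestored analog value. This is an illustrative threshold model, not a real transistor characteristic:

```python
import random

def restore(x: float) -> float:
    """Nonlinear transfer function: snap toward the nearest logic level."""
    return 1.0 if x > 0.5 else 0.0

random.seed(0)
value = 1.0
for _ in range(1_000_000):                  # a million noisy stages
    value = restore(value + random.uniform(-0.3, 0.3))
print(value)                                # still exactly 1.0

# Without restoration, the same noise random-walks the value away
# from 1.0 and the information is lost.
random.seed(0)
analog = 1.0
for _ in range(1_000_000):
    analog += random.uniform(-0.3, 0.3)
print(abs(analog - 1.0))                    # drifts far from 1.0
```

Because each stage's noise stays within the restoring region, the digital value survives a million stages untouched; that is the property molecular manufacturing aims to borrow.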
Forces between atoms are nonlinear. As atoms approach each other, they feel a
weak attractive force. Then, at a certain distance, the force becomes repulsive.
If they are pushed together even more closely, the force becomes more strongly
attractive than before; finally, it becomes sharply repulsive. Chemists and
physicists know the region of weak distant attraction as “surface forces”; the
closer, stronger attraction is “covalent bonds”; and the intervening zone of
repulsion is responsible for the “activation energy” that is required to make
reactions happen. Of course, this picture is over-simplified; covalent bonds are
not the only type of bond. But for many types of atoms, especially carbon, this
is a pretty good description.
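The nonlinear bond energy can be illustrated with a Morse potential, a standard textbook model of a covalent bond. The parameters below are illustrative, roughly carbon-like values; the model captures the covalent well and the weak long-range attraction, though not the activation barrier between them:

```python
import math

# Morse potential: E(r) = D * (1 - exp(-a*(r - r0)))**2 - D
# D: well depth (eV), a: stiffness (1/angstrom), r0: bond length (angstrom).
# Illustrative, roughly carbon-carbon-like parameters.
D, a, r0 = 3.6, 2.0, 1.54

def morse(r: float) -> float:
    return D * (1 - math.exp(-a * (r - r0))) ** 2 - D

print(morse(r0))    # -3.6 eV: the minimum, at the equilibrium bond length
print(morse(4.0))   # weak attraction at long range (small negative value)
print(morse(1.0))   # strong repulsion when pushed together (large positive)
```

The steep walls around the minimum are what pull a slightly misplaced atom back to its correct position, as described below.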
Several types of errors must be considered in fabricating and using a mechanical
component. A fabrication operation may fail, causing the component to be damaged
during manufacture. The operations may be correct but imprecise, causing small
variances in the manufactured part. During use, the part may wear, causing
further variance. As we will see, the nonlinear nature of molecular bonds can be
used (with good design) to virtually eliminate all three classes of error.
Nonlinear forces between atoms can be used to correct inaccuracies in
fabrication operations before they turn into errors. If the atom is placed in
slightly the wrong location, it will be pulled to the correct location by
inter-atomic forces. The correction happens naturally. If the placement tool is
inaccurate, then energy will be lost as the atom moves into place; as with
transistors, entropy isn't overcome for free. But, as with transistors,
reliability can be maintained over virtually unlimited numbers of operations by
spending a little bit of energy at each step.
In practice, there are several different kinds of errors that must be considered
when a moiety—an atom or a molecular fragment—is added to a part under
construction. It may fail to transfer from the “tool” molecule to the
“workpiece” molecule. This kind of error can be detected and the operation can
be retried. The moiety may bond to the wrong atom on the workpiece. Or it may
exert a force on the workpiece that causes other atoms, already in the
workpiece, to rearrange their bonds. This is called “reconstruction,” and
avoiding it imposes additional requirements for precise placement of the moiety,
but it is also a non-linear phenomenon: if the moiety is positioned within a
certain range of the ideal location, reconstruction won't happen, at least in a
well-designed system.
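The value of detectable errors is easy to quantify: a failure that can be sensed and retried shrinks geometrically with each retry. The probabilities below are illustrative placeholders, not predicted rates:

```python
# If a failed transfer is detectable, retrying drives the residual
# failure rate down geometrically. (Illustrative probabilities.)
p_fail = 0.01          # chance one transfer attempt fails, detectably
retries = 8

p_all_fail = p_fail ** retries        # every attempt at one site fails
sites = 1_000_000_000                 # a billion placement operations
expected_failures = sites * p_all_fail

print(p_all_fail)          # ~1e-16 per site
print(expected_failures)   # ~1e-7 across a billion sites: effectively never
```

This is why the undetectable error modes (misbonding and reconstruction) dominate the design problem: the detectable ones can be retried into insignificance.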
Errors of dimensional tolerance, which in traditional manufacturing are caused
by imprecise operations or wear during operation, need not be a factor in
molecular manufactured components. If an atom is pulled slightly out of place,
either during manufacture or during operation, it will be pulled back into place
by its bonds. In engineering terms, there is no plastic deformation, only
elastic deformation. Of course, if a strong enough force is applied, the bonds
can be broken, but preventing this is a matter of engineering the product
properly. It requires a lot of force to break a bond. If a component must be
either perfectly fastened or broken, then it will remain perfect for a long,
long time under normal usage.
Traditional mechanical engineering and manufacturing involve a lot of operations
to deal with errors of dimensional tolerance—including measuring, finishing, and
sensing during operation—that will not be required with molecular manufactured
components. This will make molecular manufacturing systems significantly easier
to automate. As long as low-level operations are reliable and repeatable, then
higher-level operations built on them also will be reliable. Knowing precisely
how the system works at the lowest level will allow confident engineering at
higher levels. This design principle is called levels of abstraction.
A computer programmer can write an instruction such as, “Draw a black rectangle
in the middle of the screen,” in just a few characters of computer code. These
few characters, however, may invoke thousands of low-level instructions carried
out by billions of transistor operations. The programmer has implicit confidence
that each transistor will work correctly. Actually, programmers don't think
about transistors at all, any more than you think about each spark in your car's
engine when you step on the gas. Transistors are combined into registers, which
are used by CPU microcode, which is controlled by assembly language, which is
machine-generated from high-level languages, which are used to write several
layers of operating system functions and libraries, and this is what the
programmers actually use. Because transistors are, in effect, completely
reliable and predictable, each level built on top of them also is completely
reliable and predictable (with the exception of design errors).
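The layering described here can be sketched in a few lines of Python. All the names below are invented for this illustration, not any real graphics API; the point is only that each level hides the one beneath it.

```python
# A toy illustration of levels of abstraction (all names invented
# for this sketch, not any real graphics API).
WIDTH, HEIGHT = 8, 6
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]  # lowest level: raw memory

def set_pixel(x, y, value):
    """Level 1: hide raw memory behind a pixel operation."""
    framebuffer[y][x] = value

def fill_rect(x0, y0, x1, y1, value):
    """Level 2: hide many pixel writes behind a single call."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            set_pixel(x, y, value)

def draw_black_rectangle_in_middle():
    """Level 3: the programmer's one-line intent."""
    fill_rect(WIDTH // 4, HEIGHT // 4, 3 * WIDTH // 4, 3 * HEIGHT // 4, 1)

draw_black_rectangle_in_middle()
# The top-level caller never mentions pixels, let alone memory cells,
# just as the programmer never thinks about transistors.
```

Because each layer relies only on the reliability of the layer below, a bug-free top-level call is as trustworthy as the primitives it is built on.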
Molecular manufacturing will involve massive numbers of simple mechanosynthetic
operations done under fully automated control. A nanofactory building a product
would not be much different, at several important levels of function, from a
computer-driven printer printing a page. The nanofactory product designer would
not see each atom, any more than a graphic artist sees each ink droplet. Graphic
artists usually work in abstractions such as splines, rather than individual
pixels. The user does not even see each spline. The user just hits "Print" and
the picture comes out of the printer with each ink droplet in its proper place.
A molecular manufactured product could include a microscopic component
containing a billion atoms—which could be placed with complete reliability by a
single instruction written by a designer. An array of a billion identical
components, each containing a billion atoms, could be specified without any
additional difficulty. Each component could reliably work for many years without
a single broken bond. Thus, just as a computer programmer can write a simple
program that does an almost unlimited number of reliable calculations, a product
designer could write a simple specification that placed an almost unlimited
number of atoms—reliably and predictably—making exactly the product that was
desired. (Background radiation is beyond the scope of this essay; it will
introduce failures and require redundancy at scales larger than about a micron,
but this should not require much additional design overhead.)
Operation of the manufactured product can be similarly planned from the bottom
up. If the smallest operations happen in a predictable way at a predictable
time, then higher-level operations can be built on top of the low-level
functionality. This is not the only way to implement high-level functionality,
of course. Biology uses statistical processes and analog feedback loops to
implement its actions. Although this is more elegant and efficient in some ways,
it would be difficult to design systems that worked along these lines, and it is
not necessary. Digital operations can be made to happen in lockstep, and
aggregates of digital operations can be treated as reliable primitives for
higher levels. The more predictable a system is, the less sensing is required to
make it work as desired. Nanoscale sensing often is cited as a weak point in
nanomachine design, but in principle, nanomachines designed on digital
principles would not need any sensing in order to work reliably. In practice,
only a small amount of internal feedback would be required, which could be
provided by relatively crude sensors.
It is important to realize that digital design using levels of abstraction does
not imply increased complexity at higher levels. An assembly language
instruction that causes a billion transistor operations may be specified
completely with a paragraph of description. Its results may be very
intricate—may invoke a lot of diverse activity—but there is a crucial
distinction between intricacy and complexity.
Similarly, a high-level language instruction that invokes a billion assembly
language instructions may be understood completely at a glance. And so it goes,
through as many levels as are useful to the programmer/designer. As long as the
lower levels are reliable, the upper levels can be reliable, intricate (useful),
and simple (easy to use).
One of the most important features of molecular manufacturing is that its very
lowest level—the formation of molecules from precisely positioned building
blocks—is precise and reliable due to digital operations. Every level of
abstraction above the foundation of molecular fabrication can thus be equally
precise and reliable. Google, the World Wide Web, and modern video games all
have been engineered from molar numbers of transistor operations. In the same
way, masses of diverse, highly functional products will be engineered from
molecular fabrication operations.
Trends in Medicine
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
I just returned from a
Future Medical Forum conference where I spoke on the
nanotechnology panel. Other speakers covered topics such as device design,
regulation, setting prices for products, future trends in medical research, and
more. Much of what I heard confirmed ideas I've had about where medicine could
go once it was enabled by molecular manufacturing—but
it seems that some things are happening already. A number of these trends will
disrupt the medical industry. Thus, molecular manufacturing should reinforce the
direction medicine is going—but that direction will not always be comfortable
for medical companies.
I had some interesting conversations with speakers on the Design panel. They
confirmed that rapid prototyping of complete products would speed their work
significantly. They did not seem upset at the prospect of learning to use such a
powerful capability. At one point, I asked one of them: "Let me spin you a
science fiction story. Sometime in the future, people are coming to you for body
modifications to make their lives easier. Things like extensible fingers—sort of
a lightweight Inspector Gadget. Your job is to figure out how to design these
things." His response: "That would be totally cool!"
Norbert Reidel of Baxter spoke about trends in medical research and treatment.
His talk confirmed what I have been expecting: as we gain the ability to gather
increasing amounts of information about a person's biological state, we will be
able to make research and treatment more personal. Today, clinical trials with
placebos are used to tell statistically what works on a broad population. In the
future, we'll be able to move away from clinical trials as a way to tell what
works statistically, and toward individually designed treatment protocols based
on individual genetic makeup and other personal data. His talk was full of
phrases like "in-life research" and "adaptive trials" and "personal medicine." I
asked him whether the ability to gather lots of medical data would make it
possible to research the effects of daily life, such as diet and activities. He
said yes, but the bigger problem would be getting people to act on the results;
he mentioned a doctor who frequently prescribed "a pair of sneakers" but found
that the prescription usually was not filled.
I was most struck by a talk on globalization. The speaker, Brian Firth, is
Cordis's vice president for Medical Affairs and Health Economics Worldwide.
Brian structured his talk around a book by Shell (yes, the oil company):
Shell Global Scenarios to 2025. The scenarios are built around three
major forces: security, market efficiency, and social cohesion. Readers who are
familiar with CRN's Three Systems theory will notice that the first two
forces are very similar to the Guardian and
Commercial systems that we, following Jane Jacobs, have identified as major
systems of action in today's world. The third force, social cohesion, appears to
be almost unrelated to our Informational system. But Firth's talk mainly focused
on the first two, so it covered familiar ground.
I find it significant that Firth discussed a lot of what would seem to be Market
issues under Security. He spoke extensively about factors affecting the price of
medical devices. For example, buyers are starting to notice that devices can
cost four times as much in one country as in another. Devices are sometimes
bought in inexpensive regions and then shipped to areas where they are
expensive. These factors would seem to indicate the Market at work—but Firth
listed them all under Security. Apparently, the reasoning is: companies that
control a market don't have to work at being efficient; instead, they have to
defend their territory. Monopolies tend to be more Guardian. Several other
things in Firth's talk, such as his emphasis on (development) risk justifying
luxurious returns, sounded more Guardian than Commercial.
Firth's talk was one of the first, so it influenced my thinking throughout the
rest of the conference. Medicine today is essentially a fight to maintain a
reasonably healthy status quo. Stasis is a good thing; any change from health is
disease, which is to be combated. This is a very Guardian worldview. In the
Guardian system, those who are best at fighting the enemy deserve high praise,
luxuries, and a valuable "territory" that they can own. Efficiency is not a
Guardian value. In fact, Guardians traditionally try to avoid commercial and
market transactions. Firth's discussion of market forces was purely pessimistic,
focusing on the bad things that would happen if the market made medical device
companies unprofitable—including less luxurious conferences.
Is there a connection between the Guardian approach to disease, and the Guardian
approach to the business side of medicine? I strongly suspect that there is.
People get used to thinking in a certain style. In addition to their natural
approach to disease, the reverence—and suspicion—that doctors receive from the
public could help to set the tone for a Guardian mindset. Then, any change in
doctors' ability to treat patients could threaten their ability to maintain the
more-or-less healthy status quo. Medical companies could easily become
comfortable with a regulatory environment that makes it easy to maintain the
status quo.
So, what will molecular manufacturing do to the
status quo? It will certainly challenge it. The first challenge may be a wave of
broad-spectrum diagnostic devices that would provide enough information to allow
computer-equipped researchers to know the state of the body, moment to moment
and in detail. The ability to diagnose disease is one of the primary medical
mysteries. Broad-spectrum molecular detectors already are being developed in the
form of DNA chips. As they become less expensive and more widely available, and
as a database relating DNA measurements to physiological conditions is created,
diagnosis will become less of a medical skill and more automated.
With real-time diagnosis comes the ability to treat more aggressively and even
experimentally without increasing risk, and to identify effective treatments
more rapidly. Instead of waiting weeks or even years to see whether secondary
disease symptoms appear, a treatment's direct effects could be detected almost
as soon as the treatment is delivered. Discovering unsuspected impacts on health
will be a lot easier, leading to increased ability to avoid unhealthy situations
and an increased rate of discovery (or rediscovery) of "folk" remedies.
If doctors traditionally fight a zero-sum battle to prevent disease as long as
possible, this implies that a new ability to increase health beyond nominal
might turn the whole medical profession on its head. I discussed this
observation with a conference attendee; the next day, he gave me a copy of
Spontaneous Healing by Dr. Andrew Weil. Weil begins with the observation
that in ancient Greece, there were two health-related professions: doctors,
whose patron was the god of medicine, and healers, whose patron was the goddess
of health. Doctors combated disease; healers advised people on how to support
their body's natural health status. This seems to confirm my observation about
medicine's focus on combating disease, but the ancient Greek healers still
stopped at the goal of maintaining health.
What would happen if science developed the ability to make people healthier than
healthy? What if medicine could change from fighting disease to actually
improving the lives of healthy people? The first question is whether the
existing medical infrastructure would be able to adjust. Doctors have opposed
advances in the past, including, for example,
anesthesia for childbirth. Perhaps doctors will continue to focus on
fighting disease. Unfortunately, they may also fight the advances that
researchers outside the medical system will make with increasing frequency.
If not doctors, then what group could implement the new hyper-health
technologies? In the Middle Ages, medical duties were divided between doctors
and barber-surgeons. Barbers were accustomed to using their sharp blades in
close proximity to people's bodies, and moving on to minor surgery such as
lancing boils was most likely a natural progression. Meanwhile, the original
Hippocratic Oath actually forbade doctors from cutting people. I'm told that
tension between surgeons and other medical doctors remains to this day. So, what
might be the modern equivalent of barber-surgeons?
There is a business that already does voluntary body modification. They are used
to working on, and in, the human body with small tools. They are frequented by
people who are accustomed to ignoring authority. I'm speaking, of course, of
tattoo parlors. When a complete surgical robot can be squeezed into something
the size of a tattoo needle or even an acupuncture needle, perhaps tattoo
parlors will be among the first to adopt it. There may be a natural progression
from decorating the surface of the body to improving other aspects. This is not
quite a prediction—tattoo parlors may not be interested in practicing medicine;
the medical industry may successfully ban such attempts; and others, notably
alternative medicine practitioners, also have experience with needles. But it is
a scenario that's worth thinking about. It could happen.
Trends already developing in medicine will be strengthened by molecular
manufacturing. Studying molecular manufacturing and
its implications may provide useful insights into technological drivers of
medical change. Although not all the change will come from molecular
manufacturing, it does present a package of technological capabilities that will
be obvious drivers of change, and can be used to understand more subtle changes
coming from other sources.
Who remembers analog computers?
Chris Phoenix, Director of Research, Center for Responsible Nanotechnology
Far back in the misty dawn of time, around 1950 or so, there were two kinds of
computers. One was the now-familiar digital computer, doing computations on
hard-edged decimal or binary numbers—the forerunner of today's PCs. The other
kind of computer was the analog computer. At the time, analog computers were far
more powerful than digital computers. So, why did digital computers come to
replace analog, and what lessons does that hold for nanotechnology? The answer
can be found in several properties of digital computers—precision, abstraction,
and high-throughput production of components—that will also be found in
molecular manufacturing systems.
Molecular manufacturing proposes to build useful products by building molecules
using mechanical processes under computer control. A few molecular construction
techniques, repeated many times, would be able to build a wide array of
molecular shapes. These shapes could be used in functional nanosystems, such as
sensors, computers, and motors. The nanosystems could be combined into useful
products—even kilogram-scale or larger products containing vast numbers of
nanosystems built and assembled under automated control.
This type of nanotechnology is sometimes criticized by nanotechnologists working
in other areas. Critics say that the approach is unnatural, and therefore will
be inefficient and of limited utility. The trouble with this argument is that
digital computers are unnatural in similar ways. If this argument were correct,
then digital computers should never have been able to supplant analog computers.
Digital vs. Analog Computers
Both digital and analog computers represent numerical values by means of
electricity in wires. In an analog computer, the voltage or current in a single
wire could represent a number. Most digital computers have only two meaningful
values per wire: either high or low voltage. In a digital computer, dozens of
wires are needed to represent each number.
Analog computers thus had several major advantages over digital computers. A
single, fast, compact analog circuit with just a few inputs and components could
add, multiply, and even integrate and differentiate. A digital computer might
require hundreds or even thousands of components to do the equivalent
operations. In addition to the larger number of wires and components, the
digital computer must spend energy in order to constrain the signal in each wire
to its discrete value. Circuits in analog computers could be set up to directly
model or simulate actual physical processes of interest, whereas a digital
computer is limited to abstract numbers that can never fully represent
continuously varying quantities.
A digital computer has only a few advantages, but they turned out to be
decisive. The first advantage is the precision of the internal signals. A value
represented by a continuously varying physical quantity can only be as precise
as the components that produce, transmit, and utilize the physical signal, and
the signal—and the value—will inevitably lose precision with each operation.
Because a digital computer performs operations on abstract numbers represented
by discrete voltage levels, the operations can proceed without any loss of
precision. Unlike an analog computer, a digital computer can easily trade energy
for entropy, copying or processing a value billions of times with no loss of
precision.
(Legalistic physicists may object here that even digital computers are subject
to a minimum error rate imposed by entropy. In practice, this error rate can be
made as small as desired—a very small expenditure of energy allows billions of
operations per second for billions of years without a single mistake.)
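The parenthetical claim can be checked with a back-of-the-envelope estimate. Assuming errors are thermally activated, with probability per operation given by a Boltzmann factor exp(-E/kT), the barrier height and operation count below are illustrative assumptions, not measured values:

```python
import math

# Assumed energy barrier per operation, in units of kT (illustrative).
barrier_kT = 100

# Thermally activated error probability per operation (Boltzmann factor).
p_error = math.exp(-barrier_kT)   # on the order of 1e-44

# Billions of operations per second for billions of years:
ops = 1e9 * 3.15e7 * 1e9          # ops/s * seconds/year * years, ~3e25

expected_errors = ops * p_error   # far below one: effectively zero errors
```

Even dissipating only ~100 kT per operation, the expected number of errors over billions of device-years is vanishingly small, which is the sense in which energy buys reliability.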
The second advantage of digital computers is their abstraction—the fact that a
number stored in digital format has no direct connection to any physical value.
This was listed as a liability above, since an analog computer deals directly
and efficiently in physical values. But by adding enough wires, a digital
computer can do things that an analog computer simply cannot hope to achieve. A
sheaf of wires with voltages of five, zero, zero, five, and zero volts has no
apparent connection to a value of 56.25%, whereas a wire with 56.25 volts has an
obvious connection—one that can be used easily in analog electronic
computation. But by adding more wires to the digital sheaf, a digital computer
can precisely represent values with an unlimited number of digits. A few dozen
wires can represent numeric values with more precision than any analog
component can achieve.
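One natural reading of the sheaf above treats 5 V as 1 and 0 V as 0, so the five wires encode the binary fraction .10010, which is exactly 0.5625. A short check:

```python
bits = [1, 0, 0, 1, 0]   # the five wires: 5V, 0V, 0V, 5V, 0V
value = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))
assert value == 0.5625   # exactly 56.25%, with no analog tolerance involved

# Each extra wire doubles the available precision: n bits resolve 2**-n.
precision_24_bits = 2.0 ** -24   # already finer than typical analog drift
```

Adding wires is cheap in a digital machine, so precision scales without any improvement in the underlying components.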
Abstraction also allows digital computers to perform a broader range of
computational tasks. An analog computer would be incapable of storing and
searching a string of text. There is no analog equivalent of the letter 'a'. In
a digital computer, 'a' can simply be defined as the number 65, 'b' as 66, and
so on—or whatever numbers are convenient. Although an analog computer could be
built that could remember 'a' as 65 volts, 'b' as 66 volts, and so on, after a
few operations the voltages would drift and the text would become garbled.
Because digital computers can store numbers with no loss of precision, a string
of text can be stored indefinitely as a string of arbitrary numbers, processed
and manipulated as desired, and finally converted back to human-readable text.
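The round trip from text to numbers and back is easy to demonstrate, here using Python's built-in character codes rather than the arbitrary 65/66 mapping in the text:

```python
text = "nanofactory"
codes = [ord(c) for c in text]           # store the string as plain numbers
# ...the numbers can be copied, searched, and stored indefinitely...
restored = ''.join(chr(n) for n in codes)
assert restored == text                   # no drift, no garbling
```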
An additional abstraction is to store the instructions for the computer's
operation as a sequence of numbers. Instead of building a computer for a fixed
sequence of operations, such as multiplying two numbers and then adding a third,
the sequence can be modified by an additional set of numbers indicating the
order of operations. These controlling numbers can be stored and used to modify
the computer's operation without physical re-wiring. Sequences of instructions
can be selected based on newly derived results of calculations. This abstraction
makes digital computers general-purpose machines, able to implement any
calculation. By 1950, even ENIAC, one of the first digital computers, had been
retrofitted to be controlled by stored numbers that were easily changed.
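A minimal sketch of the stored-program idea: the machine below is fixed, but its behavior is chosen entirely by a list of numbers, which can be changed without rewiring. The opcode assignments are arbitrary, invented for this illustration.

```python
def run(program, value):
    """A fixed machine whose behavior is set by stored numbers.
    Opcodes (arbitrary): (1, n) adds n, (2, n) multiplies by n, (0,) halts."""
    for instr in program:
        op = instr[0]
        if op == 0:
            break
        elif op == 1:
            value += instr[1]
        elif op == 2:
            value *= instr[1]
    return value

# "Multiply two numbers and then add a third", expressed as data, not wiring:
result = run([(2, 7), (1, 3), (0,)], 6)   # 6*7 + 3 = 45
# Reordering the stored numbers changes the computation, with no rewiring:
other = run([(1, 3), (2, 7), (0,)], 6)    # (6+3) * 7 = 63
```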
All of these abstractions require a lot of wires and circuits. A general-purpose
computer could be built out of vacuum tubes, as ENIAC was. However, this was
quite expensive. Transistors were smaller, more efficient, and more reliable.
Although their signal-processing characteristics were quite different from
vacuum tubes, this did not matter to digital computers as it would have mattered
to analog computers; all that was needed was a switch that could be set either
on or off, not a precise signal-processing function over an entire range of
analog signal. As time went on, transistors were shrunk until dozens, then
thousands, then millions, could be integrated into a single package the size of
a coin. Parallel manufacturing methods made this possible. A pattern of wires or
transistors could be imposed in parallel on a block of silicon by shining light
through a mask, similar to exposing a photograph. A single exposure could define
thousands or millions of features. A single mask could make thousands or
millions of computer chips. Today, hundreds of transistors can be bought for the
price of a single grain of rice. The simplest general-purpose
computers—microcontrollers—still have only a few thousand transistors, but the
most complex and high-performance chips now have billions of transistors.
The first computers, digital as well as analog, were used to perform
calculations relating to physical systems. As digital computers became more
flexible, they were applied to other types of problems, such as processing
symbols including databases of numbers and strings of text. Computer-driven user
interfaces became increasingly complex, and computers became foundational to
infrastructures such as banking, telecommunications, and the Internet. In the
last decade or two, things have come full circle: digital computers are now used
for processing a wide variety of analog signals, including sound and video.
These signals are processed in real time, for tasks as diverse as synthesizing
music and controlling factories. Digital computers have become so inexpensive
and powerful that it is usually better to convert an analog signal to digital as
soon as possible, process it through the seemingly inefficient digital methods,
and then convert it back to analog at the last second before it is used. This is
becoming true even for signals that do not need to be processed flexibly: rather
than include a few analog processing components, it is often cheaper to include
an entire digital computer just for one fixed signal-processing task.
Nanoscale Technologies and Molecular Manufacturing
In the last few decades, the advance of technology has begun to address things
too small to see even with a traditional microscope. New kinds of microscopes
that do not use light are creating pictures of molecules and even individual
atoms. Industrial processes are being developed to manufacture particles smaller
and more precise than anything that could be built with traditional machining.
New analytical tools, including computer simulations, are providing new
information about what is going on at these scales—and the results are often
useful as well as interesting. New solar cells, cancer treatments, computer
technologies, and cosmetics are only a few of the applications that are being
developed.
These nanoscale technologies share many of the strengths and weaknesses of
analog computer components. Each technology performs a useful function, such as
detecting cancer cells or adding strength to plastics. However, they are not
general-purpose. Each new material or structure must be researched and developed
for a limited set of applications. Each technology forms one functional
component of a larger product. Today's nanoscale technologies are like analog
computing elements: each one does a single thing, and it does it elegantly and
efficiently by interacting directly with physical phenomena.
A digital computer hides the physical phenomenon of voltage under the
abstraction of signal, at a level below even individual numbers. A signal in a
wire is seen, not as a voltage, but as a 1 or a 0. It takes many 1's and 0's to
make a single number. At any higher level, the fact of voltage is ignored, and
designers are free to think in abstractions. Similarly, molecular manufacturing
proposes to hide the physical phenomenon of chemistry under the abstraction of
structure and mechanical function, at a level below even individual molecules. A
molecule would be designed according to its desired shape, and the construction
steps would be planned as needed to build it. Obviously, this bypasses a lot of
possibilities for elegant functioning. And in practice, molecules could be
designed to take advantage of electronic and quantum effects as well as
mechanical functions. But at least initially, it seems likely that designers
will keep their task as simple as possible.
Digital computers and molecular manufacturing both rely on precision. A signal
that drifts away from a value of 0 or 1 is restored to its proper value (by
spending a small amount of energy) and so can be stored indefinitely. The
restoring function is highly non-linear: anything less than 0.5 is forced to 0,
and anything above 0.5 is forced to 1. Fortunately, molecular manufacturing has
access to a similar source of precision. The force between atoms is highly
non-linear. Two atoms placed a distance apart will attract each other up to a
certain point, at which the force changes from attractive to repulsive. If they
are pushed past that barrier, they may (depending on their type) reach another
region of much stronger attraction. Thus a pair of atoms can be either bonded
(joined together closely and strongly) or unbonded (weakly attracted), and the
energy required to form or break a bond—to push atoms through the repulsive
region—typically is large in comparison to the energy available from thermal
noise at room temperature. Because atoms of each type are exactly identical,
their bonds are extremely predictable; each molecule of oxygen or propane is
exactly the same. A molecule forms a very precise structure, even if built with
an imprecise process. Again, the precision comes at the cost of a small amount
of energy. (Thermal and quantum noise add a statistical distortion to the
precise shape. For highly crosslinked molecules, this distortion can be much
less than the width of a single atom.)
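The contrast between the two regimes can be simulated directly. Below, the same per-step noise garbles an unrestored analog value, while the non-linear restoring step keeps the digital value exact. The noise magnitude is an arbitrary illustrative choice.

```python
import random

random.seed(1)

def restore(x):
    """The non-linear restoring function: anything below 0.5 snaps to 0,
    anything above snaps to 1 (at a small energy cost in real hardware)."""
    return 0.0 if x < 0.5 else 1.0

analog = 1.0
digital = 1.0
for _ in range(100_000):
    noise = random.uniform(-0.01, 0.01)   # per-copy imprecision (assumed)
    analog += noise                       # drift accumulates forever
    digital = restore(digital + noise)    # drift is erased at every step

# digital is still exactly 1.0; analog has wandered away from it
```

The restoring step plays the same role as the bonding energy barrier: as long as each perturbation stays within the capture range, the value snaps back, and errors never accumulate.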
The precision of molecular structure means that a molecular manufacturing system
could build a structure that is an exact duplicate. Today's manufacturing
techniques are approximate—precision is lost at each step, and must be recovered
by specialized techniques. A robot that tried to build another robot would spend
a lot of time polishing and grinding and measuring. Maintaining precision would
require many different sensors and tools. But a system that built a
molecular-scale robot would not have to do any of that. Simply putting the atoms
and molecules in approximately the right place would cause them to snap into
their bonded configuration, in a very predictable and repeatable structure. Of
course, if they are too far away from their proper position, they will bond
incorrectly. Some accuracy is still required, but beyond a certain point, the
product will be essentially perfect, and inaccuracy will only cost energy rather
than product quality. Building a copy of a physical object—including a molecular
manufacturing system—can be as precise as copying a computer file.
Digital computers have become ubiquitous because they are so inexpensive to
manufacture. Billions of transistors—signal processing elements—can be made in
parallel with a single set of process steps. Molecular manufacturing also will
rely on parallel manufacture. Because small devices work more rapidly, the
manufacturing system should be made as small as possible—perhaps only a few
hundred atoms wide. This is small enough to be built by a single molecular
manufacturing system in a reasonable period of time—probably less than an hour.
It is also too small, if it were working alone, to build any useful amount of
product. But because precision is not lost in molecular manufacturing
operations, a single system could build exact copies, each of which builds exact
copies, and so on for as many duplications as needed to produce kilogram-scale
manufacturing systems capable of building kilograms of product per hour.
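The arithmetic behind "as many duplications as needed" is worth spelling out. The mass figures below are rough assumptions for illustration (a device of roughly 10^8 atoms, about carbon-mass each), not numbers from the essay:

```python
import math

atoms_per_system = 1e8    # assumed: a block a few hundred atoms on a side
kg_per_atom = 2e-26       # roughly the mass of a carbon atom
system_mass_kg = atoms_per_system * kg_per_atom   # about 2e-18 kg

# Doublings needed for the population's total mass to reach one kilogram:
doublings = math.ceil(math.log2(1.0 / system_mass_kg))

# At under an hour per generation, kilogram scale arrives within a few days.
hours_to_kilogram_scale = doublings * 1.0
```

Under these assumptions, roughly sixty doublings suffice, which is why exact self-copying turns an invisibly small device into macroscale manufacturing capacity so quickly.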
Precision also allows the manufacturing process to be completely automated. Not
counting licensing and other forms of artificial scarcity, the main cost of
products—including duplicate manufacturing systems—would be raw materials and
energy. An individual nanoscale molecular manufacturing system would be quite a
lot cheaper than a transistor; in fact, all the products of molecular
manufacturing, including macroscale manufacturing systems, could have a
production cost of a few dollars per kilogram.
Interfacing with the Real World
Digital computers deal with the analog "real" world via specialized circuits
that convert from digital to analog and vice-versa. In theory, a digital
computer could include analog processing elements, doing some operations by
"efficient" analog methods. In practice, although a few hybrid computers were
built, such approaches are not part of modern computer practice. Instead, analog
values are converted to digital as early as possible, processed digitally, and
converted back to analog as late as possible. In fact, for some applications,
the signal need never be converted back; devices such as stepper motors and
techniques such as pulse width modulation are driven directly by digital
signals.
Some products of molecular manufacturing, such as medical devices and
manufacturing systems, will have to deal with unknown and sometimes unstructured
molecules. Biological systems frequently let molecules mix together, bump
around, and join and react according to complex and finely tuned affinities.
Molecular manufacturing, by contrast, probably will find it most convenient to
bind molecules to solid receptors so that their structure and orientation is
known precisely, and then work on them using “digital” predictable operations.
In some cases, this may take more volume, time, and energy than biological
methods. In other cases, it will be more efficient. A major advantage will be
ease of design: when the position of molecules is fixed and known, it becomes
easier to engineer desired reactions and prevent undesired reactions. Preventing
random interactions between molecules should also allow new kinds of reactions
to be developed that could not work in traditional chemistry.
Today's nanoscale technologies are comparable to analog computers: they deal
directly and elegantly with physical phenomena. However, digital computers have
replaced analog computers in almost every instance, and have expanded to perform
many tasks that would be impossible with analog methods. In the same way that
digital computers attain greater flexibility, lower cost, and easier design by
abstracting away from physical phenomena, molecular manufacturing will be able
to take advantage of the precision of atoms and their bonds to build nanoscale
manufacturing systems capable of making a wide variety of products. It remains
to be seen whether molecular manufacturing methods will supplant or merely
complement other nanoscale technologies, but the history of computers suggests
that outright replacement is possible.
Powering Civilization Sustainably
by Chris Phoenix, CRN Director of Research
Most products, and almost all high-tech products, use energy. Molecular
manufacturing is expected to build a large quantity of products, and the
energy use of those products raises several interesting technical questions.
Energy must come from a source, be stored and transmitted, be transformed from
one form into another, and eventually be used; the use will generate waste heat,
which must be removed. Encompassing all of these stages are questions of
efficiency and power budget. Several factors may limit desired uses of energy,
including availability of energy, removal of heat, and collective side effects
of using large amounts of energy.
The use of fossil fuels as an energy source is problematic for many reasons. The
supply, and more importantly the rate of extraction, is limited. The source may
be politically troublesome. Burning of fossil carbon adds carbon dioxide to the
atmosphere. Some forms of energy, such as coal and diesel fuel, add pollutants
to the atmosphere in addition to the carbon.
Nuclear energy has a different set of problems, including political opposition
and nuclear weapons proliferation. It is alleged that modern techniques for
pre-processing, use, and post-processing of fission fuel can largely avoid
disposal problems and can release less radiation into the environment than
burning an equivalent amount of coal; it remains to be seen whether
non-engineering problems can be overcome.
Today, solar energy is diffuse, fluctuating, expensive to collect, and difficult
to store. Solar collectors built by nanofactories should be far less expensive
than today's solar cells. The diffuse nature of solar energy may be less
problematic if solar collectors do not have to compete for ground area.
High-altitude solar-powered airplanes such as the Helios, successor to the existing
Centurion, are already planned for round-the-clock flight using onboard energy
storage. With lighter and less expensive construction, a high-altitude airplane,
flying above the weather that troubles ground-based collection, could capture
far more solar energy than it needed to stay aloft.
A fleet of solar collection airplanes could capture as much energy as desired,
providing a primary power source for terrestrial use--once the energy was
delivered, converted, and stored for easy access, as explained below. Their high
altitude also would provide convenient platforms for communication and
planet-watching applications, serving military, civilian, and scientific
purposes. Although individual planes would be too high to see from the ground,
if flown in close formation they could provide partial shade to an area,
modulating its microclimate and perhaps providing a tool for influencing weather
(e.g. removing heat from the path of a hurricane).
Robert Freitas calculated (Nanomedicine,
Volume I, 6.5.7) that for a future population of 10 billion, each person
would be able to use perhaps only 100 kW without their aggregate heat
dissipation causing damage to the Earth's climate. An automobile's engine can
deliver 100 kW of useful power today (100 kW = 134 HP), while producing several
times that much waste heat. This indicates that power usage cannot be assumed to
be unlimited in a post-MM world. Because a lot of power will probably be
allocated to governmental projects, and wealthy people will presumably use more
power than average, I will assume that world power usage will equal a trillion
kW, with a typical person using ten kW--about what the average European consumes
today. (Americans use about twice as much.)
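The budget arithmetic above can be checked in a few lines. This is a
back-of-envelope sketch; all of the figures are the essay's stated assumptions,
not measurements:

```python
# Back-of-envelope check of the essay's power-budget figures (all assumed).
population = 10e9             # people, Freitas's future estimate
heat_limit = 100e3            # W per person before climate damage (Freitas)
typical_use = 10e3            # W per person, the essay's assumption

world_budget = 1e12 * 1e3     # "a trillion kW", in watts
print(world_budget == population * heat_limit)   # True: the budget exactly
                                                 # saturates Freitas's limit
print(population * typical_use / world_budget)   # 0.1: typical use is a tenth;
                                                 # the rest covers governments
                                                 # and heavy users
```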
Energy Storage and Transmission
In chapter 6 of
Nanomedicine I, Freitas analyzes energy storage (section 6.2),
conversion (6.3), and transmission (6.4). The highest density non-nuclear energy
storage involves stretching or rearranging covalent chemical bonds. Diamond, if
it could be efficiently oxidized, would provide 1.2x10^11 J/m^3.
Methanol's density is almost an order of magnitude lower: 1.8x10^10
J/m^3 (5000 kWh/m^3). In theory, a stretched diamond spring
could provide an energy density of up to 2x10^10 J/m^3,
slightly better than methanol, and not quite as good as a diamond flywheel (5x10^10 J/m^3).
Human civilization currently uses about 1 quadrillion BTU, or 10^18 J, per day;
somewhat over ten billion kW--about 1% of the maximum environmentally sound
level. This indicates that many people today use significantly less than even
one kW, which is impressive considering that the human body requires about 100 W.
To store a typical (future) personal daily energy requirement of 10 kW-days in a
convenient form such as methanol or diamond springs would require about 50
liters of material, 1/20 of a cubic meter. To store the entire daily energy
supply of our future civilization would require 5 billion cubic meters of
storage material.
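These storage figures follow from simple division, using the methanol energy
density quoted above:

```python
# Volume to store energy at methanol's density of 1.8e10 J/m^3 (from above).
density = 1.8e10                  # J/m^3
personal = 10e3 * 86400           # 10 kW-days, in joules
print(personal / density * 1000)  # ~48 liters -- "about 50 liters"

world = 1e15 * 86400              # one day of the 10^15 W budget
print(world / density)            # ~4.8e9 m^3 -- "5 billion cubic meters"
```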
An efficient and compact way to transmit energy is through a rapidly rotating
diamond rod, which can carry about a gigawatt per square centimeter (Nanomedicine,
Volume I, 6.4). A person's daily power could be transmitted through a
one-square-centimeter rod in a little less than a second. On the other hand, in
order to transfer all of civilization's future budget of 10^15 W, 100
m^2 of rotating diamond rods would be needed. To transfer this energy
halfway around the planet (20,000 km) would require two billion cubic meters of
diamond, which is quite feasible given a carbon-based exponential molecular
manufacturing technology. (The atmosphere contains 5x10^14 kg of
carbon, and two billion cubic meters of diamond would weigh 7x10^12 kg.)
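The rod cross-section and mass follow directly. A sketch, with diamond's density
of roughly 3500 kg/m^3 as an added assumption:

```python
# Rotating-rod power transmission at 1 GW per square centimeter.
flux = 1e9 / 1e-4        # W/m^2 (1 GW/cm^2)
area = 1e15 / flux       # cross-section for the full 10^15 W budget
print(area)              # 100.0 m^2

volume = area * 2.0e7    # 20,000 km of rod, in m^3
print(volume)            # 2e9 m^3 of diamond
print(volume * 3500)     # 7e12 kg, assuming diamond at ~3500 kg/m^3
```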
Solar Collection Infrastructure
Let's go back to the idea of using high-altitude aircraft to collect solar
energy. In space, the sun shines at 1366 W/m^2. Considering the
inefficiency of solar cells, the angle of the sun (it may be hard to fly the
airplane at odd angles to make the solar collectors directly face the sun all
through the day), and nighttime, the wing surface may collect only about 100 W/m^2
on average. The Centurion solar airplane has a wing area of 153 m^2, which would collect about 1
billion J/day. To store that much energy would require about 232 kg of diamond
springs; the weight of Centurion when configured for flight to 80,000 ft is 863 kg.
It seems, then, that a fleet of 100 billion light-weight auto-piloted aircraft,
each making contact with the Earth for a few seconds every few days to transfer
its stored power, would be able to provide the full 10^15 W that the
Earth's civilization would be able to use sustainably. (Remember that a billion
J can be transferred through a 1 cm^2 rod in 1 second. Several other
power transfer methods could be used instead.) The total wing area would be
about ten million square kilometers--about 2% of the Earth's surface area. The
total mass would be about 3x10^13 kg, about 6% of the carbon in the
Earth's atmosphere. Of course,
removing this much carbon from the atmosphere would be a very good idea.
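A quick check that the fleet covers the budget, again using the Centurion-sized
wing and the 100 W/m^2 average assumed above:

```python
# Fleet check: can 100 billion such collectors supply 10^15 W?
planes = 100e9
per_plane = 153 * 100            # ~15 kW average per plane
print(planes * per_plane)        # ~1.5e15 W -- covers the 10^15 W budget

print(planes * 153 / 1e6)        # ~1.5e7 km^2 of wing ("ten million km^2")
```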
As calculated in my paper,
Design of a Primitive Nanofactory, building a kg of diamond might require as
much as 200 kWh, or 7x10^8 J. (Special-purpose construction of large simple
diamond shapes such as springs and aircraft structure could probably be done a
lot more efficiently.) Thus, in a day, an airplane could collect more than
enough energy to build another airplane. While flying for a day, it would also
have the opportunity to collect a lot of carbon dioxide. The energy cost to
convert carbon dioxide to suitable feedstock would be a small fraction of the
200 kWh/kg construction cost, since most of that cost went for computation
rather than chemistry. Thus it seems that the airplane fleet could in theory be
doubled each day, requiring only a little over a month to double from 1 airplane
to 100 billion.
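The doubling time is simple to verify, assuming one doubling per day:

```python
import math

# Doublings needed to go from 1 airplane to 100 billion.
print(round(math.log2(100e9), 1))   # 36.5 days -- "a little over a month"
```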
Energy Use, Transformation, and Efficiency
Energy can come in many forms, such as mechanical energy, electrical energy,
light, heat, and chemical energy. Today, energy is most easily stored in
chemical form and transported in chemical or electrical form. (Actually, the
ease of chemical storage comes largely from the fact that we find it already in
that form. Manufacturing energy-rich chemicals from any other form of energy is
quite difficult, costly, and inefficient with today's technology.)
Energy has a wide variety of uses, including transportation, powering computers,
illumination, processing materials, and heating or cooling. In general,
applications that are implemented with molecular manufacturing can be at least
as efficient as today's technology.
With molecular manufacturing, it will be possible to build extremely dense
conversion systems. Much of today's technology runs on electricity, and
electromechanical conversion (motors and generators) can be built extremely
small, with rotors less than 100 nm across. This is good news because such
systems increase in power density as they shrink. A nanoscale motor/generator
could have a power density of 10^15 W/m^3. This means that these components will
take almost negligible volume in almost any conceivable product.
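To see how negligible, consider a motor sized to deliver a person's full 100 kW
budget at that power density (a sketch using the essay's figures):

```python
# Volume of a 100 kW motor at 10^15 W/m^3 power density.
volume = 100e3 / 1e15        # m^3
print(volume)                # 1e-10 m^3
print(volume ** (1 / 3))     # ~4.6e-4 m -- a cube under half a millimeter
```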
There's even more good news. Nanomachines should lose less energy to friction as
they are operated more slowly. Thus, if some of their astronomical power density
is traded for efficiency--incorporating one hundred times as many motors, and
running them 1/100 as fast--then the efficiency, already probably pushing 99%,
will become even better. This means that most products will have far less
internal waste heat to get rid of than if they were built with today's technology.
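The speed-for-efficiency trade can be sketched under one simplifying assumption,
namely that friction loss per motor scales with the square of speed, as for
drag-like losses (the numbers are illustrative, not from the essay):

```python
# Trade power density for efficiency: n motors running at 1/n speed,
# with friction loss per motor assumed to scale as speed squared.
def friction_loss(n, base=1.0):
    speed = 1.0 / n              # each motor runs n times slower
    return n * base * speed ** 2

print(friction_loss(1))          # 1.0 (baseline)
print(friction_loss(100))        # ~0.01 -- total friction drops 100-fold
```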
Today's laptop computer might be replaced with one that contained millions of
high-performance CPUs working in parallel--while using less power. This is
because today's computers are quite inefficient; they spend huge amounts of
energy pushing electrons back and forth in sufficient quantities to maintain a
clean signal, and the energy of each signal is thrown away billions of times per
second. Nano-built computers will have better ways of retaining signals, and
will be designed to re-use much of the energy that is thrown away in today's
designs. It is safe to say that a nano-built computer could provide more
processing power than today's programmers would know what to do with, without
using more than a tiny fraction of the personal power budget.
Modern food production is a major resource drain--not only fossil fuels for
machinery and fertilizer, but also water, topsoil, and land area, plus the costs
of associated pollution. Much of this drain could be eliminated by enclosing
agriculture in inexpensive, automated greenhouses. Further efficiency
improvements could be achieved by a gradual switch to manufactured food;
although it would have seemed science-fictional just a few decades ago, people
today are already eating "energy bars" and other high-tech food products that
have little in common with natural food.
The biggest power source in the world today is fossil fuel. This is usually
burned and used to run heat engines, which inevitably throw away more than half
the energy as waste heat. Fuel cells are not heat engines, and are not limited
by Carnot efficiency. Today, fuel cells are finicky, fragile, and expensive.
However, nanofactory-built fuel cells should be less fragile, more compact, and
certainly cheaper. In addition, direct chemomechanical conversion should be
possible for at least some fuels, and may be reasonably efficient.
Because fuel poses storage and safety problems, and needs an air supply, it
seems likely that many nano-built products will use mechanical power storage,
which can be recharged and discharged quickly and efficiently. As noted above,
the energy density of diamond springs is about as good as some liquid fuels--far
superior to batteries.
Several authors, including Eric Drexler, Josh Hall, and Robert Freitas, have
pointed out that large masses of nanomachinery may generate far too much waste
heat to be cooled conveniently--or at all. However, the same high power density
that reduces the allowable mass of nanomachinery also means that only small
quantities will be needed to implement functionality equivalent to that found in
today's products. In fact, nano-built products will typically be quite a bit
more efficient. Instead of the mass of active nanomachinery, a more useful
metric is the power generated by the machinery.
To achieve the same results as today's products, nano-built products will have
to handle less heat, because they will be more efficient. This is especially
true in the case of fuel-burning engines, since no nano-built product will need
to use a heat engine; instead, they will be able to store mechanical energy
directly, or at the worst will use a compact and efficient fuel cell.
Products that interact energetically with the environment, such as water pumps
and vehicles, will still need a lot of energy to overcome friction (and probably
turbulence) and accomplish their task. However, their internal mechanisms will
only be transforming the energy they use, not converting much of it to heat.
Energy that is used to overcome fluid resistance will typically be carried away
by the fluid; only in extreme cases, such as supersonic airplanes, do products
suffer significant structural heating.
Molecular manufacturing will provide the capability to engage in planet-scale
engineering, such as building a new petawatt solar-gathering capability in a
month or so. This could be used to provide perhaps 100 times more energy than we
use today--as much as we can safely use without harming the environment. The
collected energy could be delivered in a near-continuous stream, close to where
it was needed. Even if divided with a moderate degree of inequity, there should
be enough energy for everyone on the planet to enjoy a Western standard of
living.
Many of today's applications can be made significantly more efficient. In
particular, the waste associated with fuel-burning engines and power plants can
be eliminated. However, the energy cost associated with transportation is likely
to remain high, especially since new technology will enable greater speed.