CRN Science & Technology Essays - 2004
"I suppose the process of acceptance will pass through the usual four stages: 1) this is worthless nonsense; 2) this is an interesting, but perverse, point of view; 3) this is true, but quite unimportant; 4) I always said so."
— Geneticist J.B.S. Haldane, on the stages a scientific theory goes through
Each issue of the C-R-Newsletter features a brief article explaining technical aspects of advanced nanotechnology. They are gathered in these archives for your review. If you have comments or questions, please contact us.
1. Sub-wavelength Imaging (January 2004)
2. Nucleic Acid Engineering (February 2004)
3. Power of Molecular Manufacturing (March 2004)
4. Science vs. Engineering vs. Theoretical Applied Nanotechnology (April 2004)
5. … of Entropy (May 2004)
6. Engineering, Biology, and Nanotechnology (June 2004)
7. … to Basics (July 2004)
8. Off-Grid With Molecular Manufacturing (August 2004)
9. Coping with Nanoscale Errors
10. Many Options for Molecular Manufacturing (November 2004)
11. Planar Assembly—A better way to build large nano-products (December 2004)
2005 Essays Archive
2006 Essays Archive
2007 Essays Archive
2008 Essays Archive
Planar Assembly—A better way to build large nano-products
by Chris Phoenix, CRN Director of Research
This month's essay is adapted from a paper I wrote recently for my NIAC grant, explaining why planar assembly, a new way to build large products from nano-sized building blocks, is better and simpler than convergent assembly.
Molecular manufacturing promises to build large quantities of nano-structured
material, quickly and cheaply. However, achieving this requires very small
machines, which implies that the parts produced will also be small. Combining
sub-micron parts into kilogram-scale machines will not be trivial.
In Engines of Creation (1986),
Drexler suggested that large products could be built by self-contained
micron-scale "assembler" units that would combine into a scaffold, take raw
materials and fuel from a special fluid, build the product around themselves,
and then exit the product, presumably filling in the holes as they left. This
would require a lot of functionality to be designed into each assembler, and a
lot of software to be written.
In Nanosystems (1992), Drexler developed a simpler idea: convergent
assembly. Molecular parts would be fabricated by mechanosynthesis, then placed
on assembly lines, where they would be combined into small assemblages. Each
assemblage would move to a larger line, where it would be combined with others
to make still larger concretions, and so on until a kilogram-scale product was
built. This would probably be a lot simpler than the self-powered scaffolding of
Engines, but implementing automated assembly at many different scales for many
different assemblages would still be difficult.
In 1997, Ralph Merkle published
a paper, "Convergent Assembly", suggesting that the parts to be assembled
could have a simple, perhaps even cubical shape. This would make the assembly
automation significantly less complex. In 2003, I published a
very long paper analyzing many operational and architectural details of a
kilogram-per-hour nanofactory. However, despite 80 pages of detail, my factory
was limited to joining cubes to make larger cubes. This imposed severe limits on
the products it could produce.
In 2004, a collaboration between Drexler and former engineer John Burch resulted
in the resurrection of an idea that was touched on in Nanosystems:
instead of joining small parts to make bigger parts through several levels,
add small parts directly to a surface of the full-sized product,
extruding the product [38 MB movie] from the assembly plane. It turns
out that this does not take as long as you'd expect; in fact, the speed of
deposition (about a meter per hour) should not depend on the size of the parts,
even for parts as small as a micron in size.
Problems with Earlier Methods
In studying molecular manufacturing, it is common to find that problems are
easier to solve than they initially appeared. Convergent assembly requires
robotics in a wide range of scales. It also needs a large volume of space for
the growing parts to move through. In a simple cube-stacking design, every large
component must be divisible along cube boundaries. This imposes constraints on
either the design or the placement of the component relative to the cube matrix.
Another set of problems comes from the need to handle only cubes. Long skinny
components have to be made in sections and joined together, and supported within
each cube. Furthermore, each face of each cube must be stiff, so as to be joined
to the adjacent cube. This means that products will be built solid: shells or
flimsy structures would require interior scaffolding.
If shapes other than cubes are used, assembly complexity quickly increases,
until a nanofactory might require many times more programming and design than a
modern "lights-out" factory.
However, planar assembly bypasses all these problems.
The idea of planar assembly is to take small modules, all roughly the same size,
and attach them to a planar work surface, the working plane of the product under
construction. In some ways, this is similar to the concept of 3D inkjet-style
prototyping, except that there are billions of inkjets, and instead of ink
droplets, each particle would be molecularly precise and could be full of
intricate machinery. Also, instead of being sprayed, they would be transported
to the workpiece in precise and controlled trajectories. Finally, the workpiece
(including any subpieces) would be gripped at the growing face instead of
requiring external support.
Small modules supplied by any of a variety of fabrication technologies would be
delivered to the assembly plane. The modules would all be of a size to be
handled by a single scale of robotic placement machinery. This machinery would
attach them to the face of a product being extruded from the assembly plane. The
newly attached modules would be held in place until yet newer modules were
attached. Thus, the entire face under construction serves as a "handle" for the
growing product. If blocks are placed face-first, they will form tight
parallel-walled holes, making it hard to place additional blocks; but if the
blocks are placed corner-first, they will form pyramid-shaped holes for
subsequent blocks to be placed into. Depending on fastening method, this may
increase tolerance of imprecision and positional variance in placement.
The speed of this method is counterintuitive; one would expect that the speed of
extrusion would decrease as the module size decreased. But in fact, the speed
remains constant. For every factor of module size decrease, the number of
placement mechanisms that can fit in an area increases as the square of that
factor, and the operation speed increases by the same factor. These balance the
factor-cubed increase in number of modules to be placed. This analysis breaks
down if the modules are made small enough that the placement mechanism cannot
scale down along with the modules. However, sub-micron kinematic systems are
already being built via both MEMS and biochemistry, and robotics built by
molecular manufacturing should be better. This indicates that sub-micron modules
can be handled.
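The constant-speed claim above can be checked with a small sketch. The constants here (placement-arm footprint, tip speed, strokes per placement) are hypothetical illustrations, not figures from the essay; the point is only that the module size cancels out of the extrusion rate.

```python
# Sketch of the scaling argument: the linear extrusion rate of the
# product face is independent of module size s.
# Assumptions (illustrative, not from the essay):
#   - placement mechanisms tile the assembly plane, each with a
#     footprint of (2s)^2, so their areal density scales as s^-2
#   - each mechanism places modules at a rate proportional to 1/s
#     (smaller arms travel shorter distances at similar tip speeds)

def extrusion_speed(s, tip_speed=1e-3, strokes_per_place=10.0):
    """Growth rate of the product face, in m/s, for cubic modules
    of edge length s (meters)."""
    mechanisms_per_m2 = 1.0 / (2 * s) ** 2                 # ~ s^-2
    places_per_sec = tip_speed / (strokes_per_place * s)   # ~ s^-1
    # Deposited volume per unit area per second = linear speed (m/s);
    # each placement adds one module of volume s^3.
    return mechanisms_per_m2 * places_per_sec * s ** 3

for s in (1e-6, 1e-7, 1e-8):
    print(f"s = {s:.0e} m -> {extrusion_speed(s):.2e} m/s")
```

With these made-up constants the rate comes out near 10 cm/hour for every module size; the essay's meter-per-hour figure simply corresponds to different assumed constants, and the scaling conclusion is the same either way.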
Advantages of Planar Assembly
This approach requires only one level of modularity from nanosystems to
human-scale products, so it is simpler to design. Blocks (modules) built by a
single fabrication system can be as complex as that system can be programmed to
produce. Whether the feedstock producing system uses direct covalent deposition
or guided self-assembly to build the nanoblocks, the programmable feature size
will be sub-nanometer to a few nanometers. Since a single fabrication system can
produce blocks larger than 100 nanometers, a fair amount of complexity (several
motors and linkages, a sensor array, or a small CPU) could be included in a single block.
Programmable, or at least parameterized (or, at worst, limited-type), modules would then be aggregated into large systems and "smart materials".
Because of the molecular precision of the nanoblocks, and because of the
inter-nanoblock connection, these large-scale and multi-scale components could
be designed without having to worry about large-scale divisions and fasteners,
which are a significant issue in the convergent assembly approach (and also in conventional manufacturing).
Support of large structures will be much easier in planar assembly than in
convergent assembly. In simplistic block-based convergent assembly, each
structure (or cleaved subpart thereof) must be embedded in a block. This makes
it impossible to build a long thin structure that is not supported along each
segment of its length, at least by scaffolding.
In planar assembly, such a structure can be extruded and held at the base even
if it is not held anywhere else along its length. The only constraint is the
strength of the holding mechanism vs. the forces (vibration and gravity) acting
on the system; these forces are proportional to the cube of size, and rapidly
become negligible at smaller scales. In addition, the part that must be
positioned most precisely—the assembly plane—is also the part that is held.
Positional variance at the end of floppy structures usually will not matter,
since nothing is being done there; in the rare cases where it is a problem,
collapsible scaffolds or guy wires can be used. (The temporary scaffolds used in
3D prototyping have to be removed after manufacture, so are not the best design
for a fully automated system.)
This indicates that large open-work structures can be built with this method.
Unfolding becomes much less of an issue when the product is allowed to have
major gaps and dangling structures. The only limit on this is that extrusion
speed is not improved by sparse structures, so low-density structures will take
longer to build than if built using convergent assembly.
Surface assembly of sub-micron blocks places a major stage of product assembly
in a very convenient realm of physics. Mass is not high enough to make inertia,
gravity, or vibration a serious problem. (The mass of a one-micron cube is about
a picogram, which under 100 G acceleration would experience about a piconewton of
force. This is roughly a thousand times smaller than the force required to detach 1 square nanometer of
van der Waals adhesion (tensile strength 1 GPa, Nanosystems 9.7.1). Resonant
frequencies will be on the order of MHz, which is easy to isolate/damp.)
Stiffness, which scales adversely with size, is significantly better than at the
nanoscale. Surface forces are also not a problem: large enough to be convenient
for handling—instead of grippers, just put things in place and they will
stick—but small enough that surfaces can easily be separated by machinery. (The
problems posed by surface forces in MEMS manipulation are greatly exacerbated by
the crudity of surfaces and actuation in current technology. Nanometer-scale
actuators can easily modulate or supplement surface forces to allow convenient
attachment and release.)
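The numbers in the parenthetical above are easy to verify. This back-of-the-envelope check assumes a block density of 2000 kg/m^3, which the essay does not state:

```python
# Forces on a one-micron block under 100 G acceleration, compared
# with the van der Waals detachment force from Nanosystems 9.7.1.
density = 2000.0              # kg/m^3 (assumed block density)
edge = 1e-6                   # 1 micron
mass = density * edge ** 3    # ~2e-15 kg, i.e. about two picograms

accel = 100 * 9.81                    # 100 G, in m/s^2
inertial_force = mass * accel         # ~2e-12 N: piconewtons

# Force to detach 1 nm^2 of van der Waals contact at 1 GPa
# tensile strength:
vdw_force = 1e9 * (1e-9) ** 2         # 1e-9 N: a nanonewton

print(inertial_force, vdw_force)      # adhesion wins by ~1000x
```

So even at 100 G, inertial loads are several hundred times weaker than a single square nanometer of adhesion, which is why mass, gravity, and vibration are not serious problems at this scale.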
Sub-micron blocks are large enough to contain thousands or even millions of
features: dozens to thousands of moving parts. But they are small enough to be
built directly out of molecules, benefiting from the inherent precision of this
approach as well as nanoscale properties including superlubricity. If blocks can
be assembled from smaller parts, then block fabrication speed can improve.
Centimeter-scale products can benefit from the ability to directly build
large-scale structures, as well as the fine-grained nature of the building
blocks (note that a typical human cell is 10,000-20,000 nm wide). For most
purposes, the building blocks can be thought of as a continuous smooth material.
Partial blocks can be placed to make the surfaces smoother—molecularly smooth,
except perhaps for joints and crystal atomic layer steps.
Modular Design Constraints
Although there is room for some variability in the size and shape of blocks,
they will be constrained by the need to handle them with single-sized machinery.
A multi-micron monolithic subsystem would not be buildable with this
manufacturing system: it would have to be built in pieces and assembled by
simple manipulation, preferably mere placement. The "expanding ridge joint"
system, described in my
Nanofactory paper, appears to work for both strong mechanical joints and a
variety of functional joints.
Human-scale product features will be far too large to be bothered by sub-micron
grain boundaries. Functions that benefit from miniaturization (due to scaling
laws) can be built within a single block. Even at the micron scale, where these
constraints may be most troublesome, the remaining design space is a vast
improvement over what we can achieve today with existing technology.
Sliding motion over a curved unlubricated surface will not work well if the
surface is composed of blocks with 90 degree corners, no matter how small they
are. However, there are several approaches that can mitigate this problem.
First, there is no requirement that all blocks be complete; the only requirement
is that they contain enough surface to be handled by assembly robotics and
joined to other blocks. Thus an approximation of a smooth curved surface with no
projecting points can be assembled from prismatic partial-cubes, and a better
approximation (marred only by joint lines and crystal steps) can be achieved if
the fabrication method allows curves to be built. Hydrodynamic or molecular
lubrication can be added after assembly; some lubricant molecules might be built
into the block faces during fabrication, though this would probably have limited
service life. Finally, in clean joints, nanoscale machinery attached to one
large surface can serve as a standoff or actuator for another large surface,
roughly equivalent to a forest of traction drives.
The grain scale may be large enough to affect some optical systems. In this
case, joints like those between blocks can be built at regular intervals within
the blocks, decreasing the lattice spacing and rendering it invisible to wavelengths of interest. See my original NIAC paper for discussion of factory architecture and extrusion details.
Conclusion and Further Work
Surface assembly is a powerful approach to constructing meter-scale products
from sub-micron blocks, which can themselves be built by individual fabrication
systems implementing molecular manufacturing or directed self-assembly. Surface
assembly appears to be competitive with, and in many cases preferable to, all
previously explored systems for general-purpose manufacture of large products.
It is hard to find an example of a useful device that could not be built with
the technique, and the expected meter-per-hour extrusion rate means that even
large products could be built in their final configuration (as opposed to being unfolded or deployed after manufacture).
What this means is that, once we have the ability to build billion-atom
(submicron) blocks of nanomachinery, it will be straightforward to combine them
into large products. The opportunities and problems of molecular manufacturing
can develop even faster than was previously thought.
Many Options for Molecular Manufacturing
by Chris Phoenix, CRN Director of Research
Molecular manufacturing is the use of programmable chemistry to build
exponential manufacturing systems and high-performance products. There are
several ways this can be achieved, each with its own benefits and drawbacks.
This essay analyzes the definition of molecular manufacturing and describes
several ways to achieve the requirements.
Exponential Manufacturing Systems
An exponential manufacturing system is one that can, within broad limits, build
additional equivalent manufacturing systems. To achieve that, the products of
the system must be as intricate and precise as the original. Although there are
ways to make components more precise after initial manufacture, such as milling,
lapping, and other forms of machining, these are wasteful and add complications.
So the approach of molecular manufacturing is to build components out of
extremely precise building blocks—molecules
and atoms, which have completely deterministic structures. Although thermal
noise will cause temporary variations in shape, the average shape of two
components with identical chemical structures will also be identical, and
products can be made with no loss of precision relative to the factories.
The intricacy of a product is limited by its inputs. Self-assembled
nanotechnology is limited by this: the intricacy of the product has to be built
into the components ahead of time. There are some molecular components such as
DNA that can hold quite a lot of information. But if those are not used—and
even if they are—the
manufacturing system will be much more flexible if it includes a programmable
manipulation function to move or guide parts into the right place.
Programmable Chemistry: Mechanosynthesis
Chemistry is extremely flexible, and extremely common; every waft of smoke
contains hundreds or thousands of carbon compounds. But a lot of chemistry
happens randomly and produces intricate but uncontrolled mixtures of compounds.
Other chemistry, including crystal growth, is self-templating and can be very
precise, but produces only simple results. It takes special techniques to make
structures using chemistry that are both intricate and well-planned.
There are several different ways, at least in theory, that atoms can be joined
together in precise chemical structures. Individual reactive groups can be
fastened to a growing part. Small molecules can be strung together like beads in
a necklace. It's been proposed that small molecules can be placed like bricks,
building 3D shapes with the building blocks fastened together at the edges or
corners. Finally, weak parts can be built by self-assembly—subparts
can be designed to match up and fall into the correct position. It may be
possible to strengthen these parts chemically after they are assembled.
Mechanosynthesis is the term for building large parts by fastening a few atoms
at a time, using simple reactions repeated many times in programmable positions.
So far, this has been demonstrated for only a few chemical reactions, and no
large parts have been built yet. But it may not take many reactions to complete a
general-purpose toolbox that can be used in the proper sequence and position
to build arbitrary shapes with fairly small feature sizes.
The advantage of a mechanosynthetic approach is that it allows direct
fabrication of engineered shapes, and very high bond densities (for strength).
There are two disadvantages. First, the range of molecular patterns that can be
built may be small, at least initially—the
shapes may be quite programmable, but lack the molecular subtlety of
biochemistry. This may be alleviated as more reactions are developed. Second,
mechanosynthesis will require rather intricate and precise machinery—of
a level that will be hard to build without mechanosynthesis. This creates a
bootstrapping problem: some other technology must be used to build the first fabrication machine. Scanning probe microscopes have the
required precision, or one of the lower-performance machine-building
alternatives described in this essay may be used to build the first one.
Programmable Chemistry: Polymers and Possibilities
Biopolymers are long heterogeneous molecules borrowed from biology. They are
formed from a menu of small molecules called monomers stuck end-to-end in a
sequence that can be programmed. Different monomers have different parts
sticking out the sides, and some of these parts are attracted to the side parts
of other monomers. Because the monomer joining is flexible, these attractive
parts can pull the whole polymer molecule into a "folded" configuration that is
more or less stable. Thus the folded shape can be indirectly programmed by
choosing the sequence of monomers. Nucleic acid shapes (DNA and RNA) are a lot
easier to program than protein shapes.
Biopolymers have been studied extensively, and have a very flexible chemistry:
it's possible to build lots of different features into one molecule. However,
protein folding is complex (not just complicated, but inherently hard to
predict), so it's only recently become possible to design a sequence that will
produce a desired shape. Also, because there's only one chemical bond between
the monomers, biopolymers can't be much stronger than plastic. And because the
folded configurations hold their shapes by surface forces rather than strong
bonds, the structures are not very stiff at all, which makes engineering more
difficult. Biopolymers are constructed (at least to date) with bulk chemical
processes, meaning that it's possible to build lots of copies of one intricate
shape, but harder to build several different engineered versions. (Copying by
bacteria, and construction of multiple random variations, don't bypass this
limitation.) Also, reactants have to be flushed past the reaction site for each
monomer addition, which takes significant time and leads to a substantial error rate.
A new kind of polymer has
just been developed. It's based on amino acids, but the bonds between them
are stiff rather than floppy. This means the folded shape can be directly
engineered rather than emerging from a complex process. It also means the
feature size should be smaller than in proteins, and the resulting shapes should
be stiffer. This appears to be a good candidate for designing near-term
molecular machine systems, since relatively long molecules can be built with
standard solution chemistry. At the moment, it takes about an hour to attach
each monomer to the chain, so a machine with many thousands of features would
not be buildable.
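The one-monomer-per-hour figure puts a hard bound on what can be synthesized this way. A quick illustration (the monomer counts are hypothetical examples; the essay gives only the per-monomer rate):

```python
# Build-time estimate for chain synthesis at one monomer per hour.
hours_per_monomer = 1.0   # rate quoted in the essay

for n_monomers in (100, 1000, 10000):
    days = n_monomers * hours_per_monomer / 24
    print(f"{n_monomers:>6} monomers -> {days:,.1f} days")
```

A hundred monomers take about four days, but ten thousand take over a year, which is why a machine with many thousands of features is out of reach at this synthesis rate.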
There's a theorized approach that's halfway between mechanosynthesis and polymer
synthesis. The idea is to use small homogeneous molecules that can be guided
into place and then fastened together. Because this requires lower precision,
and may use a variety of molecules and fastening techniques, this may be a
useful bootstrapping approach. Ralph Merkle
wrote a paper on it a few years ago.
A system that uses solution chemistry to build parts can probably benefit from
mechanical control of that chemistry. Whether by deprotecting only selected
sites to make them reactive, or mechanically protecting some sites while leaving
others exposed, or moving catalysts and reactants into position to promote
reactions at chosen sites, a fairly simple actuator system may be able to turn
bulk chemistry into programmable chemistry.
Living organisms provide one possible way to use biopolymers. If a well-designed
stretch of DNA is inserted into bacteria, then the bacteria will make the
corresponding protein; this can either be the final product, or can work with
other bacterial systems or transplanted proteins. (The bacteria also duplicate
the DNA, which may be the final product.) However, this is only semi-controlled
due to complex interactions within the bacterial system. Living organisms
dedicate a lot of structure and energy to dealing with issues that engineered
systems won't have to deal with, such as metabolism, maintaining an immune
system, food-seeking, reproduction, and adapting to environmental perturbations.
The use of bacteria as protein factories has already been accomplished, but the
use of bacteria-produced biopolymers for engineered-shape products has only been
done in a very small number of cases (e.g. Shih's recent octahedra [PDF];
in this case it was DNA, not protein), and only for relatively simple shapes.
Manufacturing Systems, Again
Now that we have some idea of the range of chemical manipulations, we can look
at how those chemical shapes can be joined into machines. Machines are important
because some kind of machine will be necessary to translate programmed
information into mechanical operations. Also, the more functions that can be
implemented by nano-fabricated machines, the fewer will have to be implemented
by expensive, conventionally manufactured hardware.
A system with the ability to build intricate parts by mechanosynthesis or small
building blocks probably will be able to use the same equipment to move those
shapes around to assemble machines, since the latter function is probably
simpler and doesn't require much greater range of motion. A system based on
biopolymers could in theory rely on self-assembly to bring the molecules
together. However, this process may be slow and error-prone if the molecules are
large and many different ones have to come together to make the product. A bit
of mechanical assistance, grabbing molecules from solution and putting them in
their proper places while protecting other places from incorrect molecules
dropping in, would introduce another level of programmability.
Any of these operations will need actuators. For simple systems, binary
actuators working ratchets should be sufficient. Several kinds of
electrochemical actuators have been developed in recent months. Some of these
may be adaptable for electrical control. For initial bootstrapping, actuators
controlled by flushing through special chemicals (e.g. DNA strands) may work,
although quite slowly. Magnetic and electromagnetic fields can be used for quite
precise steering, though these have to be produced by larger external equipment
and so are probably only useful for initial bootstrapping. Mechanical control by
varying pressure has also been proposed for intermediate systems.
In order to scale up to handle large volumes of material and make large
products, computational elements and eventually whole computers will have to be
built. The nice thing about computers is that they can be built using anything
that makes a decent switch. Molecular electronics, buckytube transistors, and
interlocking mechanical systems are all candidates for computer logic.
High Performance Products
The point of molecular manufacturing is to make valuable products. Several
things can make a product valuable. If it's a computer circuit, then smaller
component size leads to faster and more efficient operation and high circuit
density. Any kind of molecular manufacturing should produce very small feature
sizes; thus, almost any flavor of molecular manufacturing can be expected to
make valuable computers. A molecular manufacturing system that can make all the
expensive components of its own machinery should also drive down manufacturing
cost, increasing profit margins for manufacturers and/or allowing customers to
budget for more powerful computers.
Strong materials and compact motors can be useful in applications where weight
is important, such as aerospace hardware. If a kilowatt or even just a hundred
watt motor can fit into a cubic millimeter, this will be worth quite a lot of
money for its weight savings in airplanes and space ships. Even if raw materials
cost $10,000 a kilogram, as some biopolymer ingredients do, a cubic millimeter
weighs about a milligram and would cost about a penny. Of course this
calculation is specious since the mounting hardware for such a motor would
surely weigh more than the motor itself. Also, it's not clear whether biopolymer
or building-block styles of molecular manufacturing can produce motors with
anywhere near this power density; and although the scaling laws are pretty
straightforward, nothing like this has been built or even simulated in detail.
Once a process is developed that can make strong programmable shapes out of
simple cheap chemicals, then product costs may drop precipitously.
Mechanosynthesis is expected to achieve this, as shown by the preliminary work
on closed-cycle mechanosynthesis starting with acetylene. No reaction cycle of
comparable cost has been proposed for solution chemistry, but it seems likely
that one can be found, given that some polymerizable molecules such as sugar are quite cheap.
This essay has surveyed numerous options for molecular manufacturing. Molecular
manufacturing requires the ability to inject programmability for engineering,
but this can be done at any of several stages. For scalability, it also requires
the ability to build nanoscale machines capable of building their duplicates.
There are several options for machines of various compositions and in various environments.
At the present time, no self-duplicating chemical-building molecular machine has
been designed in detail. However, given the range of options, it seems likely
that a single research group could tackle this problem and build at least a
partial proof of concept device—perhaps
one that can do only limited chemistry, or a limited range of shapes, but is enough to demonstrate the approach.
Subsequent milestones would include:
1) Not relying on flushing sequences of chemicals past the machine
2) Machines capable of general-purpose manufacturing
3) Structures that allow several machines to cooperate in building large products
4) Building and incorporating control circuits
Once these are achieved, general-purpose molecular manufacturing will not be far
away. And that will allow the pursuit of more ambitious goals, such as machines
that can work in gas (instead of solution) or vacuum for greater mechanical
efficiency. Working in inert gas or vacuum also provides a possible pathway (one
of several) to what may be the ultimate performer: products built by
mechanosynthesis out of carbon lattice.
Coping with Nanoscale Errors
by Chris Phoenix, CRN Director of Research
There is ample evidence that MIT’s
Center for Bits and Atoms is directed by a genius. Neil Gershenfeld has
pulled together twenty research groups from across campus. He has inspired them
to produce impressive results in fields as diverse as biomolecule motors and
cheap networked light switches. Neil teaches a wildly popular course called "How
to make (almost) anything", showing techies and non-techies alike how to use
rapid prototyping equipment to make projects that they themselves are interested
in. And even that is just the start. He has designed and built "Fab Labs"—rooms
with only $20,000 worth of rapid-prototyping equipment, located in remote areas
of remote countries, that are being used to make crucial products. Occasionally
he talks to rooms full of military generals about how installing networked
computers can defuse a war zone by giving people better things to do than fight.
So when Neil Gershenfeld says that there is no way to build large complex
nanosystems using traditional engineering, I listen very carefully. I have been
thinking that large-scale nano-based products can be designed and built entirely
with traditional engineering. But he probably knows my field better than I do.
Is it possible that we are both right? I've read his statements very carefully
several times, and I think that in fact we don't disagree. He is talking about
large complex nanosystems, while I am talking about large simple nanosystems.
The key question is errors. Here's what Neil says about errors: "That, in turn,
leads to what I'd say is the most challenging thing of all that we're doing. If
you take the last things I've mentioned—printing
logic, molecular logic, and eventually growing, living logic—it
means that we will be able to engineer on Avogadro scales, with complexity on
the scale of thermodynamics. Avogadro's number, 10^23, is the number
of atoms in a macroscopic object, and we'll eventually create systems with that
many programmable components. The only thing you can say with certainty about
this possibility is that such systems will fail if they're designed in any way
we understand right now."
In other words, errors accumulate rapidly, and when working at the nanoscale,
they can and do creep in right from the beginning. A kilogram-scale system
composed of nanometer-scale parts will have on the order of
100,000,000,000,000,000,000,000 parts. And even if by some miracle it is
manufactured perfectly, at least one of those parts will be damaged by
background radiation within seconds of manufacture.
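The parts count quoted above follows directly from the sizes involved. A sketch, assuming nanometer-scale parts and a material density of 2000 kg/m^3 (the density is my assumption, not the essay's):

```python
# Order-of-magnitude count of nanometer-scale parts in one kilogram.
density = 2000.0                       # kg/m^3 (assumed)
part_edge = 1e-9                       # 1 nm cube per part
part_mass = density * part_edge ** 3   # ~2e-24 kg per part

parts_per_kg = 1.0 / part_mass
print(f"{parts_per_kg:.0e}")           # ~5e+23
```

That lands squarely on the ~10^23 figure in the paragraph above: Avogadro-scale by any measure.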
Of course, errors plague the large crude systems we build today. When an
airplane requires a computer to stay in the air, we don't use one computer—we
use three, and if one disagrees with the other two, we take it offline and
replace it immediately. But can we play the same trick when engineering with
Avogadro numbers of parts? Here's Neil again: "Engineers still use the math of a
few things. That might do for a little piece of the system, like asking how much
power it needs, but if you ask about how to make a huge chip compute or a huge
network communicate, there isn't yet an Avogadro design theory."
Neil is completely right: there is not yet an Avogadro design theory. Neil is
working to invent one, but that will be a very difficult and probably lengthy
task. If anyone builds a
nanofactory in the next five or ten years, it will have to be done with "the
math of a few things." But how can this math be applied to Avogadro numbers of parts?
Consider this: Every second, 100,000,000 transistors in your computer do
2,000,000,000 operations; there are 7,200 seconds in a two-hour movie; so to
play a DVD, about 10^21 signal-processing operations have to take
place flawlessly. That's pretty close to Avogadro territory. And playing DVDs is
not simple. Those transistors are not doing the same thing over and over; they
are firing in very complicated patterns, orchestrated by the software. And the
software, of course, was written by a human.
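The arithmetic behind that estimate is easy to check. A minimal sketch, assuming (as the figures above do) that each transistor switches two billion times per second:

```python
transistors = 100_000_000            # 1e8 transistors in the computer
switches_per_second = 2_000_000_000  # 2e9 operations per transistor per second
movie_seconds = 7_200                # two hours

total_ops = transistors * switches_per_second * movie_seconds
print(f"{total_ops:.1e}")  # about 1.4e+21, close to Avogadro territory
```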
How is this possible, and why doesn't it contradict Neil? The answer is that
computer engineering has had decades of practice in using the "math of a few
things." The people who design computer chips don't plan where every one of
those hundred million transistors goes. They design at a much higher level,
using abstractions to handle transistors in huge organized collections of
collections. Remember that Neil talked about "complexity on the scale of
thermodynamics." But there is nothing complex about the collections of
transistors. Instead, they are merely complicated.
The difference between complication and complexity is important.
Roughly speaking, a system is complex if the whole is greater than the sum of
its parts: if you can't predict the behavior that will emerge just from knowing
the individual behavior of separated components. If a system is not complex,
then the whole is equal to the sum of the parts. A straightforward list of
features will capture the system's behavior. In a complicated system, the list
gets longer, but no less accurate. Non-complex systems, no matter how
complicated, can in principle be handled with the math of a few things. The
complications just have to be organized into patterns that are simple to
specify. The entire behavior of a chip with a hundred million transistors can be
described in a single book. This is true even though the detailed design of the
chip—the road map of the wires—would take thousands of books to describe.
Neil talked about one other very important concept. In signaling, and in
computation, it is possible to erase errors by spending energy. A computer could
be designed to run for a thousand years, or a million, without a single error.
There is a threshold of error rates below which the errors can be reliably
corrected. Now we have the clues we need to see how to use the math of a few
things to build complicated non-complex systems out of Avogadro numbers of parts.
When I was writing my paper on "Design
of a Primitive Nanofactory", I did calculations of failure rates. In order
for quadrillions of sub-micron mechanisms to all work properly, they would have
to have failure rates of about 10^-19. This is pretty close to (the
inverse of) Avogadro's number, and is essentially impossible to achieve. The
failure rate from background radiation is as high as 10^-4. However, a
little redundancy goes a long way. If you build one spare mechanism for every
eight, the system will last somewhat longer. This still isn't good enough; it
turns out you need seven spares for every eight. And things are still small
enough that you have to worry about radiation in the levels above, where you
don't have redundancy. But adding spare parts is in the realm of the math of a
few things. And it can be extended into a workable system.
The system is built out of levels of levels of levels: each level is composed of
several similar but smaller levels. This quasi-fractal hierarchical design is
not very difficult, especially since each level takes only half the space of the
next higher level. With many similar levels, is it possible to add a little bit
of redundancy at each level? Yes, it is, and it works very well. If you add one
spare part for every eight at each level, you can keep the failure rate as low
as you like—with
one condition: the initial failure rate at the smallest stage has to be below
3.2%. Above that number, one-in-eight redundancy won't help sufficiently—the
errors will continue to grow. But if the failure rate starts below 3.2%, it will
decrease at each higher redundant stage.
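That threshold really does come out of the math of a few things. Here is a minimal sketch (my own illustration, not taken from the nanofactory paper): a unit of nine sub-units, eight needed plus one spare, fails only when two or more of its sub-units fail. Iterating that rule level by level shows failure rates shrinking whenever the starting rate is below roughly 3.2%:

```python
from math import comb

def level_failure(p, needed=8, spares=1):
    """Failure rate of a unit built from needed+spares sub-units,
    each failing independently with probability p. The unit survives
    if no more than `spares` of its sub-units fail."""
    n = needed + spares
    survive = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                  for k in range(spares + 1))
    return 1 - survive

p = 0.02                      # initial failure rate, below the ~3.2% threshold
for level in range(8):
    p = level_failure(p)
print(p)                      # shrinks toward zero at each higher level
```

Running `level_failure` once on a rate just above 3.2% gives a larger rate, and on a rate just below it a smaller one, which is exactly the threshold behavior described above.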
This analysis can be applied to any system where inputs can be redundantly
combined. For example, suppose you are combining the output of trillions of
small motors into one big shaft. You might build a tree of shafts and gears. And
you might make each shaft breakable, so that if one motor or collection of
motors jams, the other motors will break its shaft and keep working. This system
can be extremely reliable.
There is a limitation here: complex products can't be built this way. In effect,
this just allows more efficient products to be built in today's design space.
But that is good enough for a start: good enough to rebuild our infrastructure,
powerful enough to build horrific weapons in great quantity, high-performance
enough—even with the redundancy—to give us access to space, and generally
capable of producing the mechanical systems that molecular manufacturing
promises.
Living Off-Grid With Molecular Manufacturing
by Chris Phoenix, CRN Director of Research
Living off-grid can be a challenge. When energy and supplies
no longer arrive through installed infrastructure, they must be collected and
stored locally, or done without. Today this is done with lead-acid batteries,
expensive water-handling systems, and so on. All these systems have limited
capacities. Conversely, living on-grid creates a distance between production and
consumption that makes it easy to ignore the implications of excessive resource
use. Molecular manufacturing can make off-grid living more practical, with clean local
production and easy managing of local resources.
For this essay, I will assume a molecular manufacturing technology based on
mechanosynthesis of carbon lattice. A bio-inspired nanotechnology would share
many of the same advantages. Carbon lattice (including diamond) is about 100
times as strong as steel per volume, and carbon is one-sixth as dense. This
implies that a structure made of carbon would weigh at most 1% of the weight of
a steel structure. This is important for several reasons, including cost and
portability. However, in most things made of steel, much of the material is
resisting compression, which requires far more bulk than resisting the same
amount of tension. (It's easier to crumple a steel bar than to pull it apart.)
When construction in fine detail doesn't cost any extra, it's possible to
convert compressive stress to tensile stress by using trusses or pressurized
tanks. So it'll often be safe to divide current product weight by 1,000. The
cost of molecular-manufactured carbon lattice might be $20 per kg ($10 per
pound) at today's electricity prices, and drop rapidly as nanofactories are
improved and nano-manufactured solar cells are deployed. This makes it very
competitive with steel as a structural material.
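Taking the essay's two figures as given, the weight claim follows from simple arithmetic (a sketch, not a materials-science calculation):

```python
strength_ratio = 100   # carbon lattice vs. steel, strength per volume (from text)
density_ratio = 1 / 6  # carbon vs. steel density (from text)

volume_needed = 1 / strength_ratio           # same load with 1/100 the volume
weight_ratio = volume_needed * density_ratio
print(weight_ratio)    # about 0.0017, comfortably under the "at most 1%" claim
```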
A two or three order of magnitude improvement in material properties, and a six
order of magnitude improvement in cost per feature and compactness of motors and
computers, allows the design of completely new kinds of products. For example, a
large tent or a small inflatable boat may weigh 10 kilograms. But built with
advanced materials, those 10 kilograms can do the work of 1,000 or even 10,000 kilograms: a house or
a yacht. Likewise, a small airplane or seaplane might weigh 1,000 kg today. A 10
kg full-sized collapsible airplane is not implausible; today's hang gliders
weigh only 30-40 kg, and they're built out of aluminum and nylon. Such an
airplane would be easy to store and cheap to build, and could of course be
powered by solar-generated fuel.
Today, equipment and structures must be maintained and their surfaces protected.
This generates a lot of waste and uses a lot of paint and labor. But, as the
saying goes, diamonds are forever. This is because in a diamond all the atoms
are strongly bonded to each other, and oxygen (even with the help of salt) can't
pull one loose to start a chemical reaction. Ultraviolet light can be blocked by
a thin surface coating molecularly bonded to the structure during construction.
So diamondoid structures would require no maintenance to prevent corrosion.
Also, due to the strongly bonded surfaces, it appears that nanoscale machines
will be immune to ordinary wear. A machine could be designed to run without
maintenance for a century.
Can molecular manufacturing build all the needed equipment? It appears so;
carbon is an extremely versatile atom. It can be a conductor, semiconductor, or
insulator; opaque or transparent; it can make inorganic (and indigestible)
substances like diamond and graphite, but with a few other readily available
atoms, it can make incredibly complex and diverse organic chemicals. And don't
forget that a complete self-contained molecular manufacturing system can be
quite small. So any needed equipment or products could be made on the spot, out
of chemicals readily available from the local environment. A self-contained
factory sufficient to supply a family could be the size of a microwave oven.
When a product is no longer wanted, it can be burned cleanly, being made
entirely of light atoms. It is worth noting that extraction of rare minerals
from ecologically or politically sensitive areas would become largely unnecessary.
Power collection and storage would require a lot fewer resources. A solar cell
only has to be a few microns thick. Lightweight expandable or inflatable
structures would make installation easy and potentially temporary. Energy could
be stored as hydrogen. The solar cells and the storage equipment could be built
by the on-site nanofactory.
The same goes for solar water distillers, and tanks and greenhouses for growing
fish, seaweed, algae, or hydroponic gardening. Water can also be purified
electrically and recovered from greenhouse air, and direct chemical food
production using cheap microfluidics will probably be an early post-nanofactory
development. With food, fuel, and equipment all available locally, there would
be very little need to ship supplies from centralized production facilities, and
water use per person could be much less than with open-air agriculture, with
fewer of today's problems in handling wastewater.
The developed nations today have a massive and probably unsustainable ecological
footprint. Because production is so decentralized, it is hard to observe the
impact of consumer choices. And because only a few areas of land are convenient
for transportation or ideal for agriculture, unhealthy patterns of land use have
developed. Economies of scale encourage large infrastructures. But nano-built
equipment benefits from other economies, so off-site production and distribution
will become less efficient than local production. Someone living off-grid will
be able literally to see their own ecological footprint, simply by looking at
the land area they have covered with solar cells and greenhouses. Cheap sensors
will allow monitoring of any unintentional pollution—though there will be fewer
pollution sources with clean manufacturing of maintenance-free products.
Cheap high-bandwidth communication without wires would require a new
infrastructure, but it would not be hard to build one. Simply sending up small
airplanes with wireless networking equipment would allow wireless communication
for hundreds of miles.
Incentive for theft might decrease, since people could more quickly and easily
build what they want for themselves rather than stealing other people's homemade products.
Molecular manufacturing should make it very easy to disconnect from today's
industrial grid. Even with relatively primitive (early) molecular manufacturing,
people could have far better quality of life off-grid than in today's slums,
while doing significantly less ecological damage. Areas that are difficult to
live in today could become viable living space. Although this would increase the
spread of humans over the globe, it would reduce the use of intensive
agriculture, centralized energy production, and transportation; the ecological
tradeoffs appear favorable. (With careful monitoring of waste streams, this
argument may even apply to ocean living.)
Everything written here also could apply to displaced persons. Instead of
refugee camps where barely adequate supplies are delivered from outside and
crowding leads to increased health problems, relatively small amounts of land
would allow each family (or larger social group) to be self-sufficient. This
would not mitigate the tragedy of losing their homes, but would avoid
compounding the tragedy by imposing the substandard or even life-threatening
living conditions of today's refugee camps.
Of course, this essay has only considered the technical aspects of off-grid
living. The practical feasibility depends on a variety of social and political
issues. Many people enjoy living close to neighbors. Various commercial
interests may not welcome the prospect of people withdrawing from the current
consumer lifestyle. Owners of nanofactory technology may charge licensing fees
too high to permit disconnection from the money system. Some environmental
groups may be unwilling to see large-scale settlement of new land areas or the
ocean, even if the overall ecological tradeoff were positive. But the
possibility of self-sufficient off-grid living would take some destructive
pressure off of a variety of overpopulated and over-consuming societies.
Although it is not a perfect alternative, it appears to be preferable in many
instances to today's ways of living and using resources.
Back to Basics
by Chris Phoenix, CRN Director of Research
Scaling laws are extremely simple observations about how
physics works at different sizes. A well-known example is that a flea can jump
dozens of times its height, while an elephant can't jump at all. Scaling laws
tell us that this is a general rule: smaller things are less affected by
gravity. This essay explains how scaling laws work, shows how to use them, and
discusses the benefits of tinyness with regard to speed of operation, power
density, functional density, and efficiency—four very important factors in the
performance of any system.
Scaling laws provide a very simple, even simplistic approach to understanding
the nanoscale. Detailed engineering requires more intricate calculations. But
basic scaling law calculations, used with appropriate care, can show why
technology based on nanoscale devices is expected to be extremely powerful by
comparison with either biology or modern engineering.
Let's start with a scaling-law analysis of muscles vs. gravity in elephants and
fleas. As a muscle shrinks, its strength decreases with its cross-sectional
area, which is proportional to length times length. We write that in shorthand
as strength ~ L^2. (If you aren't comfortable with 'proportional to',
just think 'equals': strength = L squared.) But the weight of the muscle is
proportional to its volume: weight ~ L^3. This means that strength vs.
weight, a crude indicator of how high an organism can jump, is proportional to
area divided by volume, which is L^2 divided by L^3, or L^-1
(1/L). Strength-per-weight gets ten times better when an organism gets ten times
smaller. A nanomachine, nearly a million times smaller than a flea, doesn't have
to worry about gravity at all. If the number after the L is positive, then the
quantity becomes larger or more important as size increases. If the number is
negative, as it is for strength-per-weight, then the quantity becomes larger or
more important as the system gets smaller.
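These scaling exponents are easy to play with numerically. A minimal sketch (my own helper, not from the essay):

```python
def scale_factor(exponent, length_ratio):
    """Multiplier for a quantity ~ L^exponent when every length
    in the system is multiplied by length_ratio."""
    return length_ratio ** exponent

shrink = 0.1                             # make the organism ten times smaller
strength = scale_factor(2, shrink)       # strength ~ L^2
weight = scale_factor(3, shrink)         # weight ~ L^3
print(strength / weight)                 # 10: strength-per-weight ~ L^-1
```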
Notice what just happened. Strength and mass are completely different kinds of
thing, and can't be directly compared. But they both affect the performance of
systems, and they both scale in predictable ways. Scaling laws can compare the
relative performance of systems at different scales, and the technique works for
any systems with the relevant properties—the strength of a steel cable scales
the same as a muscle. Any property that can be summarized by a scaling factor,
like weight ~ L^3, can be used in this kind of calculation. And most
importantly, properties can be combined: just as strength and weight are
components of a useful strength-per-weight measure, other quantities like power
and volume can be combined to form useful measures like power density.
An insect can move its legs back and forth far faster than an elephant. The
speed of a leg while it's moving may be about the same in each animal, but the
distance it has to travel is a lot less in the flea. So frequency of operation ~
L^-1. A machine in a factory might join or cut ten things per second.
The fastest biochemical enzymes can perform about a million chemical operations
per second.
Power density is a very important aspect of machine performance. A basic law of
physics says that power is the same as force times speed. And in these terms,
force is basically the same as strength. Remember that strength ~ L^2.
And we're assuming speed is constant. So power ~ L^2: something 10
times as big will have 100 times as much power. But volume ~ L^3, so
power per volume or power density ~ L^-1. Suppose an engine 10 cm on a
side produces 1,000 watts of power. Then an engine 1 cm on a side should produce
10 watts of power: 1/100 of the ten-times-larger engine. Then 1,000 1-cm engines
would take the same volume as one 10-cm engine, but produce 10,000 watts. So
according to scaling laws, by building 1,000 times as many parts, and making
each part 10 times smaller, you can get 10 times as much power out of the same
mass and volume of material. This makes sense—remember that frequency of
operation increases as size decreases, so the miniature engines would run at ten
times the RPM.
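The engine numbers above can be reproduced in a few lines (a sketch of the scaling argument, not an engine model):

```python
big_power = 1_000.0        # watts from an engine 10 cm on a side
shrink = 0.1               # each small engine is 10 times smaller

small_power = big_power * shrink ** 2        # power ~ L^2 -> 10 W each
engines_in_same_volume = (1 / shrink) ** 3   # functional density ~ L^-3 -> 1,000
total_power = small_power * engines_in_same_volume
print(total_power)         # about 10,000 W from the same volume of machinery
```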
Notice that when the design was shrunk by a factor of 10, the number of parts
increased by a factor of 1,000. This is another scaling law: functional density
~ L^-3. If you can build your parts nanoscale, a million times
smaller, then you can pack in a million million million, or 10^18,
more parts into the same volume. Even shrinking by a factor of 100, as in the
difference between today's computer transistors and molecular electronics, would
allow you to cram a million times more circuitry into the same volume. Of
course, if each additional part costs extra money, or if you have to repair the
machines, then using 1,000 times as many parts for 10 times the performance is
not worth doing. But if the parts can be built using a massively parallel
process like chemistry, and if reliability is high and the design is
fault-tolerant so that the collection of parts will last for the life of the
product, then it may be very much worth doing—especially if the design can be
shrunk by a thousand or a million times.
An internal combustion engine cannot be shrunk very far. But there's another
kind of motor that can be shrunk all the way to nanometer scale. Electrostatic
forces—static cling—can make a motor turn. As the motor shrinks, the power
density increases; calculations show that a nanoscale electrostatic motor may
have a power density as high as a million watts per cubic millimeter. And at
such small scales, it would not need high voltage to create a useful force.
Such high power density will not always be necessary. When the system has more
power than it needs, reducing the speed of operation (and thus the power) can
reduce the energy lost to friction, since frictional losses increase with
increased speed. The relationship varies, but is usually at least linear—in
other words, reducing the speed by a factor of ten reduces the frictional energy
loss by at least that much. A large-scale system that is 90% efficient may
become well over 99.9% efficient when it is shrunk to nanoscale and its speed is
reduced to keep the power density and functional density constant.
Friction and wear are important factors in mechanical design. Friction is
proportional to force: friction ~ L^2. This implies that frictional
power is proportional to the total power used, regardless of scale. The picture
is less good for wear. Assuming unchanging pressure and speed, the rate of
erosion is independent of scale. However, the thickness available to erode
decreases as the system shrinks: wear life ~ L, so a nanoscale system plagued by
conventional wear mechanisms might have a lifetime of only a few seconds.
Fortunately, a non-scaling mechanism comes to the rescue here. Chemical covalent
bonds are far stronger than typical forces between sliding surfaces. As long as
the surfaces are built smooth, run at moderate speed, and can be kept perfectly
clean, there should be no wear, since there will never be a sufficient
concentration of heat or force to break any bonds. Calculations and preliminary
experiments have shown that some types of atomically precise surfaces can have
extremely low friction and essentially no wear.
Of course, all this talk of shrinking systems should not obscure the fact that
many systems cannot be shrunk all the way to the nanoscale. A new system design
will have its own set of parameters, and may perform better or worse than
scaling laws would predict. But as a first approximation, scaling laws show what
we can expect once we develop the ability to build nanoscale systems:
performance vastly higher than we can achieve with today's large-scale machines.
For more information on scaling laws and nanoscale systems, including discussion
of which laws are accurate at the nanoscale, see
Nanosystems, chapter 2.
Sub-wavelength Imaging
by Chris Phoenix, CRN Director of Research
Light comes in small chunks called photons, which generally act like waves. When
a drop falls into a pool of water, one or more peaks surrounded by troughs move
across the surface. It's easy to describe a single wave: the curvy shape between
one peak and the next. Multiple waves are just as easy. But what is the meaning
of a fractional wave? Chop out a thin slice of a wave and set it moving across
the water: it would almost immediately collapse and turn into something else.
For most purposes, fractional waves can't exist. So it used to be thought that
microscopes and projection systems could not focus on a point smaller than half
a wavelength. This was known as the diffraction limit.
There are now more than half a dozen ways to beat the so-called diffraction
limit. This means that we can use light to look at smaller features, and also to
build smaller things out of light-sensitive materials. And this will be a big
help in doing advanced nanotechnology. The wavelength of visible light is
hundreds of nanometers, and a single atom is a fraction of one nanometer. The
ability to beat the diffraction limit gets us a lot closer to using an
incredibly versatile branch of physics—electromagnetic radiation—to access the nanoscale.
Here are some ways to overcome the diffraction limit:
There's a chemical that glows if it's hit with one color of light, but if it's
also hit with a second color, it doesn't. Since each color has a slightly
different wavelength, focusing two color spots on top of each other will create
a glowing region smaller than either spot.
There are plastics that harden if hit with two photons at once, but not if hit
with a single photon. Since two photons together are much more likely in the
center of a focused spot, it's possible to make plastic shapes with features
smaller than the spot.
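The two-photon trick can be illustrated with a toy model. Assuming a Gaussian focal spot (my own illustration; real optics is messier), squaring the intensity narrows the effective spot by a factor of the square root of two:

```python
import math

def fwhm(power, width=1.0):
    """Full width at half maximum of I(r)**power, where
    I(r) = exp(-(r/width)**2) is a Gaussian focal spot."""
    # solve exp(-power * (r/width)**2) = 1/2 for the full width 2*r
    return 2 * width * math.sqrt(math.log(2) / power)

one_photon = fwhm(1)   # response ~ intensity
two_photon = fwhm(2)   # response ~ intensity squared
print(two_photon / one_photon)   # 0.707...: the effective spot is sqrt(2) narrower
```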
Now this one is really interesting. Remember what we said about a fractional
wave collapsing and turning into something else? Not to stretch the analogy too
far, but if light hits objects smaller than a wavelength, a lot of fractional
waves are created, which immediately turn into “speckles” or “fringes.” You can
see the speckles if you shine a laser pointer at a nearby painted (not
reflecting!) surface. Well, it turns out that a careful analysis of the speckles
can tell you what the light bounced off of—and you don't even need a laser.
A company called
Angstrovision claims to be doing something similar, though
they use lasers. They say they'll soon have a product that can image 4x12x12
nanometer features at three frames per second, with large depth of field, and
without sample preparation. And they expect that their product will improve further.
High energy photons have smaller wavelengths, but are hard to work with. But a
process called “parametric downconversion” can split a photon into several
“entangled” photons of lower energy. Entanglement is spooky physics magic that
even we don't fully understand, but it seems that several entangled photons of a
certain energy can be focused to a tighter spot than one photon of that energy.
A material's “index of refraction” indicates how much it bends light going
through it. A lens has a high index of refraction, while vacuum is lowest. But
certain composite materials can have a negative index of refraction. And it
turns out that a slab of such material can create a perfect image—not
diffraction-limited—of a photon source. This field is advancing fast: last time
we looked, they hadn't yet proposed that photonic crystals could display this property.
A single atom or molecule can be a tiny source of light. That's not new. But if
you scan that light source very close to a surface, you can watch very small
areas of the surface interact with the “near-field effects”. Near-field effects,
by the way, are what's going on while speckles or fringes are being created. And
scanning near-field optical microscopy (SNOM, sometimes NSOM) can build a
light-generated picture of a surface with only a few nanometers resolution.
Finally, it turns out that circularly polarized light can be focused a little
bit smaller than other types. (Sorry, we couldn't find the link for that one.)
Some of these techniques will be more useful than others. As researchers develop
more and more ways to access the nano-scale, it will rapidly get easier to build
and study nanoscale machines.
Nucleic Acid Engineering
by Chris Phoenix, CRN Director of Research
The genes in your cells are made up of deoxyribonucleic acid,
or DNA: a long, stringy chemical made by fastening together a bunch of small
chemical bits like railroad cars in a freight train. The DNA in your cells is
actually two of these strings, running side by side. Some of the small chemical
bits (called nucleotides) like to stick to certain other bits on the opposite
string. DNA has a rather boring structure, but the stickiness of the nucleotides
can be used to make far more interesting shapes. In fact, there's a whole field
of nanotechnology investigating this, and it may even lead to an early version
of molecular manufacturing.
Take a bunch of large wooden beads, some string, some magnets,
and some small patches of hook-and-loop fastener (called Velcro when the lawyers
aren't watching). Divide the beads into four piles. In the first pile, attach a
patch of hooks to each bead. In the second pile, attach a patch of loops. In the
third pile, attach a magnet to each bead with the north end facing out. And in
the fourth pile, attach a magnet with the south end exposed. Now string together
a random sequence of beads—for example,
1) Hook, Loop, South, Loop, North, North, Hook.
If you wanted to make another sequence stick to it, the best
pattern would be:
2) Loop, Hook, North, Hook, South, South, Loop.
Any other sequence wouldn't stick as well: a pattern of:
3) North, North, North, South, North, Loop, South
would stick to either of the other strands in only two places.
Make a few dozen strings of each sequence. Now throw them all
in a washing machine and turn it on. Wait a few minutes, and you should see that
strings 1) and 2) are sticking together, while string 3) doesn't stick to
anything. (No, I haven't tried this; but I suspect it would make a great science fair project.)
But we can do more than make the strings stick to each other:
we can make them fold back on themselves. Make a string of:
N, N, N, L, L, L, L, H, H, H, H, S, S, S
and throw it in the washer on permanent press, and it should
double over. With a more complex pattern, you could make a cross:
NNNN, LLLLHHHH, LNLNSHSH, SSLLNNHH, SSSS
The NNNN and SSSS join, and each sequence between the commas
doubles over. You get the idea: you can make a lot of different things match up
by selecting a sequence from just four letter choices. Accidental matches of one
or two don't matter, because the agitation of the water will pull them apart
again. But if enough of them line up, they'll usually stay stuck.
Just like the beads, there are four different kinds of
nucleotides in the chain or strand of DNA. Instead of North, South, Hook, and
Loop, the nucleotide chemicals are called Adenine, Thymine, Guanine, and
Cytosine, abbreviated A, T, G, and C. Like the beads, A will only stick to T,
and G will only stick to C. (You may recognize these letters from the movie
GATTACA.) We have machines that can make DNA strands in any desired sequence. If
you tell the machine to make sequences of ACGATCTCGATC and TGCTAGAGCTAG, and
then mix them together in water with a little salt, they will pair up. If you
make one strand of ACGATCTCGATCGATCGAGATCGT—the first, plus the second backward—it
will double over and stick to itself. And so on. (At the molecular scale, things
naturally vibrate and bump into each other all the time; you don't need to throw
them in a washing machine to mix them up.)
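The base-pairing rules in this example are simple enough to check in code (a quick sketch using the sequences above):

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """The sequence that pairs with `strand`, position by position."""
    return "".join(PAIR[base] for base in strand)

s1 = "ACGATCTCGATC"
s2 = "TGCTAGAGCTAG"
print(complement(s1) == s2)   # True: the two strands pair up

# The first strand plus the second backward folds over on itself:
hairpin = s1 + s2[::-1]       # "ACGATCTCGATCGATCGAGATCGT"
half = len(hairpin) // 2
front, back = hairpin[:half], hairpin[half:]
# the back half, read in reverse, is the complement of the front half
print(back[::-1] == complement(front))   # True: the strand doubles over
```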
Chemists have created a huge menu of chemical tricks to play
with DNA. They can make one batch of DNA, then make one end of it stick to
plastic beads or surfaces. They can attach other molecules or nanoparticles to
either end of a strand. They can cut a strand at the location of a certain
sequence pattern. They can stir in other DNA sequences in any order they like,
letting them attach to the strands. They can attach additional chemicals to each
nucleotide, making the DNA chain stiffer and stronger.
A DNA strand that binds to another but has an end hanging
loose can be peeled away by a matching strand. This is enough to build
molecular tweezers that open and close. We can watch them work by attaching
molecules to the ends that only fluoresce (glow under UV light) when they're close together.
Remember that DNA strands can bind to themselves as well as to
each other. And you can make several strands with many different sticky sequence
patches to make very complex shapes. Just a few months ago, a
very clever team managed to build an octahedron out of only one long strand
and five short ones. The whole thing is only 22 nanometers wide—about the
distance your fingernails grow in half a minute.
So far, this article has been a review of fact. This next part
is speculation. If we can build a pre-designed structure, and make it move as we
want, we can—in theory, and with enough engineering work—build a molecular
robot. The robot would not be very strong, or very fast, and certainly not very
big. But it might be able to direct the fabrication of other, more complex
devices—things too complex to be built by pure self-assembly. And there's one
good thing about working with molecules: because they are so small, you can make
trillions of them for the price of one. That means that whatever they do can be
done by the trillions—perhaps even fast enough to be useful for manufacturing
large products such as computer chips. The products would be repetitive, but
even repetitive chips can be quite valuable for some applications. Individual
control of adjacent robots would allow even more complex systems to be built.
And with a molecular-scale DNA robot, it might be possible to guide the
fabrication of smaller and stiffer structures, leading eventually to direct
mechanical control of chemistry—the ultimate goal of molecular manufacturing.
This has barely scratched the surface of what's being done
with DNA engineering. There's also RNA (ribonucleic acid) and PNA (peptide
nucleic acid) engineering, and the use of RNA as an enzyme- or antibody-like
molecular gripper. Not to mention the recent discovery of RNA interference which
has medical and research uses: it can fool a cell into stopping the production
of an unwanted protein by making the cell think that the protein's genes came from a virus.
Nucleic acid engineering looks like a good possibility for
building a primitive variety of nanorobotics. Such products would be
significantly weaker than products built of diamondoid, but still likely useful
for a variety of applications. If this technology is
developed before diamondoid nanotech, it may provide a gentler introduction to
the power of molecular manufacturing.
The Power of Molecular Manufacturing
by Chris Phoenix, CRN Director of Research
So what's the big deal about molecular manufacturing? We have lots of kinds of
nanotechnology. Biology already makes things at the molecular level. And won't
it be really hard to get machines to work in all the weirdness of nanoscale physics?
The power of molecular manufacturing is not obvious at first. This article
explains why it's so powerful — and why this power is often overlooked. There
are at least three reasons. The first has to do with programmability and
complexity. The second involves self-contained manufacturing. And the third
involves nanoscale physics, including chemistry.
It seems intuitively obvious that a manufacturing system can't make something
more complex than itself. And even to make something equally complex would be
very difficult. But there are two ways to add complexity to a system. The first
is to build it in: to include lots of levers, cams, tracks, or other shapes that
will make the system behave in complicated ways. The second way to add
complexity is to add a computer. The computer's processor can be fairly simple,
and the memory is extremely simple — just an array of numbers. But software
copied into the computer can be extremely complex.
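That second route can be made concrete with a toy. The "processor" below is a dozen lines and knows only four instructions (a hypothetical instruction set, not any real machine), yet the programs fed to it can be arbitrarily long and intricate — the complexity lives in the software, not the machine.

```python
def run(program):
    """Tiny stack-machine interpreter -- a made-up four-instruction set."""
    stack, pc = [], 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "jnz":            # pop; jump to args[0] if nonzero --
            if stack.pop() != 0:     # enough to express loops
                pc = args[0]
                continue
        pc += 1
    return stack

# The machine stays fixed; only the program changes. Here, (2 + 3) * 4:
print(run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]))  # [20]
```

The same dozen-line machine, handed a longer program, computes anything computable — which is the essay's point about programmable manufacturing.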
If molecular manufacturing is viewed as a way of building complex mechanical
systems, it's easy to miss the point. Molecular manufacturing is programmable.
In early stages, it will be controlled by an external computer. In later stages,
it will be able to build nanoscale computers. This means that the products of
molecular manufacturing can be extremely complex — more complex than the
mechanics of the manufacturing system. The product design will be limited only by the software.
Chemists can build extremely complex molecules, with thousands of atoms
carefully arranged. It's hard to see the point of building even more complexity.
But the difference between today's chemistry and programmable mechanochemistry
is like the difference between a pocket calculator and a computer. They can both
do math, and an accountant may be happy with the calculator. But the computer
can also play movies, print documents, and run a Web browser. Programmability
adds more potential than anyone can easily imagine — we're still inventing new
things to do with our computers.
The true value of a self-contained manufacturing system is not obvious at first
glance. One objection that's raised to molecular manufacturing is, “Start
developing it — if the idea is any good, it will generate valuable spin-offs.”
The trouble with this is that 99% of the value may be generated in the last 1%
of the work.
Today, high-tech intricate products like computer chips may cost 10,000 or even
100,000 times as much as their raw materials. We can expect the first nanotech
manufacturing systems to contain some very high-cost components. That cost will
be passed on to the products. If a system can make some of its own parts, then
it may decrease the cost somewhat. If it can make 99% of its own parts (but 1%
is expensive), and 99% of its work is automated (but 1% is skilled human labor),
then the cost of the system — and its products — may be decreased by 99%. But
that still leaves a factor of 100 or even 1,000 between the product cost and the
raw materials cost.
However, if a manufacturing system can make 100% of its parts, and can
build products with 100% automation, then the cost of duplicate factories
drops precipitously. The cost of building the first factory can be spread over
all the duplicates. A nanofactory, packing lots of functionality into a
self-contained box, will not cost much to maintain. There's no reason (aside
from profit-taking and regulation) why the cost of the factory shouldn't drop
almost as low as the cost of raw materials. At that point, the cost of the
factory would add almost nothing to the cost of its products. So in the advance
from 99% to 100% self-contained manufacturing, the product cost could drop by
two or three orders of magnitude. This would open up new applications for the
factory, further increasing its value.
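The arithmetic behind the 99%-versus-100% argument can be put in a toy cost model. The numbers are purely illustrative (not from the essay or any real factory): per-product cost is raw materials, plus the bought-in fraction of the factory's parts, plus the non-automated labor share.

```python
def product_cost(raw, capital, pct_self_made, pct_automated):
    """Toy per-product cost model (all numbers illustrative)."""
    bought_parts  = capital * (100 - pct_self_made) / 100   # parts the factory can't make
    skilled_labor = capital * (100 - pct_automated) / 100   # human labor that remains
    return raw + bought_parts + skilled_labor

# Raw materials cost 1; the expensive components and labor cost 10,000.
print(product_cost(1.0, 10_000.0, 99, 99))    # 201.0 -- still ~200x raw materials
print(product_cost(1.0, 10_000.0, 100, 100))  # 1.0  -- cost falls to raw materials
```

The jump from 99% to 100% is what collapses the residual factor of hundreds down to nearly nothing — the "last 1% of the work" carrying most of the value.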
This all implies that a ten billion dollar development program might produce a
trillion dollars of value — but might not produce even a billion dollars worth
of spin-offs until the last few months. All the value is delivered at the end of
the program, which makes it hard to fund under current American business models.
A factory that's 100% automated and makes 100% of all its own parts is hard to
imagine. People familiar with today's metal parts and machines know that they
wear out and require maintenance, and it's hard to put them together in the
first place. But as nanoscientists keep reminding us, the nanoscale is
different. Molecular parts have squishy surfaces, and can bend without breaking
or even permanently deforming. This requires extra engineering to make stiff
systems, but diamond (among other possibilities) is stiff enough to do the job.
The squishiness helps when it's time to fit parts together: robotic assembly
requires less precision. Bearing surfaces can be built into the parts, and run
dry. And because molecular parts (unlike metals) can have every atom bonded
strongly in its place, they won't flake apart under normal loads like metal
Instead of being approximately correct, a molecular part will be either perfect
— having the correct chemical specification — or broken. Instead of wearing
steadily away, machines will break randomly — but very rarely. Simple redundant
design can keep a system working long after a significant fraction of its
components have failed, since any machine that's not actually broken will not be
worn at all. Paradoxically, because the components break suddenly, the system as
a whole can degrade gracefully, while not requiring maintenance. It should not
be difficult to design a nanofactory capable of manufacturing thousands of times
its own mass before it breaks.
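A small Monte Carlo sketch shows how sudden (rather than gradual) failure plus modest redundancy yields graceful degradation. The failure rates and module counts are arbitrary choices for illustration: each stage of a system either has enough working modules or it doesn't.

```python
import random

def alive(stages, modules, needed, p_fail, rng):
    """True if every stage still has at least `needed` working modules."""
    return all(
        sum(rng.random() > p_fail for _ in range(modules)) >= needed
        for _ in range(stages)
    )

def uptime(modules, needed, stages=100, p_fail=0.05, trials=5000):
    """Fraction of simulated systems that survive with all stages working."""
    rng = random.Random(0)
    return sum(alive(stages, modules, needed, p_fail, rng)
               for _ in range(trials)) / trials

# One fragile part per stage vs. five parts where any three suffice:
u_single = uptime(modules=1, needed=1)      # roughly 0.006
u_redundant = uptime(modules=5, needed=3)   # roughly 0.89
print(u_single, u_redundant)
```

With a 5% chance of any module being broken, a 100-stage chain with no redundancy almost always fails somewhere; with three-of-five redundancy per stage, most systems keep working — and no maintenance was modeled at all.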
To achieve this level of precision, it's necessary to start with perfectly
identical parts. Such parts do not exist in today's manufacturing universe. But
atoms are, for most purposes, perfectly identical. Building with individual
atoms and molecules will produce molecular parts as precise as their component
atoms. This is a natural fit for the two other advantages described above —
programmability, and self-contained automated manufacturing. Molecular
manufacturing will exploit these advantages to produce a massive, unprecedented,
almost incalculable improvement over other forms of manufacturing.
Science vs. Engineering vs. Theoretical Applied Nanotechnology
by Chris Phoenix, CRN Director of Research
When scientists want an issue to go away, they are as
political as anyone else. They attack the credentials of the observer. They
change the subject. They build strawman attacks, and frequently even appear to
convince themselves. They form cliques. They tell their students not to even
read the claims, and certainly not to investigate them. Each of these tactics is
being used against molecular manufacturing.
When facing a scientific theory they disagree with, scientists are supposed to
try to disprove it by scientific methods. Molecular manufacturing includes a
substantial, well-grounded, carefully argued, conservative body of work. So why
do scientists treat it as though it were pseudoscience, deserving only political
attack? And how should they be approaching it instead? To answer this, we have
to consider the gap between science and engineering.
Scientists do experiments and develop theories about how the world works.
Engineers apply the most reliable of those theories to get predictable results.
Scientists cannot make reliable pronouncements about the complex "real world"
unless their theory has been field-tested by engineering. But once a theory is
solid enough to use in engineering, science has very little of interest to say
about it. In fact, the two practices are so different that it's not obvious how
they can communicate at all. How can ideas cross the gap from untested theory to engineering practice?
In Appendix A of
Nanosystems, Eric Drexler describes an activity he calls "theoretical
applied science" or "exploratory engineering". This is the bridge between
science and engineering. In theoretical applied science, one takes the best
available results of science, applies them to real-world problems, and makes
plans that should hopefully work as desired. If done with enough care, these
plans may inspire engineers (who must of course be cautious and conservative) to
try them for the first time.
The bulk of Appendix A discusses ways that theoretical applied science can be
practiced so as to give useful and reliable results, despite the inability to
confirm its results by experiment:
For example, all classes of device that would violate the
second law of thermodynamics can immediately be rejected. A more stringent rule,
adopted in the present work, rejects propositions if they are inadequately
substantiated, for example, rejecting all devices that would require materials
stronger than those known or described by accepted physical models. By adopting
these rules for falsification and rejection, work in theoretical applied science
can be grounded in our best scientific understanding of the physical world.
Drexler presents theoretical applied science as a way of
studying things we can't build yet. In the last section, he ascribes to it a
very limited aim: "to describe lower bounds to the performance achievable with
physically possible classes of devices." And a limited role: "In an ideal world,
theoretical applied science would consume only a tiny fraction of the effort
devoted to pure theoretical science, to experimentation, or to engineering." But
here I think he's being too modest. Theoretical applied science is really the
only rigorous way for the products of science to escape back to the real world
by inspiring and instructing engineers.
We might draw a useful analogy: exploratory engineers are to scientists as
editors are to writers. Scientists and writers are creative. Whatever they
produce is interesting, even when it's wrong. They live in their own world,
which touches the real world exactly where and when they choose. And then along
come the editors and the exploratory engineers. "This doesn't work. You need to
rephrase that. This part isn't useful. And wouldn't it be better to explain it
this way?" Exploratory engineering is very likely to annoy and anger scientists.
To the extent that exploratory engineering is rigorously grounded in science,
scientists can evaluate it — but only in the sense of checking its
calculations. An editor should check her work with the author. But she should
not ask the author whether he thinks she has improved it; she should judge how
well she did her job by the reader's response, not the writer's. Likewise, if
scientists cannot show that an exploratory engineer has misinterpreted
(misapplied) their work or added something that science cannot support, then the
scientists should sit back and let the applied engineers decide whether the
theoretical engineering work is useful.
Molecular manufacturing researchers practice exploratory engineering: they
design and analyze things that can't be built yet. These researchers have spent
the last two decades asking scientists to either criticize or accept their work.
This was half an error: scientists can show a mistake in an engineering
calculation, but the boundaries of scientific practice do not allow scientists
to accept applied but unverified results. To the extent that the results of
theoretical applied science are correct and useful, they are meant for
engineers, not for scientists.
Drexler is often accused of declaring that nanorobots will work without ever
having built one. In science, one shouldn't talk about things not yet
demonstrated. And engineers shouldn't expect support from the scientific
community — or even from the engineering community, until a design is proved.
But Drexler is doing neither engineering nor science, but something in between;
he's in the valuable but thankless position of the cultural ambassador, applying
scientific findings to generate results that may someday be useful for engineers.
If as great a scientist as Lord Kelvin can be wrong about something as mundane
and technical as heavier-than-air flight, then lesser scientists ought to be
very cautious about declaring any technical proposal unworkable or worthless.
But scientists are used to being right. Many scientists have come to think that
they embody the scientific process, and that they personally have the ability to
sort fact from fiction. But this is just as wrong as a single voter thinking he
represents the country's population. Science weeds out falsehood by a slow and
emergent process. An isolated scientist can no more practice science than a lone
voter can practice democracy.
The proper role of scientists with respect to molecular manufacturing is to
check the work for specific errors. If no specific errors can be found, they
should sit back and let the engineers try to use the ideas. A scientist who
declares that molecular manufacturing can't work without identifying a specific
error is being unscientific. But all the arguments we've heard from scientists
against molecular manufacturing are either opinions (guesses) or vague and
unsupported generalities (hand-waving).
The lack of identifiable errors does not mean that scientists have to accept
molecular manufacturing. What they should do is say "I don't know," and wait to
see whether the engineering works as claimed. But scientists hate to say "I
don't know." So we at CRN must say it for them: No scientist has yet
demonstrated a substantial problem with molecular manufacturing; therefore, any
scientist who says it can't work is probably behaving improperly and should be
challenged to produce specifics.
The Bugbear of Entropy
by Chris Phoenix, CRN Director of Research
Entropy and thermodynamics are often cited as a reason why
diamondoid mechanosynthesis can't work. Supposedly, the perfection of the
designs violates a law of physics that says things always have to be imperfect
and cannot be improved.
It has always been obvious to me why this argument was wrong.
The argument would be true for a closed system, but nanomachines always have an
energy source and a heat sink. With an external source of energy available for
their use, they can certainly build near-perfect structures without violating
thermodynamics. This is clear enough that I've always assumed that people
invoking entropy were either too ignorant to be critics, or willfully blind.
It appears I was wrong. Not about the entropy, but about the critics. Consider
John A. N. (JAN) Lee. He's a professor of computer science at Virginia Tech,
has been vice president of the Association for Computing Machinery, has written
a book on computer history, etcetera. He's obviously intelligent and
well-informed. And yet, he makes the same mistake about entropy—not in relation
to nanotech, but in relation to Babbage, who designed the first modern computer
in the early 1800's.
In an online history of Babbage, he asserts, "the limitations of Newtonian physics
might have prevented Babbage from completing any Analytical Engine." He points
out that Newtonian mechanics has an assumption of reversibility, and it wasn't
until decades later that the Second Law of Thermodynamics was discovered and
entropy was formalized. Thus, Babbage was working with an incomplete
understanding of physics.
Lee writes, "In Babbage's design for the Analytical Engine,
the discrete functions of mill (in which 'all operations are performed') and
store (in which all numbers are originally placed, and, once computed, are
returned) rely on this supposition of reversibility." But, says Lee,
"information cannot be shuttled between mill and store without leaking, like
faulty sacks of flour. Babbage did not consider this, and it was perhaps his
greatest obstacle to building the engine."
Translated into modern computer terms, Lee's statement reads,
"Information cannot be shuttled between CPU and RAM without leaking, like faulty
sacks of flour." The fact that my computer works as well as it does shows that
there's something wrong with this argument.
In a modern computer, the signals are digital; each one is
encoded as a voltage in a wire, above or below a certain threshold. Transistors
act as switches, sensing the incoming voltage level and generating new voltage
signals. Each transistor is designed to produce either high or low voltages. By
the time the signal arrives at its destination, it has indeed "leaked" a little
bit; it can't be exactly the same voltage. But it'll still be comfortably within
the "high" or "low" range, and the next transistor will be able to detect the
digital signal without error.
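The restoring action of a transistor can be mimicked in a few lines of Python: inject noise at every stage, threshold at every stage, and the bit survives a thousand "leaky" hops unchanged. The noise amplitude is an arbitrary choice, deliberately kept smaller than the decision margin.

```python
import random

def noisy_wire(v, rng, noise=0.3):
    """Each hop adds bounded analog noise -- the 'leaky sack of flour'."""
    return v + rng.uniform(-noise, noise)

def restore(v, threshold=0.5):
    """A transistor's job: emit a clean high or low, whatever came in."""
    return 1.0 if v > threshold else 0.0

rng = random.Random(42)
bit = 1.0
for _ in range(1000):              # a thousand stages of noisy transfer
    bit = restore(noisy_wire(bit, rng))
print(bit)  # 1.0 -- the digital value never degrades
```

The small energy spent at each restoration is exactly the thermodynamic price mentioned above; what never accumulates is the error.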
This does not violate thermodynamics, because a little energy
must be spent to compensate for the uncertainty in the input signal. In today's
designs, this is a small fraction of the total energy required by the computer.
I'm not even sure that engineers have to take it into account in their
calculations, though as computers shrink further it will become important.
In Babbage's machine, information would move from place to
place by one mechanism pushing on another. Now, it's true that entropy indicates
a slightly degraded signal—meaning that no matter how precisely the machinery
was made, the position of the mechanism must be slightly imprecise. But a fleck
of dust in a bearing would degrade the signal a lot more. In other words, it
didn't matter whether Babbage took entropy into account or even knew about it,
as long as his design could tolerate flecks of dust.
Like a modern computer, Babbage's machine was designed to be
digital. The rods and rotors would have distinct positions corresponding to
encoded numbers. Mechanical devices such as detents would correct signals that
were slightly out of position. In the process of correcting the system, a little
bit of energy would be dissipated through friction. This friction would require
external energy to overcome, thus preserving the Second Law of Thermodynamics.
But by including mechanisms that continually corrected the tiny errors in
position caused by fundamental uncertainty (along with the much larger errors
caused by dust and wear), Babbage's design would never lose the important,
digitally coded information. And, as in modern computers, the entropy-related
friction would have been vastly smaller than friction from other sources.
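The mechanical analog is just as short: model a digit wheel with ten discrete positions and a detent that snaps the wheel to the nearest one after every noisy transfer. The jitter amplitude below is arbitrary, chosen smaller than half a digit spacing — Babbage's tolerance for slop and dust.

```python
import random

def detent(angle):
    """Snap a jostled digit wheel to the nearest of its ten positions."""
    return round(angle) % 10

rng = random.Random(1)
digit = 7
for _ in range(10_000):                             # many gear-to-gear transfers
    digit = detent(digit + rng.uniform(-0.4, 0.4))  # slop, dust, thermal jitter
print(digit)  # 7 -- the encoded number never drifts
```

As with the transistor, each snap dissipates a little energy through friction, and in exchange the digitally coded information is never lost.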
Was Babbage's design faulty because he didn't take entropy
into account? No, it was not. Mechanical calculating machines already existed,
and worked reliably. Babbage was an engineer; he used designs that worked. There
was nothing very revolutionary in the mechanics of his design. He didn't have to
know about atoms or quantum mechanics or entropy to know that one gear can push
another gear, that there will be some slop in the action, that a detent can
restore the signal, and that all this requires energy to overcome friction.
Likewise, the fact that nanomachines cannot be 100% perfect 100% of the time is
no more significant than the quantum-mechanical possibility that part of your
brain will suddenly teleport itself elsewhere, killing you instantly.
Should Lee have known that entropy was not a significant
factor in Babbage's designs, nor any kind of limitation in their effectiveness?
I would have expected him to realize that any digital design with a power supply
can beat entropy by continually correcting the information. After all, this is
fundamental to the workings of electronic computers. But it seems Lee didn't
extend this principle from electronic to mechanical computers.
The point of this essay is not to criticize Lee. There's no
shame in a scientist being wrong. Rather, the point is that it's surprisingly
easy for scientists to be wrong, even in their own field. If a computer
scientist can be wrong about the effects of entropy on an unfamiliar type of
computer, perhaps we shouldn't be too quick to blame chemists when they are
likewise wrong about the effects of entropy on nanoscale machinery. If a
computer scientist can misunderstand Babbage's design after almost two
centuries, we shouldn't be too hard on scientists who misunderstand the
relatively new field of molecular manufacturing.
But by the same token, we must realize that chemists and
physicists talking about molecular manufacturing are even more unreliable than
computer scientists talking about Babbage. Despite the fact that Lee knows about
entropy and Babbage did not, Babbage's engineering was more reliable than Lee's
science. How true it is that "A little learning is a dangerous thing!"
There are several constructive ways to address this problem.
One is to continue working to educate scientists about how physics applies to
nanoscale systems and molecular manufacturing. Another is to educate
policymakers and the public about the limitations of scientific practice and the
fundamental difference between science and engineering. CRN will continue to
pursue both of these courses.
Engineering, Biology, and Nanotechnology
by Chris Phoenix, CRN Director of Research
The question of whether a computer can think is no more
interesting than the question of whether a submarine can swim.
— Computer scientist Edsger W. Dijkstra
A dog can herd sheep, smell land mines, pull a sled, guide a blind person, and
even warn of oncoming epileptic seizures.
A computer can calculate a spreadsheet, typeset a document, play a video,
display web pages, and even predict the weather.
The question of which one is 'better' is silly. They're both incredibly useful,
and both can be adapted to amazingly diverse tasks. The dog is more adaptable
for tasks in the physical world—and does not require engineering to learn a new
task, only a bit of training. But the closest a dog will ever come to displaying
web pages is fetching the newspaper.
Engineering takes a direct approach to solving tasks that can be described with
precision. If the engineering is sound, the designs will work as expected.
Engineered designs can then form the building blocks of bigger systems.
Precisely mixed alloys make uniform girders that can be built into reliable
bridges. Computer chips are so predictable that a million different computers
running the same computer program can reliably get the same result. For simple
problems, engineering is the way to go.
Biology has never taken a direct approach, because it has never had a goal.
Organisms are not designed for their environment; they are simply the best tiny
fraction of uncountable attempts to survive and replicate. Over billions of
years and a vast spectrum of environments and organisms, the process of trial
and error has accumulated an awesome array of solutions to an astonishing
diversity of problems.
Until recently, biology has been the only agent that was capable of making
complicated structures at the nanoscale. Not only complicated structures, but
self-reproducing structures: tiny cells that can use simple chemicals to make
more cells, and large organisms made of trillions of cells that can move,
manipulate their environment, and even think. (The human brain has been called
the most complex object in the known universe.) It is tempting to think that
biology is magic. Indeed, until the mid-1800's, it was thought that organic
chemicals could not be synthesized from inorganic ones except within the body of
a living organism.
The belief that there is something magical or mystical about life is called
vitalism, and its echoes are still with us today. We now know that any
organic chemical can be made from inorganic molecules or atoms. But just last
year, I heard a speaker—at a futurist conference, no less—advance the theory
that DNA and protein are the only molecules that can support self-replication.
Likewise, many people seem to believe that the functionality of life, the way it
solves problems, is somehow inherently better than engineering: that life can do
things inaccessible to engineering, and the best we can do is to copy its
techniques. Any engineering design that does not use all the techniques of
biology is considered to be somehow lacking.
If we see people scraping and painting a bridge to avoid rust, we may think how
much better biology is than engineering: the things we build require
maintenance, while biology can repair itself. Then, when we see a remora
cleaning parasites off a shark, we think again that biology is better than
engineering: we build isolated special-purpose machines, while biology develops
webs of mutual support. But in fact, the remora is performing the same function
as the bridge painters. If we want to think that biology is better, it's easy to
find evidence. But a closer look shows that in many cases, biology and
engineering already use the same techniques.
Biology does use some techniques that engineering generally does not. Because
biology develops by trial and error, it can develop complicated and finely-tuned
interactions between its components. A muscle contracts when it's signaled by
nerves. It also plays a role in maintaining the proper balance of nutrients in
the blood. It generates heat, which the body can use to maintain its
temperature. And the contraction of muscles helps to pump the lymph. A muscle
can do all this because it is made of incredibly intricate cells, and embedded
in a tightly-integrated body. Engineered devices tend to be simpler, with one
component performing only one function. But there are exceptions. The engine of
your car also warms the heater. And the electricity that it generates to run its
spark plugs and fuel pump also powers the headlights.
Complexity deserves a special mention. Many non-engineered systems are complex,
while few engineered systems are. A complex system is one where slightly
different inputs can produce radically different outputs. Engineers like things
simple and predictable, so it's no surprise that engineers try to avoid
complexity. Does this mean that biology is better? No, biology usually avoids
complexity too. Even complex systems are predictable some of the time—otherwise,
they'd be random. Biology is full of feedback loops with the sole function of
keeping complex systems from running off the rails. And it's not as though
engineered devices are incapable of using complexity. Turbulence is complex. And
turbulence is a great way to mix substances together. Your car's engine is
finely sculpted to create turbulence to mix the fuel and the air.
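The definition of "complex" used above — slightly different inputs producing radically different outputs — is easy to demonstrate with the logistic map, a textbook toy model of chaos. (The parameter r = 3.9 puts the map in its chaotic regime; the starting values are arbitrary.)

```python
def trajectory(x, r=3.9, steps=80):
    """Iterate the logistic map x -> r*x*(1-x), a textbook chaotic system."""
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

# Two starting points differing by one part in 200 million:
ta = trajectory(0.200000000)
tb = trajectory(0.200000001)
print(abs(ta[0] - tb[0]))                       # still tiny after one step
print(max(abs(p - q) for p, q in zip(ta, tb)))  # order 0.1-1: the difference explodes
```

For a while the two runs are indistinguishable; then the tiny initial difference is amplified until the trajectories bear no resemblance to each other — predictable some of the time, as the essay says, but not for long.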
Biology flirts with complexity in a way that engineering does not. Protein
folding, in which a linear chain of amino acids folds into a 3D protein shape, is
complex. If you change a single amino acid in the protein, it will often fold to a
very similar shape—but sometimes will make a completely different one. This is
very useful in evolving systems, because it allows a single system to produce
both small and large changes. But we like our products to be predictable: we
would not want one in a thousand cars sold to have five wheels, just so we could
test if five was better than four. Evolution is beginning to be used in design,
but it probably will never be used in the manufacture of final products.
Copying the techniques of life is called biomimesis. There's nothing
wrong with it, in moderation. Airplanes and birds both have wings. But airplane
wings do not have feathers, and airplanes do not digest seeds in mid-air for
fuel. Biology has developed some techniques that we would do well to copy. But
human engineers also have developed some techniques that biology never invented.
And many of biology's techniques are inefficient or simply unnecessary in many
situations. Sharks might not need remoras if they shed their skin periodically,
as some trees and reptiles do.
The design of nanomachines and nanosystems has been a focus of controversy. Many
scientists think that nanomachines should try to duplicate biology: that the
techniques of biology are the best, or even the only, techniques that can work
at the nanometer scale. Certainly, the size of a device will have an effect on
its function. But the device's materials also have an effect. The materials of
biology are quite specialized. Just a few chemicals, arranged in different
patterns, are enough to make an entire organism. But organic chemicals are not
the only kind of chemicals that can make nanoscale structures. Organics are not
very stiff; they vibrate and even change shape. They float in water, and the
vibrations move chemicals through the water from one reaction site to another.
A few researchers have proposed building systems out of a different kind of
chemistry and machinery. Built of much stiffer materials, and operating in
vacuum or inert gas rather than water, it would be able to manufacture
substances that biology cannot, such as diamond. This has been widely
criticized: how could stiff molecular machines work while fighting the
vibrations that drive biological chemicals from place to place? But in fact,
even in a cell, chemicals are often actively transported by molecular motors
rather than being allowed to diffuse randomly. And even the stiff machine
designs use vibration when it is helpful; for example, a machine designed to
bind and move molecules might jam if it grabbed the wrong molecule, and Drexler
has calculated that thermal noise could be effective at un-jamming it.
Engineering and biology alike are very good at ignoring effects that are
irrelevant to their function. Engineers often find it easier to build systems a
little bit more robustly, so that no action is necessary to keep them working as
designed in a variety of conditions. Biology, being more complicated and
delicate, often has to actively compensate or resist things that would disrupt
its systems. So for the most part, stiff machines would not 'fight'
vibrations—they'd simply ignore them.
Biology still has a few tricks we have not learned. Embryology, immunology, and
the regulation of gene expression still are largely mysterious to us. We have
not yet built a system with as much complexity as an insect, so we cannot know
whether there are techniques we haven't even noticed yet that help the insect
deal with its environment effectively. But even with the tricks we already know,
we can build machines far more powerful—for limited applications—than biology
could hope to match. (How many horses would fit under the hood of a
300-horsepower sports car?) These tricks and techniques, with suitable choices
and modifications, will work fine even at the molecular scale. Engineering and
biology techniques overlap substantially, and engineering already has enough
techniques to build complete machine systems—even self-contained manufacturing
systems—out of molecules. This may be threatening to some people who would
rather see biology retain its mystery and preeminence. But at the molecular
level, biology is just machines and structures.