What follows is a two-part exchange between the authors on the topic of fluctuation theorems and the origins of life.
Part 1. Brian Miller
The second law of thermodynamics asserts that, within a closed system, changes in entropy are always positive.1 Entropy is often understood as a crude measure of disorder. On this level of analysis, the second law has a certain degree of phenomenological support. Things do become disordered. For all that, its rigorous justification has been no easy task. In the late nineteenth century, Josef Loschmidt observed that if Newtonian mechanics is time-reversible, there is no obvious way in which to derive the second law of thermodynamics from its assumptions. To be sure, Ludwig Boltzmann had demonstrated in his famous H-theorem that, in progressing from a nonequilibrium to an equilibrium state, a physical system must increase in entropy. The argument is fine. It is its premises that Loschmidt queried, and, in particular, the assumption of molecular chaos, or Stosszahlansatz, which cannot be derived from Newtonian mechanics itself. If this is so, then neither can the second law of thermodynamics.
The advent of various fluctuation theorems in the 1990s has brought this issue closer to resolution. These theorems make possible the quantitative analysis of systems driven far from equilibrium.2 The theorems are based on the instantaneous dissipation function Ω(Γ) and its integral, the standard dissipation function Ω_t.3 These functions are closely related to the entropy production rate, and thus to the entropy increase associated with the evolution of an ensemble of microstates.4 Let the vector Γ represent a microstate of the time-dependent generalized coordinates and momenta for the molecules in a given system.5 The instantaneous dissipation function is a function of the triplet <Γ̇, f(Γ), Λ(Γ)> comprising the rate of change of Γ, its probability distribution function f(Γ), and the phase space expansion factor, Λ(Γ).6 It is the instantaneous dissipation function that measures how the probability distribution is spreading, and it is this that corresponds to the system’s increase in internally generated entropy.7 Whereupon, there are the following obvious relationships:
Λ(Γ) ≡ (∂/∂Γ) · Γ̇,
Ω(Γ(t)) ≡ –Γ̇(t) · ∂lnf[Γ(t)]/∂Γ – Λ(Γ(t)),
and
Ω_t ≡ ∫₀ᵗ Ω(Γ(s)) ds = ln{f[Γ(0)]/f[Γ(t)]} – ∫₀ᵗ Λ(Γ(s)) ds.
A number of interesting theorems now follow. The Evans–Searles fluctuation theorem shows that, in dissipative systems, entropy can go in reverse.8 The probability that the dissipation function Ω_t takes the value A, when divided by the probability that it takes the value –A, satisfies
p(Ω_t = A)/p(Ω_t = –A) = exp(A).
The relevant probabilities drop exponentially with the magnitude of the entropy decrease.9 The dynamics of individual particles in a given microstate may well be reversible, but microstates move statistically toward an increase in entropy.10 This is tolerably close to Boltzmann’s original point of view; but it has the effect of making the second law of thermodynamics something less than an exact law of nature.
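The ratio can be checked numerically. The Python sketch below is purely illustrative: it simulates an overdamped Brownian bead in a harmonic trap dragged at constant velocity, with all parameters invented, and it relies on the fact that, for a system prepared in equilibrium and driven by a change in its potential, the dissipation function reduces to β times the dissipated work, W – ΔF, a form that reappears below in the discussion of the Crooks theorem; translating a harmonic trap leaves ΔF unchanged, so here the dissipation function is just β times the work done by the trap.

import numpy as np

# Illustrative sketch (not one of the cited experiments): an overdamped bead in
# a harmonic trap dragged at constant velocity, starting from thermal equilibrium.
# For this protocol the integrated dissipation function is beta times the work
# done by the moving trap, because translation leaves the trap free energy unchanged.
rng = np.random.default_rng(0)
kT, gamma = 1.0, 1.0       # thermal energy and drag coefficient (reduced units)
k, v = 1.0, 1.0            # trap stiffness and dragging velocity (assumed values)
dt, steps, ntraj = 1e-3, 2000, 100_000

x = rng.normal(0.0, np.sqrt(kT / k), ntraj)   # equilibrium start, trap centered at 0
omega = np.zeros(ntraj)                       # integrated dissipation function

for i in range(1, steps + 1):
    force = -k * (x - v * i * dt)             # trap force on the bead
    omega += v * force * dt / kT              # beta times the work increment
    x += force * dt / gamma + rng.normal(0.0, np.sqrt(2 * kT * dt / gamma), ntraj)

# Evans-Searles: ln[p(Omega_t = A)/p(Omega_t = -A)] should be close to A.
for A in (0.5, 1.0, 1.5, 2.0):
    p_pos = np.mean(np.abs(omega - A) < 0.05)
    p_neg = np.mean(np.abs(omega + A) < 0.05)
    if p_pos > 0 and p_neg > 0:
        print(f"A = {A:3.1f}   ln(p+/p-) = {np.log(p_pos / p_neg):.2f}")

Negative values of the dissipation function do occur in such runs, but, as the theorem requires, they become exponentially rarer as A grows.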
The Crooks fluctuation theorem was developed to study systems acted upon by a non-dissipative field or force under which the system moves from an initial state to a final state with a different equilibrium free energy. For a reversible transition, the work, W, performed on the system is equal to the change in free energy, ΔF.11 If the transition occurs away from equilibrium, some of the applied work will be lost as heat. Let A and –A designate the work performed in the forward and time-reversed directions. The ratio of the probability, P_F, of performing the work A in the forward direction to the probability, P_R, of performing the work –A in the reverse direction, the theorem affirms, is
P_F(A)/P_R(–A) = exp[β(A – ΔF)],
where β is the reciprocal of the product of Boltzmann’s constant and the initial temperature of the system and the thermal bath surrounding it. The work dissipated as heat into the thermal bath during the transition is A – ΔF. The argument begins by identifying the entropy production, ω, for a single trajectory as
ω = lnρ[Γ(0)] – lnρ[Γ(τ)] – βQ[Γ(τ), λ(τ)],
where ρ[Γ(0)] and ρ[Γ(τ)] are the initial and final probability distributions, λ(τ) represents the control parameter defining the transition, and Q is the heat absorbed from the thermal bath. The probability distribution, where E is the internal energy of the microstate, is
ρ[Γ] = exp[–βE(Γ, λ)]/∑_Γ exp[–βE(Γ, λ)].
The sum is taken over all of the microstates. The free energy is F(β, λ) = –(1/β)ln∑_Γ exp[–βE(Γ, λ)]. When the probability distribution equation is substituted into the equation for entropy production, the first law of thermodynamics yields
ω = –βΔF + βW,
where W is the work performed on the system. Crooks’s theorem follows by substitution in the Evans–Searles theorem. Trajectories that dissipate heat are more probable than the reverse since they correspond to those that generate entropy. Crooks’s theorem shows that there is a finite probability that, while work is performed on the system, the increase in free energy can exceed the amount of applied work: it is possible that A – ΔF be negative. As a result, heat will be absorbed from the bath and converted to free energy. The probability drops exponentially with the magnitude of the heat absorbed.12
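The cancellation is worth spelling out: substituting the canonical distribution into the expression for ω gives lnρ[Γ(0)] – lnρ[Γ(τ)] = βΔE – βΔF, where ΔE is the change in the internal energy of the microstate; writing the first law as ΔE = Q + W, with Q the heat absorbed from the bath and W the work performed on the system, eliminates the heat term and leaves ω = –βΔF + βW.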
Several fluctuation theorems and associated relationships have been verified through experiments using optical tweezers.13 The Crooks theorem was demonstrated by repeatedly measuring the work associated with the unfolding and refolding of an RNA hairpin and an RNA three-helix junction, and comparing the force distributions with free-energy changes.14 A similar experiment was performed on a molecule of RNA to test Jarzynski’s equality,15
⟨exp(–βW)⟩ = exp(–βΔF).
Tweezers were applied to a colloidal particle to verify the Evans–Searles fluctuation theorem by measuring the particle’s positions as the strength of an optical trap was increased.16 Experiments have also applied the optical technique and the Crooks fluctuation theorem to the study of bio-motors to assist in force measurements.17
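The structure of such a test can be mimicked in a toy calculation. In the Python sketch below, which is purely illustrative and has nothing to do with RNA, a Brownian particle equilibrated in a harmonic trap of stiffness k0 has the stiffness switched instantaneously to k1; the work on each realization is then W = (k1 – k0)x0²/2, the free-energy change is ΔF = ln(k1/k0)/(2β), and the exponential average of the work recovers exp(–βΔF) even though the average work exceeds ΔF.

import numpy as np

# Toy check of Jarzynski's equality (illustrative only, unrelated to the RNA
# experiments): a particle equilibrated in a harmonic trap of stiffness k0 has
# the stiffness switched instantaneously to k1. The work done on the particle
# is then W = 0.5*(k1 - k0)*x0**2 and dF = ln(k1/k0)/(2*beta).
rng = np.random.default_rng(1)
beta, k0, k1 = 1.0, 1.0, 4.0                                 # assumed parameters
x0 = rng.normal(0.0, np.sqrt(1.0 / (beta * k0)), 1_000_000)  # equilibrium positions
work = 0.5 * (k1 - k0) * x0**2                               # work per realization

print("<exp(-beta W)> =", np.mean(np.exp(-beta * work)))    # close to 0.5
print("exp(-beta dF)  =", np.sqrt(k0 / k1))                  # exactly 0.5 here
print("<W> =", np.mean(work), " dF =", np.log(k1 / k0) / (2 * beta))

The last line illustrates the familiar inequality ⟨W⟩ ≥ ΔF; the equality holds only for the exponential averages.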
The Crooked Made Straight
Jeremy England has attempted to use the Crooks fluctuation theorem to explain the origins of life. Systems driven far from equilibrium, he argues, could self-organize into states that efficiently absorb energy from an external drive and then dissipate it back into the environment.18 Life efficiently extracts energy from resources in the environment and dissipates heat back into the surroundings, so the same thermodynamic principles that guide the evolution of his models might explain the origin of life.19
The argument is not unreasonable.
England groups individual microstates into a macrostate I in the initial energy landscape and a macrostate II in the final energy landscape. He then derives the following relationships:20
π̄_τ(II → I)/π_τ(I → II) = ⟨exp(–ΔS_tot)⟩_(I→II)
and
ΔS_tot = ln(p_i/p_f) + βΔQ.
The variable λ(t) corresponds to a controlled, time-dependent parameter that defines the transition between the initial, λ(0), and final, λ(τ), energy landscapes.21 It might correspond to the position of a piston compressing a cylinder filled with gas or the strength of an applied electrical field. The denominator of the left-hand side of the equation is the probability, π_τ, of the transition from I to II. It corresponds to the first half of the driving cycle. The numerator is the probability of the time-reversed transition. It corresponds to the second half of the driving cycle.
The right-hand side of the equation is the average exponential of the total entropy change over all trajectories from I to II. Initial and final microstate probabilities are designated by p_i and p_f. The total entropy change is the sum of the internal change in entropy and β times the heat released into the bath.22
England then derives the following relationship, which compares two outcomes, II and III, that can be reached from I:23
π(I → II)/π(I → III) = exp(S_II – S_III) × [π̄(II → I)/π̄(III → I)] × [⟨exp(–βW_d)⟩_(I→III)/⟨exp(–βW_d)⟩_(I→II)].
This equation relates the ratio of the probabilities for the two possible transitions to three terms. The first represents the difference in entropy between II and III. Just as in classical thermodynamics, systems tend toward greater entropy. The second term corresponds to the ratio of the probabilities for the reverse transitions between II and I and between III and I. The bar signifies that the trajectories are time-reversed; the momenta are going backward. This term has an interpretation in terms of activation barriers to transitions. If the barrier for one of the transitions is lower than the other, then that transition can more easily proceed both in the forward and reverse directions.24 As a result, a state which can easily transition into other states might itself represent a likely endpoint for reverse transitions, so certain sets of transitions could proceed back and forth far more frequently. The last term contains exponentials of the difference between the work the external drive performs on the system and the change in equilibrium free energies between the initial and final macrostates. This difference expresses the work, W_d, dissipated into the thermal bath. The term implies that transitions will proceed with the highest probability if they extract the largest amounts of energy from the drive and then dissipate it away. Combined with the previous term, it suggests that systems will self-organize into clusters of states that most efficiently absorb and dissipate energy, somewhat like a pendulum being driven at the resonance frequency.
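To see how the bookkeeping works, consider a purely numerical illustration; every figure below is invented for the sake of the arithmetic and is not taken from England’s papers. Suppose that outcome II has one unit more internal entropy than outcome III, that II is half as easy to undo, and that reaching II typically dissipates six more units of βW_d than reaching III.

import math

# Purely illustrative arithmetic for the three-factor relationship above; all
# numbers are invented. The exponential averages <exp(-beta*W_d)> are crudely
# stood in for by a single representative value of beta*W_d per transition.
dS = 1.0                              # S_II - S_III
reverse_ratio = 0.5                   # pi_bar(II -> I) / pi_bar(III -> I)
beta_Wd_II, beta_Wd_III = 8.0, 2.0    # representative dissipation along each path

ratio = math.exp(dS) * reverse_ratio * (math.exp(-beta_Wd_III) / math.exp(-beta_Wd_II))
print(f"pi(I -> II) / pi(I -> III) ~ {ratio:.0f}")    # the more dissipative path wins

On numbers like these, the more dissipative outcome dominates by a factor of several hundred; in England’s analogy, this is the pendulum settling into resonance with its drive.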
It is with this analogy in mind that England has investigated the extent to which systems driven far from equilibrium demonstrate resonance behavior. In one experiment, he models particles in microstates separated by barriers.25 Both the energy of certain microstates and of barriers can change due to an applied electric field. The particles tend to hop into states where the transition dissipates more energy, confirming one of England’s predictions.
In another experiment, England simulated a toy chemical reaction network in which several particles randomly form and break harmonic spring bonds.26 The particles reside in a viscous medium, held at constant temperature, that applies a drag force dissipating away their kinetic energy. The forming and breaking of bonds follow a stochastic switching process whose rates are governed by the Arrhenius relationship where the rate drops exponentially with the activation energy. One of the particles is driven by a sinusoidal force with a specific driving frequency for each trial. One might expect the network to resonate with the driving force if it matched the normal modes of the undriven system.27 The simulations demonstrated that the positions of particles and bonding combinations often cluster around resonance states where greater amounts of energy are absorbed from the drive and then released into the environment. The system self-adjusts so its normal modes match the driving frequency. States which efficiently dissipate energy at one iteration step tend to transition to states that also efficiently dissipate energy.
The results perfectly match the theoretical predictions.
In a third experiment, England simulated a chemical network that continuously absorbed work from the environment. Energy is directed toward forcing reactions to new equilibrium values for the chemical concentrations.28 The strength of each drive is a function of the concentrations of twenty-five different species. England compares the forcing terms to the activity of high-energy molecules in cellular metabolism. As a consequence, the setup does not strictly follow Crooks’s formalism, which assumes that the forcing results from physical forces or applied fields. The applied chemical work operates similarly to the work imparted from standard fields, so the results of this study are still relevant.29 Each simulation trial starts with randomized initial concentrations and forcing functions that are generated as random functions of the concentrations of the different species. The system is then allowed to evolve until the network reaches a dynamical fixed point where all concentrations remain constant. The transition history is then analyzed to determine the evolution and final strengths of the forcing terms. The results from a series of trials demonstrate that the reaction network often self-adjusts so that the forcing takes on extremely high values. The system spontaneously fine-tunes itself to extract work from the environment and dissipate energy back through strongly driven chemical reactions.30
This result, like those from the previous experiments, supports England’s contention that systems driven far from equilibrium often self-organize to absorb and dissipate energy.
On the Other Hand
It is a commonplace of criticism that the spontaneous formation of a living system by chance is unlikely. How unlikely? Fred Hoyle put the odds at roughly 1 in 10^40,000. These odds he compared to the chance of a tornado plowing through a junkyard and assembling a Boeing 747.31 Hoyle’s tornado has enjoyed an existence all its own. It crops up as a simile in every discussion dedicated to the origins of life. In thermodynamics, probability and entropy are companionable concepts. The probability of a given state, in a thermodynamic ensemble, is proportional to the number, N, of its configurations; its entropy, to log(N).32 Natural systems tend to move from lower to higher states of entropy. Nature abhors low odds. This is the simple meaning of the second law of thermodynamics. With Hoyle’s 747 having lumbered down various runways, nearly all origins of life (OOL) researchers came to recognize that the appearance of the first cell could not have been a matter of sheer dumb luck.33 For all that, some systems do, in fact, move from high- to low-entropy states. Ice occupies a lower energy state than water.34 No violation of the second law is involved. Such changes are always exothermic: heat is released. When the books are balanced, the entropy gained by the surroundings from the released heat always exceeds the entropy lost by the system. The entropy of the universe increases, just as Boltzmann foretold.
But the simplest functional cell has both lower entropy35 and higher energy than its prebiotic precursors, or even its building blocks.36 The atoms in a cell are arranged in highly specified low-entropy configurations, and they are linked by high-energy chemical bonds. These are unnatural circumstances. Natural systems do not both decrease in entropy and increase in energy at the same time.
This is no compelling argument, but it is a suggestive observation.
Physicists and chemists often combine the entropy and energy of a system to form a definition of its free energy.37 For spontaneous processes, a change in free energy is always negative. Harold Morowitz estimated the probability that a bacterial cell might have originated through thermal fluctuations, and determined that the probability of a system spontaneously going from low to high free energy, when every other system was spontaneously going from high to low, was on the order of one part in ten to the power of a hundred billion.38 This number represents an upper bound, since it is based on the smallest increase in free energy needed to form a bacterial cell. And, of course, the probability is essentially zero.
Nature would not have allowed a cell to form near-equilibrium.
If not near-equilibrium, then, perhaps, far from it? In what are termed nonequilibrium dissipative systems, energy or mass enters the system and then leaves, and this flow spontaneously generates the order characteristic of the roll patterns in boiling water, the funnel of a tornado, or the wave patterns in the Belousov–Zhabotinsky reaction.39 Self-organizing patterns are often seen in nonlinear systems.40 Transitions occur when a control parameter crosses a critical value. Biologists, at once, suggested that something similar might explain the origin of life.41 England’s framework has offered new hope that the thermodynamic challenges could be overcome as a result of the new principles of self-organization identified in his experiments.
England’s simulation of particles hopping barriers demonstrates how transitions are favored if they dissipate greater amounts of energy. And his model of a network of springlike bonds forming and breaking demonstrates how such networks can self-organize to better absorb and then dissipate energy. But the key challenge in OOL research lies in explaining how a system could extract energy from the environment and then direct it toward a lasting increase in its internal free energy. England’s experiments do not address this concern, for the energy absorbed is almost immediately dissipated away.
Experiments on chemically driven reaction networks do demonstrate a progression toward higher free-energy states, but at the expense of incorporating in various models the core components of a fully formed cell. All cells accomplish the goal of maintaining a high-free-energy state against opposing thermodynamic forces by employing complex molecular machinery and finely tuned chemical networks to convert energy from the environment into high-energy molecules. The energy from the breakdown of these energy-currency molecules is directed toward powering targeted chemical reactions. The energy coupling is accomplished through complex enzymes and other proteins composed of information-rich amino-acid sequences.42
England’s forcing terms are simply proxies for molecular engines and enzymes in the living cell. England directly compares the concentration dependences of the forcing terms to cellular metabolic pathways with feedback control.43 This research does not explain the OOL because it assumes that the central challenges of energy production and its coupling to specific sets of processes have already been solved.
England’s derived mathematical relationship for the ratio of the reverse to forward probabilities for transitions demonstrates that a system will tend to move in the direction of increasing total entropy. This is not controversial. Still, the formation of a cell represents a dramatic decrease in total entropy. England’s relationship can be rewritten using Jensen’s inequality and other straightforward steps to yield
P_Cell < exp(ΔS)exp(βΔQ),
where P_Cell is the probability for a cell forming from simple chemicals.44 The ΔS term represents the corresponding drop in internal entropy. The ΔQ term represents the heat released into the environment during the formation of the cell; since a cell must, on net, absorb energy from its surroundings, both ΔS and ΔQ take on large negative values. Note how the bound is the product of the factors appearing in the Evans–Searles and Crooks theorems. The chance for a cell forming in a nonequilibrium system can now be estimated. Determining the drop in entropy associated with abiogenesis is extremely challenging, so few have attempted rigorous calculations. Morowitz performed a crude estimate of the entropy reduction in a cell associated with the formation of macromolecules. His approximation was on the order of 0.1 cal/(K·g).45 This quantity corresponds in a bacterium46 to an entropy reduction of greater than 10^10 nats, which yields a probability, ignoring the heat term, of less than one in ten to the power of a billion.
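Two remarks may make the bound more concrete. First, the Jensen step: given the relationship above, π(I → II) = π̄(II → I)/⟨exp(–ΔS_tot)⟩, and since ⟨exp(–ΔS_tot)⟩ ≥ exp(–⟨ΔS_tot⟩) while π̄(II → I) cannot exceed one, the forward probability is at most exp(⟨ΔS_tot⟩). Second, the order of magnitude of the entropy term can be reproduced with rough arithmetic; in the Python sketch below, Morowitz’s 0.1 cal/(K·g) is taken from the text, while the bacterial mass of roughly a picogram is an assumed, illustrative figure.

import math

# Rough arithmetic behind the entropy term. The 0.1 cal/(K*g) figure is
# Morowitz's estimate quoted in the text; the ~1e-12 g cell mass is an
# assumed, illustrative value.
k_B = 1.380649e-23             # Boltzmann's constant, J/K
ds_per_gram = 0.1 * 4.184      # 0.1 cal/(K*g) converted to J/(K*g)
cell_mass_g = 1e-12            # assumed bacterial mass in grams

ds_nats = ds_per_gram * cell_mass_g / k_B    # entropy drop in units of k_B (nats)
print(f"entropy drop ~ {ds_nats:.1e} nats")                # about 3e10, i.e. > 1e10
print(f"exp(-dS) ~ 10^(-{ds_nats / math.log(10):.1e})")    # about 10^(-1.3e10)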
Calculating the component of the probability associated with the absorption of heat requires estimating the difference in free energy between the original molecules and a minimally complex cell, and evaluating the amount of work performed by plausible external drives which could have contributed to the free-energy increase. Many proposals have been offered for how various processes could assist in imparting the needed energy. Examples of hypothesized energy sources include meteorite crashes,47 moving mica sheets,48 shock waves,49 and proton gradients.50 None of these sources could have generated more than a tiny fraction of the required free energy.
To illustrate the disparity, the power production density of the simplest known cell for only maintenance is on the order of 100,000 μW/g.51 In any OOL scenario, the protocell would have to generate at least this amount in the latter stages52 just to overcome the thermodynamic drive back toward equilibrium, and even greater amounts would be required for replication.53 In contrast, a leading proposal for energy production centers on proton gradients in hydrothermal vents. Experimental simulations of vents under ideal conditions54 only generate small quantities of possible precursors to life’s building blocks, and the corresponding power production density is on the order of 0.001 μW/g.55 This quantity is considerably greater than what could practically be transferred to any stage of abiogenesis,56 yet it is still eight orders of magnitude too small even to prevent a protocell from quickly degrading back to simple, low-energy molecules.
All other proposals fare no better.
Consequently, most of the free-energy increase in cell formation would have required heat to be extracted directly from the environment. The minimum amount to form a bacterium was estimated by Morowitz at around 10^–9 joules (10^10 eV).57 This value represents more than the equivalent, if scaled, of a bathtub of water at room temperature spontaneously absorbing enough heat from the surroundings to start boiling. The resulting probability for life to form, excluding the needed reduction in entropy, comes to less than one in ten to the power of a hundred billion.
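The arithmetic behind this figure is straightforward: at room temperature, Boltzmann’s constant times the temperature is about 0.025 eV, so absorbing 10^10 eV of heat corresponds to a βΔQ of roughly –4 × 10^11, and the factor exp(βΔQ) in the bound above comes to about one part in ten to the power of 1.7 × 10^11, the same order as the figure quoted.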
These minuscule probabilities do not improve if the process takes place in multiple steps separated by extended periods of time.58 In fact, the challenges actually increase if each step toward life does not proceed immediately after the previous one, for the chances of the system moving toward higher entropy (or lower free energy) are far greater than those of its moving in a life-friendly direction. Any progress could be completely squandered by a few thermal fluctuations or chemical interactions.
England’s theoretical framework indicates quantitatively that the origin of life is just as improbable in systems driven away from equilibrium as in those near equilibrium.
His research illustrates the necessity for systems even remotely similar to life to possess the equivalent of both engines and enzymes. As a prime example, the springlike bond experiment incorporates a driving force that constantly adds energy to the system. This drive serves the equivalent role of the molecular engines in cells which convert external energy sources into energy-currency molecules. And the springs which transfer energy between the driven particle and the other particles in the network serve the equivalent role of enzymes which channel energy from the breakdown of the energy-currency molecules to target chemical reactions and other cellular functions. Both components are essential for the simulation to function, just as they are required to access the energy needed in nearly every stage of cell formation.
Enzymes are essential even for energetically favorable reactions, since most such reactions would otherwise proceed too slowly to drive cellular operations. Enzymes accelerate the reactions’ turnover rates by factors typically between 10^8 and 10^10, and the increase in many cases is significantly higher.59 Without enzymes, the concentration of a substrate would typically need to be millions of times greater to maintain a comparable reaction rate. This is not a plausible scenario.60 In addition, individual steps in the chemical pathways to synthesize life’s building blocks and then to link them together require multiple, mutually exclusive reaction conditions, so no environment could support more than a few required steps.61 The enzymes create the necessary nano-environments in their active sites to support their targeted reactions, so a multitude of diverse reactions can be maintained in the same cellular microenvironment simultaneously. They are required both to specify and power the correct set of processes.
The Path Forward
A minimally complex free-living cell requires hundreds of tightly regulated enzyme-enabled reactions.62 If even one enzyme were missing, all metabolic processes would cease, and the system would head irreversibly back toward equilibrium. England’s research does not explain how such a complex, specified system could originate.63 Both the proteins that constitute an engine’s building blocks and enzymes represent sequences of amino acids that contain large quantities of functional information.64 The amino acids must be arranged in the right order in the same way the letters in a sentence must be arranged properly to convey its intended meaning. This arrangement is crucial for the chains to fold into the correct three-dimensional structures to properly perform their assigned functions.65 This information is essential for constructing and maintaining the cell’s structures and processes.66 Until origins researchers address the central role of information, the origin of life will remain shrouded in mystery.67
Part 2. Jeremy England
Several things could be meant when calling something an explanation of life’s origins. The most straightforward would presumably be a collection of overwhelming evidence for a detailed, step-by-step account of all the molecular events that occurred—where and under what conditions—to build all the basic characteristics of cellular life as it exists now. Much as we crave stories like this, there is not likely to ever be one for this question. Almost by definition, the dance of life’s earliest molecular precursors would presumably have been so fragile and minuscule that the prospect of decisively interpreting physical materials in the present into some kind of reconstructed picture of the past is frustratingly dim. Consider: DNA in water falls apart spontaneously on the timescale of millions of years, and RNA and protein degrade thousands of times faster. With all the chemical transformation that generally gets accomplished in geologic time, trying to run the movie backward on cellular life based on reasoned inferences quickly fans out into widely divergent speculations. This does not mean scientists cannot speculate. But it does definitively rule out settling on one, well-justified consensus.
If scientists will not get to paleontologize life’s beginnings in the way they have the dinosaurs, then what kind of success might be more within reach? The easiest one is the philosophical parlor trick: if the universe is large and diverse enough, then many things can happen, and people capable of asking where life came from will be found exclusively in the part of the universe where life did happen. So here we are, and the details of how are an afterthought.
It is worth examining what exactly is unsatisfying about this sort of dodge. Even if we accept its basic reasoning, the question we are really curious about can still be asked, namely, is our part of the universe rare or typical? Granted, life is here, but should that be thought of as a freak occurrence or a normal one? The essence of what we may most want to know about life has to do with its probability.
Imagine that someone claims life originated through ideal, unbiased, random thermal fluctuations that caused, through a fluke of unimaginably small likelihood, basic building blocks to combine and form a protein-based cell with a DNA-based genome. Such an explanation would be no explanation at all, because resorting to the effects of extremely unlikely events frees one to say that almost anything could have happened.
An acorn found under an oak tree is easy to take as a matter of course because I know that acorns start out high up in oak trees and that gravity pulls them down. The high probability of an acorn ending up at the foot of the tree that gave it birth makes all acorns thus disposed unremarkable. Life, by contrast, seems remarkable because we do not see it spring into being all the time. Its presence cannot be viewed as a commonplace. If our best naive ideas of how matter might get shaped into such exquisitely special arrangements make it seem like wildly rare events had to occur to get this result, then we are left feeling like we have yet to understand something fundamental.
It was from this point of view that I started thinking about the origins of life. The main way to make progress on the question has mostly to do with adjusting our sense of what is likely and possible. Life performs many actions for which its architecture seems to be highly specialized: it harvests energy, repairs and copies itself, predicts its world, and creatively adapts to environmental change. These feats are difficult to pull off if done by a random arrangement of building blocks. The proper question to ask then is: how does matter become arranged into the special shapes that are good at these actions?
Precisely on that point, I have been trying to argue that people should recognize a significant opportunity coming into view. If it could be shown that there are fairly generic physical conditions under which the hallmarks of lifelikeness become like acorns under the oak tree—that is, normal and expected occurrences in the conditions where they emerge—we would talk differently about what seems mysterious or not about life’s shadowy past. Right now, we can theorize about how the flow of energy through matter reshapes it over time. We also can carry out experiments that may serve as proofs of principle. Having done so to some extent, we may not know in more detail what happened a long time ago, but we may wonder less about it.
In his statement above, Brian Miller thoughtfully expresses the view that this way of thinking has not made much progress so far. In the course of making this case, he says a great many things that are true. He gives a fair, high-level account of some recent progress in thinking about nonequilibrium statistical mechanics. It is true that fluctuation theorems have opened the door to proving general relationships between dissipation and fluctuation that hold far from thermal equilibrium. It is also true that I have proposed a way of manipulating one of these results into a form that reveals part of the thermodynamics of lifelike self-organization. My interlocutor has to be given tremendous credit for engaging with the right equation in the right paper,68 which has sadly turned out to be an uncommon feat among those reacting skeptically to the idea of dissipative adaptation.69
Consider this equation:70
π(I → II)/π(I → III) = exp(S_II – S_III) × [π̄(II → I)/π̄(III → I)] × [⟨exp(–βW_d)⟩_(I→III)/⟨exp(–βW_d)⟩_(I→II)].
There is a lot to unpack here, because this is basically a generalization of the relationship between probability and free energy for a driven, nonequilibrium system. That more basic statement in thermal equilibrium would look like this:
p(II)/p(I) = exp[–β(F_II – F_I)] = exp(S_II – S_I)exp[–β(E_II – E_I)].
In both cases, the relations compare different relative likelihoods of outcomes for the dynamic reshaping of matter under conditions where energy flows in and out of the system. In the latter case (equilibrium), however, the only energy allowed to flow is heat, whereas in the former work energy is also supplied from external sources.
The simpler equilibrium equation is valuable for understanding the behavior of different fluids and solids. In this balance, comparing the effect of entropy to that of energy does not provide any single indication that applies in all circumstances: if the entropy change for a process is large enough, higher-entropy states are more likely, but if the energy drop for the process is large enough, then low-energy, and possibly low-entropy, states may be more likely. In a given system, this tradeoff may be controlled by the temperature. This is why ice is stable when it is cold, but becomes less stable than the higher-entropy liquid state when the temperature rises.
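To put rough numbers on the tradeoff: for water freezing at atmospheric pressure, the enthalpy change is about –6.0 kJ/mol and the entropy change about –22 J/(mol·K), so the free-energy change ΔG = ΔH – TΔS switches sign near 273 K. Below that temperature the energy term wins and ice is the more probable phase; above it, the entropy term wins and the liquid is.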
The example illustrates how a general thermodynamic relation of this kind can call attention to different scenarios that might happen. Sometimes, with detailed factors being what they are, the likely outcome may be strongly dominated by what increases entropy, as with a vapor. Other times, the outcome will be dominated by a low-entropy outcome that lowers energy, as with ice. It is uncertain which will occur until we get down to brass tacks with the particular system at hand.
The generalization of free energy in its relation to probability that is shown above for a nonequilibrium scenario plays the same kind of role. It indicates, for instance, that in the far-from-equilibrium regime, sometimes what is most likely to happen is what can happen fastest, which is a move to a new state that is in relatively close proximity to the starting point. It also indicates that sometimes the likely outcome will be determined by how much more work reliably gets absorbed along the way.
There are a number of popular ways to misunderstand this statement. At least one of these misunderstandings comes across in Miller’s critique. First, the equation does not say that likely outcomes always have exceptional histories of reliably high absorption of work energy from the environment, any more so than the definition of equilibrium free energy requires that all thermal equilibrium states of water be ice. More importantly, the equation also does not indicate anything about the rate of energy absorption in the final outcome state: the history of work absorbed during dynamical evolution is not always equivalent to the propensity to absorb work after evolutionary change has occurred.
Given the relationship between work absorption and probability expressed above, the equation suggests there may be a scenario in which a low-entropy, high-energy, stable state of matter may be a likely outcome of the dynamics, and if so, then it must involve a process that absorbs and dissipates a lot of energy from the environment. This scenario would be most interesting in cases where absorbing energy from the environment is difficult, because then it might be possible to see the signature of this energy flow in the exceptional capabilities of the structure that emerges.
I am being deliberately vague by using the word “signature,” because these events still break into cases.71 One way of seeing the mark of exceptionality in the energy-absorption properties of a system is to notice it is much better at extracting energy from the environment than a random rearrangement of its constituent parts. Another way would be to notice when the system becomes exceptionally bad at the same activity. In truth, both scenarios are realizable, and each best embodies a separate aspect of what is impressive about the physical properties of living things. Paradoxically, life is good at harvesting fuel from its environment,72 but also good at controlling the flow of energy through it so that it does not get shaken to smithereens by all the random physical insults that might result.73 Overall, all the general thermodynamic statement suggests is that, using intuition that the physics provides, we should try to define scenarios in which lifelike behaviors may become likely to emerge.
Returning to the example of ice, the fact that the lowering of energy can “pay for” a decrease in entropy at constant temperature does not provide any direction for how to accomplish such a goal. To make crystallization of this sort happen, the particles involved have to happen to have lower energy when they are closer together and carefully coarranged in space. One could have also imagined particles whose energy is lowered when they are farther apart, and there would be thermodynamic things to say about that. But getting a crystal out of it is a less straightforward matter.
So, too, with things more impressively lifelike than crystals: the thermodynamic equation just provides a hint of what to explore. It shows that dissipative adaptation is permissible. It ultimately takes more work to devise specific mechanisms of self-organization that result from more detailed assumptions about how all the particles interact.74 Both simulations and experiments need to be done to establish more clearly when any lifelike behaviors might be expected to generate from generic—“naive,” “inanimate”—initial conditions. What first studies, by myself and others,75 have revealed so far is encouraging: finely tuned, low-entropy states in driven matter spring up all over the place. They seem mainly to require a diverse space of possible arrangements for their constituent parts and an external energy source that has some amount of detailed pattern to it, as any specifically colored radiation or specifically structured chemical would. This is an encouraging indication that structures that mimic aspects of what life is good at might have been lying around in the tempestuous nonequilibrium world, acting as a toolbox for the first more lifelike thing to draw from. Still, all of this kind of speculation is much more a rallying cry than anything else, for the science of how lifelikeness emerges is just starting.
In the end, the most important thing to emphasize is that the time has come to start being empirical in this research. Back-of-the-envelope calculations of prohibitively great improbabilities like the one quoted from Morowitz in this piece’s partner have been around for a while. They invariably rely on straw-man assumptions. Let it be granted, once and for all: waiting for a thermal fluctuation at thermal equilibrium to slap together a living organism is not going to work.
So what? The first equation mentioned in my response implies that if scientists actually intend to compute the probability of forming a live cell, they would have to specify how long it takes while making reference to kinetic factors controlling the rates of reactions and to the amount of energy absorbed and dissipated per unit of time by the system. If this sounds horribly complicated, it is. It should bring a humble understanding that such probabilities are unlikely to ever be computed accurately. Instead, our window onto thermodynamics should spur new empirical investigation. The theoretical relationship between probability and dissipated work is a thread that, if pulled harder, may yet unravel the whole tapestry. Coming into view is a gray spectrum of increasingly complex fine-tuning distinguishing the dust of the earth from life. Already, examples exist of structures that can form rapidly at high energy and low entropy and last for a long time, so long as they are fed with more energy of the type that generated them. That may not be life, but it surely is reason to hold back grand declarations about what is likely or impossible until we have better explored a new frontier.