Physics / Critical Essay

Vol. 5, No. 2 / May 2020

What follows is a two-part exchange between the authors on the topic of fluctuation theorems and the origins of life.

## Part 1. Brian Miller

The second law of thermodynamics asserts that, within a closed system, changes in entropy are never negative.1 Entropy is often understood as a crude measure of disorder. On this level of analysis, the second law has a certain degree of phenomenological support. Things do become disordered. For all that, its rigorous justification has been no easy task. In the late nineteenth century, Josef Loschmidt observed that if Newtonian mechanics is time-reversible, there is no obvious way in which to derive the second law of thermodynamics from its assumptions. To be sure, Ludwig Boltzmann had demonstrated in his famous H-theorem that, in progressing from a nonequilibrium to an equilibrium state, a physical system must increase in entropy. The argument is fine. It is its premises that Loschmidt queried, and, in particular, the assumption of molecular chaos, or Stosszahlansatz, which cannot be derived from Newtonian mechanics itself. If this is so, then neither can the second law of thermodynamics.

The advent of various fluctuation theorems in the 1990s has brought this issue closer to resolution. These theorems make possible the quantitative analysis of systems driven far from equilibrium.2 The theorems are based on the instantaneous dissipation function Ω(Γ) and its integral, the standard dissipation function.3 These functions are closely related to the entropy production rate, and thus to the entropy increase associated with the evolution of an ensemble of microstates.4 Let the vector Γ represent a microstate of the time-dependent generalized coordinates and momenta for the molecules in a given system.5 The instantaneous dissipation function is defined in terms of the rate of change of Γ, its probability distribution function f(Γ), and the phase space expansion factor, Λ(Γ).6 It is the instantaneous dissipation function that measures how the probability distribution is spreading, and it is this that corresponds to the system’s increase in internally generated entropy.7 Whereupon, there are the following obvious relationships:

$\Omega(\mathbf{\Gamma}) = -\frac{\dot{\mathbf{\Gamma}}(\mathbf{\Gamma})}{f(\mathbf{\Gamma})} \cdot \frac{\partial f(\mathbf{\Gamma})}{\partial \mathbf{\Gamma}} - \Lambda(\mathbf{\Gamma})$,

$\Omega_{t}(\mathbf{\Gamma}) \equiv \int_{0}^{t} \Omega(S^{s}\mathbf{\Gamma})\,ds \equiv \ln\left(\frac{f(\mathbf{\Gamma};0)}{f(M^{T}S^{t}\mathbf{\Gamma};0)}\right) - \int_{0}^{t} \Lambda(S^{s}\mathbf{\Gamma})\,ds$,

and

$\Lambda(\mathbf{\Gamma}) = \frac{\partial}{\partial \mathbf{\Gamma}} \cdot \dot{\mathbf{\Gamma}}$.

A number of interesting theorems now follow. The Evans–Searles fluctuation theorem shows that, in dissipative systems, entropy production can run in reverse.8 The probability that the dissipation function takes the value A, divided by the probability that it takes the value –A, is $e^{A}$:

$\frac{p(\Omega_{t} = A)}{p(\Omega_{t} = -A)} = e^{A}$.

The relevant probabilities drop exponentially with the magnitude of the entropy decrease.9 The dynamics of individual particles in a given microstate may well be reversible, but microstates move statistically toward an increase in entropy.10 This is tolerably close to Boltzmann’s original point of view; but it has the effect of making the second law of thermodynamics something less than an exact law of nature.
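One consequence is worth making explicit, since it recovers the second law in statistical form. Averaging the exponential of the negative dissipation over all trajectories and invoking the theorem gives

$\langle e^{-\Omega_{t}} \rangle = \int dA\, p(\Omega_{t} = A)\, e^{-A} = \int dA\, p(\Omega_{t} = -A) = 1$,

and Jensen’s inequality, $\langle e^{-\Omega_{t}} \rangle \geq e^{-\langle \Omega_{t} \rangle}$, then forces $\langle \Omega_{t} \rangle \geq 0$. Entropy production is non-negative on average, though not along every individual trajectory.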

The Crooks fluctuation theorem was developed to study systems acted upon by a time-dependent external field or force in which a system moves from an initial state to a final state with a different equilibrium free energy. If the transition is performed reversibly, the work, W, performed on the system is equal to the change in free energy, ΔF.11 If the transition occurs away from equilibrium, some of the applied work will be lost as heat. Let A and –A designate work performed in the forward and time-reversed directions. The ratio of probabilities over A and –A, the theorem affirms, is

$\frac{p_{f}(W = A)}{p_{r}(W = -A)} = e^{\beta(A - \Delta F)}$,

where β is the inverse of Boltzmann’s constant times the temperature of the system and the thermal bath surrounding it: $\beta = 1/(k_{B}T)$. The heat released during the transition into the thermal bath is A – ΔF. The argument begins by identifying the entropy production, ω, for a single trajectory as

$\omega = \ln \rho[\mathbf{\Gamma}(0)] - \ln \rho[\mathbf{\Gamma}(\tau)] - \beta Q[\mathbf{\Gamma}(\tau), \lambda(\tau)]$,

where ρ[Γ(0)] and ρ[Γ(τ)] are the initial and final probability distributions, λ(τ) represents the control parameter defining the transition, and Q is the heat absorbed from the thermal bath. The probability distribution, where E is the internal energy of the microstate, is

$\rho(\mathbf{\Gamma}, \lambda) = \frac{e^{-\beta E(\mathbf{\Gamma}, \lambda)}}{\sum_{\mathbf{\Gamma}} e^{-\beta E(\mathbf{\Gamma}, \lambda)}} = e^{\beta F(\beta, \lambda) - \beta E(\mathbf{\Gamma}, \lambda)}$.

The sum is taken over all of the microstates. The free energy is $F(\beta, \lambda) = -\beta^{-1} \ln \sum_{\mathbf{\Gamma}} \exp[-\beta E(\mathbf{\Gamma}, \lambda)]$. When the probability distribution equation is substituted into the equation for entropy production, the first law of thermodynamics yields

$\omega = -\beta \Delta F + \beta W$,

where W is the work performed on the system. Crooks’s theorem follows by substitution in the Evans–Searles theorem. Trajectories that dissipate heat are more probable than the reverse since they correspond to those that generate entropy. Crooks’s theorem shows that there is a finite probability that, while work is performed on the system, the increase in free energy can exceed the amount of applied work: it is possible that A ⁠–⁠ ΔF be negative. As a result, heat will be absorbed from the bath and converted to free energy. The probability drops exponentially with the magnitude of the heat absorbed.12
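The substitution compressed into that last step can be spelled out. From the probability distribution, $\ln \rho[\mathbf{\Gamma}, \lambda] = \beta F(\beta, \lambda) - \beta E(\mathbf{\Gamma}, \lambda)$, so the entropy production becomes

$\omega = -\beta \Delta F + \beta \Delta E - \beta Q$,

where $\Delta E = E[\mathbf{\Gamma}(\tau), \lambda(\tau)] - E[\mathbf{\Gamma}(0), \lambda(0)]$. The first law of thermodynamics, $\Delta E = W + Q$, with Q the heat absorbed from the bath, then gives $\omega = -\beta \Delta F + \beta W$.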

Several fluctuation theorems and associated relationships have been verified through experiments using optical tweezers.13 The Crooks theorem was demonstrated by repeatedly measuring the work associated with the unfolding and refolding of an RNA hairpin and an RNA three-helix junction, and comparing the work distributions with free-energy changes.14 A similar experiment was performed on a molecule of RNA to test Jarzynski’s equality,15

$\langle e^{-\beta W} \rangle = e^{-\beta \Delta F}$.

Tweezers were applied to a colloidal particle to verify the Evans–Searles fluctuation theorem by measuring the particle’s positions as the strength of an optical trap was increased.16 Experiments have also applied the optical technique and the Crooks fluctuation theorem to the study of bio-motors to assist in force measurements.17
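A minimal numerical sketch can play the role of those tweezer experiments in silico. The model below is illustrative only, with parameters chosen for convenience rather than matched to any real apparatus: an overdamped Brownian particle in a harmonic trap whose stiffness is ramped from $k_{0}$ to $k_{1}$. For this protocol the free-energy change is exactly $\Delta F = (2\beta)^{-1}\ln(k_{1}/k_{0})$, so Jarzynski’s equality can be checked directly:

```python
import numpy as np

# Overdamped Langevin simulation of a particle in a stiffening harmonic
# trap, U(x) = k x^2 / 2, with k ramped linearly from k0 to k1.
# Work is accumulated from the control-parameter jumps, dW = (x^2 / 2) dk,
# and Jarzynski's equality <exp(-beta W)> = exp(-beta dF) is checked.
rng = np.random.default_rng(0)
beta = 1.0                 # inverse temperature (k_B T = 1)
k0, k1 = 1.0, 2.0          # initial and final trap stiffness
steps, dt = 1000, 1.0e-3   # protocol duration tau = steps * dt = 1
n_traj = 20000             # number of simulated "pulls"

# Sample initial positions from equilibrium in the k0 trap.
x = rng.normal(0.0, 1.0 / np.sqrt(beta * k0), size=n_traj)
work = np.zeros(n_traj)
dk = (k1 - k0) / steps

for i in range(steps):
    k = k0 + (k1 - k0) * i / steps
    work += 0.5 * x**2 * dk   # work done on the particle by the small stiffness jump
    # Euler-Maruyama step for overdamped Langevin dynamics (unit mobility).
    x += -k * x * dt + np.sqrt(2.0 * dt / beta) * rng.normal(size=n_traj)

dF = np.log(k1 / k0) / (2.0 * beta)   # exact free-energy change for this protocol
jarzynski = np.mean(np.exp(-beta * work))
print(f"<W> = {work.mean():.4f}, dF = {dF:.4f}")
print(f"<exp(-beta W)> = {jarzynski:.4f}, exp(-beta dF) = {np.exp(-beta * dF):.4f}")
print(f"fraction of trajectories with W < dF: {np.mean(work < dF):.3f}")
```

With these settings the exponential average should land close to $e^{-\beta\Delta F}$, the mean work exceeds ΔF, as the second law demands, and a finite fraction of trajectories nonetheless comes in under ΔF: the transient reversals that the fluctuation theorems quantify.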

Jeremy England has attempted to use the Crooks fluctuation theorem to explain the origins of life. Systems driven far from equilibrium, he argues, could self-organize into states that efficiently absorb energy from an external drive and then dissipate it back into the environment.18 Life efficiently extracts energy from resources in the environment and dissipates heat back into the surroundings, so the same thermodynamic principles that guide the evolution of his models might explain the origin of life.19

The argument is not unreasonable.

England groups individual microstates into a macrostate I in the initial energy landscape and a macrostate II in the final energy landscape. He then derives the following relationships:20

$\frac{\pi_{\tau}[\overline{\mathrm{II}} \to \overline{\mathrm{I}}; \lambda(\tau - t)]}{\pi_{\tau}[\mathrm{I} \to \mathrm{II}; \lambda(t)]} = \langle e^{-\Delta S_{tot}} \rangle$

and

$\Delta S_{tot}[\mathbf{\Gamma}(t)] = \ln\left[\frac{p_{i}(\mathbf{\Gamma}(0))}{p_{f}(\mathbf{\Gamma}(t))}\right] + \beta \Delta Q[\mathbf{\Gamma}(t)]$.

The variable λ(t) corresponds to a controlled, time-dependent parameter that defines the transition between the initial, λ(0), and final, λ(τ), energy landscapes.21 It might correspond to the position of a piston compressing a cylinder filled with gas or the strength of an applied electrical field. The denominator of the left-hand side of the equation is the probability, $\pi_{\tau}$, of the transition from I to II. It corresponds to the first half of the driving cycle. The numerator is the probability of the time-reversed transition. It corresponds to the second half of the driving cycle.

The right-hand side of the equation is the average exponential of the total entropy change over all trajectories from I to II. Initial and final probabilities are designated by $p_{i}$ and $p_{f}$. The total entropy change is the sum of the internal change in entropy and the heat released into the bath, scaled by β.22

England then derives the following relationship:23

$\ln\left[\frac{\pi_{\tau}^{fwd}[\mathrm{I} \to \mathrm{II}]}{\pi_{\tau}^{fwd}[\mathrm{I} \to \mathrm{III}]}\right] = \Delta S_{\mathrm{II},\mathrm{III}} + \ln\left[\frac{\pi_{\tau}^{rev}[\overline{\mathrm{II}} \to \overline{\mathrm{I}}]}{\pi_{\tau}^{rev}[\overline{\mathrm{III}} \to \overline{\mathrm{I}}]}\right] - \ln\left[\frac{\langle \exp(-\beta W_{d}) \rangle_{\mathrm{I} \to \mathrm{II}}^{fwd}}{\langle \exp(-\beta W_{d}) \rangle_{\mathrm{I} \to \mathrm{III}}^{fwd}}\right]$.

This equation relates the ratio of the probabilities for the two possible transitions to three terms. The first represents the difference in entropy between II and III. Just as in classical thermodynamics, systems tend toward greater entropy. The second term corresponds to the ratio of the probabilities for the reverse transitions between II and I and between III and I. The bar signifies that the trajectories are time-reversed; the momenta are going backward. This term has an interpretation in terms of activation barriers to transitions. If the barrier for one of the transitions is lower than the other, then that transition can more easily proceed both in the forward and reverse directions.24 As a result, a state which can easily transition into other states might itself represent a likely endpoint for reverse transitions, so certain sets of transitions could proceed back and forth far more frequently. The last term contains exponentials of the difference between the work the external drive performs on the system and the change in equilibrium free energies between the initial and final macrostates. This difference expresses the work, $W_{d}$, dissipated into the thermal bath. The term implies that transitions will proceed with the highest probability if they extract the largest amounts of energy from the drive and then dissipate it away. Combined with the previous term, it suggests that systems will self-organize into clusters of states that most efficiently absorb and dissipate energy, somewhat like a pendulum being driven at the resonance frequency.

It is with this analogy in mind that England has investigated the extent to which systems driven far from equilibrium demonstrate resonance behavior. In one experiment, he models particles in microstates separated by barriers.25 Both the energy of certain microstates and of barriers can change due to an applied electric field. The particles tend to hop into states where the transition dissipates more energy, confirming one of England’s predictions.

In another experiment, England simulated a toy chemical reaction network in which several particles randomly form and break harmonic spring bonds.26 The particles reside in a viscous medium, held at constant temperature, that applies a drag force dissipating away their kinetic energy. The forming and breaking of bonds follow a stochastic switching process whose rates are governed by the Arrhenius relationship where the rate drops exponentially with the activation energy. One of the particles is driven by a sinusoidal force with a specific driving frequency for each trial. One might expect the network to resonate with the driving force if it matched the normal modes of the undriven system.27 The simulations demonstrated that the positions of particles and bonding combinations often cluster around resonance states where greater amounts of energy are absorbed from the drive and then released into the environment. The system self-adjusts so its normal modes match the driving frequency. States which efficiently dissipate energy at one iteration step tend to transition to states that also efficiently dissipate energy.

The results perfectly match the theoretical predictions.
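The Arrhenius switching rule invoked in that simulation is easy to state concretely. A minimal sketch follows; the prefactor and barrier heights are illustrative values, not England’s parameters:

```python
import math

def arrhenius_rate(prefactor: float, e_activation: float, kT: float = 1.0) -> float:
    """Arrhenius law: rate = A * exp(-E_a / (k_B * T))."""
    return prefactor * math.exp(-e_activation / kT)

# Bond-switching rates for a low and a high activation barrier (in units of k_B*T).
fast = arrhenius_rate(1.0, 2.0)   # low barrier: switching is frequent
slow = arrhenius_rate(1.0, 10.0)  # high barrier: switching is rare
print(f"fast = {fast:.4f}, slow = {slow:.6f}, ratio = {fast / slow:.1f}")
```

Raising the barrier by eight units of $k_{B}T$ suppresses the rate by a factor of $e^{8} \approx 3000$, which is why the network’s stochastic rearrangements are dominated by low-barrier moves.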

In a third experiment, England simulated a chemical network that continuously absorbed work from the environment. Energy is directed toward forcing reactions to new equilibrium values for the chemical concentrations.28 The strength of each drive is a function of the concentrations of twenty-five different species. England compares the forcing terms to the activity of high-energy molecules in cellular metabolism. As a consequence, the setup does not strictly follow Crooks’s formalism, which assumes that the forcing results from physical forces or applied fields. The applied chemical work operates similarly to the work imparted from standard fields, so the results of this study are still relevant.29 Each simulation trial starts with randomized initial concentrations and forcing functions that are generated as random functions of the concentrations of the different species. The system is then allowed to evolve until the network reaches a dynamical fixed point where all concentrations remain constant. The transition history is then analyzed to determine the evolutions and final strengths of the forcing terms. The results from a series of trials demonstrate that the reaction network often self-adjusts, so the forcing takes on extremely high values. The system spontaneously fine-tunes itself to extract work from the environment and dissipate energy back through strongly driven chemical reactions.30

This result, like those from the previous experiments, supports England’s contention that systems driven far from equilibrium often self-organize to absorb and dissipate energy.

### On the Other Hand

It is a commonplace of criticism that the spontaneous formation of a living system by chance is unlikely. How unlikely? Fred Hoyle put the odds at roughly 1 in $10^{40,000}$. These odds he compared to the chance of a tornado plowing through a junkyard and assembling a Boeing 747.31 Hoyle’s tornado has enjoyed an existence all its own. It crops up as a simile in every discussion dedicated to the origins of life. In thermodynamics, probability and entropy are companionable concepts. The probability of a given state, in a thermodynamic ensemble, is proportional to the number, N, of its configurations; its entropy, to log(N).32 Natural systems tend to move from lower to higher states of entropy. Nature abhors low odds. This is the simple meaning of the second law of thermodynamics. With Hoyle’s 747 having lumbered down various runways, nearly all origins of life (OOL) researchers came to recognize that the appearance of the first cell could not have been a matter of sheer dumb luck.33 For all that, some systems do, in fact, move from high- to low-entropy states. Ice occupies a lower energy state than water.34 No violation of the second law is involved. Freezing is exothermic: heat is released into the surroundings. When the books are balanced, the entropy gained by the surroundings from the released heat always exceeds the entropy lost by the freezing system. The entropy of the universe increases, just as Boltzmann foretold.

But the simplest functional cell has both lower entropy35 and higher energy than its prebiotic precursors, or even its building blocks.36 The atoms in a cell are arranged in highly specified low-entropy configurations, and they contain high-energy chemical bonds. These are unnatural circumstances. Natural systems never both decrease in entropy and increase in energy—not at the same time.

This is no compelling argument, but it is a suggestive observation.

Physicists and chemists often combine the entropy and energy of a system to form a definition of its free energy.37 For spontaneous processes, a change in free energy is always negative. Harold Morowitz estimated the probability that a bacterial cell might have originated through thermal fluctuations, and determined that the probability of spontaneously going from low to high, when every other system was spontaneously going from high to low, was on the order of one part in ten to the power of a hundred billion.38 This number is an upper bound, since it assumes the smallest increase in free energy needed to form a bacterial cell. And, of course, the probability is essentially zero.

Nature would not have allowed a cell to form near-equilibrium.

If not near-equilibrium, then, perhaps, far from it? In what are termed nonequilibrium dissipative systems, energy or mass enters the system and then leaves, and this flow spontaneously generates the order characteristic of the roll patterns in boiling water, the funnel of a tornado, or the wave patterns in the Belousov–Zhabotinsky reaction.39 Self-organizing patterns are often seen in nonlinear systems.40 Transitions occur when a control parameter crosses a critical value. Biologists, at once, suggested that something similar might explain the origin of life.41 England’s framework has offered new hope that the thermodynamic challenges could be overcome as a result of the new principles of self-organization identified in his experiments.

England’s simulation of particles hopping barriers demonstrates how transitions are more favored if they dissipate greater amounts of energy. And his model of a network of springlike bonds forming and breaking demonstrates how such networks can self-organize to better absorb and then dissipate energy. But the key challenge in OOL research lies in explaining how a system could extract energy from the environment and then direct it toward a sustained increase in its internal free energy. England’s experiments do not address this concern, for the energy absorbed is almost immediately dissipated away.

Experiments on chemically driven reaction networks do demonstrate a progression toward higher free-energy states, but at the expense of incorporating in various models the core components of a fully formed cell. All cells accomplish the goal of maintaining a high-free-energy state against opposing thermodynamic forces by employing complex molecular machinery and finely tuned chemical networks to convert one form of energy from the environment into high-energy molecules. The energy from the breakdown of these energy-currency molecules is directed toward powering targeted chemical reactions. The energy coupling is accomplished through complex enzymes and other proteins composed of information-rich amino-acid sequences.42

England’s forcing terms are simply proxies for molecular engines and enzymes in the living cell. England directly compares the concentration dependences of the forcing terms to cellular metabolic pathways with feedback control.43 This research does not explain the OOL because it assumes that the central challenges of energy production and its coupling to specific sets of processes have already been solved.

England’s derived mathematical relationship for the ratio of the reverse to forward probabilities for transitions demonstrates that a system will tend to move in the direction of increasing total entropy. This is not controversial. Still, the formation of a cell represents a dramatic decrease in total entropy. England’s relationship can be rewritten using Jensen’s inequality and other straightforward steps to yield

$P_{Cell} < e^{\Delta S} e^{\beta \Delta Q}$,

where $P_{Cell}$ is the probability for a cell forming from simple chemicals.44 The ΔS term represents the corresponding drop in internal entropy. The ΔQ term represents the energy that must be absorbed from the environment during the formation of the cell. Both ΔS and ΔQ take on large negative values. Note how the probability equals those from the Evans–Searles and Crooks theorems multiplied together. The chance for a cell forming in a nonequilibrium system can now be estimated. Determining the drop in entropy associated with abiogenesis is extremely challenging, so few have attempted rigorous calculations. Morowitz performed a crude estimate of the entropy reduction in a cell associated with the formation of its macromolecules. His approximation was on the order of 0.1 cal/deg-gm.45 This quantity corresponds in a bacterium46 to an entropy reduction of greater than $10^{10}$ nats, which yields a probability, ignoring the heat term, of less than one in ten to the power of a billion.
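The straightforward steps behind this bound can be made explicit. Rearranging England’s relation gives $\pi_{\tau}[\mathrm{I} \to \mathrm{II}] = \pi_{\tau}[\overline{\mathrm{II}} \to \overline{\mathrm{I}}] / \langle e^{-\Delta S_{tot}} \rangle$. Jensen’s inequality, $\langle e^{-\Delta S_{tot}} \rangle \geq e^{-\langle \Delta S_{tot} \rangle}$, together with the fact that the reverse probability cannot exceed one, then yields

$\pi_{\tau}[\mathrm{I} \to \mathrm{II}] \leq e^{\langle \Delta S_{tot} \rangle} = e^{\Delta S + \beta \Delta Q}$,

which is the stated bound once $\pi_{\tau}[\mathrm{I} \to \mathrm{II}]$ is identified with $P_{Cell}$ and the total entropy change is split into its internal and heat terms.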

Calculating the component of the probability associated with the absorption of heat requires estimating the difference in free energy between the original molecules and a minimally complex cell, and evaluating the amount of work performed by plausible external drives which could have contributed to the free-energy increase. Many proposals have been offered for how various processes could assist in imparting the needed energy. Examples of hypothesized energy sources include meteorite crashes,47 moving mica sheets,48 shock waves,49 and proton gradients.50 None of these sources could have generated more than a tiny fraction of the required free energy.

To illustrate the disparity, the power production density of the simplest known cell for only maintenance is on the order of 100,000 μW/g.51 In any OOL scenario, the protocell would have to generate at least this amount in the latter stages52 just to overcome the thermodynamic drive back toward equilibrium, and even greater amounts would be required for replication.53 In contrast, a leading proposal for energy production centers on proton gradients in hydrothermal vents. Experimental simulations of vents under ideal conditions54 only generate small quantities of possible precursors to life’s building blocks, and the corresponding power production density is on the order of 0.001 μW/g.55 This quantity is considerably greater than what could practically be transferred to any stage of abiogenesis,56 yet it is still eight orders of magnitude too small even to prevent a protocell from quickly degrading back to simple, low-energy molecules.
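The eight-orders-of-magnitude gap is simple arithmetic on the two quoted power densities:

```python
import math

# Power densities quoted above (microwatts per gram).
cell_maintenance = 1.0e5   # simplest known cell, maintenance only
vent_production = 1.0e-3   # hydrothermal-vent simulations, ideal conditions

gap = math.log10(cell_maintenance / vent_production)
print(f"shortfall: {gap:.0f} orders of magnitude")
```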

All other proposals fare no better.

Consequently, most of the free-energy increase in cell formation would have required heat to be extracted directly from the environment. The minimum amount to form a bacterium was estimated by Morowitz at around $10^{-9}$ joules ($10^{10}$ eV).57 This value represents more than the equivalent, if scaled, of a bathtub of water at room temperature spontaneously absorbing enough heat from the surroundings to start boiling. The resulting probability for life to form, excluding the needed reduction in entropy, calculates to less than one in ten to the power of a hundred billion.

These minuscule probabilities do not improve if the process takes place in multiple steps separated by extended periods of time.58 In fact, the challenges actually increase if each step toward life does not proceed immediately after the previous one, for the chances of the system moving toward higher entropy (or lower free energy) are far greater than moving in a life-friendly direction. Any progress could be completely squandered by a few thermal fluctuations or chemical interactions.

England’s theoretical framework indicates, quantitatively, that the origin of life is just as improbable in systems driven away from equilibrium as in those near equilibrium.

His research illustrates the necessity for systems even remotely similar to life to possess the equivalent of both engines and enzymes. As a prime example, the springlike bond experiment incorporates a driving force that constantly adds energy to the system. This drive serves the equivalent role of the molecular engines in cells which convert external energy sources into energy-currency molecules. And the springs which transfer energy between the driven particle and the other particles in the network serve the equivalent role of enzymes which channel energy from the breakdown of the energy-currency molecules to target chemical reactions and other cellular functions. Both components are essential for the simulation to function, just as they are required to access the energy needed in nearly every stage of cell formation.

Enzymes are essential even for energetically favorable reactions, since uncatalyzed rates are too slow to drive cellular operations. Enzymes accelerate the reactions’ turnover rates by factors typically between $10^{8}$ and $10^{10}$, and the increase in many cases is significantly higher.59 Without enzymes, the concentration of a substrate would typically need to be millions of times greater to maintain a comparable reaction rate. This is not a plausible scenario.60 In addition, individual steps in the chemical pathways to synthesize life’s building blocks and then to link them together require multiple, mutually exclusive reaction conditions, so no environment could support more than a few required steps.61 The enzymes create the necessary nano-environments in their active sites to support their targeted reactions, so a multitude of diverse reactions can be maintained in the same cellular microenvironment simultaneously. They are required both to specify and power the correct set of processes.

### The Path Forward

A minimally complex free-living cell requires hundreds of tightly regulated enzyme-enabled reactions.62 If even one enzyme were missing, all metabolic processes would cease, and the system would head irreversibly back toward equilibrium. England’s research does not explain how such a complex, specified system could originate.63 Both the proteins that constitute an engine’s building blocks and enzymes represent sequences of amino acids that contain large quantities of functional information.64 The amino acids must be arranged in the right order in the same way the letters in a sentence must be arranged properly to convey its intended meaning. This arrangement is crucial for the chains to fold into the correct three-dimensional structures to properly perform their assigned functions.65 This information is essential for constructing and maintaining the cell’s structures and processes.66 Until origins researchers address the central role of information, the origin of life will remain shrouded in mystery.67

## Part 2. Jeremy England

Several things could be meant when calling something an explanation of life’s origins. The most straightforward would presumably be a collection of overwhelming evidence for a detailed, step-by-step account of all the molecular events that occurred—where and under what conditions—to build all the basic characteristics of cellular life as it exists now. Much as we crave stories like this, there is not likely to ever be one for this question. Almost by definition, the dance of life’s earliest molecular precursors would presumably have been so fragile and minuscule that the prospect of decisively interpreting physical materials in the present into some kind of reconstructed picture of the past is frustratingly dim. Consider: DNA in water falls apart spontaneously on the timescale of millions of years, and RNA and protein degrade thousands of times faster. With all the chemical transformation that generally gets accomplished in geologic time, trying to run the movie backward on cellular life based on reasoned inferences quickly fans out into widely divergent speculations. This does not mean scientists cannot speculate. But it does definitively rule out settling on one, well-justified consensus.

If scientists will not get to paleontologize life’s beginnings in the way they have the dinosaurs, then what kind of success might be more within reach? The easiest one is the philosophical parlor trick: if the universe is large and diverse enough, then many things can happen, and people capable of asking where life came from will be found exclusively in the part of the universe where life did happen. So here we are, and the details of how are an afterthought.

It is worth examining what exactly is unsatisfying about this sort of dodge. Even if we accept its basic reasoning, the question we are really curious about can still be asked, namely, is our part of the universe rare or typical? Granted, life is here, but should that be thought of as a freak occurrence or a normal one? The essence of what we may most want to know about life has to do with its probability.

Imagine that someone claims life originated through ideal, unbiased, random thermal fluctuations that caused, through a fluke of unimaginably small likelihood, basic building blocks to combine and form a protein-based cell with a DNA-based genome. Such an explanation would be no explanation at all, because resorting to the effects of extremely unlikely events frees one to say that almost anything could have happened.

An acorn found under an oak tree is easy to take as a matter of course because I know that acorns start out high up in oak trees and that gravity pulls them down. The high probability of an acorn ending up at the foot of the tree that gave it birth makes all acorns thus disposed unremarkable. Life, by contrast, seems remarkable because we do not see it spring into being all the time. Its presence cannot be viewed as a commonplace. If our best naive ideas of how matter might get shaped into such exquisitely special arrangements make it seem like wildly rare events had to occur to get this result, then we are left feeling like we have yet to understand something fundamental.

It was from this point of view that I started thinking about the origins of life. The main way to make progress on the question has mostly to do with adjusting our sense of what is likely and possible. Life performs many actions for which its architecture seems to be highly specialized: it harvests energy, repairs and copies itself, predicts its world, and creatively adapts to environmental change. These feats are difficult to pull off if done by a random arrangement of building blocks. The proper question to ask then is: how does matter become arranged into the special shapes that are good at these actions?

Precisely on that point, I have been trying to argue that people should recognize a significant opportunity coming into view. If it could be shown that there are fairly generic physical conditions under which the hallmarks of lifelikeness become like acorns under the oak tree—that is, normal and expected occurrences in the conditions where they emerge—we would talk differently about what seems mysterious or not about life’s shadowy past. Right now, we can theorize about how the flow of energy through matter reshapes it over time. We also can carry out experiments that may serve as proofs of principle. Having done so to some extent, we may not know in more detail what happened a long time ago, but we may wonder less about it.

In his statement above, Brian Miller thoughtfully expresses the view that this way of thinking has not made much progress so far. In the course of making this case, he says a great many things that are true. He gives a fair, high-level account of some recent progress in thinking about nonequilibrium statistical mechanics. It is true that fluctuation theorems have opened the door to proving general relationships between dissipation and fluctuation that hold far from thermal equilibrium. It is also true that I have proposed a way of manipulating one of these results into a form that reveals part of the thermodynamics of lifelike self-organization. My interlocutor has to be given tremendous credit for engaging with the right equation in the right paper,68 which has sadly turned out to be an uncommon feat among those reacting skeptically to the idea of dissipative adaptation.69

Consider this equation:70

$\ln\left[\frac{\pi_{\tau}^{fwd}\left[\mathrm{I}\to \mathrm{II}\right]}{\pi_{\tau}^{fwd}\left[\mathrm{I}\to \mathrm{III}\right]}\right]=-\Delta \ln\left\langle \frac{p_{f}}{p_{bz}}\right\rangle +\ln\left[\frac{\pi_{\tau}^{rev}\left[\overline{\mathrm{II}}\to \overline{\mathrm{I}}\right]}{\pi_{\tau}^{rev}\left[\overline{\mathrm{III}}\to \overline{\mathrm{II}}\right]}\right]-\ln\left[\frac{\left\langle \exp\left(-\beta W_{d}\right)\right\rangle_{\mathrm{I}\to \mathrm{II}}^{fwd}}{\left\langle \exp\left(-\beta W_{d}\right)\right\rangle_{\mathrm{I}\to \mathrm{III}}^{fwd}}\right]$.

There is a lot to unpack here, because this is basically a generalization of the relationship between probability and free energy for a driven, nonequilibrium system. That more basic statement in thermal equilibrium would look like this:

$\mathrm{ln}\left[\frac{\pi \left[\mathrm{I}\mathrm{I}\right]}{\pi \left[\mathrm{I}\mathrm{I}\mathrm{I}\right]}\right]=-\beta \Delta {F}_{\mathrm{I}\mathrm{I},\mathrm{I}\mathrm{I}\mathrm{I}}$.

In both cases, the relations compare the relative likelihoods of different outcomes for the dynamic reshaping of matter under conditions where energy flows in and out of the system. In the latter, equilibrium case, however, the only energy allowed to flow is heat, whereas in the former, work energy is also supplied from external sources.
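The equilibrium relation can be illustrated numerically. In the sketch below, the temperature and the free-energy gap are arbitrary choices for the example, not values from the essay:

```python
import math

# Boltzmann constant in J/K
k_B = 1.380649e-23

def probability_ratio(delta_F, T):
    """Relative likelihood pi[II]/pi[III] of two states whose free
    energies differ by delta_F = F_II - F_III (joules) at absolute
    temperature T, via ln(pi[II]/pi[III]) = -beta * delta_F."""
    beta = 1.0 / (k_B * T)
    return math.exp(-beta * delta_F)

# A gap of a few k_B*T already makes one outcome strongly dominant:
T = 300.0                       # room temperature, kelvin
delta_F = 3 * k_B * T           # state II sits 3 k_B*T above state III
print(probability_ratio(delta_F, T))   # ~ exp(-3), about 0.05
```

A gap of zero gives equal likelihoods; a gap of a few thermal energy units suppresses the higher-free-energy state exponentially, which is the whole content of the equilibrium statement.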

The simpler equilibrium equation is valuable for understanding the behavior of different fluids and solids. In this balance, comparing the effect of entropy to that of energy does not provide any single indication that applies in all circumstances: if the entropy change for a process is large enough, higher-entropy states are more likely, but if the energy drop for the process is large enough, then low-energy, and possibly low-entropy, states may be more likely. In a given system, this tradeoff may be controlled by the temperature. This is why ice is stable when it is cold, but becomes less stable than the higher-entropy liquid state when the temperature rises.
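The ice example can be made concrete. The sketch below uses rounded textbook molar values for the freezing of water (they are approximations introduced for this illustration, not figures from the essay):

```python
# The energy-entropy tradeoff in the text: Delta_F = Delta_U - T * Delta_S
# decides which phase is more likely at a given temperature.
# Approximate molar values for freezing (liquid -> ice) near 1 atm:
DELTA_U_FREEZE = -6010.0   # J/mol: energy released on freezing
DELTA_S_FREEZE = -22.0     # J/(mol K): entropy lost on freezing

def freezing_free_energy(temp_kelvin):
    """Free-energy change (J/mol) for liquid -> ice at a given temperature.
    Negative means ice is the thermodynamically favored phase."""
    return DELTA_U_FREEZE - temp_kelvin * DELTA_S_FREEZE

for temp in (250.0, 273.0, 300.0):
    print(temp, freezing_free_energy(temp))
# At low temperature the energy drop dominates and ice is favored
# (negative Delta_F); at high temperature the liquid's entropy wins.
```

With these numbers the sign of the free-energy change flips at roughly 273 K, which is exactly the crossover the paragraph above describes.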

The example illustrates how a general thermodynamic relation of this kind can call attention to different scenarios that might happen. Sometimes, with detailed factors being what they are, the likely outcome may be strongly dominated by what increases entropy, as with a vapor. Other times, the outcome will be dominated by a low-entropy outcome that lowers energy, as with ice. It is uncertain which will occur until we get down to brass tacks with the particular system at hand.

The generalization of free energy in its relation to probability that is shown above for a nonequilibrium scenario plays the same kind of role. It indicates, for instance, that in the far-from-equilibrium regime, sometimes what is most likely to happen is what can happen fastest, which is a move to a new state that is in relatively close proximity to the starting point. It also indicates that sometimes the likely outcome will be determined by how much more work reliably gets absorbed along the way.

There are a number of popular ways to misunderstand this statement. At least one of these misunderstandings comes across in Miller’s critique. First, the equation does not say that likely outcomes always have exceptional histories of reliably high absorption of work energy from the environment, any more than the definition of equilibrium free energy requires that all thermal equilibrium states of water be ice. More importantly, the equation also does not indicate anything about the rate of energy absorption in the final outcome state: the history of work absorbed during dynamical evolution is not always equivalent to the propensity to absorb work after evolutionary change has occurred.

Given the relationship it expresses between work absorption and probability, the equation suggests there may be a scenario in which a low-entropy, high-energy, stable state of matter is a likely outcome of the dynamics; if so, the path to that state must involve absorbing and dissipating a great deal of energy from the environment. This scenario would be most interesting in cases where absorbing energy from the environment is difficult, because then it might be possible to see the signature of this energy flow in the exceptional capabilities of the structure that emerges.

I am being deliberately vague by using the word “signature,” because these events still break into cases.71 One way of seeing the mark of exceptionality in the energy-absorption properties of a system is to notice it is much better at extracting energy from the environment than a random rearrangement of its constituent parts. Another way would be to notice when the system becomes exceptionally bad at the same activity. In truth, both scenarios are realizable, and each best embodies a separate aspect of what is impressive about the physical properties of living things. Paradoxically, life is good at harvesting fuel from its environment,72 but also good at controlling the flow of energy through it so that it does not get shaken to smithereens by all the random physical insults that might result.73 Overall, all the general thermodynamic statement suggests is that, using intuition that the physics provides, we should try to define scenarios in which lifelike behaviors may become likely to emerge.

Returning to the example of ice, the fact that the lowering of energy can “pay for” a decrease in entropy at constant temperature does not provide any direction for how to accomplish such a goal. To make crystallization of this sort happen, the particles involved have to happen to have lower energy when they are closer together and carefully coarranged in space. One could have also imagined particles whose energy is lowered when they are farther apart, and there would be thermodynamic things to say about that. But getting a crystal out of it is a less straightforward matter.

So, too, with things more impressively lifelike than crystals: the thermodynamic equation just provides a hint of what to explore. It shows that dissipative adaptation is permissible. It ultimately takes more work to devise specific mechanisms of self-organization that result from more detailed assumptions about how all the particles interact.74 Both simulations and experiments need to be done to establish more clearly when any lifelike behaviors might be expected to emerge from generic—“naive,” “inanimate”—initial conditions. What the first studies, by myself and others,75 have revealed so far is encouraging: finely tuned, low-entropy states in driven matter spring up all over the place. They seem mainly to require a diverse space of possible arrangements for their constituent parts and an external energy source that has some amount of detailed pattern to it, as any specifically colored radiation or specifically structured chemical would. This suggests that structures mimicking aspects of what life is good at might have been lying around in the tempestuous nonequilibrium world, acting as a toolbox for the first more lifelike thing to draw from. Still, all of this kind of speculation is much more a rallying cry than anything else, for the science of how lifelikeness emerges is just starting.

In the end, the most important thing to emphasize is that the time has come to start being empirical in this research. Back-of-the-envelope calculations of prohibitively great improbabilities like the one quoted from Morowitz in this piece’s partner have been around for a while. They invariably rely on straw-man assumptions. Let it be granted, once and for all: waiting for a thermal fluctuation at thermal equilibrium to slap together a living organism is not going to work.

So what? The first equation mentioned in my response implies that if scientists actually intend to compute the probability of forming a live cell, they will have to specify how long it takes, making reference to the kinetic factors controlling reaction rates and to the amount of energy absorbed and dissipated per unit of time by the system. If this sounds horribly complicated, it is. It should bring a humble understanding that such probabilities are unlikely ever to be computed accurately. Instead, our window onto thermodynamics should spur new empirical investigation. The theoretical relationship between probability and dissipated work is a thread that, if pulled harder, may yet unravel the whole tapestry. Coming into view is a gray spectrum of increasingly complex fine-tuning distinguishing the dust of the earth from life. Already, examples exist of structures that can form rapidly at high energy and low entropy and last for a long time, so long as they are fed with more energy of the type that generated them. That may not be life, but it surely is reason to hold back grand declarations about what is likely or impossible until we have better explored a new frontier.

1. Physicists often debate the value in using the term “disorder” to depict entropy to the public. Some prefer to describe the increase in entropy as the “spreading” of energy. A particularly useful definition in the context of biology is the measure of uncertainty in a system, or the Shannon measure of information. See Arieh Ben-Naim, “Entropy, Shannon’s Measure of Information and Boltzmann’s H-Theorem,” Entropy 19, no. 2 (2017): 48, doi:10.3390/e19020048.
2. Edith Marie Sevick et al., “Fluctuation Theorems,” Annual Review of Physical Chemistry 59 (2008), doi:10.1146/annurev.physchem.58.032806.104555.
3. Denis Evans, Debra Searles, and Stephen Williams, Fundamentals of Classical Statistical Thermodynamics: Dissipation, Relaxation, and Fluctuation Theorems (Weinheim: Wiley-VCH, 2017), 50–53. The standard dissipation function is equated with the internally generated entropy in nats.
4. James Reid et al., “The Dissipation Function: Its Relationship to Entropy Production, Theorems for Nonequilibrium Systems and Observations on Its Extrema,” in Beyond the Second Law: Entropy Production and Non-Equilibrium Systems, ed. Roderick Dewar et al. (New York: Springer-Verlag, 2014), 31–47.
5. Some cited papers use X(t) instead of Γ.
6. The expansion term, also known as the contraction term, is proportional to the heat exchange with the surrounding thermal bath. Sevick et al., “Fluctuation Theorems,” 14.
7. Jeffrey Phillips, “The Macro and Micro of It Is That Entropy Is the Spread of Energy,” The Physics Teacher 54, no. 6 (2016): 344–47, doi:10.1119/1.4961175.
8. Dissipative fields do not change the ground state energy of the system. Instead, the energy which enters the system due to these fields can completely turn into heat and diffuse out into the surrounding environment. Examples include the application of an electric field to a resistor or of light to a solution of interacting chemicals. In contrast, the energy imparted due to the application of a non-dissipative or elastic field can be stored in the system as potential energy. For instance, the application of an electric field to solid sodium chloride increases the potential energy corresponding to intermolecular forces between the constituent molecules. See Evans, Searles, and Williams, Fundamentals of Classical Statistical Thermodynamics, 23.
9. The probability density ratio p(s)/p(–s) can be converted to an upper bound on the probability of the entropy, s, taking on the value –A or less by recognizing that p(s) < p(–s)e–A for values of s less than –A. Therefore, P(s < –A) < P(s > A)e–A < e–A.
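This bound can be sanity-checked against a distribution that satisfies the fluctuation relation p(s)/p(–s) = es exactly: a Gaussian with mean μ and variance 2μ. The choice μ = 2 below is arbitrary, made only for the illustration:

```python
import math

# Verifying P(s <= -A) <= e^(-A) for a Gaussian entropy-production
# distribution with mean mu and variance 2*mu, which obeys
# p(s)/p(-s) = e^s exactly.

def gaussian_cdf(x, mean, variance):
    """P(X <= x) for a normal random variable."""
    z = (x - mean) / math.sqrt(variance)
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

mu = 2.0   # arbitrary positive mean entropy production
for A in (0.5, 1.0, 2.0, 4.0):
    tail = gaussian_cdf(-A, mu, 2.0 * mu)  # chance entropy drops by A or more
    assert tail <= math.exp(-A)            # the exponential suppression above
    print(A, tail, math.exp(-A))
```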
10. Denis Evans and Debra Searles, “The Fluctuation Theorem,” Advances in Physics 51, no. 7 (2002): 1,529–85, doi:10.1080/00018730210155133. Some physicists have argued that the proposed solution is incomplete, and it must be combined with the fact that our universe started in a low-entropy state. See Paul Davies, “The Arrow of Time,” Astronomy and Geophysics 46, no. 1 (2005): 1.26–1.29, doi:10.1046/j.1468-4004.2003.46126.x.
11. The change in the Helmholtz free energy (F) equals the change in the internal energy of a system (U) minus the temperature times the change in entropy (S): ΔF = ΔU – TΔS. In a system where the volume is held constant, ΔF for spontaneous processes is always negative. If the pressure is held constant, then spontaneous processes correspond to a negative change in the Gibbs free energy (G) which is equal to the change in enthalpy (H) minus the temperature times the change in entropy: ΔG = ΔH – TΔS. The change in enthalpy represents the change in energy of a transition adjusted for the work performed on the environment due to a change in volume: ΔH = ΔU + PΔV. Positive changes in enthalpy represent heat being absorbed from the environment and negative changes represent heat being released. Physicists and chemical engineers typically work with the Helmholtz free energy while chemists typically work with the Gibbs free energy.
12. The same general approach can be applied to the Crooks fluctuation theorem as was applied to the Evans–Searles fluctuation theorem to demonstrate that the probability of the heat being absorbed with a value of Q or greater drops exponentially with Q.
13. Arthur Ashkin developed the technique of using a highly focused laser beam to apply a force upon microscopic particles to physically hold and move them. He won a Nobel Prize for developing the “optical tweezers” technique and applying it to biomolecules. His seminal paper is Arthur Ashkin et al., “Observation of a Single-Beam Gradient Force Optical Trap for Dielectric Particles,” Optics Letters 11, no. 5 (1986): 288–90, doi:10.1364/OL.11.000288.
14. Delphine Collin et al., “Verification of the Crooks Fluctuation Theorem and Recovery of RNA Folding Free Energies,” Nature 437, no. 7,056 (2005): 231–34, doi:10.1038/nature04061.
15. Jan Liphardt et al., “Equilibrium Information from Nonequilibrium Measurements in an Experimental Test of Jarzynski’s Equality,” Science 296, no. 5,574 (2002): 1,832–35, doi:10.1126/science.1071152.
16. James Reid et al., “Reversibility in Nonequilibrium Trajectories of an Optically Trapped Particle,” Physical Review E 70, no. 1 (2004): 016111, doi:10.1103/PhysRevE.70.016111.
17. Kumiko Hayashi, Mizue Tanigawara, and Jun-ichi Kishikawa, “Measurements of the Driving Forces of Bio-Motors Using the Fluctuation Theorem,” Biophysics (Nagoya-Shi, Japan) 8 (2012): 67–72, doi:10.2142/biophysics.8.67.
18. Nikolay Perunov, Robert Marsland, and Jeremy England, “Statistical Physics of Adaptation,” Physical Review X 6, no. 021036 (2016), doi:10.1103/PhysRevX.6.021036.
19. Jeremy England, “What Is Life-Lecture: Jeremy England,” Karolinska Institutet, September 11, 2014, YouTube video, 1:00:16.
20. Perunov, Marsland, and England, “Statistical Physics of Adaptation,” 4.
21. Gavin Crooks, “Entropy Production Fluctuation Theorem and the Nonequilibrium Work Relation for Free Energy Differences,” Physical Review E 60, no. 3 (1999): 2,721–26, doi:10.1103/PhysRevE.60.2721.
22. Udo Seifert, “Entropy Production along a Stochastic Trajectory and an Integral Fluctuation Theorem,” Physical Review Letters 95, no. 4 (2005): 040602, doi:10.1103/PhysRevLett.95.040602.
23. Jeremy England notes an error in the equation on the following line:
Miller has mistakenly taken the average outside the natural log and tried to simplify into Shannon entropy, when in fact this is not accurate, in general. The Shannon entropy is just the first term in a cumulant expansion of this ensemble average of the probability. More succinctly, ln <exp ln x> is not generally equal to <ln x>.

24. As an analogy imagine a flat road, which ends in a hill that rises to a certain height and then descends to an altitude below the original road. The drop in altitude between the initial road and final road would correspond to the amount of energy released during the transition and, thus, the tendency for the transition to proceed in that direction. The height of the hill above the initial road corresponds to the activation barrier or energy required to initiate the transition. If the barrier is very high, a transition would not be likely to move forward or in reverse regardless of how much lower the energy of the final state would be in comparison with the initial state. If the barrier is small, then both the forward and reverse transitions would be easier.
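The road-and-hill analogy can be put into Arrhenius form. The barrier heights below are illustrative, dimensionless numbers, not values from the essay:

```python
import math

def arrhenius_rate(barrier, kT=1.0):
    """Rate of hopping over an energy barrier, in arbitrary units."""
    return math.exp(-barrier / kT)

hill = 8.0    # height of the hill above the initial road
drop = 3.0    # how far the final road sits below the initial one

k_forward = arrhenius_rate(hill)         # climb seen from the initial side
k_reverse = arrhenius_rate(hill + drop)  # taller climb from the lower side

# The ratio of the two rates depends only on the altitude drop ...
print(k_forward / k_reverse)   # exp(3), about 20: forward is favored
# ... while the absolute rates are suppressed by the barrier height,
# so a very high hill makes both directions slow regardless of the drop:
print(k_forward)               # exp(-8), about 3e-4
```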
25. Perunov, Marsland, and England, “Statistical Physics of Adaptation.”
26. Tal Kachman, Jeremy Owen, and Jeremy England, “Self-Organized Resonance during Search of a Diverse Chemical Space,” Physical Review Letters 119, no. 3 (2017): 038001, doi:10.1103/PhysRevLett.119.038001.
27. Natural frequencies for the system to oscillate.
28. Jordan Horowitz and Jeremy England, “Spontaneous Fine-Tuning to Environment in Many-Species Chemical Reaction Networks,” Proceedings of the National Academy of Sciences 114, no. 29 (2017): 7,565–70, doi:10.1073/pnas.1700617114.
29. Christian Van den Broeck, “Stochastic Thermodynamics: A Brief Introduction,” Proceedings of the International School of Physics “Enrico Fermi, 184 (2014): 155–93, doi:10.3254/978-1-61499-278-3-155.
30. The rate of energy dissipated into the environment is proportional to the rate of a reaction times the difference in free energy between the reactants and the products.
31. Fred Hoyle, The Intelligent Universe (London: Michael Joseph, 1983). The calculation was based on the minimum required number of proteins in a minimally complex cell, and the probability of a functional protein emerging from a random sequence of amino acids.
32. Terrell Hill, An Introduction to Statistical Thermodynamics (Mineola, New York: Dover Publications, 1987), 26–29.
33. Jack Trevors and David Abel, “Chance and Necessity Do Not Explain the Origin of Life,” Cell Biology International 28, no. 11 (2004): 729–39, doi:10.1016/J.CELLBI.2004.06.006.
34. Xi Zhang et al., “Water’s Phase Diagram: From the Notion of Thermodynamics to Hydrogen-Bond Cooperativity,” Progress in Solid State Chemistry 43, no. 3 (2015): 71–81, doi:10.1016/j.progsolidstchem.2015.03.001.
35. Paul Davies, Elisabeth Rieper, and Jack Tuszynski, “Self-Organization and Entropy Reduction in a Living Cell,” BioSystems 111, no. 1 (2013): 1–10, doi:10.1016/j.biosystems.2012.10.005.
36. Michael Kaufmann, “On the Free Energy That Drove Primordial Anabolism,” International Journal of Molecular Sciences 10, no. 4 (2009): 1,853–71, doi:10.3390/ijms10041853.
37. Byung Chan Eu and Mazen Al-Ghoul, Chemical Thermodynamics: With Examples for Nonequilibrium Processes (Singapore: World Scientific, 2010), 93.
38. Harold Morowitz, Energy Flow in Biology (Oxford: Ox Bow Books, 1979), 66.
39. Atchara Sirimungkala et al., “Bromination Reactions Important in the Mechanism of the Belousov–Zhabotinsky System,” Journal of Physical Chemistry A 103, no. 8 (1999): 1,038–43, doi:10.1021/jp9825213.
40. Gregoire Nicolis and Ilya Prigogine, Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations (New York: Wiley, 1977).
41. Brian Johnson and Sheung Kwan Lam, “Self-Organization, Natural Selection, and Evolution: Cellular Hardware and Genetic Software,” BioScience 60, no. 11 (2010): 879–85, doi:10.1525/bio.2010.60.11.4.
42. Bonnie Strait and T. Gregory Dewey, “The Shannon Information Entropy of Protein Sequences,” Biophysical Journal 71, no. 1 (1996): 148–55, doi:10.1016/S0006-3495(96)79210-X. For a discussion on how cellular information drives the reduction of entropy, see Yaşar Demirel, “Information in Biological Systems and the Fluctuation Theorem,” Entropy 16, no. 4 (2014): 1,931–48, doi:10.3390/e16041931.
43. For a computational analysis of regulation in cellular metabolism, see Ralf Steuer and Björn Junker, “Computational Models of Metabolism: Stability and Regulation in Metabolic Networks,” Advances in Chemical Physics 142 (2008): 105–251, doi:10.1002/9780470475935.ch3.
44. Jensen’s inequality states that, for a convex function, the average of the function of a variable is greater than or equal to the function of the average of the variable. Therefore, the average of the exponential of the total entropy term can be bounded from below by the exponential of the average: ⟨e–ΔStot⟩ > e–⟨ΔStot⟩ = e⟨–ln[pi]⟩–⟨–ln[pf]⟩–⟨βΔQ⟩. The average of the negative log of the probability for microstates is the entropy in units where the Boltzmann constant has been set to 1. The term then reduces to e–ΔS–⟨βΔQ⟩ = e–ΔSe–⟨βΔQ⟩. The average symbol around ΔQ will be dropped, so ΔQ will now represent the average heat absorbed over possible microstate trajectories during the transition. The relationship can be inverted, so the ratio of the probability for the transition moving forward to moving backward is then less than eΔSeβΔQ. The forward transition is assumed to represent simple chemicals turning into a cell. The same approach applied to the Evans–Searles fluctuation theorem can then be applied to this relationship to convert the ratio of the probabilities to the probability for a transition to a state consistent with life—a state whose decrease in total entropy is comparable to or greater than that required for the formation of a cell.
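The Jensen step in this note can be checked numerically: for the convex function e–x, the average of the function always exceeds the function of the average. The Gaussian samples below are arbitrary stand-ins for fluctuating total-entropy values:

```python
import math
import random

# Numeric check of Jensen's inequality for the convex function exp(-x):
# <exp(-x)> >= exp(-<x>) holds for any collection of samples.
random.seed(0)
samples = [random.gauss(1.0, 2.0) for _ in range(100_000)]

avg_of_exp = sum(math.exp(-x) for x in samples) / len(samples)
exp_of_avg = math.exp(-sum(samples) / len(samples))

print(avg_of_exp, exp_of_avg)
assert avg_of_exp >= exp_of_avg   # guaranteed by convexity of exp(-x)
```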
45. Morowitz, Energy Flow in Biology, 97.
46. The approximate weight of a bacterium is 10–12g which yields a drop in entropy on the order of 10–13 joules/degree. Ron Sender, Shai Fuchs, and Ron Milo, “Revised Estimates for the Number of Human and Bacteria Cells in the Body,” PLoS Biology 14, no. 8 (2016): e1002533, doi:10.1371/journal.pbio.1002533.
47. Carsten Bolm et al., “Mechanochemical Activation of Iron Cyano Complexes: A Prebiotic Impact Scenario for the Synthesis of α-Amino Acid Derivatives,” Angewandte Chemie International Edition 130, no. 9 (2018), doi:10.1002/anie.201713109.
48. Helen Greenwood Hansma, “Possible Origin of Life between Mica Sheets: Does Life Imitate Mica?” Journal of Biomolecular Structure & Dynamics 31, no. 8 (2013): 888–95, doi:10.1080/07391102.2012.718528.
49. Charles Cockell, “The Origin and Emergence of Life under Impact Bombardment,” Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 361, no. 1,474 (2006): 1,845–56, doi:10.1098/rstb.2006.1908.
50. Nick Lane, John Allen, and William Martin, “How Did LUCA Make a Living? Chemiosmosis in the Origin of Life,” BioEssays 32, no. 4 (2010): 271–80, doi:10.1002/bies.200900131.
51. The simplest known organism is Mycoplasma pneumoniae, and its energy production for maintenance is roughly 50,000 ATP/s. ATP molecules provide 30,000 J/mol of energy. The mass of a Mycoplasma cell is around 0.01 pg. Therefore, the energy production density is on the order of 100,000 μW/g.

Judith Wodke et al., “Dissecting the Energy Metabolism in Mycoplasma Pneumoniae through Genome-Scale Metabolic Modeling,” Molecular Systems Biology 9 (2013): 653, doi:10.1038/msb.2013.6. Victor Rodwell et al., Harper’s Illustrated Biochemistry (New York: McGraw-Hill Education, n.d.), 107. Laleh Nikfarjam and Parvaneh Farzaneh, “Prevention and Detection of Mycoplasma Contamination in Cell Culture,” Cell Journal 13, no. 4 (2012): 204.
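The arithmetic behind the power-density estimate in note 51 can be checked in a few lines. The 0.01 pg (10^-14 g) cell mass is the figure used for this estimate:

```python
# Order-of-magnitude check of the maintenance power density of a
# Mycoplasma-sized cell.
AVOGADRO = 6.022e23          # molecules per mole
ATP_PER_SECOND = 50_000      # maintenance ATP turnover
JOULES_PER_MOL_ATP = 30_000.0
CELL_MASS_GRAMS = 1e-14      # ~0.01 pg, the mass figure used in the estimate

power_watts = ATP_PER_SECOND * JOULES_PER_MOL_ATP / AVOGADRO
density_microwatts_per_gram = power_watts / CELL_MASS_GRAMS * 1e6

print(density_microwatts_per_gram)   # ~2.5e5, i.e. order of 100,000 uW/g
```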
52. Mycoplasma pneumoniae is a parasite, so it lacks many of the metabolic processes of free-living prokaryotes. As a result, the first cell would have required a larger genome, thus increasing energy requirements. Stephen Giovannoni et al., “Genetics: Genome Streamlining in a Cosmopolitan Oceanic Bacterium,” Science 309, no. 5,738 (2005): 1,242–45, doi:10.1126/science.1114057. In addition, if enzymes or other processes were less efficient, the required energy output would have also increased.
53. Michael Lynch and Georgi Marinov, “The Bioenergetic Costs of a Gene,” Proceedings of the National Academy of Sciences of the United States of America 112, no. 51 (2015): 15,690–95, doi:10.1073/pnas.1514974112.
54. Barry Herschy et al., “An Origin-of-Life Reactor to Simulate Alkaline Hydrothermal Vents,” Journal of Molecular Evolution 79, no. 5–6 (2014): 213–27, doi:10.1007/s00239-014-9658-4.
55. The reactor simulation generated an approximately 20 nM concentration of formaldehyde from hydrogen and carbon dioxide in 10 minutes in an approximately 1-liter vessel, and the entire reaction corresponds to two electrons, n = 2, increasing in reduction potential, Er, by less than 200 mV. The reduction potential can be converted to the free energy change for the production of one mole of formaldehyde using ΔG = nFEr. Carl Hamann, Andrew Hamnett, and Wolf Vielstich, Electrochemistry (Weinheim: Wiley-VCH, 2007), 78. The corresponding power generation density is on the order of 0.001 μW/g.
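The conversion ΔG = nFEr and the resulting power density in note 55 can be reproduced directly. The 1 kg of reactor contents used for the per-gram figure is an assumption of this sketch:

```python
# Checking note 55: free energy stored per mole of formaldehyde and the
# resulting power density of the origin-of-life reactor experiment.
FARADAY = 96_485.0              # C/mol
n_electrons = 2
reduction_potential = 0.2       # V, the quoted upper bound

delta_G = n_electrons * FARADAY * reduction_potential  # J/mol, ~38.6 kJ/mol

moles_formaldehyde = 20e-9 * 1.0   # 20 nM in a ~1 L vessel
duration_seconds = 10 * 60         # the 10-minute run
reactor_mass_grams = 1000.0        # ~1 kg of contents (assumed here)

power_watts = moles_formaldehyde * delta_G / duration_seconds
density_microwatts_per_gram = power_watts / reactor_mass_grams * 1e6

print(delta_G, density_microwatts_per_gram)   # ~0.001 uW/g, as the note states
```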
56. The only product of the reactor that could have assisted OOL was formaldehyde, which is a possible precursor to essential sugars. However, the quantities generated by the experiment were too small to drive any further life-friendly reactions. As a result, none of the converted energy would have supported cell formation. Equally problematic, the measured formaldehyde might not have even resulted from the proton gradient. See J. Baz Jackson, “The ‘Origin-of-Life Reactor’ and Reduction of CO2 by H2 in Inorganic Precipitates,” Journal of Molecular Evolution 85, no. 1–2 (2017): 1–7, doi:10.1007/s00239-017-9805-9. Compounding the problem, individual rungs in the pathways to synthesizing life’s building blocks and the subsequent stages leading to the first cell require multiple, mutually exclusive reaction conditions, so no environment could support more than a few required steps. See Norio Kitadai and Shigenori Maruyama, “Origins of Building Blocks of Life: A Review,” Geoscience Frontiers 9, no. 4 (2018): 1,117–53, doi:10.1016/j.gsf.2017.07.007.
57. Morowitz, Energy Flow in Biology, 65. The estimate was performed by calculating the difference between the average bond energies in a bacterium and those in the molecules in the environment of the ancient earth.
58. The entropy and the free energy are state functions, so they are path independent. The probabilities are exponentials of those functions. To appreciate the significance, imagine breaking the path from nonlife to life into three steps which constantly move toward lower entropy (or higher free energy). Then, –ΔS = –ΔS1 – ΔS2 – ΔS3, and the probabilities associated with random fluctuations driving each step would be e‑ΔS1, e‑ΔS2, and e–ΔS3. The chance of all three fluctuations taking place is then e‑ΔS1 ⋅ e‑ΔS2 ⋅ e‑ΔS3 = e‑ΔS1‑ΔS2‑ΔS3 = e‑ΔS. The probability for the three steps is the same as for one combined step.
59. Hans Bisswanger, Practical Enzymology (Weinheim: Wiley-Blackwell, 2011), 1.
60. Bryson Bennett et al., “Absolute Metabolite Concentrations and Implied Enzyme Active Site Occupancy in Escherichia Coli,” Nature Chemical Biology 5, no. 8 (2009): 593–99, doi:10.1038/nchembio.186.
61. Kitadai and Maruyama, “Origins of Building Blocks of Life: A Review.”
62. Tugce Bilgin and Andreas Wagner, “Design Constraints on a Synthetic Metabolism,” PLoS ONE 7, no. 6 (2012): e39903, doi:10.1371/journal.pone.0039903. Steuer and Junker, “Computational Models of Metabolism.”
63. For further discussion on the need for enzymes, see Leslie Orgel, “The Implausibility of Metabolic Cycles on the Prebiotic Earth,” PLoS Biology 6, no. 1 (2008): e18, doi:10.1371/journal.pbio.0060018. Morowitz promotes metabolism-first models, but he acknowledges the extreme difficulty of driving the right reactions without enzymes:
Networks of synthetic pathways that are recursive and self-catalyzing are widely known in organic chemistry, but they are notorious for generating a mass of side products, which may disrupt the reaction system or simply dilute the reactants, preventing them from accumulating within a pathway. The important feature necessary for chemical selection in such a network, which remains to be demonstrated, is feedback-driven self-pruning of side reactions, resulting in a limited suite of pathways capable of concentrating reagents as metabolism does. The search for such self-pruning is one of the most actively pursued research fronts in Metabolism First research.
James Trefil, Harold Morowitz, and Eric Smith, “The Origin of Life: A Case Is Made for the Descent of Electrons,” American Scientist 97, no. 3 (2009): 208, doi:10.1511/2009.78.206.
64. For a discussion on the measure of meaningful information in proteins, see Winston Ewert, William Dembski, and Robert Marks II, “Algorithmic Specified Complexity,” in Engineering and the Ultimate: An Interdisciplinary Investigation of Order and Design in Nature and Craft, ed. Jonathan Bartlett, Dominic Halsmer, and Mark Hall (Broken Arrow: Blyth Institute Press, 2014), 131–49.
65. Arnaud Pocheville, Paul Griffiths, and Karola Stotz, “Comparing Causes: An Information-Theoretic Approach to Specificity, Proportionality and Stability” (conference paper, 15th Congress of Logic, Methodology, and Philosophy of Science, Helsinki, Finland, August 2015).
66. Demirel, “Information in Biological Systems.” Researchers such as Yaşar Demirel have employed an augmented form of the fluctuation theorem with the exponential side of the equation adjusted to es–ΔI. The ΔI term represents the change in the mutual information between the start and end of a cellular process directed by the information in a repository. The starting value is the mutual information between the information repository and the starting materials, and the end value is the mutual information between the repository and the final products. The mutual information change relates to the drop in entropy between the molecular precursors and the end products. Entropy production and mutual information are treated on an equal footing, so the theoretical minimum energy input needed for the information processing can be assessed. The cell constantly needs to replace cellular parts, so the corresponding information-related energy requirements exacerbate the free-energy hurdle to OOL.
67. Sara Imari Walker and Paul Davies, “The ‘Hard Problem’ of Life,” in From Matter to Life: Information and Causality, ed. George Ellis, Paul Davies, and Sara Imari Walker (Cambridge: Cambridge University Press, 2017), 19–37.
68. Jeremy England, “Dissipative Adaptation in Driven Self-Assembly,” Nature Nanotechnology 10, no. 920 (2015): 919–23, doi:10.1038/nnano.2015.250.
69. For the record, dissipative adaptation was introduced in England, “Dissipative Adaptation in Driven Self-Assembly”; Perunov, Marsland, and England, “Statistical Physics of Adaptation”; Kachman, Owen, and England, “Self-Organized Resonance.” It is not mentioned anywhere in Jeremy England, “Statistical Physics of Self-Replication,” Journal of Chemical Physics 139, no. 121,923 (2013), doi:10.1063/1.4818538.
70. Quoted from Perunov, Marsland, and England, “Statistical Physics of Adaptation,” 6.
71. Jeremy England, “Dissipative Adaptation in Driven Self-Assembly.” Perunov, Marsland, and England, “Statistical Physics of Adaptation.” Kachman, Owen, and England, “Self-Organized Resonance.”
72. Kachman, Owen, and England, “Self-Organized Resonance.”
73. Hridesh Kedia et al., “Drive-Specific Adaptation in Disordered Mechanical Networks of Bistable Springs” (2019), arXiv:1908.09332v1.
74. Jacob Gold and Jeremy England, “Self-Organized Novelty Detection in Driven Spin Glasses” (2019), arXiv:1911.07216.
75. Kedia et al., “Drive-Specific Adaptation.” Gold and England, “Self-Organized Novelty Detection.” B. Schiener et al., “Nonresonant Spectral Hole Burning in the Slow Dielectric Response of Supercooled Liquids,” Science 274, no. 5,288 (1996): 752–54, doi:10.1126/science.274.5288.752. Koit Mauring, Indrek Renge, and Rein Avarmaa, “A Spectral Hole-Burning Study of Long-Wavelength Chlorophyll A Forms in Greening Leaves at 5 K,” FEBS Letters 223, no. 1 (1987): 165–68, doi:10.1016/0014-5793(87)80529-x. Nicolas Bachelard et al., “Emergence of an Enslaved Phononic Bandgap in a Non-Equilibrium Pseudo-Crystal,” Nature Materials 16, no. 8 (2017): 808–13, doi:10.1038/nmat4920. Chad Ropp et al., “Dissipative Self-Organization in Optical Space,” Nature Photonics 12, no. 12 (2018): 739–43, doi:10.1038/s41566-018-0278-1.

Brian Miller holds a PhD in physics from Duke University.

Jeremy England is a Principal Research Scientist at the Georgia Institute of Technology.