The Fourth Law, Part 3. [6/29/03]

 

Thermodynamics

 

I find the subject of thermodynamics fascinating for a couple of reasons.  First, it seems to lead inexorably into discussions of quantum mechanics and information theory, not to mention biology, and is thereby a prime candidate for the umbrella science that informs the other sciences.  Second, the principles of thermodynamics seem to be ignored or poorly understood by most scientists, and those few who have focused on the field appear to be in disagreement on a number of key issues.

 

This is perhaps due to the fact that relatively little effort has gone into theoretical or empirical research in the field since the nineteenth-century work of Maxwell, Boltzmann and Gibbs, even though much of Einstein’s work was based on thermodynamics and Planck wrote a textbook on the subject.  Also, since the decline of the steam engine, the practical imperatives of thermodynamics may have become less pressing.  In addition, the difficulties of measuring thermodynamic quantities such as entropy and free energy have no doubt discouraged experimentation.

 

As an example of these difficulties, Jaynes disputes Haynie’s contention that biological systems are isothermal.  For instance, Haynie states that “cells cannot do work by heat transfer because they are isothermal systems.” 1  By contrast, Jaynes asserts that muscle cells are ‘powered by tiny “hot spots” of molecular size, as hot as the sun.’ 2  He attributes the isothermal illusion to the coarseness of the devices used to measure temperature.

 

Needless to say, this difference in perspective is both huge and of practical significance, and Jaynes speculates that it lies behind much of the misunderstanding of thermodynamics by biologists (which is probably being generous).

 

Second Law Locality

 

Another example of the disparities in the literature relates to the locality of the Second Law.  Haynie in his textbook writes that “The Second Law requires only that any process resulting in a decrease in entropy on a local level must be accompanied by an even larger increase in the entropy of the surroundings.”3  However, Prigogine in his textbook says that the entropy production dS_k/dt must be positive in every part k of a system of n parts.  In more general terms, he states that both the First and Second Laws must be local to be compatible with relativity.4  Haynie’s scenario amounts to instantaneous action at a distance, which the theory of relativity prohibits.
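
For reference, the local bookkeeping behind Prigogine’s statement can be written out explicitly; this is the standard textbook decomposition, in my notation, not a quotation from either source.

```latex
% Local form of the Second Law: the entropy change of each part k splits
% into an exchange term and a production term, and the production term is
% non-negative in every part separately.
\[
  dS_k = d_e S_k + d_i S_k ,
  \qquad
  \frac{d_i S_k}{dt} \ge 0 \quad (k = 1, \dots, n).
\]
% Only the exchange term d_e S_k (entropy carried by flows of heat and
% matter) can be negative, so a local decrease in entropy must be paid for
% by outflow at that location, not by a compensating increase far away.
```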

 

The implications of the above are interesting, since it means that on every scale, no matter how small, entropy is increasing all the time!  But if this is so, how can “order” be generated in the face of the Second Law?  To begin with, Jaynes rejects the common practice of equating order/disorder with entropy, a usage borrowed from information theory, and agrees with Maxwell, whom he credits with the assertion that those terms are only expressions of human aesthetic judgments.5

 

Furthermore, every irreversible process on any scale is dissipative, converting energy to work and waste heat.  This is necessarily accompanied by a net increase in entropy as a consequence of conservation of energy.  Work is expressed as kinetic energy or can be stored as potential energy, represented by what I prefer to call structure (instead of order).  Information is one form of potential energy (specified as information by virtue of human subjective judgment, of course).
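
The energy and entropy bookkeeping behind that claim is just the standard pairing of the First Law with the Clausius inequality, stated here in textbook form rather than quoted from the sources.

```latex
% First Law: the energy passing through an irreversible process splits into
% work and waste heat.  Clausius: entropy can only be produced, never destroyed.
\[
  dE = dW + dQ ,
  \qquad
  dS \ge \frac{dQ}{T} ,
\]
% with equality only for a reversible process.  A dissipative process
% therefore produces a net entropy increase even while part of the energy
% is banked as work, structure, or information.
```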

 

Minimum vs. Maximum Entropy Production

 

This leads to a third disparity in viewpoint, this time between Prigogine and Jaynes.  At issue is Prigogine’s principle of minimum entropy production, which states that for a nonequilibrium stationary state, energy will flow through the system in such a way as to minimize the production of entropy dS/dt.6  This assertion, however, is contradicted by Jaynes, who claims that it is heat production dQ/dt that is being minimized instead, and that this is accounted for by the First Law, with the Second Law superfluous in this regard.7
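
It is worth noting that in a strictly isothermal system the two prescriptions coincide, so the disagreement only has teeth where temperature varies from point to point, as in Jaynes’ molecular hot spots.  A one-line check, using the relation dQ = T dS introduced below:

```latex
% With heat production tied to entropy production by dQ = T dS, the two
% rates differ only by the local temperature:
\[
  \frac{dQ}{dt} = T \, \frac{dS}{dt} .
\]
% If T is uniform and constant, minimizing dQ/dt and minimizing dS/dt are
% the same prescription; where T varies from place to place, the two
% principles can pick out different states.
```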

 

Jaynes’ view is consistent with my Fourth Law principle of maximum efficiency, where Efficiency = 1 – T dS / dH, with dS the entropy production and dH the energy added to the system.  Since heat production dQ = T dS, minimizing heat production maximizes efficiency for a given energy input.  I derived the above equation for efficiency without recourse to either Haynie or Prigogine, since neither of them mentions efficiency in the context of biological systems.  On the other hand, Jaynes, as I later discovered, evaluated the efficiency of muscle activity by deriving his own ad hoc equation for efficiency based on a rationale which paralleled mine:  Efficiency = work done / (work done + heat generated).8
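
In fact the two expressions are the same statement, provided the energy added to the system splits into work plus heat, dH = dW + dQ (an identity I am supplying here; neither source writes the comparison out).

```latex
% Starting from Efficiency = 1 - T dS/dH, with dQ = T dS and dH = dW + dQ:
\[
  \text{Efficiency}
    = 1 - \frac{T\,dS}{dH}
    = 1 - \frac{dQ}{dH}
    = \frac{dH - dQ}{dH}
    = \frac{dW}{dW + dQ} ,
\]
% which is exactly Jaynes' ratio of work done to (work done + heat
% generated).  Minimizing dQ at fixed dH therefore maximizes efficiency.
```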

 

Jaynes’ refutation of the minimum entropy production principle led him to formulate his principle of Maximum Entropy Production (MEP), which appears at first to be in polar opposition to my principle of maximum efficiency (for isothermal systems my principle becomes one of entropy minimization).  However, a closer look reveals the apparent contradiction to be an apples vs. oranges comparison, one that leads to some interesting possibilities.

 

For a closed system, equilibrium is defined as the state at which entropy production vanishes, which corresponds to the entropy of the system being at a maximum.  This is consistent with Jaynes’ MEP, which he equates with Gibbs’ “strong form” of the Second Law: entropy not only tends to increase, but “will increase, to the maximum value permitted by the constraints imposed.”9  He further identifies the “entropy gradient” as the force driving the system back to equilibrium as rapidly as possible given the constraints on the system, thereby maximizing entropy.10

 

These ideas are consistent with classical dynamics, where equilibrium is the point at which the potential energy of a system reaches a minimum.  One can envision water flowing through a drainage basin, seeking the lowest possible level.  If the efficiency of the system is assumed constant, the relative proportions of work W and heat Q generated by the flow of energy through the system would be fixed.  Therefore, for a given level of efficiency, maximum entropy production would correspond to a minimum level of potential energy at equilibrium.
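
The proportionality being invoked can be made explicit.  With a fixed efficiency η (notation introduced here purely for illustration), every increment of potential energy released splits in fixed shares:

```latex
% Each increment of potential energy \Delta P released by the flow splits as
\[
  W = \eta \, \Delta P ,
  \qquad
  Q = (1 - \eta)\, \Delta P ,
  \qquad
  \Delta S = \frac{Q}{T} = \frac{(1 - \eta)\,\Delta P}{T} .
\]
% Summed over the approach to equilibrium, total entropy production is
% therefore greatest exactly when the potential energy ends up as low as
% the constraints allow.
```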

 

However, if the constraints are reversed by specifying a fixed final equilibrium state of maximum entropy, it is clear that the system will follow the path of maximum efficiency in approaching that state, consistent with the Fourth Law.  This must be so, since each individual particle in the system obeys the Principle of Least Action, just as light finds the shortest path through the curved geometry of space.  The sum of these trajectories must also be a minimum, resulting in maximum efficiency for the aggregate flow.  For an isothermal system, the principle of maximum efficiency translates into a requirement for minimum entropy production.
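
For reference, the principle being invoked is the standard stationary-action statement of classical mechanics, given here in its textbook form rather than taken from the sources.

```latex
% Each particle's trajectory makes the action integral stationary:
\[
  \delta \int_{t_1}^{t_2} L \, dt = 0 ,
  \qquad
  L = K - V ,
\]
% where K and V are the kinetic and potential energies.  The claim above is
% that summing these individually extremal trajectories yields an extremal,
% i.e. maximally efficient, aggregate flow.
```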

 

We therefore appear to have two forces working in opposite directions.  But how can they be reconciled?  To satisfy the First Law, the change in potential energy must be balanced by work and heat, dP = dW + dQ, which means that for a dynamical system the decrease in potential energy must equal the increase in kinetic energy.  The Hamiltonian description of a collection of particles specifies the phase state of each particle by its position and momentum (q, p).  Over an incremental time period, the sum of the dq’s corresponds to the aggregate change in the potential energy of the system, and the sum of the dp’s corresponds to the aggregate change in kinetic energy (dW + dQ).
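
In the standard Hamiltonian notation, with q for position and p for momentum, the split described above looks like this:

```latex
% One particle's Hamiltonian: total energy as kinetic plus potential,
\[
  H(q, p) = K(p) + V(q) ,
  \qquad
  \dot{q} = \frac{\partial H}{\partial p} ,
  \qquad
  \dot{p} = -\,\frac{\partial H}{\partial q} .
\]
% Over a small time step the changes in the positions q carry the change in
% potential energy V, while the changes in the momenta p carry the change
% in kinetic energy K; summed over all particles these give the aggregate
% dP and (dW + dQ) referred to above.
```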

 

The MEP generates an entropy gradient that acts on the positions q to minimize the potential energy.  The Fourth Law acts to minimize the kinetic energy associated with the momenta p.  The First Law creates tension between these two opposing forces.  But how is this tension resolved?  One possibility is a minimax solution.  This can be visualized as a 3-dimensional graph with either potential or kinetic energy on the vertical z axis and q and p on the horizontal x and y axes, respectively.  For a macro system composed of many particles, quantum uncertainty smears this minimax saddle into a statistical picture, with the various possible equilibrium points distributed around the point of maximum likelihood.  In large systems, points at or very near the point of maximum likelihood would be vastly more probable than the outliers, corresponding to a classical (point) description.  Therefore, in the macro world this saddle point provides a unique specification of the thermodynamic state for every location in space at any time.
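
As a purely illustrative sketch of this minimax picture, one can locate such a saddle numerically on a toy surface; the function below is a stand-in, not a physical Hamiltonian, and the variable names are mine.

```python
import numpy as np

# Toy minimax saddle: f is minimized along the q direction and maximized
# along the p direction.  The surface f(q, p) = q**2 - p**2 is purely
# illustrative; it stands in for the energy landscape described above.
q = np.linspace(-1.0, 1.0, 201)
p = np.linspace(-1.0, 1.0, 201)
Q, P = np.meshgrid(q, p, indexing="ij")
f = Q**2 - P**2

# For each fixed p, take the minimum over q; the saddle sits at the p that
# maximizes that minimum (the classic minimax construction).
min_over_q = f.min(axis=0)
p_star = int(min_over_q.argmax())
q_star = int(f[:, p_star].argmin())

print(f"saddle point near q = {q[q_star]:.2f}, p = {p[p_star]:.2f}")  # ~ (0, 0)
```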

 

Fourth Law Selection

 

From the above it is apparent that efficiency is optimized everywhere all the time.  How then can the Fourth Law act as a selection mechanism?  If the energy is flowing through a system composed of a single channel, there is no opportunity for “choice”.  Examples would be an apple falling from a tree, a light beam finding the shortest path through curved space, or water flowing through a canal (neglecting any turbulence, of course).  However, if a system is composed of two or more “channels”, then the energy flow will be distributed among the channels in such a way as to maximize efficiency, which is equivalent to minimizing heat production.
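
The selection rule implied here can be written as a simple allocation problem; the symbols below are introduced only for illustration and do not appear in the sources.

```latex
% Let channel i carry an energy flow \dot{E}_i with efficiency \eta_i.
% The total heat production is
\[
  \dot{Q} \;=\; \sum_i (1 - \eta_i)\, \dot{E}_i ,
  \qquad
  \sum_i \dot{E}_i = \dot{E}_{\mathrm{total}} ,
\]
% subject to the capacity of each channel.  Minimizing \dot{Q}
% (equivalently, maximizing overall efficiency) routes as much of the flow
% as possible through the channels with the largest \eta_i, which is the
% "selection" described above.
```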

 

Imagine a hypothetical species of bacteria in a Petri dish supplied with a constant nutrient source and waste sink.  Assume some slight random variation among individuals in their efficiency at converting nutrients into biomass, and that the population is large enough that all the nutrients are being utilized.  Each individual can then be considered a channel for transforming energy for any of three purposes:  growth, maintenance or reproduction.

 

The minor differences in efficiency would favor the growth of the most efficient until only large versions of the most efficient configuration remained, having squeezed the less efficient out of existence.  If structural constraints limited the size of the bacteria, then cell division (cloning) could substitute for further growth.  As Chris Davis points out, this need not be the result of some “reproductive imperative”, but could merely be a way for the cell to dispose of excess baggage.11  However, since reproduction would divert resources from growth, it would be reserved for situations in which further growth would diminish efficiency or run into architectural limitations.
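
A deliberately crude toy illustration of this competitive squeeze, with made-up numbers rather than anything drawn from the sources: two strains share a fixed nutrient flux in proportion to their biomass and convert it with slightly different efficiencies.

```python
import numpy as np

# Two hypothetical strains share a fixed nutrient influx in proportion to
# their biomass and convert it to new biomass with slightly different
# efficiencies.  A uniform turnover rate stands in for maintenance costs.
# The more efficient strain's share of the population tends toward 1.
efficiency = np.array([0.60, 0.58])   # made-up conversion efficiencies
biomass = np.array([1.0, 1.0])        # equal starting biomass
nutrient_flux = 1.0                   # nutrients supplied per time step
turnover = 0.05                       # fraction of biomass lost per step

for _ in range(5000):
    share = biomass / biomass.sum()   # nutrient share proportional to biomass
    biomass = biomass + efficiency * nutrient_flux * share - turnover * biomass

print("population fractions:", np.round(biomass / biomass.sum(), 3))
```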

 

In addition, if the cells can wear out then efficiency may be enhanced by diverting some resources to maintenance.  Since “perfect” maintenance is not likely to be the optimal solution, the efficiency of the cells would degrade over time, and at some point reproduction would again become a viable alternative, this time as a sort of planned obsolescence.

 

In the above scenarios, reproduction is seen as a strategy of last resort, not the imperative embraced by neo-Darwinism.  Instead of “reproductive success”, organisms are selected for whatever combination of growth, longevity and fecundity maximizes thermodynamic efficiency.  Certain genes will then tend to dominate the gene pool over others as a result of selection (Gould), not as the cause of selection (Dawkins), since the driver is the Fourth Law.

 

From the above it follows that evolution should favor size and longevity over fecundity.  For geometric reasons, large creatures tend to be more efficient energy channels and to dominate an ecosystem, with smaller creatures filling in the gaps (presumably fractally distributed).  A similar argument could perhaps justify a bias toward larger brains.  Similarly, longevity is a surer means of perpetuating one’s existence than reproduction.  (While this may not be true retrospectively, it’s certainly true going forward, which is the only visibility available to nature.)  A prominent example of longevity bias is the modern human population of industrialized societies, where ever-increasing resources are allocated to extending life span and enhancing the “quality of life” while reproductive rates have fallen below replacement and continue to decline.  A corollary is the Catholic Church’s position on contraception, which reflects the medieval condition of high mortality, a condition which now exists only in the poorest regions of the world.  As the mortality risk for the individual goes down for larger animals or high-tech humans, the relative utility of reproduction decreases.

 

Evolution and the Fourth Law

 

I will now revisit the gap between physics and biology as it pertains to the “Darwin wars”.  Historically, it appears that Darwin’s original selection criteria emphasized the differential survival of individual organisms.  Since the surviving organisms were the only ones to reproduce, reproductive success was indistinguishable from survival.  However, puzzles such as sexual display, which degraded survival prospects to enhance reproductive success, led to the concept of sexual selection.  When the neo-Darwinists picked up on this idea and combined it with genetics, the sole agent for selection became the “Selfish Gene”12 operating through the “Extended Phenotype”,13 as espoused by Richard Dawkins.  Stephen Jay Gould, by contrast, harkened back to the original concept of survival with his theory of “hierarchical selection” postulating selection on the levels of individuals, groups, species, genera, etc., relegating the genes to a “bookkeeping” function.14

 

While the two approaches may be orthogonal views of the same process, they are both self-referential.  Gould neglects to connect the various levels, eschewing “reductionism” in the best sense for “emergence” in the worst sense.  Dawkins connects the extended phenotype to the genes using “good” reductionism which terminates at the gene level, but leaves genetic fitness as its own reward, bereft of physical causality.

 

The Fourth Law resolves both dilemmas.  The hierarchical view is justified by providing a causal connection between the nested channels, since the evolution of more efficient individuals within a group renders the group more efficient, and the evolution of more efficient groups renders the species more efficient, etc.  For the neo-Darwinists, the selfish gene ceases to be a law unto itself, and instead acts in the service of the Fourth Law, which provides the causal link to physics.

 

Biological Complexity

 

While it is easy to see how the Fourth Law can result in the selection and maintenance of complex structures (if they are more efficient than simple structures, of course), it is more difficult to answer the question of how complex structures arise in the first place.  Complexity theory falls short, since the “self-organized” structures proposed aren’t sufficiently partitioned to act as discrete energy channels.  Prigogine’s dissipative structures15 and Kauffman’s autocatalytic processes16 raise the question of whether the innumerable varieties of inanimate complex structures observed on Earth and in the rest of the Universe presage other forms of life.  While such beings regularly come alive in Star Trek episodes, they don’t appear to be given much credence in the real world.

 

While speculation about the origins of life usually revolves around self-replicating molecules (probably as a result of the prevailing neo-Darwinist perspective), the thermodynamic approach points toward the creation of cells, and particularly cell walls, as the critical development, since these provide the requisite channels through which energy can be distributed.  This leads to the speculation that genes may have developed from “spandrels” co-opted from metabolic processes, implying that some form of life may have preceded reproduction.

 

Conclusion

 

The above conjectures illustrate how the thermodynamic approach can lead to a new way of looking at biological problems.  I believe this approach can also inform other issues, such as the “consciousness wars” between the emergentists (John Searle, weak AI) and reductionists (Daniel Dennett, strong AI) and the related issue of the natural limitations on computer modeling. 

 

 

Footnotes and References:

 

1 Donald Haynie, Biological Thermodynamics, Cambridge University Press, 2001, p. 57.

2 E. T. Jaynes, "Clearing Up Mysteries – The Original Goal", in Maximum Entropy and Bayesian Methods, J. Skilling, ed., Kluwer Academic Publishers, Dordrecht, 1989, p. 25.

3 Donald Haynie, Biological Thermodynamics, Cambridge University Press, 2001, p. 320.

4 Dilip Kondepudi and Ilya Prigogine, Modern Thermodynamics, John Wiley & Sons, 1998, p. 336.

5 E. T. Jaynes, "Clearing Up Mysteries – The Original Goal", in Maximum Entropy and Bayesian Methods, J. Skilling, ed., Kluwer Academic Publishers, Dordrecht, 1989, p. 24.

Jaynes’ aversion to this practice, however, is not shared by Haynie or Prigogine.

6 Dilip Kondepudi and Ilya Prigogine, Modern Thermodynamics, John Wiley & Sons, 1998, p. 392.

7 E. T. Jaynes, "The Minimum Entropy Production Principle", Ann. Rev. Phys. Chem. 1980, 31:581.

8 E. T. Jaynes, "Clearing Up Mysteries – The Original Goal", in Maximum Entropy and Bayesian Methods, J. Skilling, ed., Kluwer Academic Publishers, Dordrecht, 1989, p. 16.

9 E. T. Jaynes, "The Minimum Entropy Production Principle", Ann. Rev. Phys. Chem. 1980, 31:585.

10 E. T. Jaynes, "The Minimum Entropy Production Principle", Ann. Rev. Phys. Chem. 1980, 31:588.

11 Chris Davis, "Idle Theory: Sex as Counter-reproductive".

12 Richard Dawkins, The Selfish Gene, Oxford University Press, 1989.

13 Richard Dawkins, The Extended Phenotype, Oxford University Press, 1999.

14 Stephen Jay Gould, The Structure of Evolutionary Theory, The Belknap Press of Harvard University Press, 2002.

15 Dilip Kondepudi and Ilya Prigogine, Modern Thermodynamics, John Wiley & Sons, 1998.

16 Stuart Kauffman, Investigations, Oxford University Press, 2000.

 
