Physics claims to be the unique science of matter. All of the other sciences, physicists believe, are special cases of general relativity and quantum mechanics. It is this position of presumptive supremacy that drives the quest for a Theory of Everything.
The physicist or biologist seeks to discover the world of nature; the chemist creates a world of his own.^{1} The synthesis of new stable molecules represents an addition to the physical world.^{2} The physicist uses the language of mathematics. The chemist uses a language that is specific to him, one made up of symbols, formulas, diagrams, and arrows. The chemist is an artist of sorts, close in his own way to the mathematician, who is also able to create his own objects.^{3} Only their tools differ. Chemical activity produces about one million new molecules every year. In 1984, about ten million molecules were known; by 2015, one hundred million.^{4}
Whatever a chemist can imagine, he can make.^{5}
While physics and chemistry are very different sciences, they share some key concepts, such as energy, mass, spin, entropy, and information. They even employ many of the same mathematical tools, largely based on group theory and quantum mechanics.
Yet chemistry remains singular among the sciences.
Any chemical reaction presupposes a combination, dissociation, or substitution involving atoms or groups of atoms. Yet quantum mechanics cannot explain the periodic table of elements. True, the solution of the Schrödinger equation for the hydrogen atom predicts the existence of three quantum numbers. These, together with a spin quantum number, predict that the principal shells of an atom may contain a maximum of 2, 8, 18, or 32 electrons. This does seem to explain the period lengths in Dmitri Mendeleev’s table. But for a principal quantum number n = 3, there are 18 possible states; only 8 of them are used going from sodium to argon. To explain this experimental fact, chemists appeal to Erwin Madelung’s rule, which cannot be derived from quantum physics.^{6} Madelung’s rule stipulates that orbitals are filled in order of increasing sum n + L of the principal quantum number n and the secondary quantum number L; for equal sums, the orbital with the lower n fills first. Hence 4s (n + L = 4) is filled before 3d (n + L = 5). Quantum mechanics by itself cannot therefore justify or predict which element ends a period.^{7}
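The ordering imposed by Madelung’s rule is easy to state algorithmically. The short Python sketch below generates the filling order for illustration; the function name and cutoff are mine, and the orbital letters are the conventional s, p, d, f, g labels.

```python
# Madelung's rule: orbitals fill in order of increasing n + l;
# for equal n + l, the orbital with the smaller n fills first.
# Note: this ordering is empirical, not derived from quantum mechanics.
L_LETTERS = "spdfg"

def madelung_order(n_max=5):
    """Return orbital labels (e.g. '4s') in Madelung filling order."""
    orbitals = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{L_LETTERS[l]}" for n, l in orbitals]

print(madelung_order()[:7])  # 4s (n + l = 4) comes before 3d (n + l = 5)
```

Running the sketch reproduces the familiar aufbau sequence, with 4s preceding 3d.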
No first principles justify the assignment of four quantum numbers to each electron in a multielectron atom. The commutator of the Hamiltonian with the angular momentum of an individual electron does not vanish.^{8} That angular momentum is, therefore, not a constant of motion. Individual electrons in a polyelectronic atom thus cannot have stationary states; the concept of an atomic orbital, commonly invoked in theoretical chemistry, has no physical reality. In order to determine the electron configuration of an atom, one must calculate all possible configurations and choose the one with the lowest energy.^{9}
Quantum chemistry is a mathematical experiment, not the numerical resolution of a clearly stated and defined physical equation.^{10}
Well-known equations by Erwin Schrödinger and Paul Dirac do not offer a consistent prediction about the electronic configuration of certain elements.^{11} Schrödinger’s equation predicts, correctly, that copper should have an [Ar] 3d^{10}4s^{1} electronic configuration. Dirac’s equation, on the other hand, predicts that copper should have an [Ar] 3d^{9}4s^{2} electronic configuration.
Elements in the same group of the periodic system do not necessarily share the same electronic configuration. Nickel (4s^{2}), palladium (5s^{0}), and platinum (6s^{1}) are in the same group but have three different electronic configurations for their outermost orbital. Conversely, elements that do have the same electron configuration in their outermost orbital do not necessarily belong to the same group. Helium (1s^{2}) and beryllium (2s^{2}) are examples.
Ab initio calculations are typically based on Schrödinger’s equation for N particles, a scheme that works successfully for the hydrogen atom, or the ion H_{2}^{+}. Even though Schrödinger’s equation gives a good account of simple systems, no inference is possible to more complex systems. An additional inductive step is necessary.^{12} With multielectron systems, approximations must be made and validated by comparison with experiment, not through theory.^{13} It is a quite remarkable form of empirical mathematics.
The quantities that interest chemists do not appear in Schrödinger’s equation, which involves only positions and momenta. There is no variable for length or binding energy in the formalism developed by Dirac, Werner Heisenberg, or Schrödinger.^{14}
Assumptions of a non-quantum nature are needed to adapt the basic theory to the variables of chemistry.^{15}
- Electronic approximation: any molecule consists of nuclei and electrons, and all interactions among them are electromagnetic.^{16}
- Born–Oppenheimer approximation: there is an adiabatic separation between the motion of the electrons and that of the nuclei.
- Lamb or Casimir effects: interactions with the vacuum that are fundamental in second quantization are ignored.
- Ground states: each molecule has a given ground state, at its lowest energy level. Any chemical reaction is either elementary or the result of a set of elementary reactions occurring simultaneously.
It may be clear why one uses such approximations, but why they work so well remains utterly mysterious.^{17}
The basic concept of molecular structure cannot be directly inferred from the principles of quantum mechanics.^{18} This is no incidental conceptual shortcoming. Consider the otherwise insoluble N-body problem. It is the assumption of a molecular structure that makes it possible to identify atoms and molecules with the quasiparticles of a many-body system, where they become entities interacting by means of Nambu–Goldstone bosons. The size and shape of individual atoms or molecules follow from these assumptions. But the procedure is obviously nonsensical with respect to the stationary quantum state of an atom or an isolated molecule. Quasiparticles are not atoms, or individual isolated molecules. In condensed matter physics, quasiparticles move nearly independently in an external field created by immobile nuclei and the electrons in motion around them. These are not true electrons, as we observe them in high-energy physics; their apparent effective mass is different and their lifespan is finite.
Chemists can calculate the energy or length of any chemical bond, but it has never been explained in the context of quantum mechanics. The very concept of a chemical bond, while immeasurably useful, is a seductive abstraction, Charles Coulson observed:
Sometimes it appears to me that a bond between two atoms has become so real, so tangible, so familiar that I can almost see it. But then I awake with a little shock: for the chemical bond is not a real thing, it does not exist, and no one has ever seen or will ever see it. It is a figment of our imagination.^{19}
Molecular structure must necessarily be associated with time-dependent quantum states, for which one can identify classical and quantum configuration spaces. It is only in these time-dependent states that it is possible to establish a correspondence between molecular structure and the maxima of the wave function.
Conservation of parity means that molecular structure is a consequence of environmental perturbations rather than an intrinsic property, since it implies that an isolated stationary molecule cannot exhibit optical activity.^{20} Bond length and spectroscopic constants do not belong to the general framework of quantum mechanics; they reflect the properties of the asymptotic approximations. The concept of a chemical bond is thus extremely vague and ill-defined, but it is precisely this vagueness that makes chemistry interesting.
The concept of a chemical bond is thus closer to the concept of species in biology than to the concept of force in physics.^{21}
Increases in computing power suggest that wave mechanics will ultimately provide answers to problems of a chemical nature, but as calculations become more precise, the physical sense of the entities becomes less tangible.^{22} Quantum mechanics provides chemistry with a new understanding of known chemical facts. It is the chemist who solves the problems of the mathematician, through a chemical formula.^{23}
The chemist’s pictorial formulas are a concentrate of topological information that briefly illuminates the arrangement of parts and their secret relations. Perhaps it is less impressive than the synthesis proposed by Newton, but neither classical nor quantum mechanics can predict even the simplest molecular structures, with one exception.^{24} The isolated hydrogen atom can be explained in group theoretic terms.
The invariance of a Hamiltonian under a symmetry group expresses the degeneracy of its energy levels; energy degeneracy is a sign of hidden symmetry. For the hydrogen atom, the degeneracy of the bound atomic levels (E < 0) sharing the same quantum number L means that the Hamiltonian is invariant under the rotation group O(3). But the energy degeneracy is not limited to the 2L + 1 states of a given angular momentum; it extends to all the states that share the same principal quantum number.
The hydrogen atom exhibits, in addition to a static symmetry of rotation, a dynamic symmetry related to the 1/r form of the electric potential. There is thus another constant of motion aside from the angular momentum L, which classically corresponds to the Laplace–Runge–Lenz vector M. Motion in a 1/r potential is equivalent to the motion of a free particle on the sphere S^{3} embedded in four-dimensional Euclidean space; the quantum operators associated with L and M are, in fact, infinitesimal generators of the symmetry group SO(4). The operations of this group send states with the same principal quantum number to a linear combination of such states. The operator associated with the Laplace–Runge–Lenz vector allows the quantum numbers L and M_{L} to change by ±1 or 0, while the angular momentum operator allows this only for the quantum number M_{L}. The scattering states, in which the electron is no longer bound to the nucleus, are described by the group SO(3,1).
The dynamic symmetry of SO(4), which rotationally transforms bound states, does not change their energy. The dynamic symmetry of SO(3,1), which transforms a scattering state into another scattering state, does not change the energy either.
It would be interesting to find a group that breaks this degeneracy while at the same time sending any bound state of the hydrogen atom to a linear combination of other states. If such a group existed, it would be possible to send the 1s state, for example, to any other bound state, and one would have a spectrum generator group. If we take the spectrum of the hydrogen atom and multiply every negative energy value by the cube of the principal quantum number, the rescaled energies become regularly spaced. This is the sign of a new symmetry. Where there is symmetry, there is a group, one that might be designated the de Sitter group SO(4,1).
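The regular spacing claimed here is easy to verify numerically, assuming the Bohr formula E_n = −R/n² with R the Rydberg energy (a standard value, not from the text):

```python
# Hydrogen bound-state energies E_n = -R/n**2 (Bohr formula).
# Multiplying each level by n**3 yields -R*n: a regularly spaced
# ladder, the fingerprint of a spectrum generator symmetry.
R = 13.605693  # Rydberg energy in eV
ns = range(1, 8)
rescaled = [(-R / n**2) * n**3 for n in ns]  # equals -R*n
gaps = {round(rescaled[i + 1] - rescaled[i], 9) for i in range(len(rescaled) - 1)}
print(gaps)  # a single gap value, equal to -R
```

Every gap between consecutive rescaled levels equals −R, confirming the even spacing.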
Finally, there should be a group that sends the labels of bound states to those that describe the scattering states by shifting them appropriately. This is precisely what the conformal group SO(4,2) allows, with five generators in addition to those of SO(4,1).
It is worth noting that the 15 generators of SO(4,2) are the 10 generators of the Poincaré group ISO(3,1), plus the four generators describing special conformal transformations. These combine the inversion y^{µ} = x^{µ}/x^{2}, followed by the translation t^{µ} = y^{µ} − a^{µ}. There is, as well, a second inversion z^{µ} = t^{µ}/t^{2}, and one generator corresponding to spacetime dilatation by a scale factor λ.
The two spectrum generator groups, SO(4,1) and SO(4,2), do not conserve energy-level degeneracy, and can send any bound state or scattering state of the hydrogen atom to a linear combination of all other bound or scattering states.^{25} SO(4,2) leaves Maxwell’s equations invariant. At a certain level of observation, it seems, matter, radiation, and vacuum are the same.
As temperature and pressure are only defined when the number N of quanta (atoms or molecules) is very large, chemistry incorporates a fundamental inequality: N ≥ N_{A}, where N_{A} = 0.6022140857 Ymol^{–1} is Avogadro’s constant.
Scale Invariance and Limits
Nothing can exceed the speed of light. There is, as well, an upper limit for energy dissipation in the universe. To these limits, one may add lower limits on quantum action, electrical charge, the number of quanta, and information. Six universal constants govern the universe: Einstein’s constant c, Newton’s constant G, Planck’s constant ħ, Coulomb’s constant e, Avogadro’s constant N_{A}, and Boltzmann’s constant k_{B}. There are also six fundamental physical parameters: length, L, time, T, mass, M, charge, Q, number of entities, N, and temperature, Θ. The symmetry group SO(4,2) governs a sixdimensional vacuum, with four spacelike (x, y, z, s) coordinates and two timelike (t, v) coordinates. These new coordinates (s, v) correspond to the possibility of space (s) or time (v) dilatation.
This opens the possibility of waves ψ(x, y, z, t, s, v) propagating not only in spacetime (x, y, z, t), but also at spatiotemporal scales (s, v). These scale waves, which generalize matter waves in quantum mechanics, are capable of taking values within the scale. They ensure coherence between descriptions of an object at different scales of observation in the same way that the waves of quantum mechanics ensure coherence between descriptions of a physical system at different points in spacetime.
Scale invariance means that matter should not be measured by its mass, M, but by the number of its entities, N. In the same way, the ability of any substance to move is not ruled by its energy, U, but by its temperature, Θ.^{26}
The true symmetry of the vacuum is, in fact, SO(4,2)⊗U(2)⊗U(2). The subgroup U(2)⊗U(2) may be used to describe internal symmetries when the symmetry between matter and the vacuum is broken. The U(2) symmetry can be written as SU(2)⊗U(1). It thus contains both the group SU(2), which controls the quantum field associated with the weak interaction, and the group U(1), which controls the electromagnetic field.
As the direct product SU(2)⊗U(1), U(2) is also a trivial fiber space. Hence the importance of the Hopf fibration, which associates the circle to the sphere S^{3} in many physics problems.
U(2) is also a subgroup of SU(3), which controls the strong interaction between quarks. Since U(2)⊗U(2) is, like SU(3), characterized by eight infinitesimal generators, everything seems to be in place for a complete description of the internal symmetries of matter. Symmetries of SO(4,2)⊗U(2)⊗U(2) combine the internal symmetries of matter with the external symmetries of the vacuum in a way that is nonlocal at all scales. This would be the symmetry group of biology. Sommerfeld’s fine-structure constant, α = e^{2}/(4πε_{0}·ħ·c), may well be regarded as a seventh universal constant, one associated with biology by means of the candela unit. As is well known, 4πε_{0} = 10^{7}/c^{2} = 111.3 pF·m^{–1} is the electrical permittivity of the vacuum, which is related to the magnetic permeability µ_{0} = 4π·10^{–7} H·m^{–1} of the vacuum through the topological relationship ε_{0}·µ_{0}·c^{2} = 1.
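These numerical claims are easy to check. A quick verification sketch using CODATA SI values (the constants below are standard reference values, not taken from the text):

```python
import math

# CODATA SI values (assumed here, not from the text)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299792458.0          # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
mu0 = 1.25663706212e-6   # vacuum permeability, H/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)                  # ≈ 137.036, the inverse fine-structure constant
print(4 * math.pi * eps0 * 1e12)  # ≈ 111.3 pF/m
print(eps0 * mu0 * c**2)          # ≈ 1, the topological relationship
```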
Nature is ruled by inequalities rather than equalities. On the one hand, an equality A = B gives a unique reversible world that can be viewed under different, but fundamentally equivalent, viewpoints (A or B). On the other hand, an inequality A ≤ B automatically splits the world into a physical one where the inequality is satisfied and another, virtual one where the inequality does not hold. The virtual world cannot be directly observed, but it may nevertheless affect the physical world. Thanks to these inequalities, an immaterial world coupled to a material one becomes the basic structure to express any kind of natural phenomena. Inequalities are also needed to explain the existence of irreversible phenomena. Behind the constraint A ≤ B, a huge number of solutions exist for which A is far below the threshold B. Stability is the result; varying the solution by a tiny quantity changes nothing. Instability occurs only when A approaches B. If A = B, then any perturbation A + δ, no matter how small δ, inevitably breaks the equality.
Because of the invariance of the vacuum, there are seven fundamental sciences: biology, chemistry, electromagnetism, general relativity, quantum mechanics, relativistic mechanics, and thermodynamics. At this level of thinking, physics considers only the ISO(3,1) subgroup for vacuum and the U(2)⊗U(2) subgroup for matter. Biology, chemistry, and thermodynamics all involve not only movement in space and time, but also movement in spatial or temporal scales through conformal symmetries. The symmetries linking small to large scales may be responsible for the apparent lack of autonomy of biology, chemistry, and thermodynamics.
Deep Background
Group Theory
Symmetry and invariance are fundamental to modern physics. These concepts receive their mathematical elaboration in the theory of groups and group representations. A group G consists of a set of objects, or elements, closed under an associative law: (a + b) + c = a + (b + c). Every group contains an identity element and, for each element, an inverse. The integers form a group under addition: a + 0 = a and a + (–a) = 0. A finite group contains a finite number of elements, called the order of the group, and an infinite group contains infinitely many elements.^{27}
In group representation theory, groups are mapped to physical systems by some sort of encoding. Each state is assigned a label corresponding to one of the group’s irreducible representations. The number of permissible labels corresponds to symmetries between elements. A group G is commutative if a + b = b + a. In a commutative group, the number of labels is always equal to the order of the group. Under these conditions, each label has a dimension n such that the sum of the squares of these dimensions is equal to the order of the group. This special property derives from the theorem of orthogonality, which follows from the group structure. It allows mathematicians to identify systematically all finite groups and to predict their features.
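For a concrete instance, consider the commutative cyclic group Z_4. The sketch below, an illustration of my own rather than anything from the text, builds its four one-dimensional irreducible characters and checks the orthogonality theorem:

```python
import cmath

n = 4  # order of the cyclic group Z_4
# Its n one-dimensional irreducible characters: chi_k(a) = exp(2*pi*i*k*a/n)
chars = [[cmath.exp(2j * cmath.pi * k * a / n) for a in range(n)]
         for k in range(n)]

def inner(j, k):
    """Orthogonality: (1/n) * sum_a chi_j(a) * conj(chi_k(a)) = delta_jk."""
    return sum(chars[j][a] * chars[k][a].conjugate() for a in range(n)) / n

# Number of labels = order of the group; each has dimension 1,
# so the sum of squared dimensions equals the order: n * 1**2 == n.
```

Distinct characters are orthogonal, and each is normalized, exactly as the orthogonality theorem demands.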
When several atoms organize themselves around a common center to form molecules, finite group theory becomes a crucial tool for chemists. In chemistry, empty space is not defined directly in terms of group theory; instead, chemists use invariant symmetry operations.
At the end of the nineteenth century, Sophus Lie introduced mathematicians to continuously varying groups. Lie groups over the real or complex numbers are studied in the neighborhood of an identity element by means of Lie algebras. This very often suffices for understanding global geometry and topology. Using Lie groups, group theory is the most general, and the simplest, way of defining physical quantities. Every physical law is the representation of a symmetry group. For instance, the Galilean group is the group of affine transformations preserving the time interval between events and the distance between simultaneous events. The Lie algebra for a Galilean group in N dimensions may be constructed from N(N–1)/2 spatial rotations, N spatial translations, N Galilean boosts (changes in speed), and one time translation. This corresponds to N(N–1)/2 + 2N + 1 = N(N+3)/2 + 1 infinitesimal generators.
For a threedimensional space, the corresponding Lie algebra has ten infinitesimal generators. Three labels suffice to characterize the elementary objects: mass, energy, and spin. Mass is needed to indicate how an object behaves in a threedimensional translation in space, energy for its behavior during translation in time, and spin for its behavior in a threedimensional rotation. This is the essence of Emmy Noether’s theorem. Translational invariance over time can thus be associated with the principle of conservation of energy, which gives the observer complete freedom to determine the origin of time; only the interval between two instants has a physical reality. This is true also for translational invariance in space and invariance in spatial rotation. Linear and angular momentum are conserved, and only spatial and angular intervals are real.
Another interesting Lie group is the homogeneous Lorentz group SO(3,1) that ensures the invariance of Maxwell’s equations in a vacuum, and that corresponds to the group of linear transformations leaving the Minkowski spacetime metric (x_{1}^{2} + x_{2}^{2} + x_{3}^{2} – x_{4}^{2}) invariant. The letters SO stand for Special Orthogonal group, describing a rotating space, and the numbers 3 and 1 point to a metric having the signature (1,1,1,–1).
For an N-dimensional space, the Lie algebra of this group requires N(N–1)/2 infinitesimal generators. In the case where N = 4 (Minkowski spacetime), this corresponds to six generators: three associated with the infinitesimal rotations in the three planes (xy, yz, xz) and three associated with Lorentz boosts in the three planes (xt, yt, zt). Adding four infinitesimal generators for translations along the axes defines the ten-parameter Poincaré Lie group ISO(3,1).
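The generator counts quoted here, and for the Galilean group above, follow from simple combinatorics. A small sketch (the function names are mine):

```python
def galilean_generators(N):
    # N(N-1)/2 rotations + N translations + N boosts + 1 time translation
    return N * (N - 1) // 2 + 2 * N + 1

def lorentz_generators(N):
    # rotations and boosts together: N(N-1)/2 for N-dimensional spacetime
    return N * (N - 1) // 2

def poincare_generators(N):
    # Lorentz generators plus N spacetime translations
    return lorentz_generators(N) + N

print(galilean_generators(3), lorentz_generators(4), poincare_generators(4))
# 10 for the Galilean group in three dimensions,
# 6 for SO(3,1), and 10 for the Poincaré group ISO(3,1)
```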
It is worth noting that one can obtain the Galilean group by starting with the Poincaré group and letting the ratio v/c tend to zero. Moving from the Galilean group to the relativistic Poincaré group has the remarkable consequence of leaving only mass and spin labels for characterizing elementary objects. In other words, the internal energy U and mass M labels merge, reflecting the famous relationship U = M·c^{2}.^{28} The structure of the Poincaré group implies that the spin J can be any integer multiple of ħ/2 (J = 0, ħ/2, ħ, 3ħ/2, 2ħ, 5ħ/2, …). From a group theory viewpoint, spin is not a quantum property.^{29}
For electric, magnetic, and optical phenomena, it is necessary to introduce a field, such that at each point of spacetime there are two components (E, B) that transform into each other like a tensor. The four-dimensional Fourier transform of the pair (E, B) gives, for each four-momentum, six components that form an antisymmetric tensor of rank 2. The SO(3,1) group automatically sets the form of the equations with which the classical field must comply. The mathematical form of Maxwell’s equations for electromagnetism is fixed by the SO(3,1) group, a subgroup of the Poincaré group ISO(3,1).
There is, however, a lower bound to any electrical charge transfer: ∆q ≥ e = 0.1602 aC. This rules out instantaneous electrical currents that might imply a blurring of the electric and magnetic field values at small scales. At large scales, this same inequality gives an upper bound for both the electric field, E ≤ c^{4}/(4·G·e) ≈ 1.9 × 10^{62} V·m^{–1}, and the magnetic field, B ≤ c^{3}/(4·G·e) ≈ 6.3 × 10^{53} T.
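Both bounds can be reproduced from the quoted formulas. A quick check with standard SI constants:

```python
c = 299792458.0      # speed of light, m/s
G = 6.67430e-11      # Newton's constant, m^3 kg^-1 s^-2
e = 1.602176634e-19  # elementary charge, C (i.e. 0.1602 aC)

E_max = c**4 / (4 * G * e)  # upper bound on the electric field, V/m
B_max = c**3 / (4 * G * e)  # upper bound on the magnetic field, T
print(f"{E_max:.2e} V/m, {B_max:.2e} T")  # ≈ 1.89e62 V/m and 6.30e53 T
```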
Quantization of the electric charge was a deep enigma that would not be solved until the advent of quantum mechanics.
If a given group fixes the mathematical form of a physical law, does the Poincaré group with ten generators offer the highest symmetry compatible with the invariance of Maxwell’s equations in the vacuum? No. Maxwell’s equations are invariant under the symmetry operations of the highly symmetric SO(4,2)⊗U(2)⊗U(2) group characterized by 6×5/2 + 2^{2} + 2^{2} = 23 generators.^{30} This symmetry escaped notice for a long time because the eight integro-differential generators of U(2)⊗U(2) are associated with symmetry operations of a nongeometric nature. They are much harder to visualize than operations in the Lie algebra in the neighborhood of the identity.^{31}
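The count of 23 generators decomposes exactly as claimed; a one-line check (the helper names are mine):

```python
def so_generators(p, q):
    """SO(p,q) has n(n-1)/2 infinitesimal generators, with n = p + q."""
    n = p + q
    return n * (n - 1) // 2

def u_generators(n):
    """U(n) has n**2 infinitesimal generators."""
    return n * n

total = so_generators(4, 2) + 2 * u_generators(2)  # 15 + 4 + 4
print(total)  # 23
```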
The nature of these operators suggests, in fact, that there is communication between all scales, from the smallest to the largest and vice versa, whence nonlocality and nonseparability, which are abundantly confirmed by experiment.
Zero and Infinity
Information theory serves the valuable function of showing that classical mechanics may be considered a high-density approximation of quantum mechanics. Let ∆t = 2·∆R/c be the time necessary for light to cross an object of diameter 2·∆R, changing its energy by ∆U = ∆M·c^{2}. In order for this change of mass to be observable, the radius ∆R = c·∆t/2 must exceed the Schwarzschild radius 2G·∆M/c^{2}; thus ∆U/∆t ≤ c^{5}/4G. There is, therefore, an upper limit for the rate of change of energy over time.^{32} Such an asymmetry between large and small scales certainly has no justification; zero and infinity are inextricably linked. If observable objects have upper limits, they must also have lower limits.
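The upper bound on the rate of change of energy works out to roughly 10^{52} watts; a quick numerical check using standard constants:

```python
c = 299792458.0  # speed of light, m/s
G = 6.67430e-11  # Newton's constant, m^3 kg^-1 s^-2

P_max = c**5 / (4 * G)  # upper limit on dU/dt, in watts
print(f"{P_max:.2e} W")  # ≈ 9.07e51 W
```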
This is the raison d’être of quantum mechanics.
At the foundation of quantum mechanics is the principle that measurements cannot be considered intrinsic properties of observed systems.^{33} A state variable is one from which all others may be derived. The internal energy U and the Hamiltonian function H are good examples of state variables for thermodynamics and mechanics. A point in a space of N dimensions represents a measurement of a state variable, and all such measurements make up a hypersurface.
Paulette Février has established that if observable systems have a state variable, their measurements follow classical Aristotelian logic.^{34} Thus classical Newtonian or relativistic physics. By parity of reasoning, if there are quantities that are not simultaneously measurable, there can be no state variable. Thus quantum physics. In order to show the essential uncertainty of a physical theory, it is sufficient to exhibit two quantities that are not simultaneously measurable.^{35}
Without state variables, there are no privileged observers, and hence no distinction between quantum and classical systems.^{36} Two systems are mutually revealing if, and only if, there is a physical interaction between them. The precise physical mechanism used is unimportant. Information is necessarily discrete. There is thus a minimum amount of information that distinguishes between two alternatives. Any system S can be defined by a finite number of binary questions. These correspond to the physical variables of classical mechanics or to the observables of quantum mechanics.^{37} There is nothing quantum about this idea; it follows from the logical structure of information theory.
Information about a system is constrained in the same way that speed is limited in relativistic mechanics. There is a maximum amount of information yielded by any physical system. Its full description can be obtained in a finite amount of time.
The next step is to introduce the two quantities that cannot be measured simultaneously. As we are seeking a quantum theory that applies to macroscopic systems, the number of available quanta N emerges as a crucial variable. Allowing some fluctuations ∆N in the number of quanta, it may be shown that the conjugate variable is necessarily the quantum phase φ with a fundamental uncertainty relationship:^{38}
 ∆N·∆φ ≥ ½. (1)
The right side of this inequality is plainly not zero. There is no way to measure N and φ to an arbitrary degree of accuracy. Classical physics emerges when ∆N = 0; the phase angle φ then becomes random. Conversely, if the total number of quanta is not known with certainty, the quantum phase may take a well-defined value (∆φ → 0) when ∆N → +∞. This situation is routinely observed at macroscopic scales in ferroelectricity, ferromagnetism, superconductivity, and superfluidity. These are examples of many-body physics that cannot be explained by considering only pairs of interacting quanta.
The above uncertainty relationship is scale invariant. Quantum mechanics is the correct description of nature at any scale. There is no fundamental distinction between quantum and classical systems. Any coherent quantum system displays classical behavior, with a predictable trajectory computable according to the least action principle. Classical mechanics is just a highdensity approximation of quantum mechanics when phase coherence emerges.^{39}
One of the remarkable features of (1) is the absence of Planck’s constant ħ, another reminder of the fundamental scale invariance of quantum theory. In relational quantum mechanics, any system is characterized by a finite amount of information; Planck’s constant reflects that limit. A classical particle would contain an infinite amount of information; since this is not the case, the state of the system occupies a finite volume of phase space.^{40}
For a system with n degrees of freedom, the quantum of action [M·L^{2}·T^{–1}] transforms the phase-space volume [M·L^{2}·T^{–1}]^{n} into a finite number of bits. The quantum of action ħ = 0.65821 eV·fs makes it possible to associate an angular frequency ω = ∆φ/∆t and an energy ħ·ω with any phase change.^{41}
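The value of the quantum of action in electron-volt units, 0.65821 eV·fs (that is, 6.5821 × 10^{–16} eV·s), follows from a simple unit conversion, assuming CODATA values:

```python
hbar = 1.054571817e-34  # reduced Planck constant, J*s (CODATA)
eV = 1.602176634e-19    # joules per electron-volt (CODATA)

hbar_eV_fs = hbar / eV * 1e15  # convert J*s to eV*fs
print(round(hbar_eV_fs, 5))    # 0.65821
```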
For a set of N indistinguishable quanta, the total energy of the system is U = N·ħ·ω, with a fluctuation given by ∆U = ∆N·ħ·ω. With ∆φ = ω·∆t and ∆N = ∆U/(ħ·ω), (1) becomes:
 ∆U·∆t ≥ ħ/2. (2)
It is often overlooked in the quantum mechanical literature that the time t appearing in (2) is not the time used in Lorentz transformations or in the time-dependent Schrödinger equation. It corresponds to an internal time intimately associated with the rotation of a phase vector embedded within the quantum object. Such is the proper time of special relativity.
By the same token, the energy U in (2) is not the eigenvalue of a Hamiltonian operator, but a form of internal energy. There is no need to invoke a conjugate temporal operator in order to justify (2).
One may also write an infinite dilution approximation of (2) by considering a free quantum of mass m and momentum p = m·∆x/∆t. Writing energy as U = p^{2}/2m leads to ∆U = p·∆p/m, and as ∆t = m·∆x/p, (2) becomes the more familiar:
 ∆p·∆x ≥ ħ/2. (3)
The fact that ħ is a quantum of action means that one may associate with any momentum value p a characteristic wavevector k = p/ħ.^{42} Schrödinger’s and Dirac’s equations are infinite dilution approximations of the more general quantum field theory.
From a group theory standpoint, Schrödinger’s equation is generated by the Lie algebra of the Galilean group.^{43} There is no transition between states with different masses.^{44} Conservation of mass is a quantum and not a classical law.
Spin, which completes the triplet alongside mass and energy, is governed by the noncommutative symmetry group SU(2). The more general SU(n) encompasses all special unitary groups of n × n complex matrices. The structure of SU(2) implies that for each J, the spatial orientation of the spin may take only 2J + 1 values. In quantum mechanics, spin varies by discrete values; it thus differs from momentum or energy. In the case of massless particles, spin labels are one-dimensional; such particles are entirely characterized by their helicity, the projection of the spin along the direction of movement.
In the same way, electromagnetic interactions can be expressed by the exchange of photons between electric charges. The relevant symmetry group is U(1), where U(n) refers to the group of n × n unitary matrices. For n = 1, there is a single generator, and the elements exp(i·φ) correspond to a circle in the plane, each point labeled by its phase angle φ. Each photon is characterized by its four-momentum (k_{x}, k_{y}, k_{z}, iE/c), such that k_{x}^{2} + k_{y}^{2} + k_{z}^{2} – E^{2}/c^{2} = 0, describing its propagation along the (k_{x}, k_{y}, k_{z}) direction with an associated wavelength λ = 2π/k, and carrying an energy E = h·f. Each photon is also characterized by an index of helicity giving the projection of the angular momentum, ±1, along the direction of propagation (+1 for right polarization and –1 for left polarization).
Group theory allows one to guess the Hilbert space suitable for describing the electromagnetic field. No superposition is prohibited. This quantum description has only two independent components for each fourmoment. There remain four combinations of the classical field that do not describe the permitted physical conditions. Maxwell’s equations suggest the mechanism required to annihilate such illicit superpositions. Finally, invariance through rotation of the phase angle of the photon ensures the conservation of electric charge and the complete freedom to choose the sign of these charges.
In sum, information theory leads to the indivisibility of the quantum of action, which is at the origin of the uncertainty relationship. What is striking in this new way of reasoning is that information theory can be expressed in either classical or quantum language.
While quantum mechanics forces us to make a distinction between the observer and the system he observes, it also leaves open the boundary between the two. Different observers may well have different views of the same sequence of events; the quantum state, the value of a variable, or the result of a measurement are all concepts relative to the observer, just like velocity in relativistic mechanics.
Temperature
The characteristic feature of thermodynamics is temperature, Θ, the potential driving any heat transfer. Temperature may be defined as the integrating factor that transforms the inexact heat differential δQ into the total differential of a new state variable, entropy (S), such that dS ≥ δQ/Θ.^{45} Entropy may also be defined independently of temperature. If W is the volume of a system in phase space, that is, the number of possible microstates or phases compatible with a given macrostate, then S(U, V, N) = k_{B}·ln W ≥ 0, where U is the internal energy, V the volume, and N the number of entities.^{46}
It follows from the conservation of phase volume that if W_{A} is the volume of state A, then the process A → B is not reproducible unless W_{B} is large enough to hold all the microstates that could evolve out of W_{A}^{47}: W_{B} ≥ W_{A}, or S_{B} ≥ S_{A}. The key to the second law should not be disorder, but reproducibility. This is Ludwig Boltzmann’s statistical interpretation of entropy. It has the great advantage of applying to nonequilibrium thermodynamics and establishes a clear link to quantum mechanics. For a gas of N particles of mass M enclosed in a volume V = L^{3} at temperature Θ, uncertainty about the position of each particle is ∆q = L, leading to a number of allowed microstates, W_{q} ∝ V^{N} = (∆q)^{3N}.
From kinetic gas theory, we take the uncertainty ∆v on molecular speeds at temperature Θ, leading to an uncertainty ∆p = M·∆v on momenta and to a number of allowed microstates W_{p} ∝ (∆p)^{3N}. The volume occupied by the system in phase space is thus W ∝ W_{q}·W_{p} ∝ (∆q·∆p)^{3N}. The quantum of action ħ leads to a dimensionless number, S = k_{B}·ln(∆q·∆p/ħ)^{3N} ≥ 0.
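This counting argument can be turned into numbers. The sketch below (illustrative values and a function name of my own choosing) evaluates S = k_{B}·ln(∆q·∆p/ħ)^{3N} for a mole of gas, using ∆q = L and the kinetic-theory estimate of ∆v:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s
N_A = 6.02214076e23     # Avogadro constant, 1/mol

def phase_space_entropy(N, dq, dp):
    """S = k_B * ln[(dq*dp/hbar)^(3N)]: entropy of N particles whose position
    and momentum are each uncertain by dq and dp along every axis."""
    return 3 * N * k_B * math.log(dq * dp / hbar)

# a mole of argon-like atoms (M ~ 6.6e-26 kg) in a 1-litre box at 300 K
M, L, theta = 6.6e-26, 0.1, 300.0
dv = math.sqrt(k_B * theta / M)   # kinetic-theory spread of molecular speeds
S = phase_space_entropy(N_A, dq=L, dp=M * dv)
assert S > 0  # here dq*dp exceeds hbar by roughly ten orders of magnitude
```

The crude count gives an entropy of a few hundred J/K per mole, the right order of magnitude; it is a sketch of the scaling, not a replacement for the exact statistical-mechanical formula.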
What is this but ∆q·∆p ≥ ħ?
The essential uncertainty of thermodynamics emerges from the existence of a quantum of action. What is more, the minimum entropy is given by S ≥ k_{B}·ln 2.^{48} Entropy may therefore be defined as the number of binary questions, multiplied by k_{B}·ln 2, that provide full knowledge of the microscopic state of a system. Boltzmann’s constant k_{B} expresses the fact that matter is made of elementary particles; it establishes a link between thermodynamics and information theory; it plays the role of a quantum of entropy, in place of the quantum of action in quantum theory.
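Under that reading, a system with W equally likely microstates requires log₂ W yes/no questions to pin down its state. A minimal sketch (function name mine):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def binary_questions(W):
    """Number of yes/no questions needed to single out one of W microstates,
    i.e. S / (k_B * ln 2) with S = k_B * ln W."""
    S = k_B * math.log(W)
    return S / (k_B * math.log(2))

assert round(binary_questions(8)) == 3         # 2^3 = 8 microstates
assert k_B * math.log(2) <= k_B * math.log(8)  # minimum entropy k_B * ln 2
```

The minimum entropy k_{B}·ln 2 then corresponds to exactly one unanswered binary question.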
Newtonian physics cannot be correct, as it supposes the existence of arbitrarily small quantities. Moreover, the existence of a minimum entropy means that it is impossible to reach the absolute zero of temperature, the third law of thermodynamics. For a system in contact with a heat bath having a definite temperature Θ, the thermodynamic analogue of the energy–time uncertainty relationship is
 ∆U·∆(1/Θ) ≥ k_{B}/2.
Thermodynamics cannot be reduced to a microscopic theory with an underlying molecular reality, in the same way that Heisenberg’s relationship prevents a hidden variables reconstruction of quantum mechanics.^{49}
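One common reading of this bound uses the standard canonical-ensemble fluctuation formulas: for a system of heat capacity C at bath temperature Θ, ∆U = Θ·√(k_{B}·C) and the matching inverse-temperature uncertainty is ∆(1/Θ) = √(k_{B}/C)/Θ, so their product equals k_{B}, safely above k_{B}/2. The sketch below (illustrative values, function name mine) checks this:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_uncertainties(C, theta):
    """Canonical energy fluctuation dU = theta*sqrt(k_B*C) and the matching
    inverse-temperature uncertainty d(1/theta) = sqrt(k_B/C)/theta,
    for a system of heat capacity C coupled to a bath at temperature theta."""
    dU = theta * math.sqrt(k_B * C)
    d_inv_theta = math.sqrt(k_B / C) / theta
    return dU, d_inv_theta

# one mole of a monatomic ideal gas: C = (3/2) * N_A * k_B
C = 1.5 * 6.02214076e23 * k_B
dU, d_inv_theta = thermal_uncertainties(C, 300.0)
assert dU * d_inv_theta >= k_B / 2  # product equals k_B, twice the bound
```

Note that the product is independent of both C and Θ, which is what makes the relation an uncertainty principle rather than a property of any particular system.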
 Roald Hoffmann, The Same and Not the Same (New York: Columbia University Press, 1995). ↩
 Joachim Schummer, “The Philosophy of Chemistry: From Infancy toward Maturity,” Boston Studies Series in the Philosophy of Science 242 (2006): 19–39. ↩
 Jean-Paul Malrieu, “Du dévoilement au design: La chimie, une science en avance,” Alliage 9 (1991): 59–64. ↩
 Pierre Laszlo, La Chimie Nouvelle (Paris: Flammarion, 1995). ↩
 Gaston Bachelard, Le pluralisme cohérent de la chimie moderne (Paris: Vrin, 1932). ↩
 Per-Olov Löwdin, “Some Comments on the Periodic System of the Elements,” International Journal of Quantum Chemistry 3 (1969): s331–34. ↩
 Eric Scerri, “The Past and the Future of the Periodic Table,” American Scientist 96 (2008): 52–60. ↩
 Eric Scerri, “Normative and Descriptive Philosophy of Science and the Role of Chemistry,” Boston Studies Series in the Philosophy of Science 242 (2006): 119–28. ↩
 Jack Woodyard, “A New Paradigm for Schrödinger and Kohn,” Boston Studies Series in the Philosophy of Science 242 (2006): 245–69. ↩
 Gopala Vemulapalli, “Physics in the Crucible of Chemistry,” Boston Studies Series in the Philosophy of Science 242 (2006): 191–204. ↩
 Eric Scerri, “Just How ab initio is ab initio Quantum Chemistry?” Foundations of Chemistry 6 (2004): 93–116. ↩
 Michael Redhead, “Models in Physics,” British Journal for the Philosophy of Science 31(1980): 145–63. ↩
 Valentin Ostrovsky, “What and How Physics Contributes to Understanding the Periodic Law,” Foundations of Chemistry 3 (2001): 145–82. ↩
 Gopala Vemulapalli, “Physics in the Crucible of Chemistry,” Boston Studies Series in the Philosophy of Science 242 (2006): 191–204. ↩
 Marc Henry, “The Hydrogen Bond,” Inference: International Review of Science 1, no. 2 (2015). ↩
 Mario Bunge, “Is Chemistry a Branch of Physics?” Journal for General Philosophy of Science 13 (1982): 209–23. Fixing the nuclei is a semiempirical approximation that cannot in any way be inferred from the basic principles of quantum mechanics. It can only be justified retrospectively as an asymptotic approximation greatly simplifying the calculations to be carried out. ↩
 This remark also applies to other approximations introduced in the methods of quantum chemistry termed “semiempirical” (R. G. Woolley, “Must a Molecule Have a Shape?” Journal of the American Chemical Society 100 (1978): 1,073–78). ↩
 R. G. Woolley, “Must a Molecule Have a Shape?” Journal of the American Chemical Society 100 (1978): 1,073–78. ↩
 Charles Coulson, “The Contributions of Wave Mechanics to Chemistry,” Journal of the Chemical Society (1955): 2,069–84. ↩
 R. G. Woolley, “Must a Molecule Have a Shape?” Journal of the American Chemical Society 100 (1978): 1,073–78. ↩
 Mario Bunge, “Is Chemistry a Branch of Physics?” Journal for General Philosophy of Science 13 (1982): 209–23. ↩
 Valentin Ostrovsky, “What and How Physics Contributes to Understanding the Periodic Law,” Foundations of Chemistry 3 (2001): 145–82. ↩
 Eric Scerri, “Normative and Descriptive Philosophy of Science and the Role of Chemistry,” Boston Studies Series in the Philosophy of Science 242 (2006): 119–28. ↩
 Gopala Vemulapalli, “Physics in the Crucible of Chemistry,” Boston Studies Series in the Philosophy of Science 242 (2006): 191–204. ↩
 Robert Gilmore, Lie Groups, Physics, and Geometry: An Introduction for Physicists, Engineers and Chemists (Cambridge, UK: Cambridge University Press, 2008). ↩
 Mass and number of entities or energy and temperature are different things because of the SO(4,2) symmetry of the vacuum. Considering only the subgroup SO(3,1), as in special relativity, leads to confusing M and N, on the one hand, and U and Θ, on the other, and to dismissing N_{A} or k_{B} as fundamental constants of Nature. This in turn leads to considering chemistry and thermodynamics to be merely complex cases of mechanics, not the autonomous nonmechanical sciences that they are. ↩
 In the latter case especially one understands the usefulness of defining a group by its generators and not by its constitutive elements. ↩
 Momentum and energy join to become a four-momentum through an invariant property called mass; the orientation in spacetime of the four-momentum is totally free. Following Noether’s theorem, invariance under Lorentz transformations ensures conservation of the spacetime interval and lets the observer decide which frame of reference will be considered at rest; only the interval between two spacetime points has a physical reality.
 Energy U is a central concept in modern science, merging with mass M upon moving from a Galileo to a Poincaré Lie group. For large masses, U = Mc^{2} can be a huge value. It is convenient at low speeds (v ≪ c) to split it into a rest energy U_{0} = mc^{2} and a kinetic energy (dK = p·dp/m), where p = mv is the linear momentum. Under such a definition the spatial position of the mass m does not matter; the mass is completely free to move. By contrast, if the mass moves in a force field f = –dV/dx deriving from a potential V(x) that changes with position x, it also has a potential energy dV = –f·dx. For particles bearing no electrical charge, the force field is usually the gravitational field, while for charged particles it may be the electric field, the magnetic field, or both. As explained above, Noether’s theorem states that for any mechanical system whose equations of motion are covariant under translation in time, the total energy U = K + V should remain constant (dU/dt = 0).
Energy remains a clear concept only when thermal and chemical effects are ignored, meaning that particles are allowed to exchange momentum but not energy. If heat exchanges are allowed, it is apparently still possible to retain an energy conservation principle (the first principle of thermodynamics), but at the cost of losing the precise meaning of energy. One may now write U = K + V + Q = constant, where Q is an internal energy in thermal, mechanical, chemical, electrical, magnetic, or interfacial form. As Henri Poincaré pointed out, everything would be fine provided that the kinetic energy K were proportional to the square of the velocities, that V were independent of these velocities and of the state of the bodies, and that Q were independent of the velocities and positions of the bodies, depending only on their internal state. As this is clearly not the case for electrified bodies in motion, Poincaré concluded that:
it follows that in every particular case we clearly see what energy is, and we can give it at least a provisory definition; but it is impossible to find a general definition of it. If we wish to enunciate the principle in all its generality and apply it to the universe, we see it vanish, so to speak, and nothing is left but this: there is something which remains constant.
(Henri Poincaré, The Value of Science: Essential Writings of Henri Poincaré (New York: Random House, 2001), 100–101). The energy and/or mass concept is also blurred by general relativity, which relies on the fact that any increase in energy tends to separate things with a kinetic energy K = Mc^{2}, whereas any increase in mass M tends to reduce distances L with a potential energy V = GM^{2}/L (where G = 66.7 pJ·m·kg^{–2} is Newton’s gravitational constant). Any equilibrium between repulsion and attraction should thus be such that M·c^{2} = GM^{2}/L, i.e., M·L^{–1} = c^{2}/G. Now, choosing a system of units where G = c = 1, one finds that M = L. That is to say, mass may be identified with spacetime or vacuum (as L = T if c = L·T^{–1} = 1). Any mass, therefore, is able to curve spacetime, and any spacetime curvature may be considered as mass. Consequently, spacetime, instead of being mere theatrical scenery for the movements of masses, becomes an actor, able itself to oscillate and owing its curvature to the locations of masses. As any increase in mass dM leads to an increase in gravitational energy dV = 2G·M·dM/L that cannot be higher than the total energy dK = dM·c^{2}, one finds that dV ≤ dK, or L ≥ 2G·M/c^{2} = R_{S}. This means that in order to be observed, a mass should have a minimum size, given by its Schwarzschild radius R_{S}. If L ≈ R_{S}, the spacetime curvature becomes so high that a so-called “black hole” horizon is formed, from which no light or matter is able to escape. ↩
 Wilhelm Fushchich and Anatoly Nikitin, “On the New Invariance Group of Maxwell Equations,” Lettere al Nuovo Cimento 24 (1979): 220–24. ↩
 Wilhelm Fushchich and Anatoly Nikitin, “On the New Symmetries of Maxwell Equations,” Czechoslovak Journal of Physics B 32 (1982): 476–80. ↩
 Since special or general relativity does not give any lower limit for natural objects, one may use differential and integral calculus as well as tensor algebra. ↩
 Paulette Destouches-Février and Jean-Louis Destouches, “Sur l’interprétation physique de la mécanique ondulatoire,” Comptes rendus de l’Académie des sciences 222 (1946): 1,087–89. ↩
 Uncertainty implies that some predictions should be expressed in terms of probabilities. For example, considering for a physical quantity A the whole set of certain predictions (X_{1}, X_{2}, …, X_{n}), one should in principle express any prediction X as a linear combination X = c_{1}X_{1} + c_{2}X_{2} + … + c_{n}X_{n} (the spectral decomposition principle). The probability of observing the element X_{i} is then an arbitrary function f(c_{i}) of the associated coefficient c_{i}. If f is to be the same for any spectral decomposition, then it must be that f(x·y) = f(x)·f(y), where x and y are complex numbers, each having a magnitude r and a phase φ: z = r·exp(i·φ). Requiring that f be a continuous function leads to the only acceptable solution: f(z) = |z|^{k}, with k > 0. Finally, using a generalized version of Pythagoras’s theorem, it is possible to show that only two values of k are possible: k = 1 (existence of a state variable, Boolean logic) and k = 2 (no state variable, non-Boolean logic). In this last case, the probability of observing a given element is given by the squared magnitude of the corresponding complex coefficient in the linear superposition, a postulate of quantum theory. Paulette Destouches-Février, “Sur les rapports entre la Logique et la Physique théorique,” Comptes rendus de l’Académie des sciences 219 (1944): 481–83. ↩
 Max Born, “Quantenmechanik der Stoßvorgänge,” Zeitschrift für Physik 38 (1926): 803–27. ↩
 Carlo Rovelli, “Relational Quantum Mechanics,” International Journal of Theoretical Physics 35 (1996): 1,637–78. ↩
 A yes/no question is represented by an operator projected onto a linear subsystem of a Hilbert space, or by the linear subsystem itself. ↩
 Giuliano Preparata, An Introduction to a Realistic Quantum Physics (Singapore: World Scientific, 2002), 41; Marc Henry, “The Topological and Quantum Structure of Zoemorphic Water,” in Aqua Incognita: Why Ice Floats on Water and Galileo 400 Years On, eds. Pierandrea Lo Nostro and Barry Ninham (Ballarat: Connor Court Publishing, 2014), 197–239. ↩
 Giuliano Preparata, An Introduction to a Realistic Quantum Physics (Singapore: World Scientific, 2002), 41. ↩
 David Nolte, “The Tangled Tale of Phase Space,” Physics Today (2010): 33–38. ↩
 Max Planck, “On the Theory of the Energy Distribution Law of the Normal Spectrum,” Verhandlungen der Deutschen physikalischen Gesellschaft 2 (1900): 237–44. ↩
 Louis de Broglie, “Ondes et Quanta,” Comptes rendus de l’Académie des sciences 177 (1923): 507–10. ↩
 Jean-Marc Lévy-Leblond, “The Pedagogical Role and Epistemological Significance of Group Theory in Quantum Mechanics,” Rivista del Nuovo Cimento 4 (1974): 99–140. ↩
 Frederick Kaempffer, “Appendix 7,” in his Concepts in Quantum Mechanics (New York: Academic Press, 1965), 341–46. ↩
 Rudolf Clausius, “Ueber eine veränderte Form des zweiten Hauptsatzes der mechanischen Wärmetheorie,” Annalen der Physik und Chemie 93 (1854): 481–506. ↩
 See Ludwig Boltzmann, “Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung, respective den Sätzen über das Wärmegleichgewicht,” Wiener Berichte 76 (1877): 373–435. The universal constant k_{B} = 86.17 µeV·K^{–1} linking entropy and phase volume was first determined by Max Planck from his blackbody radiation formula. Max Planck, “On the Theory of the Energy Distribution Law of the Normal Spectrum,” Verhandlungen der Deutschen physikalischen Gesellschaft 2 (1900): 237–44. ↩
 Joseph Liouville, Journal de Mathématiques Pures et Appliquées 1st series, 3 (1838): 342–49; Edwin Jaynes, “Gibbs vs Boltzmann Entropies,” American Journal of Physics 33 (1965): 391–98. ↩
 Leo Szilard, “Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen,” Zeitschrift für Physik 53 (1929): 840–56. ↩
 Jos Uffink and Janneke van Lith, “Thermodynamic Uncertainty Relations,” Foundations of Physics 29 (1999): 655–92. ↩