User talk:Ancheta Wis/f

Feynman's thesis (1942), The Principle of Least Action in Quantum Mechanics, was submitted to Princeton University for his Ph.D. His thesis advisor was John Archibald Wheeler.

Maxwell's equations

 * It's real physics: Nonlinear optics, not just algebraic manipulation into a simpler form -- you are free to write your versions for your personal use in the form you like, for your own purposes, in your own space, for example. The form you like just wouldn't apply in the general case listed in the top box of the article. Ancheta Wis 13:10, 12 July 2005 (UTC)

The equations can be tensor relations about materials (see for example, the Kerr and Pockels effects, which are quite practical-- they are used in lasers, for example). They need not be simply scalar relations with simple constants of proportionality, as in the form used in free space. That is what LauraScudder meant above. BTW, the thread should be kept sorted by time. We are getting 2005 mixed up with 2004 below. Ancheta Wis 05:37, 12 July 2005 (UTC)


 * As an encyclopedia, the scope of the Gravity article can be limited to canonical items. Thus the note "Whether or not Scalar Gravity has any physical basis, the mathematical elegance and importance of the theory can not be disputed." shows that this addition is not meant to help someone learn about Gravity, but rather satisfies a different agenda of the contributor. But it is worthwhile to encourage contributions, just not here, in an entry-level article. How can we channel all this good energy into a productive resource for the encyclopedia? Certainly if we can encourage people like Paul to talk to readers, we have a win-win situation. We have the same situation with other scholars. How do we create a venue where legitimate dialog with interested parties can occur? A super-talk page, as it were. We don't need Trolls or other nonproductive parties. We need a way for people to demonstrate good faith. That is why the discipline of the encyclopedia is good: only those who are willing to show that they can contribute or participate in the productive channels of the Encyclopedia survive. But how can we keep the good ones? On the physics page, there is the concept of the "proto-science". We might nominate promising articles for that list. This keeps this article clear. Granted, not too many people will look there, but there are many fine physics articles in the encyclopedia. We could even create a form "letter" directing these potential contributors to the "proto-physics" area.

Yang-Mills
$$ \partial_{\mu}F^{\mu\nu} + 2 \epsilon ( b_\mu \times F^{\mu\nu} ) = J^\nu $$

see Ricci tensor

foundations of mathematics: mathematical logic, axiomatic set theory, proof theory, model theory, and recursion theory; category theory, the Erlangen program, homotopy

CYD, if the referred-to answers to my questions are also to be found in this section, then I am going to interpolate some points (denoted with dashes --), based on Prof. G.'s 4-part intro and Prof. G.'s answers. I am going to be brash, but this is talk-space, not article-space. The italicized words are the topic for the interpolations:
 * >The fundamental rules of quantum mechanics are very broad. They state that the state space of a system is a Hilbert space and the observables are Hermitian operators acting on that space, but do not tell us which Hilbert space or which operators. These must be chosen appropriately in order to obtain a quantitative description of a quantum system. An important guide for making these choices is the correspondence principle, which states that the predictions of quantum mechanics reduce to those of classical physics when a system becomes large. This "large system" limit is known as the classical or correspondence limit. One can therefore start from an established classical model of a particular system, and attempt to guess the underlying quantum model that gives rise to the classical model in the correspondence limit.
 * --If the domain of discourse is operators acting on a state space, then it is possible to model the space and the operators with computer code. This is happening today, for example in computational chemistry, though they use the Schrödinger picture, not the Heisenberg picture.
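As a minimal sketch of that point (the two-level system and all names here are illustrative assumptions, not taken from any particular package), an observable can be modeled as a Hermitian matrix acting on a finite-dimensional state space:

```python
import numpy as np

# A two-level system: the state space is C^2, and an observable is a
# Hermitian matrix on it (here the Pauli-z operator, as an example).
sigma_z = np.array([[1.0, 0.0],
                    [0.0, -1.0]], dtype=complex)

# A normalized state vector in the Hilbert space.
psi = np.array([3.0, 4.0], dtype=complex) / 5.0

# Observables must be Hermitian: A equals its conjugate transpose.
assert np.allclose(sigma_z, sigma_z.conj().T)

# The expectation value <psi|A|psi> is real for Hermitian A.
expval = np.vdot(psi, sigma_z @ psi)
print(expval.real)  # (9 - 16)/25 = -0.28
```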


 * >When quantum mechanics was originally formulated, it was applied to models whose correspondence limit was non-relativistic classical mechanics. For instance, the well-known model of the quantum harmonic oscillator uses an explicitly non-relativistic expression for the kinetic energy of the oscillator, and is thus a quantum version of the classical harmonic oscillator.
 * --Thus it is necessary to hand-craft each model for each solved case, such as the harmonic oscillator, Hydrogen atom, etc. There are only a few analytically solved cases, which everyone acclaims, but the messy problems are kept in the shadows as soluble in principle.
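The hand-crafting of the harmonic-oscillator case can be illustrated numerically; a sketch (the truncation at N number-basis states and the units ħ = m = ω = 1 are arbitrary assumptions):

```python
import numpy as np

N = 40  # truncation of the infinite-dimensional number basis (an assumption)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator

x = (a + a.T) / np.sqrt(2)                   # position,  hbar = m = omega = 1
p = 1j * (a.T - a) / np.sqrt(2)              # momentum

H = (p @ p + x @ x) / 2                      # H = p^2/2 + x^2/2

evals = np.sort(np.linalg.eigvalsh(H))
print(evals[:5])  # [0.5, 1.5, 2.5, 3.5, 4.5], i.e. E_n = (n + 1/2)
```

Only the lowest eigenvalues are trustworthy; the truncation corrupts the states near the top of the basis.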


 * >Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein-Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field rather than a fixed set of particles. The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction.
 * --Now the Heisenberg picture i.e., the Dirac POV, etc comes into full view. But conservation of energy is still respected in this POV, whereas a GR POV tosses that out (David Hilbert 1915), unless we restrict ourselves to systems obeying a principle of stationary action, which leads to Feynman's formulation of QM, etc. It is a conundrum: QM explains matter and presumably mass, but GR implies non-conservation of mass/energy in the presence of mass.


 * >The full apparatus of quantum field theory is often unnecessary for describing electrodynamic systems. A simpler approach, one employed since the inception of quantum mechanics, is to treat charged particles as quantum mechanical objects being acted on by a classical electromagnetic field. For example, the elementary quantum model of the hydrogen atom describes the electric field of the hydrogen atom using a classical 1/r Coulomb potential. This "semi-classical" approach fails if quantum fluctuations in the electromagnetic field play an important role, such as in the emission of photons by charged particles.
 * --But clearly computational chemistry and the other computer code is modeling the molecular-scale world this way, semi-classically, based on the Schrödinger picture. The microcosm still requires more hand-crafted models and computer code.

--Feynman made the point that it was easier to do the measurements than to do the computations. But now computer programs/platforms are improving enough to help visualize down to molecular and atomic levels.

--Thus ad-hoc procedures, POV, and code, specialized to each soluble case are the current state of the art. It is as if solutions to messy problems have the right to remain messy. It seems odd that the computer codes do not exploit the invariants more, as they do not have to be computed, and can be asserted as invariants in the code. The 20th century was an algorithmic century. Does the 21st century also have to remain one?

These are not publishable points, but I believe that we can be on the lookout for the events as someone publishes them in the future. Ancheta Wis 10:49, 16 Apr 2005 (UTC)


 * $$\forall x \exists y \left [x\in I \land y\in J \to x+y\in I\times J \right]$$

$$\dot{\theta},\ \dot{p},\ \dot{q}$$

$$\frac{i}{\hbar}[H,A]$$ time in physics

There is a time parameter in the equations of quantum mechanics. Currently, general relativity and quantum mechanics are inconsistent with each other.

Lagrangian

 * $$L(x(t),\dot{x}(t),t) = {T - V}$$



$$ \frac{d}{dt} \frac{\partial L}{\partial \dot{x}} - {\partial L\over\partial x} = 0 $$ or

$$ \frac{d}{dt} \partial_\dot{x} L - \partial_x L = 0 $$

Hamiltonian

 * $$\{A,B\} = \sum_{i=1}^{N}\left[ \frac{\partial A }{\partial q_{i}} \frac{\partial B}{\partial p_{i}}-\frac{\partial A }{\partial p_{i}}\frac{\partial B}{\partial q_{i}}\right] $$

$$\{A,B\} = \partial_{q_j} A \,\partial_{p_j}B - \partial_{p_j} A \,\partial_{q_j}B $$ (using the Einstein summation convention on the repeated index j)

when $$\{q_i,p_j\} = \partial_{q_k} q_i \,\partial_{p_k} p_j - \partial_{p_k} q_i \,\partial_{q_k} p_j = \delta_{ij} $$ then $$q_i $$ and $$p_i $$  are canonically conjugate variables

Hamiltonian formulation for the equations of motion of p,q
 * $$\dot p = -\frac{\partial H}{\partial q} = \{p,H\} = -\{H,p\} $$
 * $$\dot q =\frac{\partial H}{\partial p} = \{q,H\} = -\{H,q\} $$

Relationship of the Heisenberg picture to the Hamiltonian, starting with the Schrödinger picture, whose solutions are
 * $$ \psi (t) = \exp(  -\frac{i}{\hbar} t H ) \psi (0) $$
 * $$ A(t) = \exp( \frac{i}{\hbar} t H)  A(0) \exp( -\frac{i}{\hbar} t H)  $$
 * $$\frac{d}{dt} A= \frac{i}{\hbar} [H,A]$$
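These relations can be checked numerically; a sketch (the random 4-level Hermitian H and observable A are illustrative assumptions, with ħ = 1):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(0)

# A random Hermitian Hamiltonian and observable on a 4-dim Hilbert space.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
M2 = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A0 = (M2 + M2.conj().T) / 2

def A_t(t):
    """Heisenberg-picture evolution A(t) = exp(iHt/hbar) A(0) exp(-iHt/hbar)."""
    U = expm(-1j * t * H / hbar)
    return U.conj().T @ A0 @ U

# dA/dt should equal (i/hbar)[H, A(t)]; compare the two at t = 0.7.
t, dt = 0.7, 1e-5
lhs = (A_t(t + dt) - A_t(t - dt)) / (2 * dt)      # central difference
rhs = (1j / hbar) * (H @ A_t(t) - A_t(t) @ H)     # commutator form
assert np.allclose(lhs, rhs, atol=1e-6)
```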

Heisenberg picture

 * $$ {d \over dt} A(t) = {i \over \hbar } [ H(t), A ]  + \left(\frac{\partial A}{\partial t}\right)_{classical}. $$

Schrödinger picture
The Schrödinger equation


 * $$ i \hbar {\partial\over\partial t} \left| \psi (t) \right\rangle =  {H(t) \left| \psi (t) \right\rangle} $$

can be transformed according to standard techniques of mathematical physics; it has solutions whose interpretation can be controversial.

canonical commutation relation
In physics, the canonical commutation relation is the relation


 * $$[x,p] = i\hbar$$

among the position $$x$$ and momentum $$p$$ of a point particle in one dimension, where $$[x,p]=xp-px$$ is the so-called commutator of $$x$$ and $$p$$, $$i$$ is the imaginary unit and $$\hbar$$ is the reduced Planck constant. This relation is attributed to Heisenberg, and it implies his uncertainty principle.
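The relation can be checked in a truncated matrix representation; a sketch using the standard ladder-operator construction (the 20-state truncation and ħ = 1 are arbitrary choices). In any finite truncation the relation necessarily fails in the last basis state, since the trace of a commutator is zero while the trace of iħI is not, but it holds exactly on all lower states:

```python
import numpy as np

hbar = 1.0
N = 20  # truncation (an assumption); [x,p] = i*hbar cannot hold exactly
        # on finite matrices, because tr([x,p]) = 0 but tr(i*hbar*I) != 0
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator

x = np.sqrt(hbar / 2) * (a + a.T)
p = 1j * np.sqrt(hbar / 2) * (a.T - a)

C = x @ p - p @ x
# Equal to i*hbar on all but the final, truncated state:
assert np.allclose(C[:N-1, :N-1], 1j * hbar * np.eye(N - 1))
print(C[-1, -1])  # the corner entry absorbs the truncation error
```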

Relation to classical mechanics
By contrast, in classical physics all observables commute and the commutator would be zero; however, an analogous relation exists, which is obtained by replacing the commutator with the Poisson bracket and the constant $$i\hbar$$ with $$1$$:


 * $$\{x,p\} = 1$$

This observation led Dirac to postulate that, in general, the quantum counterparts $$\hat f,\hat g$$ of classical observables $$f,g$$ should satisfy


 * $$[\hat f,\hat g]= i\hbar\widehat{\{f,g\}}.\,$$

Representations
According to the standard mathematical formulation of quantum mechanics, quantum observables such as $$x$$ and $$p$$ should be represented as self-adjoint operators on some Hilbert space.

Action
Action (physics)
 * $$ S = \int_{t_1}^{t_2}\; L(x,\dot{x})\,dt. $$

$$ \delta S = \int_{t_1}^{t_2}\; \left(      \varepsilon{\partial L\over \partial x}     - \varepsilon{d\over dt }{\partial L\over\partial \dot x}       \right)\,dt. $$



$$ {\partial L\over\partial x_{a}} - {d\over dt }{\partial L\over\partial \dot{x}_{a}} = 0 \;\;\;\;\; \mbox{Euler-Lagrange equations} $$
 * if $$ {\partial L/\partial x}=0$$ then $$ {\partial L/\partial\dot x}$$ is constant

so S is at an extremum when the variation vanishes:
 * $$\frac{\delta S}{\delta x_{i}(t)}=0$$.
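A discretized sketch of this extremal property for a free particle (the endpoints, step count, and perturbation scale are arbitrary choices): the straight-line path minimizes the discretized action, so any perturbation that vanishes at the endpoints raises S.

```python
import numpy as np

def action(path, dt, m=1.0):
    """Discretized free-particle action S = sum of (m/2) v^2 dt."""
    v = np.diff(path) / dt
    return 0.5 * m * np.sum(v**2) * dt

n, dt = 50, 0.02
x_classical = np.linspace(0.0, 1.0, n)   # straight line: the classical path
S0 = action(x_classical, dt)

rng = np.random.default_rng(1)
for _ in range(5):
    eps = rng.normal(scale=1e-3, size=n)
    eps[0] = eps[-1] = 0.0               # hold the endpoints fixed
    assert action(x_classical + eps, dt) > S0   # delta S > 0 off the extremum
```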

Noether's theorem

 * $$S[x]=\int dt \mathcal{L}(x(t),\dot{x}(t))=\int dt \left(\frac{m}{2}g_{ij}\dot{x}^i(t)\dot{x}^j(t)-V(x(t))\right)$$


 * $$Q[\mathcal{L}]=m g_{ij}\dot{x}^i\ddot{x}^j-\frac{\partial}{\partial x^i}V(x)\dot{x}^i.$$

This has the form of


 * $$\frac{d}{dt}\left[\frac{m}{2} g_{ij}\dot{x}^i\dot{x}^j-V(x)\right]$$

so we can set


 * $$f=\frac{m}{2} g_{ij}\dot{x}^i\dot{x}^j-V(x).$$

Then,


 * $$j=\left(\frac{\partial}{\partial \dot{x}^i}\mathcal{L}\right)Q[x]-f=m g_{ij}\dot{x}^j\dot{x}^i-\left[\frac{m}{2} g_{ij}\dot{x}^i\dot{x}^j-V(x)\right]=\frac{m}{2}g_{ij}\dot{x}^i\dot{x}^j+V(x).$$

You might recognize the right-hand side as the energy, and Noether's theorem states that $$\dot{j}=0$$ (i.e., the conservation of energy is a consequence of invariance under time translations).

More generally, if the Lagrangian does not depend explicitly on time, the quantity (called the energy)


 * $$\sum_i \left (\frac{\partial}{\partial \dot{x}^i}\mathcal{L}\right )\dot{x^i}-\mathcal{L}$$

is conserved.
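This conservation law can be verified symbolically; a sketch in SymPy for a single coordinate with L = m ẋ²/2 − V(x) (the flat-metric special case of the Lagrangian above):

```python
import sympy as sp

t = sp.symbols('t')
m = sp.symbols('m', positive=True)
x = sp.Function('x')(t)
V = sp.Function('V')

# Lagrangian with no explicit time dependence
L = sp.Rational(1, 2) * m * sp.diff(x, t)**2 - V(x)

# Energy j = (dL/dxdot) * xdot - L
p = sp.diff(L, sp.diff(x, t))
j = p * sp.diff(x, t) - L

# On shell, the Euler-Lagrange equation fixes the acceleration:
eom = sp.solve(sp.Eq(sp.diff(p, t), sp.diff(L, x)), sp.diff(x, t, 2))[0]

dj_dt = sp.diff(j, t).subs(sp.diff(x, t, 2), eom)
assert sp.simplify(dj_dt) == 0   # energy conserved when L has no explicit t
```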

Lie algebra

 * In local components, if $$X=X^a\partial_a$$ and $$Y=Y^b\partial_b$$ (using the Einstein summation notation), then
 * $$[X,Y]f=(XY-YX)f=X(Y(f))-Y(X(f))=X^a(\partial_aY^b)(\partial_bf)+X^aY^b\partial_a\partial_bf-Y^bX^a\partial_b\partial_af-Y^b(\partial_bX^a)(\partial_af)$$.
 * The middle two terms there will cancel, leaving only first order derivative terms of f.


 * So in short, the notation XYf means Y acts on the function, taking the directional derivative, and returning a smooth function, and then we let X act on the resulting smooth function. This is more clearly brought out when you use the notation X(Y(f)), however people often leave off the parentheses in this context, because they become cumbersome.


 * One could also take the f out of the equation, switch the dummy index in one of the terms, and get
 * $$[X,Y]^b=X^a\partial_aY^b-Y^a\partial_aX^b$$ ;
 * this notation would be common in, for example, a GR textbook.
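A symbolic check of this component formula; a sketch in SymPy (the two vector fields are arbitrary examples chosen for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]
f = sp.Function('f')(x, y)

X = [y, x * y]          # components X^a (arbitrary example field)
Y = [x**2, sp.sin(x)]   # components Y^a (arbitrary example field)

def act(vec, g):
    """A vector field acting as a derivation: vec = vec^a d_a."""
    return sum(c * sp.diff(g, q) for c, q in zip(vec, coords))

# Left side: [X,Y]f = X(Y(f)) - Y(X(f)); the second derivatives cancel.
lhs = act(X, act(Y, f)) - act(Y, act(X, f))

# Right side: [X,Y]^b = X^a d_a Y^b - Y^a d_a X^b, applied to f.
bracket = [act(X, Y[b]) - act(Y, X[b]) for b in range(2)]
rhs = sum(bracket[b] * sp.diff(f, coords[b]) for b in range(2))

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```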

Poisson bracket

 * $$\frac{d}{dt} f=\frac{\partial }{\partial t} f + \{\,f,H\,\}.$$

If we have a probability distribution $$\rho$$, then (since the phase space velocity ($$ {\dot p_i}, {\dot q _i} $$) has zero divergence, and probability is conserved) its convective derivative can be shown to be zero and so


 * $$\frac{\partial}{\partial t} \rho = - \{\,\rho ,H\,\}.$$

This is called Liouville's theorem.

Time in physics
In physics, the treatment of time is a central issue. It has been treated as a question of geometry. (See: philosophy of physics.)

Regularities in Nature
The regular recurrences of the seasons and the motions of the sun, moon and stars were noted and tabulated for millennia, before the laws of physics were formulated. For millennia the sun was the arbiter of the flow of time, but time was known only to the hour.


 * I farm the land from which I take my food.
 * I watch the sun rise and sun set.
 * Kings can ask no more.

-- as quoted by Joseph Needham Science and Civilisation in China

Astronomical observatories
In particular, the astronomical observatories maintained for religious purposes became accurate enough to ascertain the regular motions of the stars, and even some of the planets.

Timekeeping technology by the advent of the scientific revolution
At first, timekeeping was done by hand by priests, and then for commerce, with watchmen noting time as part of their duties. The tabulation of the equinoxes, the sandglass, and the water clock became more and more accurate, and finally reliable.

For ships at sea, boys were used to turn the sandglasses, and to call the hours.

The use of the pendulum, ratchets and gears allowed the towns of Europe to create mechanisms to display the time on their respective town clocks; by the time of the scientific revolution, the clocks became miniaturized enough for families to share a personal clock, or perhaps a pocket watch. At first, only kings could afford them.

Galileo Galilei discovered that a pendulum's harmonic motion has a constant period, which he learned by timing, with his pulse, the swaying motion of a lamp at Mass.

Galileo's water clock
In his Two New Sciences, Galileo used a water clock to measure the time taken for a bronze ball to roll a known distance down an inclined plane; this clock was
 * "a large vessel of water placed in an elevated position; to the bottom of this vessel was soldered a pipe of small diameter giving a thin jet of water, which we collected in a small glass during the time of each descent, whether for the whole length of the channel or for a part of its length; the water thus collected was weighed, after each descent, on a very accurate balance; the differences and ratios of these weights gave us the differences and ratios of the times, and this with such accuracy that although the operation was repeated many, many times, there was no appreciable discrepancy in the results."

The flow of time

 * Galileo's experimental setup to measure the literal flow of time (see above), in order to describe the motion of a ball, preceded Isaac Newton's statement in his Principia:
 * I do not define time, space, place and motion, as being well known to all.

Newtonian physics and linear time
See classical physics

In or around 1665, when Isaac Newton derived the motion of objects falling under gravity, the first clear formulation of a treatment of time in mathematical physics began: linear time, conceived as a universal clock.


 * Absolute, true, and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent, and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time; such as an hour, a day, a month, a year.

The water clock mechanism described by Galileo was engineered to provide laminar flow of the water during the experiments, thus providing a constant flow of water for the durations of the experiments, and embodying what Newton called duration.

Lagrange (1736-1813) would aid in the formulation of a simpler version of Newton's equations. He started with an energy term, L, named the Lagrangian in his honor:

$$ \frac{d}{dt} \frac{\partial L}{\partial \dot{\theta}} - \frac{\partial L}{\partial \theta} = 0. $$ The dotted quantities, $${\dot{\theta}}$$, denote functions which correspond to Newtonian fluxions, whereas the undotted quantities, $${\theta}$$, denote functions which correspond to Newtonian fluents. Linear time is the parameter for the relationship between the $${\dot{\theta}}$$ and the $${\theta}$$ of the physical system under consideration. Some decades later, it was found that, under a Legendre transformation, Lagrange's equations can be transformed to Hamilton's equations; the Hamiltonian formulation for the equations of motion of some conjugate variables p,q (for example, momentum p and position q) is:
 * $$\dot p = -\frac{\partial H}{\partial q} = \{p,H\} = -\{H,p\} $$
 * $$\dot q =\frac{\partial H}{\partial p} = \{q,H\} = -\{H,q\} $$

in the Poisson bracket notation. Thus, by transformation to suitable functions, the solutions to sets of these first-order differential equations can be more easily implemented or visualized than the second-order equations of Lagrange or Newton, and they clearly show the dependence of the time variation of conjugate variables p,q on an energy expression.
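As a sketch of that implementability (the harmonic oscillator H = (p² + q²)/2, the semi-implicit "symplectic Euler" step, and the step size are all arbitrary choices), the two first-order equations integrate directly and keep the energy bounded:

```python
import numpy as np

# H = (p^2 + q^2)/2, so qdot = dH/dp = p and pdot = -dH/dq = -q.
dt, steps = 1e-3, 10000
q, p = 1.0, 0.0
E0 = 0.5 * (p**2 + q**2)

drift = 0.0
for _ in range(steps):
    p -= dt * q          # pdot = -dH/dq  (semi-implicit / symplectic Euler)
    q += dt * p          # qdot =  dH/dp, using the updated p
    drift = max(drift, abs(0.5 * (p**2 + q**2) - E0))

print(drift)  # the energy error stays small and bounded
assert drift < 1e-2
```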

This relationship, it was to be found, also has corresponding forms in quantum mechanics as well as in the classical mechanics shown above.

Thermodynamics and the paradox of irreversibility
1824 - Sadi Carnot scientifically analyzed steam engines with his Carnot cycle, an abstract engine. Along with the conservation of energy (the first law of thermodynamics), which was enunciated in the nineteenth century, the second law of thermodynamics noted a measure of disorder, or entropy. See the arrow of time for the relationship between irreversible processes and the laws of thermodynamics. (Connes and Rovelli 1994 propose a thermal connection to a flow of time.) In particular, Stephen Hawking identifies three arrows of time:
 * Psychological arrow of time - our perception of an inexorable flow.
 * Thermodynamic arrow of time - distinguished by the growth of entropy.
 * Cosmological arrow of time - distinguished by the expansion of the universe.

Electromagnetism and the speed of light
Between 1831 and 1879, James Clerk Maxwell developed a combined theory of electricity and magnetism. These vector calculus equations, which use the del operator ($$\nabla$$), are known as Maxwell's equations for electromagnetism. In free space, the equations take the form:


 * $$\nabla \times \mathbf{E} = - \frac{1}{c} \frac{\partial \mathbf{B}}{\partial t}$$


 * $$\nabla \times \mathbf{B} = \frac{1}{c} \frac{\partial \mathbf{E}}{\partial t}$$


 * $$\nabla \cdot \mathbf{E} = 0$$


 * $$\nabla \cdot \mathbf{B} = 0$$

where c is a constant that represents the speed of light in vacuum, E is the electric field, and B is the magnetic field.

The solution to these equations is a wave, which propagates at speed c. The wave is an oscillating electromagnetic field, often embodied as a photon, which can be emitted by the acceleration of an electric charge. Depending on the frequency of the oscillation, the photon appears as light of some color, a radio wave, or perhaps an x-ray or cosmic ray. Thus in our epoch, during which electromagnetic waves can propagate without being disturbed by conductors or charges, we can see the stars, at great distances from us, in the night sky. (Before this epoch, until about 300,000 years after the big bang, the universe was opaque to light, and starlight would not have been visible.)
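That wave solution can be checked symbolically for a plane wave traveling along x; a sketch in SymPy, using the Gaussian-unit form of the equations above with the assumed ansatz E = (0, f(x − ct), 0) and B = (0, 0, f(x − ct)):

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
f = sp.Function('f')        # arbitrary pulse profile
u = x - c * t               # a wave moving at speed c in the +x direction

Ey = f(u)                   # E = (0, Ey, 0)
Bz = f(u)                   # B = (0, 0, Bz)

# The fields depend only on x and t, so the curls reduce to single terms:
# (curl E)_z = dEy/dx  must equal  -(1/c) dBz/dt
assert sp.simplify(sp.diff(Ey, x) + sp.diff(Bz, t) / c) == 0
# (curl B)_y = -dBz/dx  must equal  (1/c) dEy/dt
assert sp.simplify(-sp.diff(Bz, x) - sp.diff(Ey, t) / c) == 0
# div E = div B = 0 automatically, since Ey and Bz do not depend on y or z.
```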

In free space, Maxwell's equations have a symmetry which was exploited by Einstein in the twentieth century.


 * $$ \mathbf{B} = \nabla \times \mathbf{A}$$


 * $$\mathbf{E} = -(\nabla \phi + \frac{1}{c} \frac{\partial \mathbf{A}}{\partial t}) $$

The $$ (A, \phi )$$ are defined up to a function M, a gauge, which can be chosen to transform Maxwell's equations.


 * $$ \mathbf{A} \mapsto \mathbf{A} + \nabla {M} $$


 * $$ \phi \mapsto \phi - \frac{1}{c} \frac{\partial M }{\partial t}$$

M can be the Lorenz gauge, or the Coulomb gauge, or the Weyl gauge, etc.

In electromagnetism, the Lorenz gauge condition is the gauge fixing in which


 * $$\partial_{a}\tilde{A}^a = \tilde{A}^a{}_{,a}=0$$

where $$\tilde{A}^a$$ is the four-potential, the comma denotes a partial differentiation and the repeated index indicates that the Einstein summation convention is being used.

Lorenz gauge is Lorentz invariant
This gauge has the advantage of being Lorentz invariant. It still leaves some residual gauge degrees of freedom, but they propagate freely at the speed of light, so they are insignificant.

The Lorenz gauge is often erroneously spelled 'Lorentz gauge', many people believing that Hendrik Lorentz, the Dutch physicist, was the first to state the condition. In fact, it was the Danish physicist Ludvig Lorenz who first published this condition.

See also Coulomb gauge, Weyl gauge

Einsteinian physics and time
See special relativity 1905, general relativity 1915.

Einstein's 1905 special relativity challenged the notion of an absolute definition for time, and could only formulate a definition of synchronization for clocks that mark a linear flow of time:
 * If at the point A of space there is a clock ... If at the point B of space there is another clock in all respects resembling the one at A ... it is not possible without further assumption to compare, in respect of time, an event at A with an event at B. ... We assume that ...
 * 1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
 * 2. If the clock at A synchronizes with the clock at B, and also with the clock at C, the clocks at B and C also synchronize with each other.

In the years around 1900, Hendrik Lorentz formulated the Lorentz transformation, upon which Einstein's 1905 theory of special relativity is based. Under the Lorentz transformation, the speed of light is the same in all inertial frames.

Einstein's theory of relativity uses pseudo-Riemannian geometry; in special relativity, the metric tensor describes Minkowski space:


 * $$\left[(dx^1)^2+(dx^2)^2+(dx^3)^2-c^2(dt)^2\right],$$

to develop a geometric solution to Lorentz's transformation that preserves Maxwell's equations.
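That geometric statement can be checked directly; a sketch in SymPy verifying that a Lorentz boost along x preserves the Minkowski interval x² − c²t²:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
v, c = sp.symbols('v c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# Lorentz boost with velocity v along the x axis:
xp = gamma * (x - v * t)
tp = gamma * (t - v * x / c**2)

interval = x**2 - c**2 * t**2
interval_p = xp**2 - c**2 * tp**2
assert sp.simplify(interval_p - interval) == 0  # the interval is invariant
```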

Einstein's theory was motivated by the assumption that no point in the universe can be a 'center', and that correspondingly, physics must act the same in all inertial frames. His simple and elegant theory shows that time is relative to the inertial frame, i.e. that there is no 'universal clock'. Each inertial frame has its own local geometry.


 * $$E^2 = m^2c^4+p^2c^2 \ $$ (the energy-momentum relation)

E = energy, m = mass, p = momentum, c = the speed of light

Quantum physics and time
See quantum mechanics

There is a time parameter in the equations of quantum mechanics. Currently, general relativity and quantum mechanics are inconsistent with each other. The Schrödinger equation


 * $$ H(t) \left| \psi (t) \right\rangle = i \hbar {\partial\over\partial t} \left| \psi (t) \right\rangle$$

can be transformed according to standard techniques of mathematical physics; it has solutions whose interpretation can be controversial.

In the general case where


 * $$ A[f] = \int_{x_1}^{x_2} L(x,f,f') \, dx \,$$

with
 * $$ f'(x) = \frac{df}{dx}, \,$$

and f is required to have two continuous derivatives, an extremal $$f_0$$ will satisfy the Euler-Lagrange equation


 * $$ -\frac{d}{dx} \frac{\partial L}{\partial f'} + \frac{\partial L}{\partial f}=0.\,$$

The Euler-Lagrange equation is a necessary condition for an extremal, but its satisfaction does not guarantee that the solution is in fact an extremal. Sufficient conditions for an extremum are discussed in the references.

Dynamical systems
See dynamical systems and chaos theory, dissipative structures

One could say that time is a parameterization of a dynamical system that allows the geometry of the system to be manifested and operated on. It has been asserted that time is an implicit consequence of chaos (i.e. nonlinearity/irreversibility): the characteristic time, or rate of information entropy production, of a system. Mandelbrot introduces intrinsic time in his book Multifractals and 1/f noise.

Ilya Prigogine NL: "I felt that some essential message was embedded, still to be made explicit, in Bergson's remark:
 * "The more deeply we study the nature of time, the better we understand that duration means invention, creation of forms, continuous elaboration of the absolutely new."


 * "I would first mention Théophile De Donder (1873-1957).2 What an amiable character he was! Born the son of an elementary school teacher, he began his career in the same way, and was (in 1896) conferred the degree of Doctor of Physical Science, without having ever followed any teaching at the university.


 * "It was only in 1918 - he was then 45 years old - that De Donder could devote his time to superior teaching, after he was for some years appointed as a secondary school teacher. He was then promoted to professor at the Department of Applied Science, and began without delay the writing of a course on theoretical thermodynamics for engineers.


 * "Allow me to give you some more details, as it is with this very circumstance that we have to associate the birth of the Brussels thermodynamics school.


 * "In order to understand fully the originality of De Donder's approach, I have to recall that since the fundamental work by Clausius, the second principle of thermodynamics has been formulated as an inequality: "uncompensated heat" is positive - or, in more recent terms, entropy production is positive. This inequality refers, of course, to phenomena that are irreversible, as are any natural processes. In those times, these latter were poorly understood. They appeared to engineers and physico-chemists as "parasitic" phenomena, which could only hinder something: here the productivity of a process, there the regular growth of a crystal, without presenting any intrinsic interest. So, the usual approach was to limit the study of thermodynamics to the understanding of equilibrium laws, for which entropy production is zero.


 * "This could only make thermodynamics a "thermostatics". In this context, the great merit of De Donder was that he extracted the entropy production out of this "sfumato" when he related it in a precise way to the pace of a chemical reaction, through the use of a new function that he was to call "affinity".3


 * "It is difficult today to give an account of the hostility that such an approach was to meet. For example, I remember that towards the end of 1946, at the Brussels IUPAP meeting,4 after a presentation of the thermodynamics of irreversible processes, a specialist of great repute said to me, in substance: "I am surprised that you give more attention to irreversible phenomena, which are essentially transitory, than to the final result of their evolution, equilibrium."


 * "Fortunately, some eminent scientists derogated this negative attitude. I received much support from people such as Edmond Bauer, the successor to Jean Perrin at Paris, and Hendrik Kramers in Leyden.


 * "De Donder, of course, had precursors, especially in the French thermodynamics school of Pierre Duhem. But in the study of chemical thermodynamics, De Donder went further, and he gave a new formulation of the second principle, based on such concepts as affinity and degree of evolution of a reaction, considered as a chemical variable.


 * "Given my interest in the concept of time, it was only natural that my attention was focused on the second principle, as I felt from the start that it would introduce a new, unexpected element into the description of physical world evolution. No doubt it was the same impression illustrious physicists such as Boltzmann5 and Planck6 would have felt before me. A huge part of my scientific career would then be devoted to the elucidation of macroscopic as well as microscopic aspects of the second principle, in order to extend its validity to new situations, and to the other fundamental approaches of theoretical physics, such as classical and quantum dynamics.


 * "Before we consider these points in greater detail, I would like to stress the influence on my scientific development that was exerted by the second of my teachers, Jean Timmermans (1882-1971). He was more an experimentalist, specially interested in the applications of classical thermodynamics to liquid solutions, and in general to complex systems, in accordance with the approach of the great Dutch thermodynamics school of van der Waals and Roozeboom.7


 * "In this way, I was confronted with the precise application of thermodynamical methods, and I could understand their usefulness. In the following years, I devoted much time to the theoretical approach of such problems, which called for the use of thermodynamical methods; I mean the solutions theory, the theory of corresponding states and of isotopic effects in the condensed phase. A collective research with V. Mathot, A. Bellemans and N. Trappeniers has led to the prediction of new effects such as the isotopic demixtion of helium He3+ He4, which matched in a perfect way the results of later research. This part of my work is summed up in a book written in collaboration with V. Mathot and A. Bellemans, The Molecular Theory of Solutions. 8


 * "My work in this field of physical chemistry was always for me a specific pleasure, because the direct link with experimentation allows one to test the intuition of the theoretician. The successes we met provided the confidence which later was much needed in my confrontation with more abstract, complex problems.


 * "Finally, among all those perspectives opened by thermodynamcis, the one which was to keep my interest was the study of irreversible phenomena, which made so manifest the "arrow of time". From the very start, I always attributed to these processes a constructive role, in opposition to the standard approach, which only saw in these phenomena degradation and loss of useful work. Was it the influence of Bergson's "L'évolution créatrice" or the presence in Brussels of a performing school of theoretical biology?9 The fact is that it appeared to me that living things provided us with striking examples of systems which were highly organized and where irreversible phenomena played an essential role.


 * "Such intellectual connections, although rather vague at the beginning, contributed to the elaboration, in 1945, of the theorem of minimum entropy production, applicable to non-equilibrium stationary states.10 This theorem gives a clear explanation of the analogy which related the stability of equilibrium thermodynamical states and the stability of biological systems, such as that expressed in the concept of "homeostasy" proposed by Claude Bernard. This is why, in collaboration with J.M. Wiame,11 I applied this theorem to the discussion of some important problems in theoretical biology, namely to the energetics of embryological evolution. As we better know today, in this domain the theorem can at best give an explanation of some "late" phenomena, but it is remarkable that it continues to interest numerous experimentalists.12


 * "From the very beginning, I knew that the minimum entropy production was valid only for the linear branch of irreversible phenomena, the one to which the famous reciprocity relations of Onsager are applicable.13 And, thus, the question was: What about the stationary states far from equilibrium, for which Onsager relations are not valid, but which are still in the scope of macroscopic description? Linear relations are very good approximations for the study of transport phenomena (thermical conductivity, thermodiffusion, etc.), but are generally not valid for the conditions of chemical kinetics. Indeed, chemical equilibrium is ensured through the compensation of two antagonistic processes, while in chemical kinetics - far from equilibrium, out of the linear branch - one is usually confronted with the opposite situation, where one of the processes is negligible.


 * "Notwithstanding this local character, the linear thermodynamics of irreversible processes had already led to numerous applications, as shown by people such as J. Meixner,14 S.R. de Groot and P. Mazur,15 and, in the area of biology, A. Katchalsky.16 It was for me a supplementary incentive when I had to meet more general situations. Those problems had confronted us for more than twenty years, between 1947 and 1967, until we finally reached the notion of "dissipative structure". 17


 * "Not that the question was intrinsically difficult to handle; just that we did not know how to orientate ourselves. It is perhaps a characteristic of my scientific work that problems mature in a slow way, and then present a sudden evolution, in such a way that an exchange of ideas with my colleagues and collaborators becomes necessary. During this phase of my work, the original and enthusiastic mind of my colleague Paul Glansdorff played a major role.


 * "Our collaboration was to give birth to a general evolution criterion which is of use far from equilibrium in the non-linear branch, out of the validity domain of the minimum entropy production theorem. Stability criteria that resulted were to lead to the discovery of critical states, with branch shifting and possible appearance of new structures. This quite unexpected manifestation of "disorder-order" processes, far from equilibrium, but conforming to the second law of thermodynamics, was to change in depth its traditional interpretation. In addition to classical equilibrium structures, we now face dissipative coherent structures, for sufficient far-from-equilibrium conditions. A complete presentation of this subject can be found in my 1971 book co-authored with Glansdorff.18


 * "In a first, tentative step, we thought mostly of hydrodynamical applications, using our results as tools for numerical computation. Here the help of R. Schechter from the University of Texas at Austin was highly valuable.19 Those questions remain wide open, but our centre of interest has shifted towards chemical dissipative systems, which are more easy to study than convective processes.


 * "All the same, once we formulated the concept of dissipative structure, a new path was open to research and, from this time, our work showed striking acceleration. This was due to the presence of a happy meeting of circumstances; mostly to the presence in our team of a new generation of clever young scientists. I cannot mention here all those people, but I wish to stress the important role played by two of them, R. Lefever and G. Nicolis. It was with them that we were in a position to build up a new kinetical model, which would prove at the same time to be quite simple and very instructive - the "Brusselator", as J. Tyson would call it later - and which would manifest the amazing variety of structures generated through diffusion-reaction processes.20


 * "This is the place to pay tribute to the pioneering work of the late A. Turing,21 who, since 1952, had made interesting comments about structure formation as related to chemical instabilities in the field of biological morphogenesis. I had met Turing in Manchester about three years before, at a time when M.G. Evans, who was to die too soon, had built a group of young scientists, some of whom would achieve fame. It was only quite a while later that I recalled the comments by Turing on those questions of stability, as, perhaps too concerned about linear thermodynamics, I was then not receptive enough.