Gunduz Caginalp

Gunduz Caginalp (died December 7, 2021) was a Turkish-born American mathematician whose research contributed over 100 papers to mathematics, physics, materials science, and economics/finance journals, including two with Michael Fisher and nine with Nobel Laureate Vernon Smith. He began his studies at Cornell University in 1970 and received an AB in 1973 "Cum Laude with Honors in All Subjects" and Phi Beta Kappa. He received a master's degree in 1976 and a PhD in 1978, both also from Cornell. He held positions at The Rockefeller University, Carnegie-Mellon University, and the University of Pittsburgh (from 1984), where he was a professor of mathematics until his death. Born in Turkey, he spent his first seven years and ages 13–16 there, and the intervening years in New York City.

Caginalp and his wife Eva were married in 1992 and had three sons, Carey, Reggie and Ryan.

He served as the Editor of the Journal of Behavioral Finance (1999–2003), and was an Associate Editor for numerous journals. He received awards from the National Science Foundation as well as private foundations.

Thesis and related research
Caginalp's PhD in Applied Mathematics at Cornell University (with thesis advisor Professor Michael Fisher) focused on surface free energy. Previous results by David Ruelle, Fisher, and Elliott Lieb in the 1960s had established that the free energy of a large system can be written as the volume times a term $$f_{\infty}$$ (the free energy per unit volume) that is independent of the size of the system, plus smaller terms. A remaining problem was to prove that there is a similar term associated with the surface. This was more difficult, since the $$f_{\infty}$$ proofs relied on discarding terms that were proportional to the surface.

A key result of Caginalp's thesis [1,2,3] is the proof that the free energy, F, of a lattice system occupying a region $$\Omega$$ with volume $$|\Omega|$$ and surface area $$|\partial\Omega|$$ can be written as

$$F(\Omega)=|\Omega|f_\infty+|\partial\Omega|f_x+...$$

where $$f_x$$ is the surface free energy (independent of $$|\Omega|$$ and $$|\partial\Omega|$$).

Shortly after his PhD, Caginalp joined the Mathematical Physics group of James Glimm (2002 National Medal of Science recipient) at The Rockefeller University. In addition to working on mathematical statistical mechanics, he also proved existence theorems on nonlinear hyperbolic differential equations describing fluid flow. These papers were published in the Annals of Physics and the Journal of Differential Equations.

Developing phase field models
In 1980, Caginalp became the first recipient of the Zeev Nehari position established at Carnegie-Mellon University's Mathematical Sciences Department. At that time he began working on free boundary problems, i.e., problems in which there is an interface between two phases that must be determined as part of the solution. His original paper on this topic became the second most cited paper in the leading journal Archive for Rational Mechanics and Analysis over the subsequent quarter century.

He published over fifty papers on the phase field equations in mathematics, physics, and materials journals. The focus of research in the mathematics and physics communities changed considerably during this period, and the phase field perspective is now widely used to derive macroscopic equations from a microscopic setting, as well as to perform computations on dendritic growth and other phenomena.

In the mathematics community during the previous century, the interface between two phases was generally studied via the Stefan model, in which temperature played a dual role: the sign of the temperature determined the phase, so the interface is defined as the set of points at which the temperature is zero. Physically, however, the temperature at the interface was known to be proportional to the curvature, preventing the temperature from fulfilling its dual role in the Stefan model. This suggested that an additional variable would be needed for a complete description of the interface. In the physics literature, the idea of an "order parameter" and mean field theory had been used by Landau in the 1940s to describe the region near the critical point (i.e., the region in which the liquid and solid phases become indistinguishable). However, the calculation of exact exponents in statistical mechanics showed that mean field theory was not reliable.

There was speculation in the physics community that such a theory could be used to describe an ordinary phase transition. However, the fact that the order parameter could not produce the correct exponents in critical phenomena for which it was invented led to skepticism that it could produce results for normal phase transitions.

The justification for an order parameter or mean field approach had been that the correlation length between atoms approaches infinity near the critical point. For an ordinary phase transition, the correlation length is typically just a few atomic lengths. Furthermore, in critical phenomena one is often trying to calculate the critical exponents, which should be independent of the details of the system (often called "universality"). In a typical interface problem, one is trying to calculate the interface position essentially exactly, so that one cannot "hide behind universality".

In 1980 there seemed to be ample reason to be skeptical of the idea that an order parameter could be used to describe a moving interface between two phases of a material. Beyond the physical justifications, there remained issues related to the dynamics of an interface and the mathematics of the equations. For example, if one uses an order parameter, $$\phi$$, together with the temperature variable, T, in a system of parabolic equations, will an initial transition layer in $$\phi$$ describing the interface remain as such? One expects that $$\phi$$ will vary from -1 to +1 as one moves from the solid to the liquid, and that the transition will be made on a spatial scale of $$\varepsilon$$, the physical thickness of the interface. The interface in the phase field system is then described by the level set of points on which $$\phi$$ vanishes.

The simplest model [4] can be written as a pair $$(\phi,T)$$ that satisfies the equations

$$ \begin{array}{lcl} C_{P}T_{t}+\frac{l}{2}\phi_{t} = K\Delta T \\ \alpha\varepsilon^2 \phi_t = \varepsilon^2 \Delta\phi + \frac{1}{2}(\phi-\phi^3)+\frac{\varepsilon[s]_E}{3\sigma}(T-T_E) \end{array} $$

where $$C_{P}, l, \alpha, \sigma, [s]_{E}$$ are physically measurable constants, and $$\varepsilon$$ is the interface thickness.

With the interface described as the level set of points where the phase variable vanishes, the model allows the interface to be identified without tracking, and is valid even if there are self-intersections.
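As a rough illustration of how such a system behaves, the pair of equations above can be integrated with a simple explicit finite-difference scheme. The sketch below is one dimensional, and all parameter values (including the coupling coefficient standing in for $$[s]_E/(3\sigma)$$) are illustrative placeholders rather than physical data; it shows the transition layer in $$\phi$$ persisting, with the interface recovered as the zero level set.

```python
import numpy as np

# 1-D explicit finite-difference sketch of the phase field system
#   C_P T_t + (l/2) phi_t = K Laplacian(T)
#   alpha eps^2 phi_t = eps^2 Laplacian(phi) + (phi - phi^3)/2
#                       + (eps [s]_E / (3 sigma)) (T - T_E)
# All parameter values below are illustrative placeholders, not physical data.

def laplacian(u, dx):
    """Second difference with zero-flux (Neumann) ends."""
    out = np.empty_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    out[0] = 2.0 * (u[1] - u[0]) / dx**2
    out[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return out

def step(T, phi, dx, dt, Cp=1.0, l=1.0, K=1.0, alpha=1.0,
         eps=0.1, coupling=0.5, T_E=0.0):
    # 'coupling' stands in for the factor [s]_E / (3 sigma).
    phi_t = (eps**2 * laplacian(phi, dx) + 0.5 * (phi - phi**3)
             + coupling * eps * (T - T_E)) / (alpha * eps**2)
    T_t = (K * laplacian(T, dx) - 0.5 * l * phi_t) / Cp
    return T + dt * T_t, phi + dt * phi_t

eps = 0.1
x = np.linspace(-1.0, 1.0, 201)
phi = np.tanh(x / (2.0 * eps))      # transition layer of width ~eps
T = np.zeros_like(x)
for _ in range(500):
    T, phi = step(T, phi, dx=x[1] - x[0], dt=2e-5, eps=eps)

# The layer persists: phi stays near -1 (solid) at one end and +1 (liquid)
# at the other, and the interface is the zero level set of phi.
print(f"phi at ends: {phi[0]:.3f}, {phi[-1]:.3f}")
```

Because the interface is recovered as a level set rather than tracked explicitly, self-intersections pose no special difficulty, which is the point made above.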

Modeling
Using the phase field idea to model solidification so that the physical parameters could be identified was originally undertaken in [4].

Alloys
A number of papers in collaboration with Weiqing Xie* and James Jones [5,6] have extended the modeling to alloy solid-liquid interfaces.

Basic theorems and analytical results
Initiated during the 1980s, these include the following.


 * Given a set of physical parameters describing the material, namely latent heat, surface tension, etc., there is a phase field system of equations whose solutions formally approach those of the corresponding sharp interface system [4,7]. In fact it has been proven that a broad spectrum of interface problems are distinguished limits of the phase field equations. These include the classical Stefan model, the Cahn-Hilliard model, and motion by mean curvature.
 * There exists a unique solution to this system of equations and the interface width is stable in time [4].

Computational results
The earliest qualitative computations were done in collaboration with J.T. Lin in 1987.
 * Since the true interface thickness, $$\varepsilon$$, is on the atomic length scale, realistic computations did not appear feasible without a new ansatz. One can write the phase field equations in a form in which $$\varepsilon$$ is the interface thickness and $$d_0$$ the capillarity length (related to the surface tension), so that it is possible to vary $$\varepsilon$$ as a free parameter without varying $$d_0$$, provided the scaling is done appropriately [4].
 * One can increase the size of epsilon and not change the motion of the interface significantly provided that $$d_0$$ is fixed [8]. This means that computations with real parameters are feasible.
 * Computations in collaboration with Dr. Bilgin Altundas* compared the numerical results with dendritic growth in microgravity conditions on the space shuttle [9].

Phase field models of second order
As phase field models became a useful tool in materials science, the need for even better convergence (from the phase field to the sharp interface problems) became apparent. This led to the development of phase field models of second order, meaning that as the interface thickness, $$\varepsilon$$, becomes small, the difference between the interface of the phase field model and the interface of the related sharp interface model becomes second order in the interface thickness, i.e., $$\varepsilon^2$$. In collaboration with Dr. Christof Eck, Dr. Emre Esenturk* and Prof. Xinfu Chen, Caginalp developed a new phase field model and proved that it was indeed second order [10,11,12]. Numerical computations confirmed these results.

Application of renormalization group methods to differential equations
The philosophical perspective of the renormalization group (RG) initiated by Ken Wilson in the 1970s is that in a system with large degrees of freedom, one should be able to repeatedly average and adjust, or renormalize, at each step without changing the essential feature that one is trying to compute. In the 1990s Nigel Goldenfeld and collaborators began to investigate the possibility of using this idea for the Barenblatt equation. Caginalp further developed these ideas so that one can calculate the decay (in space and time) of solutions to a heat equation with nonlinearity [13] that satisfies a dimensional condition. The methods were also applied to interface problems and systems of parabolic differential equations with Huseyin Merdan*.
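The flavor of such exponent calculations can be seen already in the linear heat equation, where the amplitude of a localized solution decays like $$t^{-1/2}$$. The sketch below is an illustration only (not Caginalp's computation; the grid and sampling times are arbitrary choices): it evolves a Gaussian profile and measures the decay exponent numerically.

```python
import numpy as np

# Measure the long-time decay exponent of a 1-D heat equation solution
# u_t = u_xx; the amplitude of localized data decays roughly like t^(-1/2).
# RG methods extend this kind of exponent calculation to nonlinear terms.
# Grid, times, and initial data below are illustrative choices.
nx = 401
x = np.linspace(-20.0, 20.0, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2              # stable for the explicit scheme (dt < dx^2/2)
u = np.exp(-x**2)             # localized initial data

def run(u, steps):
    for _ in range(steps):
        u[1:-1] += dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return u

steps_per_unit = int(round(1.0 / dt))
u = run(u, 2 * steps_per_unit)            # evolve to t = 2
a2 = u.max()
u = run(u, 6 * steps_per_unit)            # evolve to t = 8
a8 = u.max()
p = np.log(a8 / a2) / np.log(8.0 / 2.0)   # measured decay exponent
print(f"decay exponent ~ {p:.2f}")        # near -1/2 at large times
```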

Research in behavioral finance and experimental economics
Caginalp was a leader in the newly developing field of Quantitative Behavioral Finance. The work has three main facets: (1) statistical time series modeling, (2) mathematical modeling using differential equations, and (3) laboratory experiments and comparison with models and world markets. His research was influenced by decades of experience as an individual investor and trader.

Statistical time series modeling
The efficient-market hypothesis (EMH) has been the dominant theory for financial markets for the past half century. It stipulates that asset prices are essentially random fluctuations about their fundamental value. As empirical evidence, its proponents cite market data that appears to be "white noise". Behavioral finance has challenged this perspective, citing large market upheavals such as the high-tech bubble and bust of 1998–2003, etc. The difficulty in establishing the key ideas of behavioral finance and economics has been the presence of "noise" in the market. Caginalp and others have made substantial progress toward surmounting this key difficulty. An early study by Caginalp and Constantine in 1995 showed that using the ratio of two clone closed-end funds, one can remove the noise associated with valuation. They showed that today's price is not likely to be yesterday's price (as indicated by EMH), or a pure continuation of the change during the previous time interval, but is halfway between those prices.
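One way to phrase the "halfway" finding is as an autoregressive model on returns with coefficient near 1/2, so that today's change continues half of yesterday's change. This rephrasing, and the synthetic data below, are illustrative assumptions rather than the published analysis; the sketch shows how such a coefficient is recovered by least squares.

```python
import numpy as np

# Synthetic returns of a fund-price ratio following an AR(1) process with
# coefficient 0.5: today's return continues half of yesterday's return.
# Purely illustrative data, not the closed-end fund data from the study.
rng = np.random.default_rng(0)
n = 20000
r = np.zeros(n)
for t in range(1, n):
    r[t] = 0.5 * r[t - 1] + rng.normal(0.0, 0.01)

# Least-squares estimate of the autoregressive (continuation) coefficient
beta = float(r[1:] @ r[:-1]) / float(r[:-1] @ r[:-1])
print(f"estimated continuation coefficient: {beta:.3f}")  # near 0.5
```

Under the EMH, the estimated coefficient would be statistically indistinguishable from zero; a coefficient near 1/2 corresponds to the "halfway between those prices" result.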

Subsequent work with Ahmet Duran* [14] examined the data involving large deviations between the price and net asset value of closed end funds, finding strong evidence that there is a subsequent movement in the opposite direction (suggesting overreaction). More surprisingly, there is a precursor to the deviation, which is usually a result of large changes in price in the absence of significant changes in value.

Dr. Vladimira Ilieva and Mark DeSantis* focused on large scale data studies that effectively subtracted out the changes due to the net asset value of closed end funds [15]. Thus one could establish significant coefficients for price trend. The work with DeSantis was particularly noteworthy in two respects: (a) by standardizing the data, it became possible to compare the impact of, for example, price trend versus changes in money supply; (b) the impact of the price trend was shown to be nonlinear, so that a small uptrend has a positive impact on prices (demonstrating underreaction), while a large uptrend has a negative influence. The measure of large or small is based upon the frequency of occurrence (measured in standard deviations). Using exchange traded funds (ETFs), they also showed (together with Akin Sayrak) that the concept of resistance, whereby a stock retreats as it nears a yearly high, has strong statistical support [16].

The research shows the importance of two key ideas: (i) by compensating for much of the change in valuation, one can reduce the noise that obscures many behavioral and other influences on price dynamics; (ii) by examining nonlinearity (e.g., in the price trend effect) one can uncover influences that would be statistically insignificant upon examining only linear terms.

Mathematical modeling using differential equations
The asset flow differential approach describes asset market dynamics through systems of differential equations.

(I) Unlike the EMH, the model developed by Caginalp and collaborators since 1990 involves ingredients that were marginalized by the classical efficient market hypothesis: while price change depends on supply and demand for the asset (e.g., stock) the latter can depend on a variety of motivations and strategies, such as the recent price trend. Unlike the classical theories, there is no assumption of infinite arbitrage, which says that any small deviation from the true value (that is universally accepted since all participants have the same information) is quickly exploited by an (essentially) infinite capital managed by "informed" investors. Among the consequences of this theory is that equilibrium is not a unique price, but depends on the price history and strategies of the traders.

Classical models of price dynamics are all built on the idea that there is infinite arbitrage capital. The Caginalp asset flow model introduced an important new concept of liquidity, L, or excess cash that is defined to be the total cash in the system divided by the total number of shares.

(II) In subsequent years, these asset flow equations were generalized to include distinct groups with differing assessments of value, and distinct strategies and resources. For example, one group may be focused on trend (momentum) while another emphasizes value, and attempts to buy the stock when it is undervalued.

(III) In collaboration with Duran these equations were studied in terms of optimization of parameters, rendering them a useful tool for practical implementation.

(IV) More recently, David Swigon, DeSantis and Caginalp studied the stability of the asset flow equations and showed that instabilities, for example flash crashes, could occur as a result of traders utilizing momentum strategies together with shorter time scales [17, 18].
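A toy version of such a model illustrates the role of momentum. The equations, parameter names, and values below are illustrative simplifications, not the published asset flow equations: a pure value motivation drives the price monotonically to the fundamental value, while adding a trend motivation makes the price overshoot it, the seed of bubble-like instability.

```python
# Toy sketch of a two-motivation price model: relative excess demand blends a
# value motivation (discount to fundamental value Pa) with a trend (momentum)
# motivation. Names and functional forms are illustrative assumptions.

def simulate(q_trend, P0=0.5, Pa=1.0, q_value=1.0, c=2.0, dt=0.01, steps=5000):
    P, trend, peak = P0, 0.0, P0
    for _ in range(steps):
        # relative excess demand from the two motivations
        rate = q_value * (Pa - P) / Pa + q_trend * trend
        trend += c * (rate - trend) * dt   # exponentially weighted price trend
        P += P * rate * dt                 # relative price change ~ excess demand
        peak = max(peak, P)
    return P, peak

P_val, peak_val = simulate(q_trend=0.0)    # value motivation only
P_mom, peak_mom = simulate(q_trend=0.8)    # value plus momentum
print(f"value only: peak {peak_val:.3f}; with momentum: peak {peak_mom:.3f}")
```

With `q_trend=0` the price approaches the fundamental value from below and never exceeds it; with a momentum term the peak rises above the fundamental value before the price settles back, a qualitative analogue of the overshoot behavior studied in [17, 18].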

In recent years, there has been related work that is sometimes called "evolutionary finance".

Laboratory experiments; comparison with models and world markets
In the 1980s, asset market experiments pioneered by Vernon Smith (2002 Economics Nobel Laureate) and collaborators provided a new tool to study microeconomics and finance. In particular, these posed a challenge to classical economics by showing that when participants traded (with real money) an asset with a well defined value, the price would soar well above the fundamental value defined by the experimenters. Repetition of this experiment under various conditions showed the robustness of the phenomenon. By designing new experiments, Profs. Caginalp, Smith and David Porter largely resolved this paradox through the framework of the asset flow equations. In particular, the bubble size (and more generally, the asset price) was highly correlated with the excess cash in the system, and momentum was also shown to be a factor [19]. In classical economics there would be just one quantity, namely the share price, which has units of dollars per share. The experiments showed that this is distinct from the fundamental value per share. The liquidity, L, introduced by Caginalp and collaborators, is a third quantity that also has these units [20]. The temporal evolution of prices involves a complex relationship among these three variables, together with quantities reflecting the motivations of the traders, which may involve price trend and other factors. Other studies have shown quantitatively that motivations of the experimental traders are similar to those in world markets.

* PhD student of Caginalp