User:Greg L/Fuzzballs (string theory)

Fuzzball theory, which is derived from superstring theory, is advanced by its proponents as a description of black holes that harmonizes quantum mechanics and Albert Einstein’s general theory of relativity, which have long been incompatible.

Fuzzball theory dispenses with the singularity at the heart of a black hole by positing that the entire region within the black hole’s event horizon is actually an extended object: a ball of strings, which are advanced as the ultimate building blocks of matter and light. Under string theory, strings are bundles of energy vibrating in complex ways, both in the three ordinary spatial dimensions and in compact directions—extra dimensions interwoven in the quantum foam (see Fig. 2 and Fig. 9, below).

Fuzzball theory addresses two intractable problems that classic black hole theory poses for modern physics:


 * 1) It dispenses with the gravitational singularity at the heart of the black hole, which is thought to be surrounded by an event horizon, the inside of which is detached from the space and time—spacetime—of the rest of the universe (see Fig.&thinsp;1&thinsp;). Conventional black hole theory holds that a singularity is a zero-dimensional, zero-volume point in which all of a black hole’s mass exists at infinite density. Such a theory is highly problematic because spacetime breaks down catastrophically where gravitational acceleration is infinite. For over a century, these theoretical tensions motivated theoretical physicists to examine whether black holes could instead be extended objects composed of some form of degenerate quantum matter (see Exotic star&thinsp;). Notably, a 2023 scientific paper by a prominent black hole theoretical physicist added impetus to the view that black holes may actually be singularity-free extended objects. That paper detailed how a journal paper published in 1965 mishandled a mathematical term in a differential geometry-based description of spacetime maximally warped by gravity per general relativity. According to the 2023 paper, the error led to the long-held but false canonical belief among physicists that singularities lie at the heart of black holes, when a proper treatment of the term showed the opposite. 
 * 2) It resolves, in a theoretical sense, the black hole information paradox. Conventional black hole theory holds that the quantum information describing the light and matter that falls into a classic black hole is either: A) extinguished within singularities, or B) somehow preserved within singularities but unable to climb against the infinite gravitational intensity inside a black hole to pass beyond the event horizon, where it would be visible to regular spacetime. Either situation violates a fundamental law of quantum mechanics requiring that quantum information be conserved.

Outwardly, fuzzballs are similar to gravastars inasmuch as they appear to outside observers as black holes, have a core comprising degenerate matter, possess no singularities at their centers, and resolve the information paradox. See &thinsp;§Relationship to gravastars, below, for more.

As no direct experimental evidence supports either string theory or fuzzball theory, both are products purely of calculations and theoretical research. However, fuzzball theory may be testable through gravitational-wave astronomy. 

= Physical properties =

String theory and composition
Samir D. Mathur of The Ohio State University published eight scientific papers between 2001 and 2012, assisted by postdoctoral researcher Oleg Lunin, who contributed to the first two papers. The papers propose that black holes are sphere-like extended objects with a definite volume and are composed of strings. The primary paper was a 2002 publication (#3, below) titled “A proposal to resolve the black hole information paradox”. The list:
 * 1) “AdS/CFT duality and the black hole information paradox”, Oleg Lunin and Samir D. Mathur, arXiv:hep-th/0109154 (September 20, 2001). This paper concerns the AdS/CFT correspondence, which examines the relationship between two different theories: Anti-de Sitter space (AdS), which deals with quantum gravity, and conformal field theory (CFT), which deals with quantum field theory. The AdS/CFT correspondence is central to resolving the black hole information paradox.
 * 2) “Statistical interpretation of Bekenstein entropy for systems with a stretched horizon”, Oleg Lunin and Samir D. Mathur, arXiv:hep-th/0202072 (February 12, 2002)
 * 3) “A proposal to resolve the black hole information paradox”, Samir D. Mathur, arXiv:hep-th/0205192 (May 19, 2002)
 * 4) “The fuzzball proposal for black holes: an elementary review”, Samir D. Mathur, arXiv:hep-th/0502050 (February 3, 2005)
 * 5) “What Exactly is the Information Paradox?”, Samir D. Mathur, arXiv:0803.2030 (March 13, 2008)
 * 6) “Fuzzballs and the information paradox: a summary and conjectures”, Samir D. Mathur, arXiv:0810.4525 (October 24, 2008)
 * 7) “The information paradox: A pedagogical introduction”, Samir D. Mathur, arXiv:0909.1038 (January 25, 2011)
 * 8) “Black Holes and Beyond”, Samir D. Mathur, arXiv:1205.0776 (May 14, 2012)
This view differs from the classic view of black holes, in which a singularity at the center (see Fig.&thinsp;1&thinsp;) is thought to be a zero-dimensional, zero-volume point in which the entire mass of a black hole is concentrated at infinite density, surrounded many kilometers away by an event horizon below which light cannot escape.

All variations of string theory hold that the fundamental constituents of subatomic particles, including the force carriers (e.g., photons and gluons), are actually strings of energy that take on their identities and respective masses by vibrating in different modes and frequencies. Fuzzball theory is rooted in a particular variant of superstring theory called Type IIB (see also String duality&thinsp;), which holds that strings are both “open” (double-ended entities) and “closed” (looped entities) and that there are 9&thinsp;+&thinsp;1 spacetime dimensions wherein five of the six extra spatial dimensions are “compactified” (see Fig. 2&thinsp;).



Unlike the view of a black hole as a singularity, a small fuzzball can be thought of as an extra-dense neutron star in which the neutrons have undergone a phase transition and decomposed, liberating the quarks (strings, in string theory) comprising them. Accordingly, fuzzballs are theorized to be the terminal phase of degenerate matter. Mathur calculated that the physical surfaces of fuzzballs have radii equal to those of the event horizons of classic black holes; thus, the Schwarzschild radius of a typical 6.8-solar-mass stellar-class black hole—or fuzzball—is 20 kilometers when the effects of spin are excluded. He also determined that the event horizon of a fuzzball would, at a very tiny scale (likely on the order of a few Planck lengths), be very much like a mist: fuzzy, hence the name “fuzzball.”

With classical-model black holes, objects passing through the event horizon on their way to the singularity are thought to enter a realm of curved spacetime where the escape velocity exceeds the speed of light—a realm devoid of all structure. Moreover, precisely at the singularity—the heart of a classic black hole—spacetime itself is thought to break down catastrophically since infinite density demands infinite escape velocity; such conditions are problematic with known physics. Under the fuzzball theory, however, the strings comprising matter and photons are believed to fall onto and absorb into the fuzzball’s surface, which is located at the event horizon—the threshold at which the escape velocity has achieved the speed of light.



A fuzzball is a black hole; spacetime, photons, and all else not exquisitely close to the surface of a fuzzball are thought to be affected in precisely the same fashion as with the classical model of black holes featuring a singularity at its center. The two theories diverge only at the quantum level; that is, classic black holes and fuzzballs differ only in their internal composition and how they affect virtual particles that form close to their event horizons (see below). Fuzzball theory is thought by its proponents to be the true quantum description of black holes.

Densities
Fuzzballs become less dense as their mass increases due to fractional tension. When matter or energy (strings) falls onto a fuzzball, more strings are not simply added to the fuzzball; the strings fuse, or join together. In doing so, all the quantum information of the infalling strings becomes part of larger, more complex strings. Due to fractional tension, string tension decreases exponentially as strings become more complex with more vibration modes, relaxing to considerable lengths. The string theory formulas of Mathur and Lunin produce fuzzball surface radii that precisely equal Schwarzschild radii, which Karl Schwarzschild calculated using an entirely different mathematical technique 87 years earlier.

Since the volume of fuzzballs is a function of the Schwarzschild radius (2,953 meters per solar mass for a non-rotating black hole), fuzzballs have a variable density that decreases as the inverse square of their mass (twice the mass is twice the diameter, which is eight times the volume, resulting in one-quarter the density). A typical 6.8-solar-mass fuzzball would have a mean density of about $4.0\times10^{17} kg/m^{3}$. This is an average, or mean, bulk density; as with neutron stars, the Sun, and its planets, a fuzzball’s density varies from its surface, where it is less dense, to its center, where it is most dense. A bit of such a non-spinning fuzzball the size of a drop of water would, on average, have a mass of twenty million metric tons, which is equivalent to that of a granite ball 243 meters in diameter (Fig. 3&thinsp;).
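These figures can be sanity-checked with a few lines of arithmetic. The density below is the value implied by the 6.8-solar-mass, 20-kilometer figures given earlier; the 0.05 mL drop volume and 2,700 kg/m3 granite density are illustrative assumptions, not values from the text.

```python
import math

# Mean density implied by a 6.8-solar-mass fuzzball with a ~20 km radius
# (mass / spherical volume), rounded to ~4.0e17 kg/m^3 (assumption, see above).
RHO_FUZZBALL = 4.0e17    # kg/m^3
DROP_VOLUME = 0.05e-6    # m^3 -- a 0.05 mL drop of water (assumption)
RHO_GRANITE = 2700.0     # kg/m^3 -- typical granite (assumption)

drop_mass = RHO_FUZZBALL * DROP_VOLUME   # mass of a drop-sized bit of fuzzball
# Diameter of a granite sphere having the same mass:
granite_diameter = 2 * (drop_mass / RHO_GRANITE / ((4/3) * math.pi)) ** (1/3)
```

This yields a drop mass of about 2×10^10 kg (twenty million metric tons) and a granite-ball diameter of roughly 242 m, matching the text’s 243-meter figure to within rounding.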

Though such densities are almost unimaginably extreme, they are, mathematically speaking, infinitely far from infinite density. Although the densities of typical stellar-mass fuzzballs are extreme—about the same as neutron stars—they are many orders of magnitude less than the Planck density ($5.155\times10^{96} kg/m^{3}$), which is equivalent to the mass of the universe packed into the volume of a single atomic nucleus.

As can be seen below in Fig. 4, since the mean densities of fuzzballs (and the effective densities of classic black holes) decrease as the inverse square of their mass, sufficiently massive fuzzballs are actually less dense than the least-dense neutron stars. Due to the mass-density inverse-square rule, fuzzballs need not even have unimaginable densities. Supermassive black holes, which are found at the center of virtually all galaxies, can have modest densities. For instance, Sagittarius A*, the black hole at the center of our Milky Way galaxy, has a mass of 4.3 million solar masses. Fuzzball theory predicts that a non-spinning supermassive black hole with the same mass as Sagittarius A* has a mean density “only” 51 times that of gold. Moreover, at 3.9 billion solar masses (a rather large supermassive black hole), a non-spinning fuzzball would have a radius of 77 astronomical units—about the same size as the termination shock of the Solar System’s heliosphere—and a mean density equal to that of the Earth’s atmosphere at sea level (1.2 kg/m3).
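The density claims in this section follow directly from the Schwarzschild radius formula. A minimal sketch (constants rounded; not taken from the cited papers):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon (and, per fuzzball theory, surface) radius: R = 2GM/c^2."""
    return 2 * G * mass_kg / C**2

def mean_density(mass_kg: float) -> float:
    """Mass divided by the spherical volume enclosed by the Schwarzschild radius."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / ((4/3) * math.pi * r**3)
```

Here `schwarzschild_radius(M_SUN)` returns about 2,953 m; `mean_density(4.3e6 * M_SUN)` is roughly 51 times the density of gold (19,300 kg/m3); doubling the mass quarters the density, which is the inverse-square rule described above; and a 3.9-billion-solar-mass fuzzball comes out near sea-level air density.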

Neutron star collapse
Black holes (or fuzzballs) are produced in various ways, most of which are exceedingly violent, mass-shedding astral events such as supernovas, kilonovas, and hypernovas. Though the complex dynamics of supernovas are a developing field of study, no stellar-mass black holes produced by core-collapse supernovas have been observed below a lower mass limit believed to require progenitor stars of at least 25–40 solar masses. Nonetheless, smaller black holes can be produced through another process: an accreting neutron star (one slowly siphoning off mass from a companion star) that exceeds a critical mass limit, Mmax, will suddenly and nonviolently (relatively speaking) collapse into a black hole or fuzzball as small as 2.18–2.9 solar masses. Such neutron star collapses can serve as a useful case study when examining the differences between the physical properties of neutron stars and fuzzballs.

As shown below in Fig. 4, neutron stars (the pink stripe) exist only in a narrow range of masses. A neutron star must be at least 1.4 solar masses, which is known as the Chandrasekhar limit (pronounced chan･druh･shay･car&thinsp;). The maximum permissible mass, Mmax, at which neutron stars must collapse into either black holes or fuzzballs is not precisely known and is represented by the gray zone in Fig. 4, which begins at 2.18 solar masses and extends up to 2.9 solar masses. If, for example, Mmax proved one day to be 2.48 solar masses (the double-ended red arrow), then an accreting neutron star that reached that limit would have accumulated more matter than neutron degeneracy pressure and other phenomena can resist and would experience a sudden cascading collapse. In doing so, according to fuzzball theory, the hadrons in its core (neutrons and perhaps a smattering of protons and mesons) decompose into what could be regarded as the final stage of degenerate matter: a ball of strings, which fuzzball theory predicts is the true quantum description of not only black holes but also theorized quark stars composed of quark matter.

Following the length of the double-ended red arrow in Fig. 4, a collapsing neutron star, which is already a supernova remnant, will quietly produce a fuzzball with very nearly the same mass but with a radius of only 7.32 kilometers (reduced from around 13.5 kilometers) and with a density that increased roughly six-fold, from about $4.8\times10^{17} kg/m^{3}$ to $3\times10^{18} kg/m^{3}$.
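The six-fold density jump follows from the cube of the radius ratio alone. A quick check, assuming the roughly 2.48-solar-mass value implied by the 7.32 km fuzzball radius:

```python
import math

M_SUN = 1.989e30                 # kg
mass = 2.48 * M_SUN              # assumption: mass implied by the 7.32 km radius

def sphere_density(mass_kg: float, radius_m: float) -> float:
    """Mean density of a uniform sphere."""
    return mass_kg / ((4/3) * math.pi * radius_m**3)

rho_neutron_star = sphere_density(mass, 13_500.0)  # ~13.5 km neutron star
rho_fuzzball = sphere_density(mass, 7_320.0)       # 7.32 km fuzzball
ratio = rho_fuzzball / rho_neutron_star            # equals (13.5/7.32)**3
```

The ratio comes out near 6.3 regardless of the assumed mass, since it depends only on the two radii.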

As measured by a stationary observer far outside a neutron star’s gravitational influence (in the parlance of physicists, “for a stationary observer at infinity”), the collapse of an accreting neutron star (the time spent traveling down the double-ended red arrow in Fig. 4, above) occurs in only about one-eighth of a second; this speed is shown in the Fig. 5 animation at right. After the hadrons in a neutron star’s core begin decomposing and a nascent event horizon begins expanding, its radius shrinks by only about 500 meters during the first 125 milliseconds. However, this initial phase of the collapse is an extraordinarily energetic thermodynamic event, generating a peak internal temperature of 95 MeV (1.1 trillion kelvin). During this period, a neutron star emits a powerful burst of neutrinos with a total energy release of about $10^{52}$ ergs, which, per E&thinsp;=&thinsp;mc2, radiates away 0.2 percent of the neutron star’s mass. This is a virtually incomprehensible amount of power and energy, equivalent to the near-simultaneous detonation of four million-billion-trillion Tsar Bomba hydrogen bombs (at 50 megatons, the largest ever tested).
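The Tsar Bomba equivalency can be reproduced from the 0.2 percent mass-loss figure alone (the 2.48-solar-mass neutron star is an assumption consistent with the collapse scenario above):

```python
C = 299_792_458.0            # speed of light, m/s
M_SUN = 1.989e30             # kg
TNT_J_PER_MEGATON = 4.184e15 # joules per megaton of TNT

mass_radiated = 0.002 * 2.48 * M_SUN       # 0.2% of the neutron star's mass
energy_joules = mass_radiated * C**2       # E = mc^2
energy_ergs = energy_joules * 1e7          # ~1e52 erg
tsar_bombas = energy_joules / (50 * TNT_J_PER_MEGATON)  # 50-megaton yield each
```

This gives roughly 10^52 ergs and about 4×10^27 bombs, i.e., four million-billion-trillion.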

Since hot neutron star matter is opaque to neutrinos, the exceedingly energetic neutrino emission process during a collapse, which is known as “deleptonization” (see Lepton), powerfully opposes and retards the collapse’s progress. Soon though, relativistic effects due to the expanding internal event horizon overwhelm the collapse-opposing effects of deleptonization, and the collapse rate dramatically accelerates. The vast majority of the neutron star’s shrinkage occurs over the final 0.7 millisecond as the neutron star’s surface accelerates from a near-stall and shoots downward to merge with the rapidly expanding event horizon. The instant before disappearing beneath the event horizon, the surface of the neutron star is moving at approximately half the speed of light and has a temperature of 6 MeV (70 billion kelvin). At only 0.7 millisecond, this final phase of collapse, which comprises 92 percent of the length of the double-ended red arrow in Fig. 4, is brief indeed; it is only about one-sixth the duration of a flash from a typical camera-mounted strobe-type speedlight on its maximum setting, which is about 1/250th of a second (~4 ms).

As shown in the Fig. 5 animation, the collapse of a neutron star not only appears to be nearly instantaneous but also transforms it from one of the brightest whites in the Universe to the blackest possible black. While violent transient events such as those underlying gamma-ray bursts can briefly produce the hottest observable temperatures in the Universe, neutron stars can have the hottest surfaces for a continuously radiating stellar body. Newly formed neutron stars may have surface temperatures of ten million kelvin or more. However, since neutron stars generate no new heat through fusion, they inexorably cool down after their formation. Consequently, a given neutron star reaches a surface temperature of one million kelvin when it is between one thousand and one million years old. Older and even-cooler neutron stars are still easy to discover; the well-studied neutron star, RX J1856.5−3754, has an average surface temperature of about 434,000 kelvin. For comparison, the Sun has an effective surface temperature of 5,780 kelvin.

Though a neutron star with a surface temperature of one million kelvin emits the vast majority of its light at a peak wavelength of about 3 nanometers, which is in an electromagnetic band known as soft x-rays (see Electromagnetic spectrum&thinsp;), it still emits truly blinding amounts of bluish-white light in the range the human eye is sensitive to (380–750 nm). Specifically, compared to the Sun, the average square meter of the surface of a one-million-kelvin neutron star would appear several thousand times more luminous.

The stated equivalency, “several thousand times more luminous,” has sufficient precision. Moreover, the adjective “luminous” has a specific scientific meaning. In part, the relatively low precision of the equivalency is appropriate because it is based upon a low-precision neutron star temperature of one million kelvin and because any random population of neutron stars varies widely in temperature. More significantly, to correctly calculate the relative luminosities of blackbody radiators with exceptionally different temperatures (a factor of 173:1 in this case), various technicalities must be addressed, such as the color temperature of light sources, how the human vision system responds to different wavelengths of light, and even how the human eye quickly adapts to different color temperatures. Without pinning down such details, the surface of a one-million-kelvin neutron star could be said to be anywhere from 3,140 to 4,880 times “brighter” than the Sun. The color temperatures of the Sun (5,780 kelvin) and a one-million-kelvin neutron star are exceedingly different. 
The Sun is considered a slightly yellowish-white star, whereas the neutron star radiates primarily in soft x-rays (at a peak-power wavelength of 2.90 nm) and would appear bluish-white because it emits deep-violet (380 nm) light 15 times more intensely than deep-red (750 nm) light. These peak spectral radiances are 389,540,000 W/m2/sr/nm and 25,913,000 W/m2/sr/nm, respectively, which may be calculated using the following variant of Planck’s law (with a final division by $10^{9}$ to convert to nanometers): $$B(\lambda, T) =\frac{2 hc^2}{\lambda^5} \frac{1}{ e^{h c/(\lambda k_\mathrm{B}T)} - 1 }$$ In comparison, the Sun outputs only 14% more deep-violet light than deep-red, not 15 times more like the neutron star. When summed across the entire range of wavelengths the human eye is sensitive to (380–750 nm), the neutron star—boosted in part by its outsized output in the violet end of the spectrum—would have an “in-band radiant exitance” 4,880 times greater per square meter than the Sun’s. This ratio compares the total power of light with wavelengths of 380–750 nm emitted from a given area of their respective surfaces. This range is well away from the wavelength at which a one-million-kelvin neutron star most intensely emits electromagnetic radiation (soft x-rays at 2.90 nm), which it does 155 billion times more intensely than what the Sun radiates at its own peak spectral radiance wavelength of 501 nm. In the science of radiometry, this property is called radiant exitance and the unit of measure is watts per square meter. However, the human vision system would not perceive differences in luminosity between the surface of the Sun and a neutron star the way scientific measurements of total in-band radiant output would. 
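The violet-to-red ratios quoted above can be reproduced directly from Planck’s law; a short sketch with rounded constants:

```python
import math

H = 6.626e-34        # Planck constant, J*s
C = 299_792_458.0    # speed of light, m/s
KB = 1.381e-23       # Boltzmann constant, J/K

def spectral_radiance(wavelength_m: float, temp_k: float) -> float:
    """Planck's law, B(lambda, T), in W m^-2 sr^-1 m^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

# Deep-violet (380 nm) versus deep-red (750 nm) output:
neutron_star_ratio = spectral_radiance(380e-9, 1.0e6) / spectral_radiance(750e-9, 1.0e6)
sun_ratio = spectral_radiance(380e-9, 5780.0) / spectral_radiance(750e-9, 5780.0)
```

Here `neutron_star_ratio` comes out near 15 and `sun_ratio` near 1.14, matching the 15-fold and 14 percent figures in the text.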
The human eye has a photopic response curve (bright-light spectral sensitivity, as shown at right) that makes it maximally sensitive to a specific green (555 nm) but only barely able to detect deep-violet and deep-red light (380 and 750 nm, respectively); the eye’s relative insensitivity to light in the violet end of the spectrum would significantly undercut the neutron star’s advantage over the Sun at shorter wavelengths. Measuring the brightness of light as it is visually perceived is the science of photometry; the property is luminous emittance, and the unit of measure is the lux (lumens per square meter). When measured this way, using light meters that respond like the human eye, the neutron star would appear only 3,330 times more luminous than the Sun instead of 4,880 times more radiant. Further complicating matters, a one-million-kelvin neutron star would appear to the human vision system as bluish-white only comparatively—when side-by-side with a reference white—and only after one’s eyes have first adapted to that reference white (see Chromatic adaptation and LMS color space). Short of merely standing outside at around noontime under a cloudless, clear blue summer sky, whether a given reference light source is close to “white” is established via colorimetry measurements made with spectrometers (“color temperature” meters often used by photographers) calibrated to industry-standard chromaticity coordinate standards (CIE 1931 and CIE 1976). However, the human eye rapidly adapts to different color temperatures when no competing light sources exist. The perception of “what is white” is partly determined by the way the visual system of the human mind works, which can be confused, particularly when light sources with different color temperatures are mixed and there are no unambiguously white objects or specular reflections in the scene (see The dress). 
But when directly viewing only a neutron star or when viewing typical and familiar environments illuminated solely by the light from a neutron star, the retina would adapt to produce what the vision system perceives as a balanced white. In doing so, the blue-sensitive cones would rapidly decrease in sensitivity (a phenomenon called bleaching or decrease in gain) and the green-sensitive cones would bleach slightly less. Such bleaching begins almost instantly and is mainly responsible for afterimage-type optical illusions. The eyes’ red-sensitive cones, also called “long”-wavelength cones, would require no gain reduction relative to the green and blue cones to chromatically adapt to the light from the neutron star. When the spectral radiances of a one-million-kelvin neutron star and the Sun are measured at the peak-sensitivity wavelength of the red-sensitive (long wavelength) cones (a wavelength of 565 nm, per “A new transformation of cone responses to opponent color responses” (PDF), Ralph W. Pridmore, Perception, & Psychophysics, (January 6, 2021) 83, pp. 1797–1803, Table 1, which summarizes 31 color-vision studies conducted between 1955 and 2011), the surface of the neutron star is only 3,140 times more luminous per unit area than the Sun as perceived by eyes that have chromatically adapted to the neutron star, not 3,330 times as would be perceived during side-by-side comparisons to the Sun. These biological complexities mean that calculating perceived differences in brightness to a precision greater than “several thousand” would require ponderous statements regarding the underlying assumptions lest the stated value be incorrect, misleading, or suffer from false precision. 
If a one-million-kelvin neutron star, which is only about the size of a large city, were as far away from Earth as the Sun is, it would appear in the night sky as an unresolvable star-like point of light that is 2,600 times brighter (11⅓ f-stops) than the brightest star in the sky, Sirius (the apparent magnitude of the neutron star would be −10 whereas Sirius’s is −1.46). Such a neutron star would illuminate a nighttime landscape on Earth about as well as a half moon, which has an apparent magnitude of about −9.4.
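The 2,600-fold factor follows from the standard apparent-magnitude scale, on which five magnitudes equal a factor of 100 in brightness:

```python
import math

def brightness_ratio(brighter_mag: float, fainter_mag: float) -> float:
    """Flux ratio between two apparent magnitudes (smaller magnitude = brighter)."""
    return 10 ** ((fainter_mag - brighter_mag) / 2.5)

ratio = brightness_ratio(-10.0, -1.46)   # neutron star vs. Sirius
f_stops = math.log2(ratio)               # photographic stops, each a doubling
```

Here `ratio` is about 2,600 and `f_stops` about 11.3, i.e., roughly 11⅓ stops.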

Escape velocity
Irrespective of a fuzzball’s mass, resultant mean density, or even its spin (which affects the Schwarzschild radius; see also Ergosphere and Rotating black hole), its physical surface is located precisely at the event horizon, which is the threshold at which the escape velocity equals the speed of light: 299,792,458 meters per second. As its name suggests, escape velocity is the velocity a smaller body must achieve to escape from a much more massive one; at 11,186 m/s, Earth’s escape velocity is only 3.7 thousandths of one percent of that at event horizons. Thus, event horizons—whether surrounding singularities or constituting the surface of fuzzballs—lie at the point where spacetime, as shown in Fig. 6 at right, has been warped by gravity, in accordance with general relativity, until the escape velocity reaches the speed of light. The warpage of space by mass is described in Einstein’s second theory of relativity, later known as “general relativity,” which includes the effects of accelerating frames of reference and gravity (another type of acceleration)—not his first theory of relativity (later known as “special relativity”). The theoretical physicist John A. Wheeler, who was largely responsible for reviving interest in general relativity in the United States after World War II, wrote the following oft-cited summarization of general relativity: “Matter tells spacetime how to curve, and curved spacetime tells matter how to move.”
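The relationship between escape velocity and the event horizon can be illustrated with the Newtonian escape-velocity formula, which (coincidentally) returns exactly the speed of light when evaluated at the Schwarzschild radius:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """Newtonian escape velocity: v = sqrt(2GM/r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v_earth = escape_velocity(5.972e24, 6.371e6)     # Earth: ~11,186 m/s
# Plugging in the Schwarzschild radius r = 2GM/c^2 gives back c exactly:
m = 1.989e30                                     # one solar mass
v_horizon = escape_velocity(m, 2 * G * m / C**2)
```

Here `v_earth / C` is about 3.7×10⁻⁵, the “3.7 thousandths of one percent” cited above.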



How these two theories (“special” and “general”) were related, how they described the laws of nature, and how they eventually got their names (which describe their scope) was an evolving, multi-year process as Einstein endeavored to incorporate the effects of gravity into a unified theory, one that correctly predicted observations for all observers in all frames of reference and that enabled Karl Schwarzschild to precisely calculate the radius of event horizons.

Having authored or coauthored nearly 500 scientific journal papers (an average of one paper every six weeks) and 16 books over his 54-year-long career, Einstein was a prolific writer (see List of scientific publications by Albert Einstein). In his 1905 paper, “Zur Elektrodynamik bewegter Körper,” published in a German scientific journal and later re-published in English as “On the Electrodynamics of Moving Bodies” (PDF), which would later be known as “special relativity,” Einstein—as illustrated in the animation at right—established the following:
 * 1) The laws of physics are identical in all non-accelerating frames of reference, and
 * 2) The speed of light in a vacuum is the same for all observers, irrespective of the relative motion between the light source and observer.

Note that Einstein’s famous formula regarding mass–energy equivalence, $E = mc^{2}$ (the form in which Einstein began writing the equation in the 1920s and in which it entered popular culture at the start of the post-World War II Atomic Age), was part of neither his paper on special relativity nor the one on general relativity; it came from a separate 1905 journal paper, “Ist die Trägheit eines Körpers von seinem Energieinhalt abhängig?” (“Does the Inertia of a Body Depend upon its Energy-Content?”). In that paper, Einstein originally expressed the equivalency partly in prose, writing (when translated to English), “If a body gives off the energy $L$ in the form of radiation, its mass diminishes by $L/V^{2}$.” Note Einstein’s early use of $L$ instead of $E$ as the symbol for energy and $V$ instead of $c$ as the symbol for the velocity of light; the statement could be expressed entirely symbolically as $m = L/V^{2}$ and $L = mV^{2}$.

Einstein’s 1914 paper, “Die Formale Grundlage der allgemeinen Relativitätstheorie” (known as “The Foundation of the Generalised Theory of Relativity”), was the first to mention the term “General Theory” and to refer to his previous theory as “Special Relativity theory”; both terms appear in the paper’s preamble.

In 1916, Einstein expanded upon general relativity and tied it together with special relativity in the German-language paper “Die Grundlage der allgemeinen Relativitätstheorie” (“The Foundation of the General Theory of Relativity”), which constituted 54 pages in the German-language physics journal Annalen der Physik (Annals of Physics), Volume 354, Issue 7. A 2.4 MB downloadable and searchable German-language PDF is available at Wiley Online Library.

Later, Einstein, in collaboration with the British physicist Robert W. Lawson who translated Einstein’s works, further expanded upon his 1916 journal paper and consolidated his theories into an English-language hard-cover book given the same title as the paper. Two versions—with different forewords by Lawson on the dust jackets—were published in 1920: 1) In the U.S., as a 182-page (168 numbered body pages) book titled “Relativity: The Special and the General Theory,” by Henry Holt and Company, New York; and 2) In England with a 138-page printing titled “Relativity: The Special and the General Theory. A Popular Exposition,” by Methuen & Co., Ltd, London.

In the book, Einstein explained the basis for referring to his first theory (“On the Electrodynamics of Moving Bodies”) as “special relativity”: it was valid only for a particular, or special, subset of reference frames (non-accelerating ones). Einstein had been striving for a unified theory applicable to all observers, regardless of whether they were in an inertial or accelerating frame of reference. Such a unified theory would, in Einstein’s view, have the virtue of being compliant with an all-encompassing universal law of nature. The German adjective “allgemeinen” (as in “Die Grundlage der allgemeinen Relativitätstheorie”) translates to “general” but has a subtly different meaning than in English technical writing, where “general” commonly connotes “broad but not necessarily specific”. The word “allgemeinen” is a declension of the root adjective “allgemein” (a close pronunciation for English-only speakers is I'll･guh･mine, where the syllable I'll is pronounced like the contraction for “I will”), which has multiple context-sensitive connotations in German, one of which—especially in technical matters—means “universal.” Einstein elaborated on this in his 1920 book, “Relativity: The Special and the General Theory.”

Gravitational acceleration
Note that escape velocity, which has the unit of measure m/s, is distinct from gravitational strength, which is a different property known as acceleration and has m/s2 as its unit of measure. Though the escape velocity at an event horizon is a finite value (the speed of light), the gravitational strength at event horizons (and at the surface of theorized fuzzballs) is infinite, imbuing particles possessing any mass whatsoever with infinite weight. Thus, an imaginary uncrushable rocket with its center of mass located at an event horizon would require infinite thrust merely to hover. This is general relativity’s “accelerating frame of reference” counterpart to special relativity’s requirement that infinite energy be expended to accelerate an object possessing mass—even a subatomic particle—to precisely the speed of light.

This property of infinite gravitational acceleration (infinite gravitational strength) at event horizons merits further scrutiny because, at least as recently as 2023, online popular culture sites such as physics discussion boards, science websites, and even a university physics professor on YouTube writing calculations on a blackboard were promulgating a misunderstanding that objects have non-infinite weights at event horizons. The root cause of some of this misunderstanding was the improper application of Isaac Newton’s 337-year-old formula for the law of universal gravitation (upper equation, below) rather than a proper appreciation of the ramifications of Einstein’s theory of general relativity and how extreme gravity affects spacetime. Such a mistake is born of a logical non sequitur: that while general relativity explains the existence of an event horizon around a black hole, the event horizon somehow remains part of regular, un-warped spacetime where Newton’s law of universal gravitation applies; it does not. In accordance with general relativity (lower equation, below), event horizons exist because their escape velocity equals the speed of light and gravitational acceleration is infinite, completely cutting them off from spacetime; no further calculations are warranted.


 * Newton’s law of universal gravitation: $$F=G\frac{m_1m_2}{r^2}$$
 * Schwarzschild radius: $$R=\frac{2GM}{c^{2}}$$

Newton’s law of universal gravitation yields increasingly inaccurate results as both space and time (spacetime) are increasingly warped by large masses. Even in the mildly gravitationally warped spacetime surrounding Earth, general relativity’s gravitational effect on GPS satellites makes their onboard atomic clocks run 45,685 nanoseconds per day (0.01669 second per year) faster when in orbit versus their Earth-centered reference location, 26,562 kilometers below. To make GPS timing signals run at the slower center-of-Earth rate while in orbit, the satellites’ reference oscillators receive a “factory offset” before launch, which also compensates for a smaller opposing effect of special relativity due to orbital velocity. At the other extreme, the improper use of Newton’s formula to calculate the gravitational strength at the event horizon of the largest known supermassive black hole, Phoenix A* (see List of most massive black holes), which is estimated to have a mass of roughly 100 billion Suns, yields a wildly incorrect (and even survivable) gravitational acceleration of only about 15 times that of Earth’s gravity. Regardless of the size of a black hole, from the perspective of an observer outside a black hole’s gravitational influence, the escape velocity at event horizons and the surface of fuzzballs equals the speed of light, gravitational strength is infinite, and the flow of time has come to a halt.
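The arithmetic behind this misuse of Newton’s formula can be sketched numerically. The following is a minimal illustration with assumed constant values; the 100-billion-solar-mass figure for Phoenix A* is an estimate, and the Newtonian result it produces is precisely the *misleading* figure the text warns about:

```python
import math

# Assumed physical constants (CODATA-style rounded values)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """R = 2GM/c^2, the event-horizon radius of a non-spinning black hole."""
    return 2 * G * mass_kg / C**2

def newtonian_gravity_at(mass_kg, radius_m):
    """g = GM/r^2 -- valid only in weakly curved spacetime, NOT at a horizon."""
    return G * mass_kg / radius_m**2

M = 1e11 * M_SUN                           # ~100 billion solar masses (assumed)
r_s = schwarzschild_radius(M)              # ~3e14 m, about 2,000 au
g_newton = newtonian_gravity_at(M, r_s)    # the misleading Newtonian figure
earth_gravities = g_newton / 9.81          # ~15 g, per the text's example
```

Per general relativity the true gravitational strength at the horizon is infinite; the ~15 g value merely shows how badly Newton’s formula fails there.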

Gravitational tides
The aforementioned phenomenon of infinite gravitational acceleration at event horizons is distinct from gradients known as gravitational tides. The intense gravitational tides of stellar-mass black holes, intermediate-mass black holes, and smaller supermassive black holes cause a stretching effect on objects known as spaghettification, lethal amounts of which can occur hundreds of kilometers above the surface of stellar-mass fuzzballs (or above the event horizon surrounding a singularity). For instance, a 10-solar-mass stellar-class fuzzball has a gravitational tide at its surface of 100 billion Earth-gravities per meter, which would stretch an infalling astronaut into a stream of paste well before reaching its surface. Even a relatively small 400,000-solar-mass supermassive fuzzball, which has a gravitational tide of 64 Earth-gravities per meter at its surface, would pull apart the body of a hapless astronaut falling feet-first before reaching its surface due to vertebral tensile forces greater than the weight of one metric ton on Earth.

= Singularities =

Overview
After Einstein published his theory of general relativity in 1916, theoretical physicists soon realized that the theory predicted the existence of black holes from which light could not escape. Today, it is widely accepted that black holes exist. Though the formulas of general relativity could be extrapolated to produce a zero-volume singularity at the center of a black hole in which all of a black hole’s mass existed at infinite density, many physicists after 1916 assumed that undiscovered natural phenomena could prevent a singularity from forming. This notion was reinforced by the realization that the theories of general relativity and the then-rapidly evolving science of quantum mechanics were mutually incompatible.

The suspicion that undiscovered natural phenomena would ultimately prevent a singularity from forming was undercut circa 1965–1970 when Roger Penrose and Stephen Hawking published papers showing how a differential geometry-based treatment of general relativity proved singularities existed. Though many professional cosmologists and physicists remained skeptical that singularities were physically real, pop-culture and textbooks directed to laypersons promulgated a simplified scientific orthodoxy that they were.

The long-held belief that there was a sound theoretical basis for the existence of singularities was challenged by a 2023 paper in which Roy Kerr, a theoretical physicist who made notable contributions to black hole theory in the mid-1960s, showed that 59 years earlier Penrose had made a mathematical error that was soon adopted by Hawking. When the error is corrected, Kerr wrote, their math actually shows the following:
 * Singularities don’t exist;
 * there is no other proof that singularities exist; and
 * black holes are actually extended objects composed of dense degenerate matter of some sort (“quantum matter,” as Kerr wrote).

Kerr’s view of black holes is in general concordance not only with fuzzballs but with gravastars, which, like fuzzballs, are theorized to be compact objects with no singularities at their centers. See &thinsp;§Relationship to gravastars, below, for more.

Details
The modern view of black holes has existed since 1916—the same year that Einstein’s theory of general relativity was published—when Karl Schwarzschild, for whom the “Schwarzschild radius” is named, found that general relativity predicted the existence of objects so massive that their surface gravity would prevent light from escaping. Though Schwarzschild did not predict the existence of singularities at the center of black holes, the consequences of his formulas could be extrapolated to the ultimate extent: all the matter comprising black holes would collapse into zero-volume singularities of infinite density and infinite escape velocity.

While “infinity” conceptually exists in the science of mathematics, nothing in the natural world is known to actually be infinite other than, quite possibly, the size of the multiverse. Truly zero volume and infinite density are incompatible with quantum theory because the smallest linear dimension in physics that has any meaning in the measurement of spacetime is the Planck length, which Max Planck introduced in 1899 and today is given the value $1.616\times10^{-35}$ meter. Below the Planck length, the effects of quantum foam dominate and it is meaningless to conjecture about length at a finer scale, much like how meaningless it would be to specify to a precision of one centimeter where one can recover a buoy floating in storm-tossed seas.

Infinite-density singularities are also incompatible with other Planck units. For instance, if singularities have non-zero diameters with a density equal to the Planck density (about $5.2\times10^{96}\ \mathrm{kg/m^3}$), then even the singularity of a ten-solar-mass black hole would have a definite non-zero volume with a diameter on the order of $2\times10^{-22}$ meter: a definite size, yet still far smaller than an electron and about a hundred times smaller than the minimum dimension that can be probed with a world-class 10 TeV particle accelerator (roughly $2\times10^{-20}$ meter). Alternatively, if a minimal singularity has a quantum-limited size of one Planck volume, then such a ten-solar-mass hole has a “fuzzy” density averaging around $10^{135}\ \mathrm{kg/m^3}$, which far exceeds the Planck density.
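The Planck-scale quantities in this argument follow directly from the fundamental constants. A minimal sketch with assumed constant values (the ten-solar-mass example is an illustrative assumption, not a figure from any cited source):

```python
import math

# Assumed physical constants (rounded)
HBAR = 1.055e-34   # reduced Planck constant, J*s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s

planck_length = math.sqrt(HBAR * G / C**3)   # ~1.6e-35 m
planck_density = C**5 / (HBAR * G**2)        # ~5.2e96 kg/m^3

# Hypothetical example: a ten-solar-mass black hole compressed to the
# Planck density still has a definite, non-zero diameter.
mass = 10 * 1.989e30                          # kg
volume = mass / planck_density                # m^3
diameter = (6 * volume / math.pi) ** (1 / 3)  # sphere diameter, ~2e-22 m
```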

This inability to describe the exact nature of singularities speaks to a general dilemma in physics: wherever a theory’s mathematical formulas require a zero to be used as a divisor and the known laws of physics are declared to have “broken down,” it is often an indicator that the theory is incomplete. Physicists in the decades after 1916 assumed they might well have an incomplete understanding of black holes and that unknown physical processes would ultimately prevent the formation of the zero-volume singularities that general relativity was predicting.

Between 1965 and 1970, Roger Penrose and Stephen Hawking published three papers, collectively called the Penrose–Hawking singularity theorems, which relied upon a differential geometry-based analysis of the maximally distorted spacetime of general relativity. In doing so, the two physicists concluded that gravitational singularities truly existed. It is important to note that the 1965 paper by Penrose was published three years before deep inelastic scattering experiments conducted at what was then known as the Stanford Linear Accelerator Center confirmed that hadrons such as protons and neutrons were composite particles—not elementary—and comprised quarks, which had been predicted to exist in 1964. Quarks are truly fundamental quanta of mass-energy that interact with the mediators of the three fundamental particle forces of the Standard Model (electromagnetism, the strong interaction, and the weak interaction).

In the nearly 59 years after Penrose and Hawking—both highly respected members of the scientific community—advanced their theories, the scientific orthodoxy that singularities lie at the heart of black holes was widely promulgated in textbooks, magazines, and other media directed to laypersons. Nonetheless, many cosmologists and physicists doubted that singularities and their accompanying true infinities, such as “infinite escape velocity” and “infinite density,” truly existed. This doubt spurred some theoretical physicists to continue looking toward quantum mechanics and quantum gravity for a solution to the conundrum, while others worked on the equation of state of various forms of exotic degenerate matter such as neutron star matter, as well as on fuzzball theory, which posits that the equation of state of the matter comprising black holes is found in the Type IIB variant of superstring theory.

The assumption that there was a solid theoretical basis predicting the existence of singularities was challenged in 2023 when the prominent theoretical physicist Roy Kerr—by then 89 years old—issued a scientific paper stating that the conclusion by Penrose and Hawking that black holes have singularities at their centers was based upon an incorrect mathematical assumption. Kerr authored or coauthored papers containing two now-eponymous formulas for rotating black holes: the Kerr metric (1963), which described the spinning generalization of the Schwarzschild metric, and the Kerr–Newman metric (1965), which described a charged spinning solution. Referring to Kerr’s 2023 paper, the theoretical physicist and author Sabine Hossenfelder said, “This is maybe the most surprising development in theoretical physics I’ve seen for a decade or so.”&thinsp;

Kerr wrote in his 2023 paper that there is no evidence that black holes contain singularities and that the mathematical basis Penrose and Hawking relied upon to show they do actually showed the opposite. When Kerr wrote of “finite affine lengths” (“FALLs”), he was referring to the effect of “affine parameters” (see Affine connection). Affine parameters are a way of measuring relativistically distorted spacetime geometry—the length of curves in the science of differential geometry—in a way that is independent of the travel time of light since light does not experience time. Kerr wrote that Penrose, Hawking, and other theoretical physicists predicated their work on a seminal but flawed journal paper published in 1955 by Amal Kumar Raychaudhuri (1923–2005) known as “Raychaudhuri’s theorem” (see Raychaudhuri equation). Kerr wrote that Raychaudhuri and those who later relied upon his theorem failed to appreciate the distinction between affine and geodesic distances, leading to the erroneous conclusion that affine parameters have finite lengths.

Kerr showed that affine parameters are exponential functions that are asymptotic: their graphed curves approach exceedingly close to a bound limit (the zero line in the right pane of Fig. 7&thinsp;) but never equal that limit, contrary to how Penrose and Hawking treated them. This is analogous to how one may accelerate protons in a particle collider to 99.9999% the speed of light, or may endow them with 100 times that relativistic kinetic energy by accelerating them to 99.99999999% the speed of light, but one can never accelerate them to a velocity that equals the speed of light.
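The arithmetic of that proton analogy can be checked with the standard Lorentz factor; this is a minimal sketch (the specific percentages are computed here, not quoted from Kerr’s paper):

```python
import math

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# Kinetic energy in units of the proton's rest energy m*c^2
ke1 = gamma(0.999999) - 1.0            # at 99.9999% of c, ~706 rest energies

# Speed required for 100x that kinetic energy
gamma_needed = 1.0 + 100 * ke1
beta_needed = math.sqrt(1.0 - 1.0 / gamma_needed**2)
# beta_needed ~ 0.9999999999 (99.99999999% of c) -- asymptotically close to,
# but never equal to, the speed of light
```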



Kerr’s 2023 realization repudiated the logic of Penrose and Hawking, which was as follows:


 * Affine parameters reach their bound limits, which means;
 * Affine curves (spacetime curves) possess finite lengths, which can only be valid if;
 * There is a point where all curves converge: a singularity.

Kerr wrote that the false underlying premise of Penrose and Hawking that affine parameters in black holes reach their bound limits invalidated all that followed, particularly the final logical leap that singularities must exist.

In his paper, Kerr endorsed neither fuzzball theory nor string theory; he did not mention them. He wrote that there was no proof that singularities exist and that black holes comprise some sort of degenerate matter. Regarding the possible nature of that degenerate matter, Kerr had the following footnote on page nine of his paper: “What this all means, I have no idea! I do not believe anyone else does either since the behavior of quantum matter at such extreme pressure is unknown.”

Kerr’s view of black holes as comprising compact objects with a physical surface and no singularities at their centers is in general concordance not only with fuzzballs but also gravastars, which are a description of black holes according to a theory advanced in 2001. See &thinsp;§Relationship to gravastars, below, for more.

= Information paradox =

Overview
Classical black holes create a problem for physics known as the black hole information paradox; there is no such paradox under fuzzball theory. The paradox was first raised in 1972 by Jacob Bekenstein and later popularized by Stephen Hawking. The information paradox is born of a requirement of quantum mechanics that quantum information must be conserved, which conflicts with general relativity’s requirement that if black holes have singularities at their centers, quantum information must be extinguished from spacetime. This paradox can be viewed as a contradiction between two very different theories: general relativity, which describes the largest gravity-based phenomena in the Universe, and quantum mechanics, which describes the smallest phenomena. Fuzzball theory purports to resolve this tension because the Type IIB superstring theory it is based on is a quantum description of gravity called supergravity.

Details


A black hole that fed primarily on the stellar atmosphere (protons, neutrons, and electrons) of a nearby companion star should, if it obeyed the known laws of quantum mechanics, grow to have a quantum composition different from another black hole that fed only on light (photons) from neighboring stars and the cosmic microwave background. This follows a core precept of both classical and quantum physics that, in principle, the state of a system at one point in time should determine its state at any other time.

Yet, general relativity’s implications for classic black holes are inescapable: Other than the fact that the two black holes would become increasingly massive due to the infalling matter and light, no difference in their quantum compositions would exist because if singularities have zero volume, black holes have no quantum composition. Moreover, even if quantum information was not extinguished at singularities, it could not climb against infinite gravitational intensity and reach up to and beyond the event horizon where it could reveal itself in normal spacetime. This is called the no-hair theorem, which states that black holes can reveal nothing about themselves to outside observers except their mass, angular momentum, and electric charge, whereby the latter two could theoretically be revealed through a phenomenon known as superradiance.

For classic black holes (those with singularities at their centers), Hawking radiation (radiation proposed by the theoretical physicist Stephen Hawking that comprises photons and possibly other quanta theorized to be emitted from the proximity of black holes) cannot circumvent the no-hair theorem as it can reveal only a black hole’s mass. However, this is in a highly theoretical sense since Hawking radiation is, for all practical purposes, undetectable (see §Testability of the theory, below).

Hawking radiation is created whenever massless and truly neutral virtual particle pairs—virtual photons for modern expositions of this topic—form in proximity to, but outside of, an event horizon. One member of a virtual particle pair possesses negative mass-energy (in the absolute E&thinsp;=&thinsp;mc2 sense), the other has positive mass-energy, and the average pair’s net energy is zero. The meanings of “negative mass-energy” or simply “negative energy” (in the absolute E&thinsp;=&thinsp;mc2 sense) in discussions of virtual photons at black holes differ somewhat from what “negative energy” normally means for virtual photons in the lab (in regular spacetime). Virtual photons are oscillations in the background electromagnetic field that prevent an otherwise pure vacuum from containing (possessing) zero energy. Virtual photons are characterized by their wavelength (frequency, or “color”), momentum (which, unlike real photons, is exceedingly variable), and polarization (spin-angular momentum). Unfortunately, terms like “negative energy,” “antiparticle,” and “antimatter” can add confusion to a topic that has long fallen victim to popular misunderstanding. Moreover, the issue of whether photons are best described as “waves” or “particles” can needlessly belabor a simple and accessible exposition on Hawking radiation if not formally addressed. Accordingly, a short treatise on the broad subject is required to establish context for how Hawking radiation can be viewed as “arising from virtual photons possessing negative energy tunneling through an event horizon.” The “wave–particle duality” adds complexity to a topic that is already challenging to understand.
This duality is commonly encountered when photons are referred to as “quantized wave packets propagating in the electromagnetic field” rather than simply “particles.” Referring to photons as “particles” more accurately and conveniently describes the nature of photons after they hit a light detector in a double-slit experiment; it is needlessly ponderous to say, “the photon’s wave function then collapses to a point.” Especially in a treatise on Hawking radiation, where Hawking himself wrote, “there will be pairs of particles, one with negative energy and one with positive energy,” it is useful here to use particle-based vernacular. Nonetheless, the wave nature of virtual photons must be addressed to properly cover Hawking radiation. Though one member of a pair of virtual photons can possess negative energy, this is neither the product of charge conjugation (the reversing of electric charge as permitted by “C-symmetry”), nor is it antimatter because, by definition, antimatter is “matter (which has mass) possessing an electric charge opposite that of ordinary matter.” Though photons are considered to be their own antiparticle (which is a broad family that confusingly includes antimatter like antiprotons), photons are more specifically a truly neutral particle&thinsp;/&thinsp;antiparticle. Furthermore, real photons must always possess energy equal to the speed of light times their momentum vector and must have zero rest mass. In laboratories, virtual photons possess different kinds of momentum and interact with matter and its accompanying electromagnetic fields in different ways. Virtual photons exist everywhere and their effects are observable as the Lamb shift as they interact with the electromagnetic field of electrons surrounding atoms. This activity also underlies zero-point energy, which jostles matter to such an extent it prevents helium at near-absolute zero from freezing at room pressure. 
Virtual photons are also responsible for the Casimir effect, which squeezes two closely spaced plates together. Virtual photons can also be polarized; this is to say, they have the quantum property of spin-angular momentum, which can couple to the angular momentum of charged particles. In simple terms, all three of the above effects: the Lamb shift, the Casimir force, and the inability of helium to freeze at room pressure due to zero-point energy, arise from the collective activity of virtual photons. More precisely—and ponderously—these three effects result from oscillations in the quantum electrodynamic field, resulting in a non-zero QED vacuum (or simply vacuum energy). The QED vacuum is the lowest energy state of the all-pervasive electromagnetic field permeating the Universe; real photons are traveling excitations in this electromagnetic field. Note that the quantum electrodynamic field and the electromagnetic field are essentially the same thing except that the former is the quantum-based view of electromagnetism that accounts for a non-zero vacuum energy. Note also that in discussions of Hawking radiation, the term “zero-point energy” is interchangeable in practice with “vacuum energy,” but the former is broader and encompasses other zero-point fields, including the quantum chromodynamic vacuum (QCD), which governs interactions at the quark level. The relativistic mass-energy of real photons (their absolute E&thinsp;=&thinsp;mc2 energy that, as Hawking wrote, is “relative to infinity”) is proportional to their momentum vector times the speed of light per $E = pc$, where…
 * $E$ is energy,
 * $p$ is the magnitude of the momentum vector, and
 * $c$ is the speed of light.

Individual virtual photons are different from real ones; they may carry any momentum, or relativistic mass-energy, permitted by the Heisenberg uncertainty principle. Thus, any given pair of virtual photons may possess opposite and unequal momenta. However, across a large population of virtual photon pairs, their net momentum averages to zero and so too does their rest mass-energy and relativistic energy. With regard to measurements in the lab of virtual photon momentum, the labels “positive energy” and “negative energy” are relative classifications established by the direction of their momentum vector, p (and accompanying energy) in relation to an external electromagnetic field (from one or more nearby charged particles). This convention comes from the behavior of real photons, which possess positive energy with respect to electrons; this underlies spectral lines where the electrons surrounding atoms transition from a lower-energy atomic orbital to a higher-energy one after absorbing photons. When an individual virtual photon is exchanged between two particles with like charges (followed soon after by its partner), it is considered to have positive energy when its momentum adds energy to the electromagnetic force between them and they more vigorously repel each other. Note that a virtual photon with a positive-energy momentum direction when exchanged between two electrons would be classified as possessing a negative-energy momentum if the exchange was between positrons. In the context of Hawking radiation, however, the labels “positive energy” and “negative energy” for virtual photons are in an absolute sense, or “relative to infinity,” as Hawking wrote. Virtual photons with negative mass-energies in an absolute sense are generally considered as not physically real. 
This is because a virtual photon possessing positive momentum and positive mass-energy behaves just like easy-to-study real photons, whereas a virtual photon possessing negative momentum and negative mass-energy cannot be isolated in the lab and its distinctive property studied. This inability to isolate a virtual photon possessing negative energy is to be expected since, in the wave-based view, it is actually an integral part of an individual oscillation in the quantum electrodynamic field (electromagnetic field) permeating the Universe (see image at right) that humans cannot bifurcate. Black holes have the unique ability to do what cannot be done in the lab: separate virtual photon pairs; Hawking was clear about this in his paper. In Mathur’s scientific paper, The fuzzball proposal for black holes: an elementary review, he endeavored to explain in greater detail the mechanism that allows the quantum information describing what fell into a black hole to be imprinted in Hawking radiation. Mathur wrote that since the radius of curvature near supermassive black holes is larger than both the Planck length and the size of strings, supermassive black holes would seemingly be incapable of violating Hawking’s argument; i.e., such holes would continuously separate virtual photon pairs, creating a problem for quantum theory. Mathur posited that the problem is resolved in the following way: “Bound states” (oscillations in quantum fields like the QED field) grow larger with increasing “degeneracy,” which is to say, spacetime and gravity near event horizons are so agitated that different quantum states begin to share the same energy levels. Essentially, near event horizons where spacetime is close to no longer existing, the quantum property of size smears, enlarges, and overlaps with other quantum properties.
This changes the structure of the black hole from what Hawking had imagined to something more like a “string star.” A string star does not separate photon pairs like a traditional black hole does, and thus, Hawking’s puzzle is resolved. From the perspective of an outside observer viewing wave-based phenomena, black holes can shear, stretch, and bifurcate the components of a QED oscillation possessing opposite momenta. Hawking radiation arises when the portion of the oscillation possessing negative momentum and negative relativistic mass-energy (the lower bulge of the QED oscillation at right) tunnels through to a black hole. This liberates the positive-energy half of the QED oscillation as a real photon, which becomes exceedingly gravitationally redshifted as it climbs up the extreme gravity well surrounding the black hole and escapes to infinity. The peak-emission wavelength of fully redshifted photons is about ten times the diameter of the event horizon surrounding a non-spinning black hole regardless of its mass; a 6.77-solar-mass non-spinning black hole with an event horizon diameter of 40.0 kilometers emits Hawking radiation where the most common photons have a wavelength of 403 kilometers. Finally, it is important to bear in mind that gravity and spacetime are so agitated near a black hole that even the mathematics describing Hawking radiation can be viewed in different ways.
For instance, instead of negative-energy particles tunneling through the horizon in forward-directed time, they can be thought of, as Hawking wrote in his paper, “as positive-energy particles crossing the horizon on past-directed world-lines.” As mentioned earlier, Hawking cautioned, “It should be emphasized that the mechanism responsible for the thermal emission and area decrease are heuristic only and should not be taken too literally.” Even the basic premise that Hawking radiation is the product of a stationary person observing a highly accelerating region of spacetime has its general relativity inverse known as the Unruh effect, which predicts that thermal radiation surrounds an accelerating observer. A helpful YouTube video, “Hawking radiation” by ScienceClic English, provides a very visual and detailed explanation of virtual particles, Hawking radiation, and how there are different ways of looking at these phenomena. The virtual photon possessing negative energy is captured; it travels down through the event horizon via quantum tunneling, whereupon it becomes part of the black hole (robbing it of energy and an equivalent amount of mass). Meanwhile, the pair member with positive energy is ejected, carrying away its share of energy from the black hole; this is Hawking radiation, wherein the ejected photons are no longer virtual and are real.

Hawking showed that his now-eponymously named radiation takes the form of blackbody thermal emissions that make black holes appear to be blackbody radiators with effective temperatures that, despite being extraordinarily close to absolute zero, are inversely proportional to the mass of a black hole. If one could collect a sufficient number of Hawking radiation photons, their spectrum distribution would reveal the mass of the black hole that emitted them.
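The inverse relationship between mass and temperature, and the earlier 403-kilometer wavelength example, can be sketched numerically. This is a minimal illustration with assumed constant values; the factor 3.921 is the standard photon-number peak of a blackbody spectrum (the “most common” photons), not a figure taken from Hawking’s paper:

```python
import math

# Assumed physical constants (rounded)
HBAR = 1.0546e-34   # reduced Planck constant, J*s
H = 6.6261e-34      # Planck constant, J*s
G = 6.674e-11       # gravitational constant
C = 2.998e8         # speed of light, m/s
K_B = 1.3807e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30    # solar mass, kg

def hawking_temperature(mass_kg):
    """T = hbar*c^3 / (8*pi*G*M*k_B): inversely proportional to mass."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def peak_photon_wavelength(mass_kg):
    """Wavelength of the most common emitted photons (photon-number peak,
    where hc/(lambda*k*T) ~ 3.921 for a blackbody)."""
    return H * C / (3.921 * K_B * hawking_temperature(mass_kg))

T = hawking_temperature(6.77 * M_SUN)        # ~9.1e-9 K, near absolute zero
lam = peak_photon_wavelength(6.77 * M_SUN)   # ~4.0e5 m, i.e. ~403 km
```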

It is essential to bear in mind that the above description of the origin of Hawking radiation is highly simplified. Even though Hawking’s scientific paper, “Particle Creation by Black Holes,” was directed to theoretical physicists and delved into arcane phenomena like Killing vector fields, Hawking cautioned that his descriptions of the mechanism responsible for black hole thermal emission “are heuristic only and should not be taken too literally.” Note that heuristic teaching means “a teaching method where students learn on their own through discovery and problem-solving in lieu of pure instruction”; however, in theoretical physics, the adjective heuristic can connote “treated in a simpler manner than it really is,” whereas the compound noun heuristic approach tends to mean “a simpler or more intuitive way to examine or explain a phenomenon.” Nonetheless, Hawking’s advisement to his peers to not take his explanations too literally bears witness to the complexities underlying Hawking radiation. His advisement also underscores his remarkable achievement of producing a mathematical formula relating photon emissions from black holes of any given mass to a blackbody temperature. Within that formula, Hawking linked thermodynamics to a variety of disparate disciplines in physics: quantum mechanics, relativity, Newtonian mechanics, and gravitation, as shown below.



The amount of Hawking radiation emitted by black holes, or their luminosity, is inversely proportional to the square of their mass. Such calculations assume that Hawking radiation comprises only photons; that assumption is used throughout this and related articles on Wikipedia. That equation is as follows:&thinsp;


 * $$L = \frac{\delta E}{\delta t} = \frac{1}{M^2}\frac{\hbar c^6}{15360 \pi G^2}$$

The term $L$ (luminosity) represents power in watts (an exceedingly small portion of a watt for Hawking radiation), which can be converted to other measures such as mass loss rates. Details on the formula’s other terms are beyond the scope of this article and are covered at Bekenstein–Hawking formula. The formula’s name honors Jacob Bekenstein (1947–2015), who laid down essential foundations of black hole theory that predated Hawking’s contributions by several years.
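The formula above can be evaluated directly; a minimal sketch with assumed constant values (the solar-mass example is illustrative):

```python
import math

# Assumed physical constants (rounded)
HBAR = 1.0546e-34   # reduced Planck constant, J*s
G = 6.674e-11       # gravitational constant
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg

def hawking_luminosity(mass_kg):
    """L = hbar*c^6 / (15360*pi*G^2*M^2): photons-only Hawking luminosity
    in watts, inversely proportional to the square of the mass."""
    return HBAR * C**6 / (15360 * math.pi * G**2 * mass_kg**2)

L_solar = hawking_luminosity(M_SUN)   # ~9e-29 W: an exceedingly small power
```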

In a purely theoretical sense, the fuzzball theory advanced by Mathur and Lunin goes beyond Hawking’s formula relating the blackbody temperature of Hawking radiation and the mass of the black hole emitting it. Fuzzball theory satisfies the requirement that quantum information be conserved because it holds, in part, that the quantum information of the strings that fall onto a fuzzball is preserved as those strings dissolve into and contribute to the fuzzball’s quantum makeup. The theory further holds that a fuzzball’s quantum information is not only expressed at its surface but tunnels up through the quantum fuzziness at the event horizon, where it can be imprinted on Hawking radiation, which very slowly carries that information into regular spacetime in the form of delicate correlations in the outgoing quanta.

Fuzzball theory’s proposed solution to the black hole information paradox resolves a significant incompatibility between quantum mechanics and general relativity. While Einstein made important contributions to quantum mechanics, he had objections to it. Throughout the remainder of his career, Einstein searched in vain for a unifying theory—a Theory of Everything, so to speak—that explained all aspects of the universe. To this day, there is no widely accepted theory of quantum gravity—a quantum description of gravity—that is in harmony with general relativity. However, all five variations of superstring theory, including the Type IIB variant upon which fuzzball theory is based, have quantum gravity incorporated into them. Moreover, all five versions have been hypothesized as actually constituting five different limits, or subsets, that are unified under M-theory.

= Testability of the theory =

== Overview ==

As no direct experimental evidence supports either string theory or fuzzball theory, both are products purely of calculations and theoretical research. However, theories must be experimentally testable if there is to be a possibility of ascertaining their validity. To be in full accordance with the scientific method and one day be widely accepted as true—as are Einstein’s theories of special and general relativity—theories regarding the natural world must make predictions that are consistently affirmed through observations of nature. Superstring theory predicts the existence of highly elusive particles that, while they are actively being searched for, have yet to be detected. Moreover, fuzzball theory cannot be substantiated by observing its predicted subtle effects on Hawking radiation because the radiation itself is for all practical purposes undetectable. However, fuzzball theory may be testable through gravitational-wave astronomy.

== Details ==
The first challenge to the testability of fuzzball theory is that it is rooted in unproven superstring theory, which is short for supersymmetric string theory (see Fig. 9&thinsp;). Supersymmetry predicts that for each known quantum (particle) in the Standard Model, a superpartner particle exists that differs in spin by $1/2$. This means that for every boson (particles in the Standard Model with integer spins like 0 and 1), there is a fermion-like superpartner with a half-odd-integer spin (e.g., $1/2$ and $3/2$); the superpartners of the gauge bosons are known as gauginos and possess rest mass. Examining this spin-$1/2$ offset in the opposite direction, superstring theory predicts that fermions from the Standard Model have boson-like superpartners known as sfermions, except that unlike actual gauge bosons from the Standard Model, sfermions don’t act as force carriers. All bosons (e.g., photons) and the boson-like sfermions will readily overlap each other when crowded, whereas fermions (such as electrons, protons, and quarks) and the fermion-like gauginos will not; this is one reason why superpartners—if they exist—have properties that are exceedingly different from their Standard Model counterparts. Take the example of the photon, which is a massless boson with an integer spin of 1 and is the carrier of electromagnetism in the Standard Model; it is predicted to have a superpartner called a photino, which is a mass-carrying fermion with a half-odd-integer spin of $1/2$. Conversely, the electron (spin $1/2$) is an example of a mass-carrying fermion whose superpartner is the spin-0 selectron, which is a boson but is not considered to be a force carrier.
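The spin bookkeeping described above can be laid out explicitly. The pairings below are a minimal illustrative sketch using the conventional supersymmetry naming scheme; it simply verifies that every Standard Model particle and its hypothesized superpartner differ by exactly half a unit of spin:

```python
from fractions import Fraction

# Standard Model particle -> (spin, superpartner name, superpartner spin).
# These pairings follow the conventional SUSY naming scheme: gauge bosons
# get fermion-like "-ino" partners; fermions get boson-like "s-" partners.
SUPERPARTNERS = {
    "photon":   (Fraction(1),    "photino",   Fraction(1, 2)),
    "gluon":    (Fraction(1),    "gluino",    Fraction(1, 2)),
    "electron": (Fraction(1, 2), "selectron", Fraction(0)),
    "quark":    (Fraction(1, 2), "squark",    Fraction(0)),
}

# Every pairing differs by exactly half a unit of spin.
for name, (spin, partner, pspin) in SUPERPARTNERS.items():
    assert abs(spin - pspin) == Fraction(1, 2)
    print(f"{name} (spin {spin}) <-> {partner} (spin {pspin})")
```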

The experimental detection of superpartners would not only bolster superstring theory but would also help fill gaps in current particle physics, such as the likely composition of dark matter and the muon’s anomalous magnetic moment (the Dirac equation predicts a g-factor of exactly 2, whereas the measured value is about 2.00233 and differs slightly from Standard Model predictions, suggesting hidden interactions); particle physicists have accordingly been searching for these superpartners. Based on cosmological effects, there is strong evidence for the existence of dark matter of some sort (see Dark matter: Observational evidence), but if it is composed of subatomic particles, those particles have proven to be notoriously elusive despite the wide variety of detection techniques that have been employed since 1986. This difficulty in detecting supersymmetric particles is not surprising to particle physicists since the lightest ones are believed to be stable, electrically neutral, and to interact only weakly with the particles of the Standard Model. Though many searches using particle colliders have ruled out certain mass ranges for supersymmetric particles, the hunt continues.

Fuzzball theory resolves a long-standing conflict between general relativity and quantum mechanics by holding that quantum information is preserved in fuzzballs and that Hawking radiation originating within the Planck-scale quantum foam just above a fuzzball’s surface is subtly encoded with that information. As a practical matter, however, Hawking radiation is virtually impossible to detect because black holes emit it at astronomically low power levels and the individual photons constituting Hawking radiation carry extraordinarily little energy. This underlies why theoretically perfectly quiescent black holes (ones in a universe containing no matter or other types of electromagnetic radiation to absorb) evaporate so slowly as they lose energy (and equivalent amounts of mass) via Hawking radiation; even a modest stellar-mass black hole would require on the order of $10^{58}$ times the current age of the Universe to vanish. Moreover, a top-of-the-list supermassive black hole of 106 billion solar masses would require ten million-trillion-trillion times longer still to evaporate: roughly $10^{90}$ times the age of the Universe.
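These timescales can be sanity-checked from the standard evaporation-time formula for a Schwarzschild black hole, $t \approx 5120\,\pi G^{2} M^{3}/(\hbar c^{4})$. The sketch below is illustrative: the 2.7-solar-mass figure is an assumed minimum stellar black hole mass, and the 106-billion-solar-mass figure is taken from the text above:

```python
import math

# Physical constants (SI units)
G     = 6.674e-11   # gravitational constant
HBAR  = 1.055e-34   # reduced Planck constant
C     = 2.998e8     # speed of light
M_SUN = 1.989e30    # solar mass, kg
AGE_UNIVERSE_S = 4.35e17  # ~13.8 billion years, in seconds

def evaporation_time(mass_kg: float) -> float:
    """Hawking evaporation time (seconds) of a Schwarzschild black hole."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

t_small = evaporation_time(2.7 * M_SUN)    # assumed minimum stellar-mass hole
t_huge  = evaporation_time(106e9 * M_SUN)  # top-of-the-list supermassive hole

ratio_small = t_small / AGE_UNIVERSE_S
ratio_huge  = t_huge / AGE_UNIVERSE_S
print(f"stellar-mass hole: ~1e{int(math.log10(ratio_small))} ages of the Universe")
print(f"supermassive hole: ~1e{int(math.log10(ratio_huge))} ages of the Universe")
```

Because the evaporation time scales as $M^{3}$, the ratio between the two results is about $10^{31}$ (ten million-trillion-trillion), matching the factor quoted above.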

Hawking showed that the energy of photons released by Hawking radiation is inversely proportional to the mass of a black hole and, consequently, the smallest black holes emit the most energetic photons that are the least difficult to detect. However, the radiation emitted by even a minimum-size black hole (or fuzzball) comprises extremely low-energy photons that are equivalent to those emitted by a black body with a temperature of around 23 billionths of one kelvin above absolute zero. More challenging still, such a black hole has a radiated power—for the entire black hole—of about $1.2\times10^{-29}$ watts (12 billion-billion-billionths of one milliwatt). Such an infinitesimal transmitted power is to one watt as 1⁄3000th of a drop of water (about one-quarter the volume of a typical grain of table salt) is to all the Earth’s oceans.
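Both figures follow from the standard Schwarzschild-black-hole formulas for the Hawking temperature, $T = \hbar c^{3}/(8\pi G M k_B)$, and the Hawking luminosity. In this sketch, 2.7 solar masses is an assumed minimum stellar black hole mass near the Tolman–Oppenheimer–Volkoff limit:

```python
import math

# Physical constants (SI units)
G     = 6.674e-11    # gravitational constant
HBAR  = 1.055e-34    # reduced Planck constant
C     = 2.998e8      # speed of light
K_B   = 1.381e-23    # Boltzmann constant
M_SUN = 1.989e30     # solar mass, kg

M = 2.7 * M_SUN  # assumed minimum-mass stellar black hole

# Hawking (blackbody) temperature of the hole
T_hawking = HBAR * C**3 / (8 * math.pi * G * M * K_B)

# Total radiated power (Hawking luminosity)
P_hawking = HBAR * C**6 / (15360 * math.pi * G**2 * M**2)

print(f"T ~ {T_hawking:.1e} K")   # ~2.3e-08 K (23 billionths of a kelvin)
print(f"P ~ {P_hawking:.1e} W")   # ~1.2e-29 W
```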

Critically though, when signals are this weak, the challenge is no longer one of classic radio astronomy technological issues like gain and signal-to-noise ratio; Hawking radiation comprises individual photon quanta, so such a weak signal means a black hole is emitting at most only about ten photons per second. Even if such a black hole were only 100 light-years away, the odds of just one of its Hawking radiation photons landing anywhere on Earth—let alone being captured by an antenna—while a human is watching are astronomically small. Importantly, the above values are for the smallest possible stellar-mass black holes; far more difficult yet to detect is the Hawking radiation emitted by the supermassive black holes at the centers of galaxies. For instance, M87* (Fig. 10&thinsp;), which is an unremarkable supermassive black hole, emits Hawking radiation at a near-nonexistent rate of at most 13 photons per century and does so with a wavelength so great that a receiving antenna possessing even a modest degree of absorption efficiency would be larger than the Solar System.
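These photon counts can be estimated to order of magnitude by dividing the Hawking luminosity by a typical photon energy. The sketch below makes two illustrative assumptions: a typical photon energy at the blackbody Wien peak, $\approx 2.82\,k_B T$, and a mass of 6.5 billion solar masses for M87*; the exact counts quoted above depend on such choices:

```python
import math

# Physical constants (SI units)
G     = 6.674e-11    # gravitational constant
HBAR  = 1.055e-34    # reduced Planck constant
C     = 2.998e8      # speed of light
K_B   = 1.381e-23    # Boltzmann constant
M_SUN = 1.989e30     # solar mass, kg

def photon_rate(mass_kg: float) -> float:
    """Approximate Hawking-photon emission rate (photons/second), assuming
    a typical photon energy at the Wien peak of the spectrum, ~2.82 k_B T."""
    T = HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)      # Hawking temperature
    P = HBAR * C**6 / (15360 * math.pi * G**2 * mass_kg**2)  # Hawking luminosity
    return P / (2.82 * K_B * T)

# Order-of-magnitude check: ~10 photons/s for a minimum stellar-mass hole,
# ~10 photons/century for M87* (assumed 6.5e9 solar masses)
print(f"minimum stellar hole: ~{photon_rate(2.7 * M_SUN):.0f} photons/s")
print(f"M87*: ~{photon_rate(6.5e9 * M_SUN) * 3.156e9:.0f} photons/century")
```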

However, fuzzball theory may be testable through gravitational-wave astronomy. Gravitational wave observatories like the Laser Interferometer Gravitational-Wave Observatory (LIGO) have proven to be a revolutionary advancement in astronomy and are enabling astronomers and theoretical physicists to develop ever-more detailed insights into compact objects such as neutron stars and black holes. Ever since the first direct detection of gravitational waves, a 2015 event known as GW150914, which was a merger between a binary pair of stellar-mass black holes, gravitational-wave signals have so far matched the predictions of general relativity for classical black holes with singularities at their centers. However, an Italian team of scientists that ran computer simulations suggested in 2021 that existing gravitational-wave observatories are capable of discerning fuzzball-theory-supporting evidence in the signals from merging binary black holes (and the resultant effects on ringdowns) by virtue of the unique attributes of fuzzballs, which are extended objects with a physical structure. The team’s simulations predicted slower-than-expected decay rates for certain vibration modes that would also be dominated by “echoes” from earlier ring oscillations. Moreover, a separate Italian team a year earlier posited that future gravitational-wave detectors, such as the proposed Laser Interferometer Space Antenna (LISA), which is intended to have the ability to observe high-mass binary mergers at frequencies far below the limits of current observatories, would improve the ability to confirm aspects of fuzzball theory by orders of magnitude.

= Relationship to gravastars =

Outwardly, fuzzballs are similar to gravastars since they appear to outside observers as black holes, have a core comprising degenerate matter, and possess no singularities at their centers. Unlike fuzzball theory, however, gravastar theory is not based on string theory; it is instead based on a theory advanced in 2001 that extends Bose–Einstein condensation to gravitational systems in accordance with Einstein’s theory of general relativity.

Gravastar cores comprise what the authors termed “Gravitational Bose–Einstein Condensate,” or “GBEC.” The theory holds that gravastars are nearly as compact as black holes, are cold objects, and have a globally defined Killing time, and that their GBEC cores are enclosed by a phase-boundary skin of an ultra-relativistic fluid of “soft quanta” with a Planckian-level thickness approaching zero. Because gravastars are slightly less compact than black holes, event horizons do not reach their surface, allowing quantum information to escape. However, since gravastars possess surface gravities that are essentially as strong as those of black holes, they nonetheless resemble black holes for all practical purposes to outside observers. As the authors of the 2001 paper wrote, “Unlike black holes, a collapsed star of this kind is consistent with quantum theory, [is] thermodynamically stable, and suffers from no information paradox.”&thinsp;

= See also =


 * Black hole information paradox
 * Black hole thermodynamics
 * Cosmic censorship hypothesis
 * Degenerate matter
 * Ergosphere
 * Event horizon
 * General relativity
 * Gravastar
 * Gravitational singularity
 * Hawking radiation
 * Horizon (general relativity) (a list of types)
 * M-theory
 * Minkowski space
 * Neutron star
 * Planck star
 * Rotating black hole
 * Quantum foam
 * Quantum gravity
 * Spacetime
 * Spacetime diagram
 * Spaghettification
 * Special relativity
 * String (physics)
 * String duality
 * String theory
 * Supermassive black hole
 * Superstring theory
 * Timeline of black hole physics
 * Type II string theory
 * Virtual particle

= Notes = 

= References =

= External links =
 * Are Black Holes Fuzzballs? — Space Today Online
 * The Fuzzball Fix for a Black Hole Paradox, June 23, 2015 — Quanta Magazine
 * Information paradox solved? If so, Black Holes are “Fuzzballs” — The Ohio State University
 * ArXiv.org link: Unwinding of strings thrown into a fuzzball — Stefano Giusto and Samir D. Mathur
 * Astronomers take virtual plunge into black hole (84 MB) (10 MB version), a 40-second animation produced by JILA — a joint venture of the University of Colorado at Boulder and the NIST
 * Video lecture series at CERN (four parts approximately an hour each): “The black hole information problem and the fuzzball proposal”, Part 1, Part 2, Part 3, Part 4
 * What if Singularities DO NOT Exist?, hosted by Matt O’Dowd, PBS Space Time — YouTube