Wikipedia:Reference desk/Archives/Science/2017 November 16

= November 16 =

"After"-Talk
Is it just a myth, or a fact, that certain types of bugs can make it possible to listen to what was said in a room up to an hour after the talk has ended? I mean the device wasn't there when the conversation was going on and was installed, say, many minutes after the talkers had left the premises. Jon Ascton   (talk)  05:13, 16 November 2017 (UTC)


 * Just a myth. Perhaps someone can link to a reference that debunks this fanciful notion?    D b f i r s   08:43, 16 November 2017 (UTC)
 * It is hard to debunk the general concept of an after-talk listener - there is no rock-solid physical principle that says you cannot, as there would be for a claim that you can listen before the talk happens. But any specific implementation I can imagine would be easily debunked.
 * For instance, "picking up the attenuated sound waves bouncing off the walls with a very sensitive microphone" is next-to-impossible: (1) since the sentence spoken at t is still bouncing around when the sentence at t+Δt is spoken, it would need a whole lot of deconvolution that may or may not be possible and will in any case worsen the signal-to-noise ratio; (2) except at resonant frequencies of the room, sound attenuates quite fast (i.e. the Q factor is low) (test: shout at the top of your lungs, and listen for anything a few seconds after you stop: you hear nothing, which means the decibels drop fairly quickly); (3) microphones are not much more sensitive than the human ear and far less complex as far as signal processing goes, so if you cannot hear something, it is usually a good guess that a microphone next to you cannot either. (I remember someone saying that the acoustic noise generated by air at room temperature was not far below the threshold of human hearing and some people with Hyperacusis could hear it, but I could not track down a source to that effect - can anyone else, or is that just another myth?) Tigraan Click here to contact me 09:07, 16 November 2017 (UTC)
 * Methinks it would require some kind of Echo chamber. But unless it could reverberate for a very long time, the OP's concept wouldn't work. Also, you'd likely have echoes of different parts of the conversation going on all at once, and it would require some tedious work to separate it out. ←Baseball Bugs What's up, Doc? carrots→ 09:26, 16 November 2017 (UTC)


 * The events were recorded when they occurred (and could then have been relayed later). There are many ways to record people which they are not necessarily aware of... The belief however reminds me of "wall memory", a spiritualist belief that objects, walls and houses could have memory which they could echo later in particular circumstances to explain paranormal encounters etc.  — Paleo  Neonate  – 10:55, 16 November 2017 (UTC)


 * The device doesn't have to be inside the room to hear what's going on.  See laser microphone. 82.13.208.70 (talk) 11:16, 16 November 2017 (UTC)


 * Perhaps the users of such devices spread rumours about recording with a device installed long after the event, just to hide how sensitive their real-time devices really are.   D b f i r s   12:27, 16 November 2017 (UTC)


 * Oh, it's a myth, but it would be useful if you could provide a source/link for the original claim. It would be much easier for us to provide sources to debunk a particular and specific assertion, rather than just throwing open the field to try to prove a general negative.
 * For a thought experiment, though, consider that the speed of sound is about 300 meters per second, and a good-sized meeting room might be 10 meters across. In one second, a sound originating from a point in the room will have bounced back and forth off the walls of the room 30 times.  (It's even worse if you remember that rooms have ceilings about 3 meters high; that's 100 floor-ceiling bounces per second.)  A minute later, a single short, clear sound will have bounced off at least a couple of thousand surfaces, getting spread out, attenuated, and jumbled into a bit of molecular jiggling indistinguishable from heat. A hard surface like concrete, drywall, or glass might reflect as much as 98% of the sound energy that hits it back into the room - an echo.  If we do the math for 2000 ideal 98% bounces, we get...2*10^-18 times the original intensity.  And that's your best case, because it doesn't account for the presence of soft sound-absorbing objects in the room, like chair cushions, drapes, or people, and it doesn't account for the miserable nuisance of multiple paths interfering with each other.
 * If I fire a pistol in an office with a closed door, and then open the door a minute later, the guys out in the hall don't hear a 'bang' when the door opens. Forget trying to get information about something like a conversation. TenOfAllTrades(talk) 13:48, 16 November 2017 (UTC)
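The back-of-envelope attenuation above is easy to check numerically. The sketch below uses the same round figures as the post (98% reflectivity, a 10 m room, ~300 m/s, one minute of bouncing), which are illustrative idealizations rather than measured values.

```python
# Sound energy remaining after ~2000 reflections, each keeping 98%
# of the incident energy (the idealized figures from the post above).
reflectivity = 0.98
bounces = 2000

remaining = reflectivity ** bounces
print(f"{remaining:.1e}")  # ~2.8e-18 of the original intensity

# Sanity check on the bounce count: ~30 wall-to-wall crossings per
# second in a 10 m room at ~300 m/s, sustained for one minute.
crossings = (300 / 10) * 60
print(crossings)  # 1800.0, i.e. roughly 2000 bounces
```

Even under these generous assumptions the signal is down by eighteen orders of magnitude after a minute, far below the noise floor of any microphone.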


 * There's some novel where "the words used by Jesus to raise Lazarus from the dead", I think, are recorded in the ceramic by the vibrations of a potter's wheel or some such. But sound can be very weak (the ear is said to be able to sense an eardrum motion on the order of a single atom's radius) and these fictional scenarios are usually only that.  Although yes, spies can record conversations by bouncing lasers off office tower windows and watching the vibration.  You can certainly picture some loopy scenario where such a wiggling of an office tower window reflecting a distant bright object gets recorded on a security camera or something.  But the easiest fiction is for a cop to tell a criminal defendant of average intelligence that they have such a device, and that he can get a lesser charge by talking before they have to bring it out to the scene. Wnt (talk) 16:53, 19 November 2017 (UTC)


 * I just read of a sci-fi story from the '40s (can't remember if it's a book or movie) where in the future, all prior actions can be reconstructed from trace imprints and vibrations, so the murderer concocts a plot entirely in his head, and befriends his intended victim to allay suspicion. It's killing me that I can't think of the name, remember the source, or find it on Google.  It may have been mentioned in the Oct Discover or Sci Am magazines, which I returned to the library last week.  In any case, the idea is nonsense simply based on chaos theory, namely nonlinear feedback and path independence.  Most of the relevant info is quickly overwhelmed by signal noise, destroyed by entropy, or simply not recorded in the first place.  For example, leave two glasses of water in the fridge an hour apart, then try the next day to find out which one was placed in first. The temperature will give you no clue, and other hints will also very swiftly disappear. μηδείς (talk) 22:06, 19 November 2017 (UTC)


 * Medeis: ask in Sci Fi Stack Exchange if you want to identify it. – b_jonas 11:39, 24 November 2017 (UTC)

How far from the center has a bound electron of the observable universe ever reached?
Out of zillions of atoms, one has had an electron reach the greatest number of picometers from the center since the Big Bang. This distance should be estimable, right? Maybe it's "more wrong" to think of electrons this way than as clouds, but you're only probabilistically estimating, not observing and changing actual electrons. Sagittarian Milky Way (talk) 08:57, 16 November 2017 (UTC)
 * Even before you get into quantum weirdness, your question is poorly defined. Say there are only 2 protons and 2 electrons in the universe.  If the two electrons are both closer to proton A than to proton B, do you have two hydrogen atoms, or a positive hydrogen ion and a negative hydrogen ion?  i.e. when an electron is far enough away from the atom, it ceases to be meaningful to define it as an atom (and that's before you get to issues regarding all electrons being interchangeable, and not having defined positions unless measured). MChesterMC (talk) 09:35, 16 November 2017 (UTC)
 * The greater issue then is that what you really have is just a set of data describing the location of charge centers in relation to each other and their relative movement. Concepts like "electron" and "proton" and "ion" and "atom" are human-created categorizations to make communicating about complex data like this meaningful to us.  We define the difference between an atom and an ion based on our own (arbitrary but useful) distinctions.  What makes something a thing is that we set the parameters for that thing.  There is no universal definition for that thing outside of human discussion.  -- Jayron 32 11:50, 16 November 2017 (UTC)
 * Also there is no such thing as the center of the Universe.--Shantavira|feed me 11:14, 16 November 2017 (UTC)
 * True, but I read it as being from the centre of the atom. My quantum mechanics isn't up to the task, but it should be possible to estimate a probable maximum distance just by multiplying the probability density function from the Schrödinger equation by the number of atoms being considered, perhaps just for hydrogen atoms.  Whether such a distance could ever be measured in practice is questionable, but the mathematics based on a simple model should provide a very theoretical answer. Do we have a quantum mechanic able to offer an estimate?    D b f i r s   12:22, 16 November 2017 (UTC)
 * You have to define your probability limits. The maximum distance is infinite if you don't define a limit, like 90% probability, or 99% probability, or 99.99% probability.  If you set the probability to 100%, you get a literally infinitely large atom.  -- Jayron 32 12:36, 16 November 2017 (UTC)
 * Yes, of course, that's what the probability density function gives, but if you find the distance at which the probability is ten to the power of minus eighty, then we have a theoretical figure for the maximum expected distance since there are about ten to the power of eighty hydrogen atoms in the observable universe. Statisticians might be able to refine this estimate, and I agree that it might bear little relevance to the real universe.    D b f i r s   12:50, 16 November 2017 (UTC)
 * In the ground state, the distance you are asking about is ~100 times the Bohr radius of a hydrogen atom. However, in principle there exist an infinite number of potential excited states with progressively increasing orbital sizes.  Very large orbitals involve energies very close to but slightly below the ionization energy of the atom.  In that case the electron is only very weakly bound.  Aside from the general problem that the universe is full of stuff that will interfere, there is no theoretical reason why one couldn't construct a very lightly bound state with an arbitrarily large size.  Dragons flight (talk) 14:12, 16 November 2017 (UTC)
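The ~100 Bohr radii figure for the ground state can be recovered numerically. For ground-state hydrogen the radial probability density is 4r²e^(−2r) (with r in Bohr radii), and its tail integral beyond r works out to e^(−2r)(2r² + 2r + 1). The sketch below solves for the radius where that tail equals 10⁻⁸⁰, i.e. one chance among the roughly 10⁸⁰ hydrogen atoms mentioned in this thread.

```python
import math

# Probability that a ground-state hydrogen electron is measured
# beyond radius r (r in units of the Bohr radius): the tail integral
# of the radial density 4 r^2 exp(-2r).
def tail(r):
    return math.exp(-2 * r) * (2 * r * r + 2 * r + 1)

# Bisect for the radius where the tail probability equals 1e-80.
lo, hi = 1.0, 1000.0
for _ in range(200):
    mid = (lo + hi) / 2
    if tail(mid) > 1e-80:
        lo = mid
    else:
        hi = mid

print(f"{lo:.0f} Bohr radii")  # ~97, i.e. roughly 100 Bohr radii
```

That is only about 5 nm, so even the single most extreme ground-state excursion among every hydrogen atom in the observable universe is merely ~100 times the atom's normal size.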
 * The important thing to remember here is that energies are quantized but distances are not. This is what the uncertainty principle is all about.  You can't handwave some "yeahbuts" around; the position of an electron with a well-defined momentum is fundamentally unknowable, which means that the chance of finding that electron at any arbitrary point in the universe is not zero.  In a single-hydrogen-atom universe, we can construct a hydrogen atom of any arbitrarily large radius by asymptotically approaching the ionization energy of the atom (this is akin to the escape velocity in a gravitationally bound system).  As the energy level of an electron approaches asymptotically close to the ionization energy, the radius of that atom tends towards infinity.  Well, sort of.  The radius itself is not a well defined thing, but any given radius definition (such as the Van der Waals radius) will tend towards arbitrarily large values as one approaches ridiculously high energy levels.  There are an infinite number of energy levels below the ionization energy, so you can get arbitrarily close to it without passing it.  That's what DF is on about.  In a real universe with other atoms, highly excited electrons are able to absorb enough energy from a stray photon to be excited past the ionization energy, so in practical terms in a real universe, there are practical limits to the size of atoms, but those are imposed by factors external to the atom, not factors based on internal forces within the atom.  Purely as a system-unto-itself, there is no limit to the distance over which an atom can remain bound.  Only an energy limit.  -- Jayron 32 15:15, 16 November 2017 (UTC)
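The point that the radius grows without bound as the energy approaches the ionization threshold can be illustrated with the Bohr-model scalings for hydrogen (E_n = −13.6 eV / n², r_n = n²·a₀); the particular n values below are arbitrary choices for illustration.

```python
# Bohr-model scalings for hydrogen: as n grows, the binding energy
# E_n = -13.6 eV / n^2 creeps toward the ionization threshold at 0,
# while the orbital radius r_n = n^2 * a0 grows without bound.

A0 = 5.29e-11  # Bohr radius in meters

for n in (1, 10, 100, 1000, 10_000):
    energy_ev = -13.6 / n**2
    radius_m = n**2 * A0
    print(f"n={n:>6}: E = {energy_ev:.3e} eV, r = {radius_m:.2e} m")
```

At n = 10,000 the radius is already millimeter-scale; laboratory Rydberg atoms with n in the hundreds really do reach micrometer sizes, though, as noted above, such weakly bound states are easily ionized by stray photons and fields.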
 * There's no way you can find an answer to "there's a 50% chance a bound electron has not been x far from its atom's center" in some way not really applicable to the real universe? Like this: if you were to measure the position of every electron once to good accuracy (clearly not possible for many reasons, i.e. sentient life postdated atoms), there should be a 50% probability that one of the bound electrons is x far; the most likely electron speed is y (which has to have an answer, since superheavy elements get relativistic effects from the electrons moving that fast); it takes z time for an electron at the most likely or 50th-percentile distance to move a reasonable distance away from where it was at that speed (say 1 radian; yes, they don't really orbit); there have been w of these time periods since the percentage of hydrogen atoms that were non-ionized became similar to now (does that even cover most of the time since stars?); and that could be taken as w more chances for an electron to get far, so you could then calculate the distance with w times more atoms each being measured once. So if (numbers for example, not accurate) there were 10^80 atoms and there have been 10^34 periods z long so far, you'd find the 50% probability maximum for 10^114 atoms being measured once? (Since good positional accuracy would screw with trying to measure the real universe's electrons' positions quadrillions of times per second.) If cosmological weirdness has made the amount of (normal) matter within the observable boundary vary a lot, I wouldn't mind if that was ignored to make the math easier. Sagittarian Milky Way (talk) 19:27, 16 November 2017 (UTC)
 * There's some fun stuff that happens with low-probability statistics and indistinguishable particles.
 * The probability that an electron is measured at a distance of a thousand light-years radially from the proton it orbits is very low but non-zero.
 * But - if you set up a detector, and you register one "count", can you prove that the electron you observed is the one you were trying to measure?
 * No, you cannot. Your detector might have measured noise - in other words, it might have been lit up by a different electron whose interaction with your detector was more or less likely than the interaction with your super-distant electron.  Actually, the probabilities and statistics don't matter at all, because we have a sample-size defined exactly as one event.  Isn't quantization fun?
 * In order to prove that it was the electron you were hoping to measure, you need to repeat the experiment and detect a coincidence. The probability that you will measure this is (very low, but non-zero) squared - in other words, it won't be measurable within the lifetime of the universe.
 * Here's what the Stanford Encyclopedia of Philosophy has to say on this topic: Quantum Physics and the Identity of Indiscernibles.
 * Take it from an individual who has spent a lot of time counting single electrons - even with the most sophisticated measurement equipment, my electron looks exactly like your electron, and I can't prove which electron was the one that hit my detector.
 * Nimur (talk) 19:53, 16 November 2017 (UTC)
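The single-count-versus-coincidence argument can be made concrete with toy numbers. Every figure below is illustrative rather than from the discussion: if a spurious detection occurs with tiny probability p per trial, demanding two independent detections pushes the expected rate to p².

```python
# Toy numbers: p is an illustrative per-trial probability of a
# detection, whether real signal or an indistinguishable noise count.
p = 1e-20
trials = 1e9 * 4.3e17  # ~1e9 trials/s over ~the age of the universe

expected_singles = p * trials          # lone counts happen routinely
expected_coincidences = p**2 * trials  # effectively never

print(expected_singles, expected_coincidences)
```

With these numbers a lone count occurs millions of times (and can always be noise), while the coincidence that would tell signal from noise is not expected to occur even once in the lifetime of the universe.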


 * I think the weirdness is weirder than that. I mean, the question is how far any electron in the history of history has ranged away from its nucleus and returned.  But electrons are ... a cloud of probability.  At any instant you could measure the electron and find it anywhere.  So if you measured any one electron at every possible instant, an infinite number of instants, it should range pretty much an infinite distance away.  Except... that measuring the electron changes it!  If you find the electron a light year away from its nucleus, you won't find it a Bohr radius away a nanosecond later, because the probabilities are all changed.  And the reverse is also true.  So you can't talk about where you would have measured an electron, but only where you did measure it.
 * But even if you rephrase the question to where an electron has been found in the history of history, it's still a problem ... because if you measured its position, it is not possible to measure its momentum accurately, to determine if it was truly "in orbit" or if it had simply been ejected from the atom! And so you can't actually give a distance; you can only give a probability function that may extend out an arbitrarily large distance.
 * I think... Wnt (talk) 17:04, 19 November 2017 (UTC)
 * In other words, shut up and calculate. --47.138.163.207 (talk) 09:45, 20 November 2017 (UTC)

Ethanol fermentation
A gram of sugar has 4 calories and a gram of alcohol has 7 calories. Would anyone be able to tell me what the approximate conversion rate of sugar to alcohol in ethanol fermentation is, in calories? E.g. if you put 400 calories of sugar into the reaction, how many calories of ethanol do you get out? I've tried reading the article but all the equations are way over my head, sorry. Thanks for your time. — Preceding unsigned comment added by Oionosi (talk • contribs) 10:21, 16 November 2017 (UTC)
 * C6H12O6 → 2 C2H5OH + 2 CO2
 * translates into
 * 180 g sugar → 2 × 46 g alcohol + 2 × 44 g CO2,
 * (those molar masses can be found in the respective articles on glucose, ethanol and CO2)
 * 4 × 180 calories sugar (= 720) → 7 × 2 × 46 calories alcohol (= 644) + 76 calories lost
 * As you see, this was not "way over your head"; you underestimate yourself — Preceding unsigned comment added by 185.24.186.192 (talk) 11:36, 16 November 2017 (UTC)
 * That looks like a valid calculation (though the one-significant-figure calories given by the OP lack the precision to come up with an accurate "calories lost" figure). I'll add a note of explanation though.  With fermentation the idea is to take something in an intermediate redox state and split it up into more oxidized and reduced components.  In a very loose sense it is like the opposite of a combustion reaction, though one rarely thinks of burning something (even something greasy) in carbon dioxide!  So you start with a sugar, i.e. -CHOH-.  Not counting any ends, the formal oxidation state of the carbon is -1 from the direct hydrogen and +1 from the oxygen bonds adding to 0; the oxygen is -2 as usual.  Such a compound could be produced by the oxidation of -CH2- (which has carbon at -2) with one oxygen.  As a result it has less energy than -CH2-, assuming an oxygen-containing atmosphere.  (On Saturn or early Earth it would have more energy than -CH2- since that won't do much in regard to methane)  Oxidation can take a terminal -CH2OH (alcohol, -1) and convert it to -CHO (aldehyde, +1), then -COOH (carboxylic acid, +3), then separate it as CO2 (carbon dioxide, +4, with a -1 change in oxidation state on the other end as a hydrogen replaces this carbon on the decarboxylated main chain).  Reduction can convert -CH2OH to -CH3 (methyl, -3).  By such manipulations, we see that the initial six +0 CHOH carbons become two +4 CO2s, two -3 methyls, and two -1 alcohols.  As a result, the "burnability" of six carbons is concentrated into four carbons, while the other two are now completely burned to CO2.  This doesn't quite match the 7/4 ratio because the energy of compounds is more complicated than that (after all, oxidation state is a rather crude approximation that assumes absolute differences based on quantitative differences in electronegativity, among many other things) but I think it is qualitatively useful. Wnt (talk) 01:14, 20 November 2017 (UTC)
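The stoichiometric bookkeeping from this thread (C6H12O6 → 2 C2H5OH + 2 CO2, with the 4 and 7 kcal/g figures from the question) can be sketched as:

```python
# Molar masses (g/mol) for the fermentation equation above.
M_GLUCOSE = 180.0  # C6H12O6
M_ETHANOL = 46.0   # C2H5OH

KCAL_PER_G_SUGAR = 4.0
KCAL_PER_G_ETHANOL = 7.0

# Per mole of glucose fermented:
kcal_in = M_GLUCOSE * KCAL_PER_G_SUGAR          # 720 kcal
kcal_out = 2 * M_ETHANOL * KCAL_PER_G_ETHANOL   # 644 kcal
efficiency = kcal_out / kcal_in                 # ~0.89

# The 400-calorie example from the question:
print(f"{400 * efficiency:.0f} kcal of ethanol")  # 358 kcal
```

So, to one significant figure, roughly 90% of the food energy of the sugar ends up in the ethanol, with the rest released during fermentation.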

Telephone lines
Can someone explain how telephone lines "work", i.e. how it is that they can carry multiple conversations simultaneously, rather than being busy for all the people on the exchange whenever any subscriber is on the phone? I looked at telephone and telephone line without seeing anything, and Google wasn't helpful either. I would imagine that the line would carry electrical pulses just one at a time (as on a party line), and multiple conversations would cancel each other out, but obviously our whole telephone system wouldn't work properly if that were the case. Nyttend (talk) 12:23, 16 November 2017 (UTC)
 * This has a pretty good explanation, getting down to how signals are encoded for travelling down the wire. -- Jayron 32 12:34, 16 November 2017 (UTC)
 * Hm, so nowadays it's electronic when going from exchange to exchange; not surprised, but I wasn't aware of that. And I didn't realise that there was a completely separate wire from every subscriber to the exchange, or I wouldn't have wondered.  But before electronics, how was it possible for two subscribers from the same exchange to talk simultaneously with two subscribers from the other exchange, rather than one person taking it up and preventing other callers?  Does Myrtle have multiple individual wires going to every nearby town's exchange, and a massive number of individual wires going to some nationwide center in order to allow Fibber to phone someone halfway across the country?  Nyttend (talk) 12:49, 16 November 2017 (UTC)
 * Andy gives a good summary below. In the early days of telephone systems there really was a direct electrical connection that had to be made between each pairs of callers, and no one else could use the same wires at the same time, so each hub on the network had to have many separate wires available that could be variously connected to make one complete circuit for each call.  However, we have long since abandoned that approach.  Nowadays everything is generally digitized and travels over packet switched networks.  Depending on where you live, and who provides the phone line, the digitization may happen inside your home or at some regional or central exchange.  Dragons flight (talk) 14:25, 16 November 2017 (UTC)


 * There are several methods used historically.
 * Party line systems were connected simply in parallel. Only one could be used at a time.
 * Underground cables used a vast number of conductor pairs, one for each circuit. 100-pair cables were common, far more than could ever be carried overhead, where there was mostly a single pair per visible cable (the first telegraph signals used a single copper conductor for each visible wire, so a telephone circuit might need two wires and a pair of china insulators). Cables for the 'local loop' from the exchange to the telephone used a single pair for each phone. Cables between exchanges were of course circuit switched, so they only needed enough pairs for the calls in progress (not the number of phones), and many calls would be local, within the same exchange.
 * Analogue multiplexing was used from the 1930s (rarely) to the 1980s. Like a radio, this was a broadband system that packed multiple separate signals down the same cable by multiplexing them. Frequency-division multiplexing was used, like an AM radio. Each telephone signal only needed a narrow bandwidth of 3 kHz: 300 Hz to 3.3 kHz. This meant that the largest trunk lines could carry signals of several MHz over a coaxial copper tube conductor, several to a cable, and each could carry thousands of voice phone calls - or a single TV signal, in the early days of national TV in the 1950s-1960s.
 * In the 1980s, PCM (pulse code modulation) came into use, where analogue phone signals were digitised, then distributed as circuit-switched digital signals. Usually this was done in the telephone exchange, but commercial switchboards began to operate digitally internally and so these could be connected directly to the digital network, through ISDN connections (64kbps or 2Mbps). There was some movement to placing concentrators in cabinets in villages, where the local phones were digitised and then connected to a local town's exchange via such a connection (all phones had a digital channel to the exchange). This allowed simpler cables (such as a single optical fibre) to the village, but was less complex than an exchange in the village.
 * In the 1990s, the Internet became more important and packet switching began to replace circuit switching for digital connections between exchanges and commercial sites. The domestic telephone was still an analogue connection, rarely ISDN, and anyone with home internet access used a modem over this.
 * By 2000 the analogue telephone was no longer as important as the digital traffic. Also IP protocols from the computer networking industry replaced the mix of digital protocols (ATM, Frame Relay) from the telephone industry. Analogue phones became something carried over an IP network, rather than digital traffic being carried by analogue modems. BT in the UK began to implement the 21CN, as a total reworking of their legacy network. Andy Dingley (talk) 13:27, 16 November 2017 (UTC)
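The frequency-division arithmetic above can be sketched as a simple channel-count budget. The 4 kHz channel spacing (3 kHz of usable voice band plus guard space) matches the classic FDM plan; the trunk bandwidths below are illustrative round numbers, not figures from the discussion.

```python
# How many 4 kHz voice channels fit into a trunk of a given bandwidth.
CHANNEL_SPACING_HZ = 4_000

def channels(trunk_bandwidth_hz: int) -> int:
    return trunk_bandwidth_hz // CHANNEL_SPACING_HZ

# A basic FDM "group" occupied 48 kHz; multi-MHz coaxial trunks
# (example figures) carried thousands of calls, as described above.
for bw in (48_000, 4_000_000, 12_000_000):
    print(f"{bw} Hz -> {channels(bw)} channels")
```

Real systems reserved some spectrum for pilot tones and guard bands, so actual channel counts ran somewhat below this ideal division.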
 * Thank you; this really helps. I don't much understand how radio works, but the idea of broadcasting at different frequencies I understand, so using a different frequency for each telephone conversation makes sense.  Could you add some of this information to telephone line and/or telephony, since the first is small and the second mostly talks about digital?  Nyttend backup (talk) 15:06, 16 November 2017 (UTC)
 * What he said about 100 circuits is the source for the old "all circuits are busy" message. ←Baseball Bugs What's up, Doc? carrots→ 15:30, 16 November 2017 (UTC)
 * Very rarely. It was exchange equipment that ran out first, not cables.
 * Telephone exchanges are obviously complex, but for a long time and several generations of technology pre-1980 (and the introduction of stored program exchanges, i.e. single central control computers) they consisted of line circuits, junctors and a switching matrix between them. Line circuits were provided for each local loop (i.e. each customer phone). Obviously the amount of equipment per-customer was kept to an absolute minimum, and as much as possible was shared between several subscribers. Typically a rack of subscribers' uniselectors was provided, each one handling 25 lines. Several sets were provided, so each subscriber might be connected to 5, or even 10 on a busy exchange. When a subscriber picked up their phone, the next free uniselector would switch to their line (and only then the dialling tone was turned on).  So no more than 1 in 5 people could make a call at the same time - any more than that and you didn't get dial tone (and maybe did get a busy tone or message instead).
 * Exchanges are connected together by cables, and the switching circuit for these is called a junctor (Junctor is a useless article). Again, these are expensive so the equipment is shared and multiple sets are provided, but not enough to handle a call over every cable at once. Traffic planning and the Erlang were important topics for telephone network design. For a pair of exchanges (called "Royston" and "Vasey") where all of their traffic is between the two exchanges and they don't talk to people from outside, then enough junctors might be provided to meet the full capacity of that one cable.  Usually though, enough equipment was provided to meet the "planned" capacity for a cable and the "planned" capacity for the exchange, and the equipment racks (the expensive and more flexible aspect) would be the one to run out first. Only in exceptional cases would all the traffic land on a single cable, such that it was the cable which maxed out.
 * One aspect of more recent and packet switched systems, rather than circuit switched, is that they become more efficient at load sharing, thus "equipment busy" becomes rarer. Also we demand more, and the hardware gets cheaper, so it's easier to meet this demand. Andy Dingley (talk) 17:27, 16 November 2017 (UTC)
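The traffic-planning trade-off described above is classically quantified by the Erlang B formula, which gives the probability that a call is blocked when a given offered load shares a limited pool of circuits. The sketch below uses the standard iterative form of the formula; the traffic and junctor counts are illustrative, not from the discussion.

```python
def erlang_b(traffic_erlangs: float, servers: int) -> float:
    """Probability that a call finds all `servers` circuits busy,
    given `traffic_erlangs` of offered load (Erlang B formula,
    iterative form to avoid huge factorials)."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

# Illustrative: 10 erlangs offered to 15 junctors blocks ~3.6% of
# calls; doubling the junctors makes blocking negligible.
print(f"{erlang_b(10.0, 15):.4f}")  # 0.0365
print(f"{erlang_b(10.0, 30):.1e}")
```

This is why planners could provision far fewer junctors than phones: blocking falls off very steeply once the circuit pool comfortably exceeds the offered load.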
 * Thanks Andy, great stuff! I agree that junctor is a fairly poor article, and I'm interested to learn more about them. If anyone has any references on those please post them here, maybe we can use them to improve our article :) SemanticMantis (talk) 02:00, 17 November 2017 (UTC)


 * See also the phantom circuit, later reused in a patented method for supplying power, known as Power over Ethernet (PoE). -- Hans Haase (有问题吗) 11:11, 17 November 2017 (UTC)


 * Many years ago I went to an "Open Day" at the local telephone exchange.  There was a historical talk and we saw lots of metal rods with ratchets which moved up and down to make connections, making a clicking noise.   I accepted an offer from the presenter to call my house - he didn't get through, but when I mentioned this later my mother said that the phone had been ringing intermittently all evening. 82.13.208.70 (talk) 16:36, 17 November 2017 (UTC)
 * See Strowger exchange Andy Dingley (talk) 17:23, 17 November 2017 (UTC)

What is a 'double domed appearance'?


Relating to an animal's (in this case, an elephant's) head. 109.64.135.116 (talk) 19:49, 16 November 2017 (UTC)


 * It looks like an Asian elephant looks. HenryFlower 19:54, 16 November 2017 (UTC)
 * A picture is worth several words →
 * 2606:A000:4C0C:E200:C9A:4B44:2E28:1611 (talk) 05:48, 17 November 2017 (UTC)


 * There's also the scene towards the end with Alexander DeLarge on stage, reaching up but unable to grab a pair of double domes due to his "treatment", in Kubrick's adaptation of A Clockwork Orange. But that's NSFW and not for children, so I'll just allude to what occurred to me when I read the header. μηδείς (talk) 00:32, 18 November 2017 (UTC)
 * Oddly enough, that connects with the question on another desk, about the connection between Beethoven and Hitler. ←Baseball Bugs What's up, Doc? carrots→ 00:52, 18 November 2017 (UTC)

Sun's helium flash
Judging by the article about helium flash, our Sun will apparently exhibit this when it starts fusing helium. Do we know how bright this flash will be? Will it affect life on Earth? Is this the part where the Sun engulfs Mercury and Venus? Also, I suppose there will be a process of dimming once the hydrogen supply is exhausted while the Sun is collapsing. Is this near-instantaneous or will it take minutes/days/millenia? 93.136.10.152 (talk) 20:40, 16 November 2017 (UTC)
 * You can read Sun. Ruslik_ Zero 20:50, 16 November 2017 (UTC)
 * Thanks, didn't think to look there. So the compression of the core is more or less gradual as Sun reaches the end of the red giant branch, until the moment of the helium flash? 93.136.10.152 (talk) 21:15, 16 November 2017 (UTC)
 * There's also a carbon flash with three heliums kung-powing into carbon and so on in stars with enough mass (more than the Sun). If the star's massive enough it can reach 1 million sunpower with only tens of sun mass and build up central iron ash till it loses structural integrity (since stars can't run on heavy elements). The star collapses till 200 billion Fahrenheit, bounces off, and explodes with the light of billions of Suns (and up to about a trillion sunpower of neutrino radiation). When the center reaches the density of a supertanker in a pinhead it becomes extremely resistant to further collapse (but not invincible) since there's only so many neutrons that can fit in a space unless it can force a black hole (or possibly get crushed into smaller particles). Sagittarian Milky Way (talk) 00:01, 17 November 2017 (UTC)
 * Will it affect life on earth? No, because by that time all life on earth will have died off. 2601:646:8E01:7E0B:5917:3E80:D859:DF69 (talk) 06:43, 17 November 2017 (UTC)
 * Relevant quote from the article: In the case of normal low mass stars, the vast energy release causes much of the core to come out of degeneracy, allowing it to thermally expand (a process requiring so much energy, it is roughly equal to the total energy released by the helium flash to begin with), and any left-over energy is absorbed into the star's upper layers. Thus the helium flash is mostly undetectable to observation, and is described solely by astrophysical models. "Flash" in this context is meant in the sense of "in a flash", since the helium fuses extremely rapidly, not in the sense of a visible flash of light. The Sun will engulf Mercury and Venus when it transitions to a red giant. A helium flash happens in stars that have already been red giants for a long time. As for what happens once the core's hydrogen is exhausted, Stellar evolution seems to answer this. --47.138.163.207 (talk) 09:41, 20 November 2017 (UTC)