Wikipedia:Reference desk/Archives/Science/2016 July 21

= July 21 =

hyperferremia vs hemochromatosis
What is the distinction between the two? Can you have a hyperferremia that is not hemochromatosis? (you can find references where a patient is described having both hyperferremia and hemochromatosis, so they must not be synonyms). DTLHS (talk) 02:00, 21 July 2016 (UTC)


 * If I'm reading this article correctly, hyperferremia refers to hereditary hemochromatosis. Based on that (it certainly appears to be a reliable source), I've redirected the redlinked hyperferremia to iron overload, which is where hemochromatosis pointed. It should get noted on the target page somehow. I'll give it a shot if I get a few minutes. Matt Deres (talk) 03:24, 22 July 2016 (UTC)
 * Digging deeper, I'm not sure if I've redirected that appropriately; maybe HFE hereditary haemochromatosis would be better? I'm out of my depth here and neither page seems to have much activity on the talk pages. I've opened a ticket at WikiProject Medicine. Matt Deres (talk) 03:34, 22 July 2016 (UTC)

Quantum precognition and time reversibility
Suppose you have a system in which some electrons, nuclei or other objects can have spin up or spin down. Normally they are 50% each. You can transiently apply a magnetic field that makes either spin down or spin up the lower energy state, and the particles in the higher energy state emit photons to drop to the lower spin state. Eventually there is a strong disequilibrium. Then the field is removed, and for some period of time afterward, the formerly lower energy spin state continues to prevail if the state of these particles is "read".

In other words, if the applied magnetic field over time (0 = off, 1 = on) is 000000111000000, then what is presumed from ideas of causality is that the degree of spin polarization, on a scale of 0 to 9, might be 000000369753211.

Yet I have heard it said that quantum physics is time reversible, and it would seem like the times before and after the magnetic field is applied might be symmetrical. So it would seem plausible according to this way of thinking that among these particles the to-be lower energy spin state would be observed more often before the magnetic field is applied. In an extreme case, this would be 112357999753211, but conceivably it could be some compromise like 000001369753211, if for some reason the "decay" is faster, but not infinitely fast, when looking at past times than when looking at future times.
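To make the forward-in-time ("causal") expectation concrete, here is a throwaway numerical sketch; the relaxation rates, time step and 0-9 rescaling are arbitrary choices for illustration, not drawn from any experiment:

```python
# Rough sketch: spin polarization driven by a transient field, then relaxing.
# All parameters (rates, time steps) are arbitrary illustrative choices.

def polarization_trace(field, drive_rate=0.6, decay_rate=0.35):
    """Return polarization (0-1) at each step for a 0/1 field sequence.

    While the field is on, polarization relaxes toward 1 at drive_rate;
    while it is off, it decays toward 0 at decay_rate. This encodes the
    usual causal assumption: the response follows the field, never
    precedes it.
    """
    p, trace = 0.0, []
    for f in field:
        target = 1.0 if f else 0.0
        rate = drive_rate if f else decay_rate
        p += rate * (target - p)
        trace.append(p)
    return trace

field = [0]*6 + [1]*3 + [0]*6
trace = polarization_trace(field)
# Print on a 0-9 scale, analogous to the digit strings above.
print("".join(str(min(9, int(round(9 * p)))) for p in trace))
```

The printed digit string rises only after the field turns on and decays afterward, matching the shape of the causal pattern above; a time-symmetric alternative would show a mirror-image rise before the field as well.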

There are some issues here that would need to be looked at carefully, and doubtless I'm not looking at some of them carefully. For example, the time reversal of photons being emitted after the field is applied would seem to be for photons to be absorbed before the field is removed, and this seems to imply that the level of background photons is important. I assume no one has actually published a "quantum precognition" result or I'd have heard much about it, but has the experiment been done with any great precision? Wnt (talk) 02:31, 21 July 2016 (UTC)


 * I tried to read the article on T-symmetry, but I don't get it. The section on electric dipole moments seems similar to what you're talking about here, but that one class I took on quantum physics ten years ago has not prepared me for this material. Someguy1221 (talk) 03:12, 21 July 2016 (UTC)


 * You could make the same argument in classical electrodynamics. I think the resolution there is that either there is friction (which breaks time reversibility; it's an aspect of the second law of thermodynamics) or else there is no increase in the amount of magnetic dipole alignment. Whatever the answer is, it's not changed by quantization. -- BenRG (talk) 07:01, 21 July 2016 (UTC)


 * As BenRG says, it is probably not a quantum question. The underlying question is "how can macroscopic equations be irreversible, when the microscopic equations are reversible?" WP has a few articles on the subject, such as arrow of time, but to be honest they are not fantastic.
 * Notice also that the Curie principle is just that (a principle); in reality, symmetry breaking events happen. One of the implications is that, even though the equations of motion might respect some symmetry (e.g. time-reversal), their solutions might not. Now, whether this is a good argument for the defense of the causality principle, I do not know, but it certainly invalidates the "obvious" assumption that time-symmetric equations must lead to time-symmetric solutions. Tigraan Click here to contact me 11:00, 21 July 2016 (UTC)
 * "All models are wrong; some models are useful". It is also important to remember that, philosophically, QM is still a model. It is a powerful and highly accurate model, but like ALL models it has aspects that, by the very nature of models, do not match observed phenomena.  Macroscopic time reversal, FTL travel, "into the past" time travel, etc. are all examples of phenomena predicted by the equations of QM but which have never had any evidence of any sort to support them.  With all of the caveats around "absence of evidence is not evidence of absence", it is most important to maintain a strict adherence to the null hypothesis regarding the reality of such phenomena.  QM is powerful in the way it aligns with observable phenomena.  That doesn't mean we need to accept, on faith, the veracity of predictions it makes that do not align with observation.  -- Jayron 32 11:14, 21 July 2016 (UTC)
 * Well, we know there are some problems with modern physics, but this is not related to anything like that. No problem in quantum mechanics is being talked about; all that's being talked about is our expectations. As to the question: what is the meaning of talking about observations that might have been made but weren't? Dmcq (talk) 12:06, 21 July 2016 (UTC)


 * Again, this is not a QM question; at least historically, it came out of thermodynamics.
 * In any case... At any moment, there are three kinds of predictions made by any model: those that are verified experimentally, those that are falsified experimentally, and those that have not been tested (for any reason: it could be impossible to test, or just really hard, or it just has not been done yet). The heart of the experimental part of the scientific method is to transfer predictions from the last category into one of the first two, and while doing so one should indeed be "unbiased", to validly put the theories to the test.
 * However, when one is not testing the models, it is completely reasonable to "believe" the predictions of models that are successful. That is just Bayesian inference: there is a priori no reason that the ratio of predictions that would come true if tested is any lower (or higher) among the untested ones than among those that have been tested. This is an outrageous simplification of Bayesian principles, of course; notably, the a priori probability of any given model is not independent of that model. "Leprechauns did it" is less convincing than serious alternatives, even before we begin to search for leprechauns. There is indeed a leap of faith (it could be that the hardest-to-test predictions are also the most likely to falsify the model), but it is faith in the method rather than in a particular model (you do not expect gravity to just stop at the 300th anniversary of Isaac Newton's death, even if no one has done experiments after that date yet). Tigraan Click here to contact me 12:30, 21 July 2016 (UTC)
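The Bayesian point above can be caricatured numerically. In this deliberately toy setup (the two hypotheses and all probabilities are invented for illustration), a "successful model" whose predictions each hold with probability 0.99 is weighed against a "leprechauns did it" hypothesis at 0.5, after a run of verified predictions:

```python
# Toy Bayesian update: two hypotheses about how often a model's
# predictions come true, updated on a streak of verified predictions.
# All numbers here are invented for illustration.

def posterior_good(prior_good, p_good, p_lep, n_successes):
    """Posterior probability of the 'good model' hypothesis after
    n_successes verified predictions (and no failures)."""
    like_good = prior_good * p_good ** n_successes
    like_lep = (1 - prior_good) * p_lep ** n_successes
    return like_good / (like_good + like_lep)

# Even from an agnostic 50/50 prior, twenty verified predictions make
# the next, still-untested prediction look like a safe bet.
post = posterior_good(0.5, 0.99, 0.5, 20)
predictive = post * 0.99 + (1 - post) * 0.5
print(post, predictive)
```

The posterior on the successful model ends up close to 1, so the predictive probability for the next untested prediction sits near 0.99 rather than 0.5: believing the untested predictions of a well-tested model is the rational bet, not blind faith.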
 * It's not that the model is necessarily wrong, I should probably say; it is that our inferences and interpretations of the model are not necessarily consistent with observable behavior. The classic example of this is the gravitational singularity: the model predicts regions of infinitesimal volume and thus infinite density; such mathematical singularities, when mapped onto reality, are usually interpreted to mean a breakdown of the theory rather than a prediction of expected behavior.  No one really thinks that infinite density exists; as noted in the gravitational singularity article, the prediction of such a singularity at the Big Bang is assumed to be a breakdown of the predictive power of the model rather than the existence of a literal singularity.  To varying degrees, other such fantastical predictions of the QM model (functionally the leprechauns of the theory) involving reverse time travel and other violations of causality should be similarly understood.  In the same way that the prediction of a gravitational singularity should be interpreted not as an expectation of its existence but as a limitation of the theory, time reversal should be thought of in the same way.  "The model predicts that time reversal is possible" is not the same thing as saying "time reversal is possible".  -- Jayron 32 16:14, 21 July 2016 (UTC)
 * Everyone knows leprechauns are extinct. InedibleHulk (talk) 15:01, 21 July 2016 (UTC)
 * Yes, but has anyone told the leprechauns? {The poster formerly known as 87.81.230.195} 2.123.26.60 (talk) 19:35, 21 July 2016 (UTC)
 * The article Delayed choice quantum eraser discusses these types of problems and shows what you might consider as 'precognition' in action. Dmcq (talk) 14:51, 21 July 2016 (UTC)
 * This is a fascinating phenomenon, but does not actually reach that standard. The issue is that the entangled photon is measured at one of the four mirrors, not put there; so it's not possible for a decision made at the future time point to affect it in the past.  I think... though there are many possible permutations of the idea.  My interest here isn't in a fancy entanglement scheme, but just a very basic brute-force measurement to see if there's even the slightest increase in the probability of seeing a 1 or a 0 in a formerly indeterminate bit of data on some sort of storage medium if it is going to be set in the future.  I feel like somebody ought to have done this experiment, and I'm just wondering how well; people check all kinds of things, such as whether the gravitational law is right and whether constants change, so do they check this?  All the talk about leprechauns misses the point that causality (physics) is nothing more than a religion that contradicts millennia of popular belief; it's not even a theory, as far as I know; at least, not unless many careful experiments of this sort have actually been done. Wnt (talk) 23:12, 21 July 2016 (UTC)
 * I wonder if anyone's done that experiment too. Methinks it wouldn't magnetize before you write, because that would either give you an unstoppable urge to magnetize it, or it would be a source of free energy if you could convince someone not to magnetize it before he was going to. And the laws of physics don't care what a sentient being wants. Sagittarian Milky Way (talk) 23:34, 21 July 2016 (UTC)
 * I would go with the first model. It is true that trying to paradox the situation here would be problematic... my guess is such efforts might be defeated if they simply generate small amounts of noise in the experimental apparatus to foul the result, though I suspect that weird macroscopic phenomena, like one of the experimenters going mad and shooting up the place, might be even lower in energy than such noise.  I also admit to having the personal opinion that certain configurations of such an apparatus with looped causal characteristics could actually assume sentience in their own right... I am not sure what sentience without intelligence looks like. Wnt (talk) 00:13, 22 July 2016 (UTC)
 * The Aspect experiment (and its successors) is probably the best-known example of this sort of thing. Tevildo (talk) 00:30, 22 July 2016 (UTC)
 * That links to Bell test experiments where, as far as I know, there is no observable difference in measuring the 'future' or the 'past' of the entangled particle; only the fact that you're going to measure it matters. I think the entanglement issue may simply be something different. Wnt (talk) 00:47, 22 July 2016 (UTC)
 * All this talk about particles and "entanglement" is still a discussion of the model. All of these experiments start with precepts such as "whatever we're looking at is either a wave or a particle at some point in its path"; even the fairly common interpretation of superposition assumes that it should be one or the other, and that it somehow "chooses" (as though it had agency) or is "chosen" as one or the other by the act of observation itself. All of that still starts with the presumption that "particle" and "wave" are binary choices. Light is light; the properties that light displays depend on the nature of the measurement.  That's the Occam's razor interpretation of all QM experiments; it makes a minimal number of presumptions based on available evidence.  When we pile expectations upon the phenomenon (by asking questions like "Is it a particle or a wave?" or even "What makes it behave like a particle or a wave?"), we're bringing assumptions into the model.  Statements like "If the experimental apparatus is changed while the photon is in mid-flight, then the photon should reverse its original 'decision' as to whether to be a wave or a particle" presume that light was either, or both, or some superposition of both.  It was light.  All the experiment proves is that the properties we measure depend on the way we measure them.  The interpretation of the data says more about the psychology of the interpreter than it does about the nature of light.  -- Jayron 32 11:11, 22 July 2016 (UTC)


 * The delayed choice quantum eraser has nothing to do with backward causation. The only people who think it does are people who don't understand the difference between correlation and causation. See this answer. I'll quote the key bit here:
 * You have a bag containing 4 balls, 2 red and 2 black. You draw a ball. There's a 1/2 chance it will be red. If it is red, there's a 1/3 chance that the second ball you draw will be red.
 * But if you don't look at the first ball, there's a 1/2 chance that the second ball you draw will be red, and if it is, there's a 1/3 chance that the first ball you drew was red. If you collect data over many trials, conditioned on the second ball being red, you'll find that indeed about 1/3 of the first balls you drew were red.
 * Backward causation??1? Of course not. If X is correlated with Y, then Y is correlated with X. It doesn't matter whether Y happened after X.
 * The argument for backward causation in the delayed-choice quantum eraser experiment is exactly the same as the argument for backward causation in this classical experiment.
 * -- BenRG (talk) 17:37, 22 July 2016 (UTC)
 * As always Ben, a most relevant, accessible, and well-written answer. -- Jayron 32 17:42, 22 July 2016 (UTC)
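BenRG's bag-of-balls arithmetic is easy to check numerically. Here is a quick Monte Carlo sketch (the trial count and seed are arbitrary choices of mine):

```python
# Monte Carlo check of the bag-of-balls example: 2 red, 2 black, draw two
# without replacement, condition on the SECOND ball being red, and look at
# the first. Correlation runs both ways in time; no backward causation.
import random

def trial(rng):
    bag = ["red", "red", "black", "black"]
    rng.shuffle(bag)
    return bag[0], bag[1]  # first and second draws

rng = random.Random(42)
first_red_given_second_red = 0
second_red = 0
for _ in range(100_000):
    first, second = trial(rng)
    if second == "red":
        second_red += 1
        if first == "red":
            first_red_given_second_red += 1

# P(first red | second red) should come out close to 1/3, just as
# P(second red | first red) does: conditioning on a later event tells
# you about an earlier one without any backward causation.
print(first_red_given_second_red / second_red)
```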


 * What's the reverse of a debris field turning into a tornado turning into a charcoalized forest turning into fire which shrinks to a broken glass field which becomes a Molotov cocktail which rockets up till it slows into a hand in an airplane and transfers its fire to a match which is scratched by a dude and unlights itself? The equations are time reversible so this is totally possible. Sagittarian Milky Way (talk) 23:18, 21 July 2016 (UTC)


 * Well, I didn't think I was suggesting exceptions to the second law of thermodynamics here. I suppose that a trace of magnetization in advance would indeed need to become steadily stronger until the bit is written, which suggests a reversal of entropy, but that decrease in entropy is eventually paid for by an increase in entropy somewhere else when the bit actually is written, and of course we know that bits can indeed be written this way, it's just a question of precisely when. Wnt (talk) 00:03, 22 July 2016 (UTC)


 * http://arxiv.org/abs/quant-ph/0105101 Count Iblis (talk) 00:13, 22 July 2016 (UTC)
 * Fascinating, and it would be more so if I understood the other 2/3. Nonetheless, even in describing their 'time machine' (of sorts) the authors dismiss signalling to the past because it 'contradicts causality'. Wnt (talk) 00:44, 22 July 2016 (UTC)


 * There is an apparent time asymmetry in the usual formulation of quantum mechanics: wave functions collapse, but don't uncollapse. I think the point of this paper is to show that it's an artifact of the formalism, not a real asymmetry. The only relevance of this to your experiment is that you could try to argue that the time asymmetry in your experiment comes from wavefunction collapse. This paper (if you believe it) says that that's wrong. But in the classical case, you could never make that argument in the first place, because there's no analogue of physical wavefunction collapse and everyone agrees that the laws are time symmetric. So the classical version of your experiment is stronger than the quantum version, in this respect. -- BenRG (talk) 17:37, 22 July 2016 (UTC)
 * Is it really true that wave functions don't uncollapse? For example, in a quantum eraser scenario where a particle has two possible pasts and now you don't know which.  I would think that just as any observed particle has multiple futures, so it could have multiple pasts. Wnt (talk) 01:05, 23 July 2016 (UTC)
 * In traditional quantum mechanics (which was taught to me in college, and may be what people mean when they say "Copenhagen interpretation"), wavefunctions collapse after measurement, and never uncollapse, by fiat. It's an explicitly asymmetric rule.
 * The modern view is that the "measurement effect" that was traditionally attributed to the collapse is really due to quantum decoherence, which is irreversible because of the second law of thermodynamics. The wavefunction collapse then has no observable consequence, and I suppose there's no reason it couldn't uncollapse too. (This is like saying "maybe other realities that we can never detect are appearing or disappearing all the time".)
 * In the DCQE experiment, detection is irreversible. The photon heats up the detector, the heat spreads, the detector emits slightly more blackbody radiation, etc. The reverse process never happens because of the second law. If it did happen, a reverse collapse model would correctly describe it. -- BenRG (talk) 18:29, 23 July 2016 (UTC)


 * Advanced potential may be relevant. Unfortunately Wikipedia doesn't seem to have an article on it (that link redirects to "retarded potential"). There is an article on Wheeler–Feynman absorber theory but it's not very readable. In short, it's an open question why oscillating particles emit radiation in the future but not in the past. But again, this problem isn't specific to quantum mechanics; it was recognized in the 1800s. -- BenRG (talk) 17:46, 22 July 2016 (UTC)


 * The way I read that article is that there is no need to consider backward-in-time ('advanced') transmission of EM waves, because there is no "free EM" going off to infinity; every wave that is emitted 'advanced' is matched with some other wave that is absorbed (as a normal retarded wave). In this thought problem, if some kind of advanced spin polarization develops before the bit is written, it means that something interacted with those spins to set them in one direction, which can be looked at as a "writing" process in forward time.  (This is why I suggested there needed to be some sort of background of thermal photons at the beginning.)  But can the sign of that writing process be dependent on what signal will subsequently be sent to align the spins all one way, whether or not it occurred first? Wnt (talk) 19:05, 22 July 2016 (UTC)