Wikipedia:Reference desk/Archives/Science/2013 April 8

= April 8 =

Science or religion?
If scientists believe that they or humans will eventually know everything, isn't that equivalent to a religion? Or, even if they only believe that they will know more than they currently do, that might not always be true, so that is a belief in something without proof. Or is this question more appropriate for the humanities desk? 68.36.148.100 (talk) 00:02, 8 April 2013 (UTC)


 * A belief that humans will eventually know everything is indeed a declaration of faith (which is what a religion is), but what makes you think genuine scientists do believe that? I have worked closely with, and supervised, researchers working at Ph.D. level. None of them believed that. Have you not heard the expression "The more I know, the more I know what I don't know", a favorite among researchers I know? The second part of your question reflects a common misconception about science on your part. Science is not really about asserting facts; it's more about discovering theories that fit known observations, and thus enabling calculations and predictions of outcomes for any practical set of conditions. An example is electromagnetics: over 100 years of electrical engineering has never thrown up a situation where the theory gave a wrong answer. But just what a magnetic field is, or what an electric field is, is something we don't know. Until the theory does throw up a wrong answer to a situation, scientists aren't much concerned about it being merely a theory, and engineers don't care at all; for them it just works. Ratbone 124.182.150.1 (talk) 00:30, 8 April 2013 (UTC)
 * I doubt any scientist thinks it is possible to "know everything". Regardless, I think you have two major flaws in the premises of your questions: what scientists believe is not what defines science, and similarly, 'believing things without evidence' is not what defines religion. Science is based on only a very few fundamental premises, like methodological naturalism, which themselves are not "arbitrarily" chosen, but refined and condensed through hundreds of years of philosophy. Even now those fundamental premises are NOT taken for granted or completely beyond question. If there were any good reason to question or alter those fundamental premises, scientists would do so. Trying to equate science with religion is almost only ever done by the religious, in an attempt to demote science's privileged epistemological position to 'their' level, which I think is quite funny when you think about it. You'll never hear a scientist accuse a religion of being just another "science". Vespine (talk) 00:41, 8 April 2013 (UTC)
 * Doesn't that mean that if any scientists have this belief, it is a religion to them? I hope for scientists' sake none do. I bet there are a multitude of references to legit scientists who have alluded to these "beliefs" as part of their "work". 68.36.148.100 (talk) 00:49, 8 April 2013 (UTC)
 * You already asked this question, and we have already answered it. Why don't you go look for those references, since you have faith that they exist? Ratbone 124.182.150.1 (talk) 01:08, 8 April 2013 (UTC)
 * Why so hostile? Isn't this a legit question? Anyway, I'm too busy watching WrestleMania with my son. And I know those references exist, but I don't do references or bibliographies. 68.36.148.100 (talk) 01:18, 8 April 2013 (UTC)


 * Definitions of "religion" are at religion, and additional definitions are accessible via http://www.onelook.com/?w=religion&ls=a.
 * —Wavelength (talk) 01:38, 8 April 2013 (UTC)
 * Holding a belief, without any evidence, even a dogmatic one, may be a necessary condition of religion, but it's not a sufficient one. Vespine (talk) 01:48, 8 April 2013 (UTC)
 * Working scientists usually take for granted a set of basic assumptions that are needed to justify the scientific method: (1) that there is an objective reality shared by all rational observers; (2) that this objective reality is governed by natural laws; (3) that these laws can be discovered by means of systematic observation and experimentation. Philosophy of science seeks a deep understanding of what these underlying assumptions mean and whether they are valid.

The belief that all observers share a common reality is known as realism. It can be contrasted with anti-realism, the belief that there is no valid concept of absolute truth such that things that are true for one observer are true for all observers. The most commonly defended form of anti-realism is idealism, the belief that the mind or consciousness is the most basic essence, and that each mind generates its own reality. In an idealistic world-view, what is true for one mind need not be true for other minds. — Preceding unsigned comment added by 68.36.148.100 (talk) 02:38, 8 April 2013 (UTC)
 * What's your question? ←Baseball Bugs What's up, Doc? carrots→ 02:51, 8 April 2013 (UTC)

I was wondering why scientists do what they do: to make things bigger, faster, stronger, etc. I came to the conclusion that it is for the betterment of mankind, but obviously there are many reasons. Ultimately, though, there needs to be that same belief that the future state can be better than the present one: a faith in the natural universe being further understood or revealed for our benefit. 68.36.148.100 (talk) 04:16, 8 April 2013 (UTC)
 * If it has worked in the past and you have no good reason to think it will not work in the future, it's not necessarily an illogical assumption, especially if the alternative is to sit on your hands in a solipsistic coma. Also, strictly speaking, science doesn't make things better; science is used to discover things about the world. Scientific discoveries can then be applied to make things better: applied science is considered engineering, the fruits of which are technology. Vespine (talk) 04:38, 8 April 2013 (UTC)
 * There is a difference between natural science and supernatural science, where science is "knowing". Natural science describes the ordinary functioning of the universe according to its observable laws.  Supernatural science can describe divine interventions that alter this normal functioning, or alternate universes or dimensions not accessible to experiment.  However, the most important aspect of religion may be in the interpretation of what we do - often the facts are clear, but the moral reaction is anything but agreed upon. Wnt (talk) 04:42, 8 April 2013 (UTC)


 * Your assumption that scientists believe that they or humans will eventually know everything is a dubious premise that you have not established, and which looks a lot like a straw man fallacy. What is much less dubious is that scientists believe that through science, scientists or humans will know more than they do currently, which is all that is necessary for a career in science to be a worthwhile endeavor. And that belief is not just a matter of faith. In science, the thing that determines whether or not something is to be believed is scientific evidence, and there is rather trivially evidence that science will cause additional things to become known in the future. Scientists learn new things every day, and there is no evidence to suggest that tomorrow it might all stop happening.


 * The argument against believing certain hypotheses proposed by religions isn't really a matter of thinking that science will eventually have all the answers and that the answers given by religion should therefore be ignored. Rather, it's a matter of considerations like the answers given by religion often being contrary to the available evidence, or going against Occam's razor by hypothesizing a bunch of stuff which has no predictive value because it is unfalsifiable. Red Act (talk) 05:45, 8 April 2013 (UTC)


 * Q1: If scientists believe that they or humans will eventually know everything, isn't that equivalent to a religion?
 * Bad premise. Scientists not only do not believe that we'll eventually know everything; they know with absolute certainty that they cannot possibly ever know everything.
 * We have Gödel's incompleteness theorems, which say that there are statements in mathematics that are true but can neither be proved nor disproved.
 * Storing the total amount of information about the universe would require something at least as complex as the universe itself, so we can never know every detail.
 * We have quantum theory, which says that we can never know the precise position and momentum of a particle at the same time, and which forces randomness into the universe in fundamental ways. This will always limit our ability to know everything.
 * Chaos theory proves that some systems are fundamentally unpredictable: for example, we can never predict the weather more than a few weeks or months in advance.
 * Science has shown that science has limits.
 * Q2: Or even if they believe that they will know more than they currently do that might not always be true,
 * It's obvious that our state of knowledge is always going to be in a state of flux. We are always learning new things - and there is little reason to suppose that we'll ever run out of things like that.  But on the other hand, we're continually forgetting things too.  It seems certain that one day we'll arrive at a steady state condition where we're forgetting things as fast as we're learning them - but we're nowhere near that point yet.
 * Q3: so that is a belief in something without proof.
 * No, I don't see that. Scientists don't claim that we'll eventually know everything - quite the opposite in fact.  So your premise is flawed and you can't produce this conclusion from it.
 * SteveBaker (talk) 16:03, 8 April 2013 (UTC)


 * One of the big differences between religious belief and scientific belief is that science is effective. You don't see industrialists gathering groups of theologians to study religious works and, from the fruits of that, building a better car engine or getting clean water, or anything, really, where one can put down money and bet on which will give a better outcome. Would you describe industrialists as engaging in religious belief? What scientists do is in essence no different. Dmcq (talk) 00:35, 9 April 2013 (UTC)

Logic, creativity and ignorance
Some claim that logic can explain everything and that when new things are discovered, it is simply ignorance of that logic which has been overcome. I agree, but I don't think logic always works, because it's basically thinking inside the box, i.e. following a set of known principles. Is my thinking correct? — Preceding unsigned comment added by 2.124.100.141 (talk) 00:35, 8 April 2013 (UTC)
 * Logic is an extremely broad subject; I think you might be conflating different uses of the term. At its core, an argument in formal logic is either valid or fallacious, which makes it clear that it doesn't have anything to do with inside or outside the box. Fallacious logic will never lead to justified conclusions; it can by chance lead to correct conclusions, but they will not be justified. On the other hand, some discoveries can seem "illogical", like the fact that a photon can have the seemingly incompatible properties of waves and particles. Outside-the-box thinking was required to come to terms with that discovery, but that isn't the same meaning of the word logic; in that case, faced with the evidence that was present, outside-the-box thinking was in fact logical. Vespine (talk) 01:00, 8 April 2013 (UTC)
 * Logic doesn't imply that you cannot question your principles. Logic is mainly concerned with deduction from some assumptions to some conclusions. If the assumptions are wrong, everything is wrong. On the other hand, any explanation has to be logical: if you find a contradiction somewhere, it's time to drop some part of your theory. OsmanRF34 (talk) 01:12, 8 April 2013 (UTC)
 * I think these are actually very good inquisitive questions; you are starting to scratch the surface of some pretty deep philosophical ideas, about epistemology, metaphysics, etc. If you are genuinely interested, I would strongly recommend something like the free Philosophy for Beginners introduction course from Oxford University. I'm only a novice at philosophy and found these extremely enlightening. Vespine (talk) 01:21, 8 April 2013 (UTC)


 * Logic, by itself, cannot explain much. You need to feed it some "axioms". So, for example, no amount of logic will tell you whether the earth is flat or round; you have to feed it some observations (such as the fact that the earth's circular shadow is cast onto the moon during an eclipse, or that ships disappear over the horizon gradually, with their hulls disappearing before their masts). Logic (and more broadly, mathematics) is a tool to help you organize thoughts and data, to extract conclusions, and to expose weaknesses in "obvious" arguments. Science proceeds with observations and experiments, and logic and mathematics are mostly just tools to help in the process. Logic can be used to produce crazy answers if you give it crazy assumptions. For example, if you start with the assumption that zero is equal to one (or that true implies false, or something else which isn't true in the real world), then the rules of logic will happily crunch away on that and produce unending streams of nonsense. This is actually occasionally useful. For example, we know that the three angles of a triangle add up to 180 degrees out here in the real world, but if you deliberately set up a logical system in which that fact is denied, then what emerges is some very interesting non-Euclidean geometries that prove useful for all sorts of things. SteveBaker (talk) 16:26, 8 April 2013 (UTC)
 * Another way to put this is to contrast deduction with induction. (Forget Sherlock Holmes, he had them backwards.) Induction means (more or less) reasoning to a large conclusion from smaller ones. In other words, I go into the world, I see five birds of the same species, I use them to generalize for the whole species (because I cannot actually look at every member of the species). Deduction means (more or less) reasoning to small conclusions from larger ones. In other words, I know that all birds have feathers, and when asked whether a specific bird (say, a duck) has feathers, I can logically deduce that this is true, a priori. Deduction is generally the realm of formal logic: I know certain things, thus I know other things. But as you can see with my examples given, deduction alone is pretty silly when you are trying to talk about things in the real world and not hypotheticals. How do I know, a priori, that all birds have feathers? Could there be a species out there which doesn't? Deduction alone doesn't let you answer that question. It's why induction became the name of the game for science: go out in the world, see what's there, then work backwards to find what you presume are the "real" rules, and from there you can move tentatively forward with deduction, at least up until you reach the limits of what you know. Induction isn't perfect by itself (see problem of induction: what if the five birds I looked at were, coincidentally, not at all representative?), and neither is deduction (we don't really have a formal problem of deduction, but it's basically "how do you know your axioms are correct in anything other than pure philosophy or mathematics?"), but together they are pretty powerful. --Mr.98 (talk) 17:53, 8 April 2013 (UTC)
 * Yes, exactly. Formal logic can say:
 * AXIOM: All birds have feathers.
 * AXIOM: A duck is a kind of bird.
 * DEDUCTION: All ducks have feathers.
 * Which is great - logic told us something that we may not have known at the outset. It's not so obvious that it helped in this case - but for more complex systems where the conclusion is perhaps not so clear, application of formal logic can tell you things you didn't know and wouldn't have guessed.
 * But: that exact same set of formal logic can go horribly wrong:
 * AXIOM: All birds can fly.
 * AXIOM: An ostrich is a kind of bird.
 * DEDUCTION: All ostriches can fly.
 * Clearly this is an incorrect deduction - because the first axiom isn't true. This system of formal logic has no way to know that.  But that doesn't make formal logic wrong or useless: what it's telling you in this case is NOT that ostriches can fly.  It's saying that if all birds can fly - then ostriches can fly...which is a true statement, albeit not a very useful one!
 * So logic is a tool - and a very handy one - but it's not enough by itself to tell us anything at all about the world. Some poor biologist has to go out there and slog his/her way across the planet, looking at all the birds to see if there are any flightless ones out there - then come back and rewrite that first axiom.  Now we have:
 * AXIOM: Some birds can fly.
 * AXIOM: Some birds cannot fly.
 * AXIOM: An ostrich is a kind of bird.
 * DEDUCTION: (nothing)
 * SteveBaker (talk) 19:04, 8 April 2013 (UTC)
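 * The point of the syllogisms above, that formal logic faithfully propagates whatever axioms it is given, true or false, can be sketched as naive forward chaining. This is only an illustrative toy (the rule encoding and fact strings are made up for this example, not from any real inference library):

```python
# A toy forward-chaining deducer: repeatedly apply rules of the form
# (premises, conclusion) until no new facts can be derived.

def deduce(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    # "All birds have feathers": if X is a bird, X has feathers (true axiom).
    (frozenset({"duck is a bird"}), "duck has feathers"),
    # "All birds can fly" applied to the ostrich — a FALSE axiom the logic
    # has no way to detect; it just crunches it like any other rule.
    (frozenset({"ostrich is a bird"}), "ostrich can fly"),
]
facts = deduce({"duck is a bird", "ostrich is a bird"}, rules)
print("duck has feathers" in facts)  # the valid, true deduction
print("ostrich can fly" in facts)    # valid GIVEN the axioms, yet false in the world
```

Both deductions come out "provable", which is exactly SteveBaker's point: the machinery only guarantees "if the axioms are true, then the conclusion is true".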
 * Can I throw the paradox of the ravens and material implication into the mix? Thanks. Tevildo (talk) 19:48, 8 April 2013 (UTC)
 * And the problem of our ability to deal with postulates like "Everybody agrees that a unicorn has one horn" and "Everyone agrees that unicorns do not exist" simultaneously, without being frozen like a robot in a '60s TV show mumbling "Does not compute... does not compute". Gzuckier (talk) 06:22, 9 April 2013 (UTC)


 * But that's not a problem with the laws/rules/mechanisms of formal logic. It's just a dumb misstatement of the axioms: "Unicorns are fictional creatures" and "Unicorns are described as having one horn" would be a better statement of those axioms, and would allow one to deduce the useful and true statement that "Some fictional creatures are described as having one horn". But if you put nonsense into a logical system, you get nonsense back out again: "All wibbles are bloings" and "All bloings are pfnaargs" allow you to deduce that "All wibbles are pfnaargs", which doesn't make you any more enlightened. But even "All elephants are tomatoes" and "All tomatoes are made of pink lace" get you "All elephants are made of pink lace", which is clearly a false statement about the real world, but a perfectly valid deduction from those axioms. All that formal logic is telling you is: "If all of the axioms are true, then this statement is also true."


 * The Principle of explosion is a superb example of what happens if you have bad axioms. All you have to do to prove absolutely anything you like is to add an axiom into your system that asserts that something is both true and false.  The example in that article is a good one:


 * "Consider two inconsistent statements - “All lemons are yellow” and "Not all lemons are yellow" - and suppose for the sake of argument that both are simultaneously true. If that's the case we can prove anything, for instance that "Santa Claus exists", by using the following argument: 1) We know that "All lemons are yellow". 2) From this we can infer that (“All lemons are yellow" OR "Santa Claus exists”) is also true. 3) If "Not all lemons are yellow", however, this proves that "Santa Claus exists" (or the statement ("All lemons are yellow" OR "Santa Claus exists") would be false)."


 * So the smallest error in your choice of axioms can make utter nonsense of the results. This doesn't mean that formal logic is somehow "wrong" or "useless", any more than one's failure at using a hammer to drive a 2" length of string into a board means that a hammer is a useless tool.
 * SteveBaker (talk) 16:47, 9 April 2013 (UTC)
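 * The lemon argument above can be checked mechanically with a truth table. A minimal sketch (the encoding below is mine, not taken from the linked article): since no assignment of truth values makes both "P" and "not P" true, the implication "(P and not P) implies Q" holds vacuously in every row, for any Q whatsoever — a contradiction proves anything.

```python
from itertools import product

# Truth-table check of the principle of explosion:
# P = "all lemons are yellow", Q = "Santa Claus exists".
# The implication (P and not P) -> Q fails only in a row where the
# contradictory premise is true and Q is false. No such row exists.

def explosion_holds():
    for p, q in product([True, False], repeat=2):
        contradiction = p and (not p)   # never True in any row
        if contradiction and not q:     # a would-be counterexample row
            return False
    return True

print(explosion_holds())  # True: from a contradiction, any Q follows
```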

Rolled up dimensions
I am trying to get my head around the concept of "rolled up" dimensions found in string theory. Higher-dimension problems can often be visualised using a two-dimensional world as an analogue and then considering the effects of a third dimension upon it. I am having a failure of imagination here and cannot see how the third dimension could be "rolled up". I have seen Steve Baker's clever analogue of a one-dimensional world in the archives, where he rolls up the second dimension into a cylinder around the one-dimensional line people, but I cannot see how this analogy is extended to two dimensions. Further, a cylinder is three-dimensional: a third dimension was required to enable the second dimension to wrap. In a two-dimensional analogue, would a fourth dimension be required to enable the third dimension to be wrapped? Spinning Spark 11:56, 8 April 2013 (UTC)
 * It's simply an analogy, because we live in three spatial dimensions. Related to this is the holographic principle, which I have not studied. Although we generally work with three spatial dimensions, I tend to think of space as one-dimensional, because I can slice it up into countable discrete volumes with division: simply cut it up into discrete unit volumes such as bricks or tiles and then cut each of these recursively into infinitesimal volumes. For me, it's an interesting conceptual exercise, even if it's not all that useful (its utility may be limited to object-order algorithms). Of course, with the physics of string theories, one has to consider the dimension of time as well, with Einstein's principles and the concept of spacetime. -Modocc (talk) 12:43, 8 April 2013 (UTC)


 * It has nothing to do with the holographic principle, which is about information in a volume of space. Space is not 1 dimensional in any meaningful way, even for object order algorithms (because you can't tell what a voxel's neighbors are).
 * To the OP: The ant on a cylinder, say a garden hose, sees 2 dimensions: one along the cylinder axis and one along the circumference. We're imagining the ant to be a 2D inhabitant of the cylinder surface. If you want the third dimension to be rolled up, our ant has to be already living in 3-dimensional space, but then (as you said) you need to embed this 3D space into a 4D space to visualize it. Our universe is one example of a curved 3D space*, but although you can measure the effects of this curvature, you can't imagine what it looks like unless you can imagine 4D. Similarly, it's pretty easy to predict what a 3D ant would see if the third dimension were rolled up: just read Flatland. --140.180.248.141 (talk) 14:31, 8 April 2013 (UTC)
 * * Actually, our universe is very close to having no curvature, but the point is that it would be highly curved if the universe had more mass. --140.180.248.141 (talk) 14:31, 8 April 2013 (UTC)
 * Sure, a one-dimensional space is meaningful. As I said, a space can be divided iteratively with an algorithm. If I systematically number all the bricks and you select the ith brick, I certainly have enough information to tell you, based on the algorithm used, what its exact neighbors happen to be. It's not necessarily as efficient as linear representations, but it's fundamentally sound. As for the holographic principle, it's my understanding that it came into play because string theorists discovered that their different dimensional models were related. --Modocc (talk) 14:40, 8 April 2013 (UTC)
 * That's not workable. Sure, you can turn a 2D chessboard into a 1D representation by numbering the squares from 1 to 64 - but for an arbitrary point on a 2D plane, you need two numbers...that's fundamental.  You can't represent an arbitrary point on a 2D surface with one number - or in 3-space with less than three numbers...it's impossible.  The multi-dimensionality of the universe isn't something you can argue away like that. SteveBaker (talk) 15:28, 8 April 2013 (UTC)
 * It's simply a matter of utilizing a different representation of a line, plane or volume. I can, for instance, divide a line into a grid of segments by mapping them to natural numbers, for example: {(0,1),(0,.5),(.5,1),(0,-.5),(-.5,-1),(0,1/4),...,(1,2),(1,1.5),(1.5,2),...} For a volume, you can systematically divide the bricks into smaller units likewise with each iteration, as well as continue adding unit bricks to the perimeter of the space. Note that each element is uniquely numbered, whether it be a segment, area or volume, and each has the exact same dimension as the other elements. Also, although points and infinities are not represented, these are not even measurable. -Modocc (talk) 15:55, 8 April 2013 (UTC)
 * If you're only interested in labelling every point in nD space, you can just take the decimal representations of each coordinate (x, y, z, w, etc.) and then interleave them (e.g. (0.763, 0.184, 0.952) becomes 0.719685342). That gives you a unique single number for every point in the space. Double sharp (talk) 16:05, 8 April 2013 (UTC)
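 * The interleaving trick just described can be sketched in a few lines; this is an illustrative toy working on a fixed number of decimal digits (the function name and the digit cutoff are my own choices, and a full bijection would need the complete, infinite decimal expansions):

```python
# Interleave the decimal digits of several coordinates in [0, 1)
# into one number, e.g. (0.763, 0.184, 0.952) -> "0.719685342".

def interleave(coords, digits=3):
    # Take the first `digits` decimal digits of each coordinate as a string.
    expansions = [f"{c:.{digits}f}".split(".")[1] for c in coords]
    # zip(*...) walks the expansions column by column: first digits of
    # all coordinates, then second digits, and so on.
    merged = "".join("".join(column) for column in zip(*expansions))
    return "0." + merged

print(interleave((0.763, 0.184, 0.952)))  # 0.719685342
```

As the reply below notes, the fact that such an encoding exists doesn't make space one-dimensional in any physical sense; it's purely a relabelling.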
 * It's better than that: you can take a single continuous 1-dimensional curve, and use it to fill up all of space. That's called a space filling curve.  This doesn't change the fact that space is 3 dimensional, and that while it might be fun and interesting to represent space with a single real number, it has no physical significance.  The 3D nature of space, on the other hand, makes a fundamental difference to the laws of physics.  --140.180.248.141 (talk) 16:15, 8 April 2013 (UTC)
 * Working with most coordinate systems definitely makes for simpler laws! Although, I tend to distinguish between whatever space is that we measure (and other things as well) and our models or representations of it. -Modocc (talk) 16:35, 8 April 2013 (UTC)
 * I cannot see that the 2D ant on the surface of a cylinder works as a 2D analogue of the 3D situation. We cannot readily detect that our 3D space is anything other than flat. The ant, on the other hand, will pretty rapidly detect that her second dimension is anything but flat, simply by circumnavigating the cylinder. And yes, I have read Flatland, and it's completely useless for actually understanding anything, and certainly doesn't deal with rolled-up dimensions. Spinning Spark 15:05, 8 April 2013 (UTC)
 * It's not supposed to be a 2D analog - it's a 1D analog. One dimension with a "rolled up" second dimension.  The ant in the analogy is a 1D creature - aware only of the direction along the length of the hosepipe.  When the "rolled up" dimension around the circumference of the hose is large - then the ant can indeed detect that it's there so he knows he's living in a rather odd 2D world.  But the rolled up dimensions envisaged by things like string theory are curved up much more tightly than that...the "diameter" of these extra dimensions would be microscopic...much, MUCH less than the diameter of an atom.  The 1D ant can't perceive that he's walking around the circumference of the hose because the diameter is smaller than he can see or feel or anything.
 * I agree that "Flatland" is a horrible analogy. It's more about the political views (including horrific class bias and appalling misogyny!) of its author, which are quite painful to read to modern eyes. He's also confused about how his 2D world works, talking about walls, doors and roofs of his 2D houses. A *MUCH* better book about that is "Planiverse" by A. K. Dewdney. I could spend hours looking at the drawings of 2D machines, musical instruments, spacecraft and such. How does a 2D creature eat and defecate without falling in half?! It's exceedingly well thought-out, and the story/plot ain't bad either. SteveBaker (talk) 15:40, 8 April 2013 (UTC)


 * (What's wrong with class bias and misogyny? It gives you a perspective on a historical society, allowing you to be grateful for modernity.)
 * To expand on what SteveBaker said, I brought up Flatland because the entire point of a "curled up" dimension is that you don't know about it until you go to the smallest lengths. If we lived in a 3D world where the third dimension was curled up, for example, we wouldn't know about it, and it would look exactly like a 2D world, aka Flatland.  --140.180.248.141 (talk) 16:15, 8 April 2013 (UTC)
 * Yeah - by all means read Flatland if you're interested in the weird political views of the time - but if you want to get a feel for how a 2D world might be - dump that book in the trash immediately!
 * If we lived in a universe with a 'curled up' extra dimension, then there are several possibilities:
 * If we had two normal 'flat' dimensions and the radius of curvature of the 3rd dimension was much larger than the size of the visible universe, then we'd be in a world pretty much exactly like the one we live in. Without some very fancy astronomy, we'd never know it.  In fact, there is a good case to be made that all three of our spatial dimensions are really like that.
 * If the radius of curvature of a fourth dimension were much smaller than the diameter of an atom (as string theory suggests) then we'd believe that we were living in a 3D world - and it would be exceedingly hard to devise experiments to prove otherwise (which is why we can't prove or disprove string theory right now).
 * If the radius of a fourth dimension was something on a "human scale" - an inch...ten feet...a mile or two...even a few tens of lightyears - then the universe would be an exceedingly weird place. It's fun to try to imagine what that would be like...but it's clear that there are no hidden dimensions that work like that in our universe.
 * SteveBaker (talk) 16:45, 8 April 2013 (UTC)
 * Oh, come on, Steve. Flatland is no more about the class prejudice and misogyny of the author, than A Modest Proposal is about the author hating Irish babies. Satire: how does it work? 86.161.209.128 (talk) 08:48, 10 April 2013 (UTC)
 * My original analogy (I was going to repeat it - but you dug it up from the archives!) helps me to understand it...but because we have such a hard time wrapping our brains around a universe with four or more spatial dimensions - it's truly necessary to pursue a lower-dimension analog. The "hose-pipe-world" of a 1D creature with a curled up second dimension goes some way to explaining what a curled up 4th dimension might be like.  As we imagine the diameter of the hose shrinking down to something less than the creature can perceive, the 1D approximation of reality for that creature is perfect.  Our 1D "ant" can only meaningfully move in 1 dimension and the existence of that curled up second dimension is just not noticeable to it.


 * It's true that to follow that analogy, we need to visualize the plight of our 1D creature as a 3D thing - that's because the concept of "rolling up" entails curving the second dimension into a cylinder - which is a 3D object. Hence we can't imagine a 1D world with a rolled up 2nd dimension by drawing it on a flat 2D plane (like a piece of paper).


 * That makes visualizing a 2D world with a rolled up 3rd dimension very tough for us...we almost need a 4th dimension in order to see it in our mind's eye. That doesn't invalidate the analogy...it just poses a problem for our imaginations.


 * I suppose we could try though:


 * Imagine a large room with a very low ceiling...a REALLY large room. The ceiling is so low that we have to crawl around on our bellies to be able to live inside it.  This can be considered as something close to a 2D world for us.  Let's turn off gravity too (in our mental world, it operates in the 3rd dimension and that confuses this analogy).
 * Now imagine that there are many holes in the ceiling and floor, and that something magical happens. When you move through a hole in the floor, you reappear through a nearby hole in the ceiling; and if you climb up through a hole in the ceiling, you reappear through a nearby hole in the floor. (Just like in that video game "Portal" if you put one portal in the ceiling and the other in the floor just below it.)
 * But it's not really "magic". What's happening is that in this world, the 3rd dimension is curled up into a loop and the "circumference" of that 3rd dimensional loop is the same as the height of the ceiling...just a couple of feet.
 * For you, the act of travelling through one of these holes is *almost* like nothing happened...you moved just a foot or two in the third dimension and came back more or less where you started - so moving in the 3rd dimension doesn't do much to your position within the 2D room.
 * Now, imagine the ceiling getting lower and lower...falling through a hole (moving in the 3rd dimension) has less and less effect on you.
 * When the 3rd dimension is microscopically thin (and the ceiling is similarly insanely low) - then moving through that 3rd dimension is just like nothing happened. Imagine the ceiling is much lower than the size of an atom...that's how big these hypothetical "curled up" dimensions must be if they really exist.
 * Now, instead of holes, the ceiling and floor are completely permeable...you can pass through them at will at any point - looping around the 3rd dimension and arriving back almost exactly where you started.
 * Beings in this world would not know that there was a 3rd dimension...they would assume that the ceiling had zero height (because it's too small for them to perceive). They wouldn't be able to see any distance through the 3rd dimension because photons would also loop around it and arrive back at the same place.


 * Now stretch the analogy to a 3D world - with a coiled up 4th spatial dimension. We can freely move in the 4th dimension - but the motion is so slight that we can't measure it - light might travel around this 4th direction - but the curvature is so tight that you can't tell.  It's a small step from that to imagine many coiled up extra dimensions...dozens of them...enough to make string theory work in fact.


 * It's a bit of a mental stretch - but it can be done.
 * SteveBaker (talk) 15:28, 8 April 2013 (UTC)
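Steve's rolled-up-dimension picture can be sketched as a tiny numerical toy (purely illustrative; the function and the numbers are invented for this sketch, not anything from the discussion): position along the compact direction is simply taken modulo its circumference, so even a large displacement through it returns you essentially to where you started.

```python
# Toy model of a world with one ordinary dimension (x) and one
# rolled-up dimension (y) whose circumference is microscopically small.

def move(pos, dx, dy, circumference):
    """Move by (dx, dy); y is compact, so it wraps modulo its circumference."""
    x, y = pos
    return (x + dx, (y + dy) % circumference)

circ = 1e-9          # the "ceiling height": far too small to perceive
pos = (0.0, 0.0)

# Travel 3.5 units along x, and a huge distance "through the ceiling":
pos = move(pos, 3.5, 12345.678, circ)
print(pos)           # x moved normally; y is back within a nanometre of the start
```

However far you loop through the compact dimension, the visible coordinate is unaffected - which is the sense in which the world's inhabitants could never notice it.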


 * The analogy makes sense, but there's something that doesn't quite sit right. According to this analogy, if the three dimensions are equivalent, I should be able to shine a flashlight (or at the atomic scale, emit a photon) at an 80 degree angle toward the ceiling, and it should spend most of its time going up and around.  So why don't we see random photons from a source moving more slowly than c? Wnt (talk) 16:10, 8 April 2013 (UTC)
 * I wonder that too. It's an interesting question.  Perhaps the problem is that you can't arrange to launch a photon in that direction to start with...(naively - the flashlight won't "fit" into the extra dimension - so it has to point nearly parallel to the unwrapped dimensions) - so the error in speed is too small to measure.  Are there processes which emit photons in completely random directions?  I don't think so - but I might easily be wrong. SteveBaker (talk) 16:31, 8 April 2013 (UTC)
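For what it's worth, Wnt's flashlight scenario can be put in rough numbers (a naive back-of-envelope sketch, not established physics): if a photon's path made an angle theta with the large dimensions, its apparent speed along them would be c·cos(theta), so an 80-degree launch would look like a photon crawling at roughly 17% of c.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def apparent_speed(theta_deg):
    """Naive projected speed along the large dimensions for a photon
    launched at theta degrees into a hypothetical compact dimension."""
    return C * math.cos(math.radians(theta_deg))

print(apparent_speed(80) / C)     # about 0.17: dramatically slower than c
print(apparent_speed(0.001) / C)  # nearly parallel launch: indistinguishable from c
```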
 * Since it is an analogy, I wouldn't worry too much about whether or not a flashlight or other stuff fits into it. -Modocc (talk) 17:28, 8 April 2013 (UTC)
 * Thanks Steve, that really does make a lot of sense to me. By the way, I realised your hosepipe analogy was a 1D analog (as I think my original question makes clear) but I was replying to IP 140 who seems to be offering a 2D hosepipe analogy which I still think doesn't work.  By the way, I have also read Planiverse and agree it is much more interesting from an engineering perspective, if not a literary one.  Spinning  Spark  17:15, 8 April 2013 (UTC)
 * I was offering the exact same analogy as Steve. The ant lives on the 2D surface of a garden hose, but one of the dimensions is contracted, so it's as if the dimension didn't exist.  To the ant, the world seems 1D.  I believe that's exactly what Steve is saying as well.  --140.180.248.141 (talk) 19:06, 8 April 2013 (UTC)
 * Well in that case it wasn't an answer to my question - I already understood the 1D case as I thought I made clear - so it's not surprising I was confused.  Spinning Spark  21:22, 8 April 2013 (UTC)
 * I spent exactly 1 sentence going over Steve's analogy. My point was that in Steve's analogy, the ant's "universe" is 2D, but has one rolled-up dimension.  The rest of my post is about why you can't visualize a "universe" that's 3D and has 1 rolled-up dimension: because in order to visualize the curvature of a 3D universe, you need 4 dimensions, just like how you needed 3 dimensions to visualize the ant's 2D universe.  Then I explained what an inhabitant of such a 3D universe with 1 rolled-up dimension would see.  They would see exactly what the Flatland inhabitants would see, because they'd think their universe was 2 dimensional.  Steve said essentially the same thing later on, but in more detail.  --140.180.248.141 (talk) 22:14, 8 April 2013 (UTC)
 * Well actually I can visualise it after Steve's explanation.  Spinning Spark  23:18, 8 April 2013 (UTC)

You could also twist them and then roll them up so that you get the topology of a Möbius strip, see here for a funny article about that. Count Iblis (talk) 23:41, 8 April 2013 (UTC)

Citing retractions
Hi,

Probably the wrong desk, but I figure someone here is most likely to know the answer. My question is, what's the proper way to deal with a retracted article when writing a publication. I'm writing a thesis about a topic which has previously been studied in an article which has since been retracted. So I need to mention this article. The options I can think of are:


 * Mention the article but don't put it in the refs.
 * Mention it and reference it normally.
 * Mention it and reference the retraction (what's the format for that!?)
 * Don't mention it at all, despite them basically doing the same experiments as me.

I haven't found any official guidelines; does anyone know what the best approach would be?

Cheers,

Aaadddaaammm (talk) 14:13, 8 April 2013 (UTC)


 * I would reference the article as usual: however, I would mention the fact that the article has since been retracted, either in the body of the work, or in a footnote. If you have the date of the retraction, I think I'd put that at the end of the reference, something like this: "Article X, published in the Journal of Y dated xxx, retracted in the Journal of Y dated zz". --TammyMoet (talk) 14:25, 8 April 2013 (UTC)


 * I don't know about "official guidelines" but it seems to me that when a paper is retracted, it is precisely because the authors or publisher no longer believe it should be referenced. Our Retraction article states that: "In science, a retraction of a published scientific article indicates that the original article should not have been published and that its data and conclusions should not be used as part of the foundation for future research. ".  For me, "should not have been published" means that you should not reference it...it has (effectively) been "unpublished" and should be considered not to exist - and "should not be used as part of the foundation for future research" is a strong indication that you shouldn't be basing anything on it - and therefore should have no cause to reference it.  By referencing a retracted article, you're risking people asking whether your work was in some way based upon it - which would be a huge negative for your work.  As our article further notes, if the problem with the paper were not extremely serious, a "correction" would have been published...even if the author says "we messed up, this is all wrong", that would typically result in a correction rather than a retraction.  Retractions are generally reserved for serious issues like plagiarism or downright intellectual fraud...and such papers really shouldn't be referenced. SteveBaker (talk) 14:51, 8 April 2013 (UTC)


 * My view is that a retracted article should be treated as a primary source. You can cite it to support the fact that it exists and contains certain statements, but you can't cite it as a reliable source.  For example in our article on the Schön scandal it would be legitimate to cite his retracted papers when explaining the claims that he made.  (As a matter of fact, though, only the retraction notices are cited in our article.) Looie496 (talk) 15:14, 8 April 2013 (UTC)


 * Actually I misread the question to be about Wikipedia. For a thesis, I would simply cite both the article and the retraction notice. Looie496 (talk) 15:16, 8 April 2013 (UTC)


 * Steve Baker has a point, but I don't agree with his approach for all circumstances. You need to consider the possibility that your reader may discover the article for himself, but not the retraction.  Retractions sometimes come long after an article was published.  Why was it retracted?  Did they make a simple obvious error that got through somehow?  Or was it a very difficult to spot error that a competent worker in the field of study would most likely make as well?  Or was it retracted because the author(s) faked their data or something?  Was it retracted in error?  Such things can happen, and are difficult to spot.  You don't want a reader to rely on the retracted article and junk your work, for want of a sentence or two.
 * How you should cite it depends on what significance it has to your work.
 * If your thesis agrees with the article, I suggest you cite it normally and include it in the list of references, and include a discussion stating that it was retracted and explain why you believe the retraction should not have occurred. And cite and list the retraction (which I assume was in the form of a letter to the journal in which it was published).
 * If the article was retracted due to faked or incorrectly recorded data, and there are lots of other directly relevant references which support that the article was wrong, then I think Steve's approach is valid.
 * As with any writing, whatever the logic or rules may be, there is one fundamental rule: write for your reader.  If it happens that whatever your thesis discusses has not had much prior work, and so the reader is certain to discover the article if he does some kind of a search, then you must cite it and list it in your References section, and you must make it clear in your text body that it has been retracted (by whom? the journal editors? the author(s)?) and cite the retraction, and list that in your references section.  In such a case, you ought to give your own view on the retraction, and summarise where the author(s) went wrong, in a footnote.
 * Retractions are often just letters to journal editors, but may be short articles. Either way, it makes no difference, cite and list it in the same way as any other reference.
 * Ratbone 60.230.248.167 (talk) 15:29, 8 April 2013 (UTC)
 * I agree that you also have a point! This isn't simple.  If you replace the word "retraction" with "correction" - then I absolutely agree with you.  But retractions are supposed to be for such egregious problems that the work is beyond redemption.  If (as our retraction article suggests) papers are mainly retracted because of:
 * Plagiarism - then you should obviously be able to reference the original article from which the material was plagiarized.
 * Duplicate/concurrent publishing (self-plagiarism) - then it's the same deal, just reference the original work.
 * Serious errors amounting to scientific misconduct. Well, that's the tough one.  Did the serious errors result in a paper that's totally devoid of value?  You'd think so, or else a simple "correction" would have been released.  The tarnished reputations associated with a retraction are sufficiently great that you'd imagine people would fall over themselves to issue a correction notice rather than retracting the entire work.
 * So I think the only reason to mention a retracted paper at all would be in the context of "Don't read this - it was retracted"...in which case, I'd reference the retraction notice rather than the paper itself. Remember, people often judge the worth of a paper by the number of people who referenced it - and you want to boost the notoriety of the retraction, not lead people to offer undue respect to the original paper.
 * The tricky corner case (and this case might fall into that category) would be when you re-did an experiment from a retracted paper - but did it "right" this time. You'd certainly want to avoid someone saying "Oh - you did it *that* way?  Didn't so-and-so get their paper retracted for doing that?"...so perhaps there is a case then for explaining how come your version is acceptable.  But that leads me back to wondering why a simple experimental, numerical or statistical slip up of a kind that you could fix, would result in a retraction rather than a correction of the original paper.
 * SteveBaker (talk) 17:01, 8 April 2013 (UTC)
 * This is gut feeling: I don't think anyone has the right or ability to dictate good practice on this issue. I am inclined to discard the argument about inflating statistics of a retracted paper out of hand.  It doesn't matter what its statistics are.  That said, it would depend on the citation.  If you're writing that "a widely-disseminated finding that Foo1 strongly represses Bar7 was subsequently retracted", the retraction should be enough.  If you're writing that "despite the retraction of Joe Bloggs' group's work for an unrelated misconduct issue, their technician was not implicated in the matter, and we were able to replicate his high-quality immunoprecipitation data for Foo1 with an aliquot the group provided us and have subsequently confirmed the identity of the antigen with a commercial antibody", then cite both paper and retraction at the appropriate places in the sentence. Wnt (talk) 19:33, 8 April 2013 (UTC)
 * If a part of the article was not up to the desirable standard, then the journal, the institution or the author would issue a partial retraction. For smaller stuff, an erratum or a correction would suffice; for bigger stuff, like being marred by fraud, a total retraction. That already means that the data is not reliable, so don't use it. As [] point out, citing a retracted article without knowing about it can be embarrassing. But what you are doing is citing it although you know it was retracted. That's bad practice. Don't worry about anyone finding the article anyway, since it won't be present in electronic resources. And don't worry about re-using an idea from a retracted article, since it is not copyrighted. OsmanRF34 (talk) 20:00, 8 April 2013 (UTC)
 * I'm concerned about your last sentence. As far as I know, copyright applies to retracted articles.  I've never seen anything in copyright law that says otherwise. SteveBaker (talk) 20:42, 8 April 2013 (UTC)
 * The 'it' in my last sentence refers to 'idea' not to 'retracted article.' I know that the retracted article gets its copyright at the time of writing. However, no matter what flaws it had, I think the OP saw something of value in the retracted article, but is not sure how to use it. I just meant that ideas as such are not copyrighted. If someone had a good idea and used a novel procedure, but didn't know how to implement it and massaged the data to make it 'work', then the paper will be retracted, but you still can use the same procedure in the right way.  OsmanRF34 (talk) 21:27, 9 April 2013 (UTC)
 * It's not apparent from his question whether the OP's thesis will be about pure research or applied research. It's also not apparent whether the retracted article was published in a peer-reviewed journal having a board of editors and high repute, or a commercial journal put out by a magazine publishing company (some are peer reviewed and/or subject to subject-matter-expert editorial review), or even a company house journal.   House journals have ranged from mere carefully written advertising through to publications of very high repute, such as Bell System Technical Journal and Australian Telecommunications Research.
 * For research papers printed in peer reviewed journals of high repute, the probability of human error causing invalid data or invalid conclusions is low. So retractions are unusual and for serious reasons, such as faked data or other fraud. So you should carefully consider whether to ignore or cite a retracted article for the reason that Steve Baker gave.
 * However, plenty of articles published in commercial journals and occasionally house journals have relevance to theses and peer-reviewed journal articles. In commercial journals, human error can be significant, and retractions due to human error rather than a deliberate attempt to defraud are relatively common, as are debates and opposing views in the form of Letters to the Editor.   In such cases it is often of great value to cite both the article and the retraction, and discuss the error.  Often, the error is such that another worker, expert in the field, could also make the same error and in some cases most likely would.
 * You just can't be serious if you suggest that any search of the literature on a subject will always bring a reader's attention to retractions. — Preceding unsigned comment added by 124.178.45.169 (talk) 03:27, 9 April 2013 (UTC)
 * OsmanRF34, I'm concerned about a lot of what you wrote there. "Don't worry about anyone finding the article anyway, since it won't be present in electronic resources" just isn't true, in most cases.  In the biomedical sciences, for instance, retracted articles aren't deleted or hidden from PubMed searches, but the articles are clearly annotated as 'retracted' and a reference to the retraction notice is provided.  Publishers often don't delete retracted papers from journal websites, but good journals will indicate clearly – in tables of contents, abstracts or summaries, and ideally as a conspicuous notation or watermark on the PDF of the paper itself – that the paper has been retracted.  Telling a reader "don't worry about re-using an idea from a retracted article, since it is not copyrighted" is missing the point: plagiarism and copyright violation are two distinct forms of misconduct that sometimes but not always overlap.  (This is a problem we regularly encounter on Wikipedia, actually; there's a common misconception that one can copy as much as one likes from a public domain work without attribution, because copyright isn't engaged.) TenOfAllTrades(talk) 13:03, 10 April 2013 (UTC)


 * While the standards and expectations may vary by field, I'm reluctant to endorse the scorched-earth, down-the-memory-hole, it-never-happened-and-we-shall-never-speak-of-it approach. Suppose Smith et al. did (what would or should have been) an interesting experiment, but had to retract their results because they unwittingly used a contaminated reagent or defective instrument that wasn't caught in the peer review process.  If Jones et al. decide to repeat Smith's experiment because Jones thinks it was a sensible, elegant approach, likely to make a worthy contribution to human knowledge, it strikes me as both deceptive and unethical for Jones not to give credit to Smith for (in effect) providing the design for Jones' experiment.  Using someone else's work – published, retracted, or otherwise – without giving appropriate credit is serious academic misconduct.
 * The Committee on Publication Ethics offers guidelines for retractions: when they are appropriate, how they should be reported and handled. In particular, 'disappearing' a retracted publication is strongly discouraged; instead, a publication's retracted status should be made clear, but it should remain available on the record.  (Ideally with an explanation for why the retraction took place, though most journals are disappointingly vague on this point.)
 * This paper presented at an ACRL conference offers an interesting look at what happens to the citation of papers post-retraction. Troublingly, they often continue to be cited without any indication of their retracted status, possibly because the citing authors are unaware of the retraction.  Very interestingly, the paper's authors identified at least four cases where a retracted clinical study had been identified and considered for inclusion in meta-analyses (as it otherwise met the inclusion criteria laid down for incorporation into the meta-analysis; the retraction occurred because the study's authors had not received proper institutional review board approval, rather than due to any technical flaws in their work).  Two of the meta-analyses discarded the study's findings as fruit of a poisoned tree; two incorporated the study's data anyway, though it's not clear whether or not these authors were fully cognizant of the studies' retractions.
 * It seems that a reasonable and generally-accepted method of citation is to use a normal citation format, but indicate clearly (ideally both in the text and in the references section) that the paper has been retracted. A good format might be the one used in this PLoS ONE article: the footnote number is followed by the annotation (paper now retracted) at its first appearance in the Introduction and Discussion sections, and the reference is followed by [retracted] in the references list. TenOfAllTrades(talk) 22:51, 9 April 2013 (UTC)

Hydrochloric Acid
How does the hydronium ion explain the acidity of hydrochloric acid? Just to help explain what I am thinking: is there something to do with a Bronsted acid? Are Bronsted acids applicable to hydroxide (-OH) reactions?Curb Chain (talk) 15:39, 8 April 2013 (UTC)
 * The H3O+ ion is a proton donor and therefore a Bronsted acid. The OH- ion is a proton acceptor and therefore a Bronsted base. Ruslik_ Zero  19:30, 8 April 2013 (UTC)
 * Acidity can be explained many ways. There are at least three prominent theories of acid-base chemistry, so you need to consider which theory you are working within before answering the question of "How does..." something about acids and bases work.  It entirely depends on which model you're working within.  None is wrong, but they are all different, and have their different uses.  Acid–base reaction explains all three.  Incidentally, HCl is an acid according to all three theories.
 * The Arrhenius theory explains acids and bases based on the ability of a substance to produce certain ions in water solution. So, if something is added to water and produces hydronium ions, it is an acid, while if something is added to water and produces hydroxide ions it is a base.  The basic definition of pH comes from Arrhenius theory, as it is based on ion concentrations in water solutions.
 * The Brønsted–Lowry theory explains acids and bases in relation to each other rather than to water, removing the restrictions of having water present to explain acids and bases. According to Brønsted–Lowry, an acid and a base are defined based on their place in a "model" reaction:
 * HA + B ⇌ A- + HB+
 * whereby HA is an acid, and B is a base. On the product side, A- and HB+ are called conjugates of the reactants, thus A- is the conjugate base of the acid HA, and vice versa.  Conjugate acid-base pairs explain Buffer solution behavior, for example.  In the case of the reaction:
 * HCl + H2O ⇌ Cl- + H3O+
 * HCl is behaving as the acid, while H3O+ is the conjugate acid of the H2O.
 * Lewis theory explains acids and bases based on the formation of new covalent bonds. The Lewis acid-base reaction is based on electrons which migrate from species with excess electrons (usually in the form of unbonded pairs of valence electrons, or sometimes as pi bonds) towards a species that is electron deficient.  The thing with the excess electrons is called the "Lewis base", while the thing that is electron deficient is the "Lewis acid".  In this case the newly-created bond is between the hydrogen atom from the HCl molecule and the oxygen atom in the water molecule, that hydrogen atom being the Lewis acid (being bonded to the more electronegative chlorine makes the hydrogen electron deficient) while the oxygen atom has excess electrons in the form of unbonded "lone" pairs.  The new O-H bond that turned water into hydronium is the focus of the Lewis theory definition of acids and bases.  I hope all of this helps.  -- Jayron  32  19:54, 8 April 2013 (UTC)
 * So equilibrium does not exist in Arrhenius or Lewis Theory? Why isn't the relationship between metals such as  (or other hydroxides such as ) and  the same?
 * How is this different from reduction and oxidation reactions?Curb Chain (talk) 03:50, 9 April 2013 (UTC)
 * Who said equilibrium concepts didn't exist in Arrhenius or Lewis? I certainly didn't.  Also, I'm not sure what your question regarding metals and hydroxides is.  Could you restate it or elaborate on what you're asking about?  Also, it is quite different from redox: redox reactions are those which change the Oxidation state of an element.  Acid base reactions don't change the oxidation state of anything (if you follow the formal rules for assigning oxidation number to atoms in any acid-base reaction, you'll find that none of them involve changes in oxidation state).  But if you could restate your first two questions in detail: what exactly do you not understand or what do you need answered, that'd be great, because I can't follow what you're asking here.  -- Jayron  32  04:21, 9 April 2013 (UTC)
 * Clarification for Question 1 : In my grade 12 chemistry, the equilibrium sign indicated a reaction is reversed and forward in real time.  I don't remember Arrhenius and Lewis (acid-base) reactions having the possibility of reversing.
 * Clarification for Question 2 : For example: Na+ + Cl- → NaCl.  Why can't acid-base theory(s) be applied to such a reaction (such as Na+ considered an acid (or base)).
 * Clarification for Question 3 : Following from Clarification for Question 2, why are reduction and oxidation reactions not treated the same as acid-base reactions?Curb Chain (talk) 16:22, 9 April 2013 (UTC)
 * 1: Equilibrium just means that the species in the reaction are not static; that there is a dynamic interaction between a series of reversible reactions whereby the reactants (as written) make the products and the products make the reactants, but where the end result is a situation where the relative concentrations of the species are stable, even if they are continuously reacting. There is nothing in Arrhenius or Lewis theory which denies the reality of this dynamic process.  I'm not sure where you got the notion that they somehow denied the existence of equilibrium - that the meaning of a double arrow in one specific equation in my explanation could somehow be interpreted to mean that the other theories denied this reality.  It is perplexing how you jumped to that conclusion, but let me assure you, they do not.  I don't know what more to say on the matter as I still can't find the leaps of logic that lead to that conclusion.  All three theories are compatible with observed behavior, and the reality of equilibrium is not denied by any of them.
 * 2: They could be, specifically (and in this case only) Lewis theory explains this in terms of acid-base reactions. In the case of the reaction you just gave, Sodium is a Lewis acid and Chloride is a Lewis base.  Lewis theory is decidedly more broad than the other theories.  Basically, Lewis theory contains B-L theory as a special case where the acid is always the hydrogen ion, whereas B-L theory contains Arrhenius theory as a special case where one of the two reactants is always water.  You can look at the three theories as building on each other, progressing from the more restrictive (Arrhenius) to the broader (Brønsted–Lowry) to the even broader still (Lewis).  But they all still work together.  For example, if you place Iron (III) Chloride in water, you get a solution with a low pH.  Arrhenius theory would call this an acid, unambiguously.  Lewis theory explains this by showing how Iron acts as a Lewis acid, by forming Coordination complexes with hydroxide (a Lewis base) and leaving behind excess hydrogen ions.  To bring it back in for simplicity's sake, Lewis theory is a "special case" and generally, when we call something an "acid" in an unqualified manner we're usually speaking about it from the perspective of Arrhenius or B-L theory; "acids" are basically molecules that give up H+ ions easily.  If we want to indicate something which behaves like an acid according to the Lewis model, but not in the other models (like the Na+ in your example), we usually use the term "Lewis acid" to indicate we're using that definition, that is if something is a Lewis acid, but not a B-L acid, we always use the qualifier "Lewis" acid.  Unqualified, the word "acid" always means B-L acid or Arrhenius acid (i.e. either something that is a proton donor or something that produces a low pH in water).  
In some contexts, a Lewis acid is called by the name electrophile, while a Lewis base is called by the name nucleophile; these terms are near perfect synonyms, and the "electrophile" and "nucleophile" terminology developed to avoid confusion with the terms "acid" and "base" (when referring to the B-L theory, for example).  But that's just a semantic or linguistic distinction.
 * 3:Because redox reactions operate under different conditions and produce different results. As I already noted, redox only involves those reactions where the oxidation state of an element is different on the product side than it is on the reactant side.  In acid-base reactions, there is no change of oxidation state, so they aren't redox reactions.  You can prove this to yourself by using the rules for assigning oxidation number to any acid-base reaction.  Let's use those rules on the one you gave in the very first bit:
 * HCl + H2O ⇌ Cl- + H3O+
 * On the left side, in the HCl, the H has an oxidation number of +1, the Cl has an oxidation number of -1. In the water molecule, the H has an oxidation number of +1, the oxygen has an oxidation number of -2.  On the right side the lone Cl ion has an obvious oxidation number of -1, and in hydronium the H has an oxidation number of +1 while the oxygen has an oxidation number of -2.  Since no atom has different oxidation numbers between the left and right sides, there is no redox reaction.
 * Does all of that work? -- Jayron  32  18:16, 9 April 2013 (UTC)
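Jayron's oxidation-number bookkeeping can be written out as a trivial check (the helper function and the contrasting sodium-plus-chlorine example are added here purely for illustration; the assignments follow the usual rules):

```python
def redox_elements(reactant_states, product_states):
    """Elements whose oxidation number differs between the two sides."""
    return {el for el in reactant_states if reactant_states[el] != product_states[el]}

# HCl + H2O <-> Cl- + H3O+  : H stays +1, Cl stays -1, O stays -2.
acid_base = redox_elements({"H": +1, "Cl": -1, "O": -2},
                           {"H": +1, "Cl": -1, "O": -2})

# 2 Na + Cl2 -> 2 NaCl  : a genuine redox reaction, for contrast.
redox = redox_elements({"Na": 0, "Cl": 0},
                       {"Na": +1, "Cl": -1})

print(acid_base)  # empty set: no element changes state, so it is not a redox reaction
print(redox)      # both Na and Cl change state, so this one is
```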
 * Responses:
 * In response for Question 1: I thought → stated reactants on the left and products on the right. I guess maybe my teachers were wrong!
 * General response: What I don't understand is why Bronsted theory is required. I am trying to grasp the concept, but why does one compound (water) make a separate chemical theory, that being Bronsted theory, necessary?Curb Chain (talk) 20:08, 9 April 2013 (UTC)

(undent) As written, for the purpose of doing things like calculating equilibrium constants, the term "reactants" is always used for what is written on the left side of the equation even if, fundamentally, as written the reaction is not "extensive". It's just a terminology thing. Secondly, I think you mean Arrhenius theory. Water is not a necessary component in B-L theory. The reason why water-based solutions are part of Arrhenius theory is that's what Arrhenius based much of his life's work around. He's actually famous for two things: the Arrhenius equation in kinetics, which isn't relevant to our discussion, and the concept of ions, which is. The modern understanding of what an ion is and how it works is all Arrhenius. The concept was actually first discovered and named by Michael Faraday, but he had no idea what ions were or how they worked. Arrhenius worked out what is known as the dissociation theory of ions in water; that's what he won the 1903 Nobel Prize in Chemistry for. His theory explains the behavior of what we now know as electrolytes or ionic compounds by their behavior when they dissolve, or more properly dissociate, in water. His acid-base theory is merely an extension of that work: he explains acids and bases based on the concept of Self-ionization of water, which is part of his dissociation theory of ions. pH, which is just a measure of the ionic strength of H+ ions, comes out of that explanation. So, Arrhenius's definition is based on water-based ions because that's what he did. As they say, if what you have is a hammer, you see every problem as a nail. Arrhenius explains acids and bases in terms of relative ionic strength in water, based on the equilibrium of the self-ionization reaction: acids (by Arrhenius) shift the equilibrium towards the hydronium side, while bases shift the equilibrium towards the hydroxide side. But the concept is intimately tied to water. 
Brønsted and Lowry, working some three decades later, take Arrhenius's basic model and generalize it by removing water as a necessary component. They explain acids as H+ donors and bases as H+ acceptors. The Arrhenius theory is fully integrated into B-L theory as the special case where either the B-L acid or the B-L base is water; water is thus an amphoteric substance because it can play either role. But B-L theory works to explain acid-base reactions in other media, removing the need to explain them in the context of water. For example, here's a reaction you could do fairly easily with materials in a H.S. chemistry lab: get a container of highly concentrated ammonia solution and a container of "glacial" (highly concentrated) acetic acid, and open them inside a big glass box, like an aquarium. You'll see a white powder forming in the air between the two containers. What you have here is a gas-phase Brønsted–Lowry acid-base reaction: the ammonia molecule "accepts" an H+ ion from the acetic acid molecule, forming ammonium ions and acetate ions. These ions instantly crystallize in midair (since there's no water to dissolve them) and you get a "smoke" of ammonium acetate crystals. This is an example of where B-L theory more completely models acid-base reactions by expanding the definition beyond water-based solutions. Does that answer your questions? -- Jayron  32  21:32, 9 April 2013 (UTC)
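Jayron's point that pH falls out of the Arrhenius self-ionization equilibrium can be sketched numerically. This is my own illustration, not part of the discussion; the only inputs are the standard ion product of water (Kw = 1e-14 at 25 °C) and the usual charge-balance algebra for a dilute strong monoprotic acid:

```python
# Sketch of Arrhenius-style pH from the self-ionization equilibrium
# 2 H2O <-> H3O+ + OH-, with Kw = [H3O+][OH-] = 1e-14 at 25 C.
import math

KW = 1e-14  # ion product of water at 25 C

def ph_strong_acid(conc_acid):
    """pH of a dilute strong monoprotic acid (ignoring activity effects).

    Solves h*(h - C) = Kw from the charge balance, so that pure
    water (C = 0) correctly comes out at pH 7 rather than infinity.
    """
    h = (conc_acid + math.sqrt(conc_acid ** 2 + 4 * KW)) / 2
    return -math.log10(h)

print(ph_strong_acid(0.0))    # pure water: pH 7.0
print(ph_strong_acid(0.01))   # 0.01 M strong acid: pH ~2.0
```

Adding acid shifts the equilibrium toward hydronium, exactly as described above; the quadratic just makes the water's own contribution visible in the dilute limit.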
 * NH3(g) + HC2H3O2(g) --> NH4C2H3O2(s)
 * So you are telling me that Arrhenius requires water (such as hydrochloric acid diluted with water). But how does Brønsted–Lowry theory explain  +  →  as a reaction between an acid and a base?Curb Chain (talk) 23:17, 9 April 2013 (UTC)
 * It doesn't. That isn't an acid-base reaction according to Brønsted–Lowry theory.  Only according to Lewis theory.  Remember, each theory in succession acts to expand upon the definition of acids and bases that the previous theory established.  The oldest (and most restrictive) Arrhenius theory is expanded upon by the broader, and more inclusive Brønsted–Lowry theory, while Lewis theory is yet again more inclusive.  There are thus chemical reactions (many, indeed) which are acid-base reactions via Lewis Theory that would not be so recognized by Brønsted–Lowry theory, but the reverse does work: every Brønsted–Lowry theory acid-base reaction is also a Lewis theory reaction.  -- Jayron  32  03:34, 10 April 2013 (UTC)
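The nesting Jayron describes (every Arrhenius reaction is also Brønsted–Lowry, every Brønsted–Lowry reaction is also Lewis, but not the reverse) can be sketched as a toy classifier. This is my own hypothetical illustration, not anything from the thread; the three input flags are deliberate simplifications of each theory's actual definition:

```python
# Toy sketch of the nested acid-base definitions: each newer theory
# accepts everything the older one does, plus more.
def theories_recognizing(in_water, proton_transfer, electron_pair_donation):
    arrhenius = in_water and proton_transfer          # most restrictive
    bronsted_lowry = proton_transfer                  # water no longer required
    lewis = proton_transfer or electron_pair_donation # broadest definition
    return {"Arrhenius": arrhenius,
            "Bronsted-Lowry": bronsted_lowry,
            "Lewis": lewis}

# HCl + NaOH in water: all three theories call it acid-base
print(theories_recognizing(True, True, False))
# NH3(g) + CH3COOH(g): Bronsted-Lowry and Lewis, but not Arrhenius
print(theories_recognizing(False, True, False))
# Electron-pair donation with no proton transfer: Lewis only
print(theories_recognizing(False, False, True))
```

The containment is visible in the code itself: the `arrhenius` condition implies `bronsted_lowry`, which implies `lewis`, but nothing implies the other direction.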
 * Oh, so when we learn about these 3 theories, we are learning about the history of science. Arrhenius should not be learned, because it is not inclusive (when compared with Bronsted), and Bronsted should not be learned, because it is not inclusive when compared with Lewis.  Why shouldn't we just learn Lewis, at least from a young age?  So when something is very acidic or very basic, shouldn't we just say it is very corrosive, or in some cases, reactive?Curb Chain (talk) 17:14, 10 April 2013 (UTC)
 * NO! Arrhenius theory is a very useful theory.  You don't "ignore" it; you use it because it is useful.  Without Arrhenius theory, you have no concept of pH, so you cannot merely pretend it doesn't exist.  Likewise, Bronsted theory is highly useful in looking at proton-transfer reactions, which are very common and exist all over the place.  No, the point is not that the older, more restrictive theories aren't useful; it's that each of the three models takes a different perspective on acids and bases.  They're all useful insofar as they allow you to gain an understanding of how acids and bases work in some way.  You can't merely "not learn" the older theories, or treat them as historical artifacts, because they are useful.  The only purpose of a model or a theory is to be useful.  These are all useful, so we learn all three.  Claiming otherwise would be like claiming that since the invention of the car, there's no point in teaching children to walk anymore.  Regardless of the invention of more advanced means of locomotion, walking is very useful.  Likewise, though Lewis theory is more advanced and modern, it doesn't make the older theories obsolete.  -- Jayron  32  02:45, 13 April 2013 (UTC)

AIDS in non-human animals
Will a disease like AIDS affect other animals, given that they mate with more than one partner? — Preceding unsigned comment added by Titunsam (talk • contribs) 17:19, 8 April 2013 (UTC)


 * As our HIV/AIDS article explains, HIV originated in chimpanzees, and similar diseases have been found in other primates. So yes, it is possible. Looie496 (talk) 18:02, 8 April 2013 (UTC)


 * Then there's feline AIDS. StuRat (talk) 03:05, 9 April 2013 (UTC)

What's the study of historical people's illnesses called?
You know, like Charles Darwin or George III? Given that medicine at the time didn't know as much as it does today. Surely it must have a Greco-Latin-type name ending in -logy? Barney the barney barney (talk) 19:15, 8 April 2013 (UTC)


 * Paleopathology? OsmanRF34 (talk) 19:31, 8 April 2013 (UTC)
 * Retrospective diagnosis, rather. Or maybe historical clinicopathology. Deor (talk) 21:34, 8 April 2013 (UTC)

MBTs Armor
Do the Challenger 2 and M1 Abrams have composite armor on their hull sides, or is it only at the front of the hull? — Preceding unsigned comment added by Tank Designer (talk • contribs) 19:55, 8 April 2013 (UTC)
 * Don't know about the Challenger, but our Abrams has composite armor all around. 24.23.196.85 (talk) 05:56, 9 April 2013 (UTC)
 * I'm pretty sure that the composite armour is used on all parts of both tank types. The thickness presumably varies between different parts of the tank though given that this is a standard part of tank design. Nick-D (talk) 12:01, 9 April 2013 (UTC)

Thank you everyone, but what about the Russian T-72, T-80, and T-90? I think they don't have composite armor on their sides — Preceding unsigned comment added by Tank Designer (talk • contribs) 14:07, 9 April 2013 (UTC)
 * I happen to be knowledgeable on Russian tanks (being a Russian-American), and I know that all models of the T-80 and T-90 have composite armor, as well as the later models of the T-72. The early-model T-72s had plain hardened-steel armor, though. 24.23.196.85 (talk) 04:38, 10 April 2013 (UTC)

Very good, but then why, in the Chechen war, were the rebels able to destroy T-72s and T-80s using only the RPG-7 against the side armor, exploding the tanks' ammunition? That suggests the side armor is too weak to repel an RPG-7 round. — Preceding unsigned comment added by Tank Designer (talk • contribs) 14:55, 10 April 2013 (UTC) I remembered an important question: why did the Russian army withdraw the T-80 tank from the ground forces? (Please don't take my questions as aggressive.) — Preceding unsigned comment added by Tank Designer (talk • contribs) 08:28, 11 April 2013 (UTC)
 * (1) This is not so much a question of armor as that of ammo storage -- both the T-72 and T-80 have all their shells (along with the highly explosive combustible-cased propellant charges) stored in the turret crew compartment, unlike our M-1 Abrams in which the shells are in a separate compartment at the rear with an armored partition between it and the crew compartment. So even if a shell or an RPG pierces an Abrams tank's ammo compartment and detonates the ammo, the tank could survive -- whereas if the same thing happens to a T-80, it would get blown to atoms along with the whole crew.  (The Russians never did care much about the well-being of their servicemen, obviously.)  (2) The T-80 burned too much gas, and broke down too often -- this is why it was retired from service. 24.23.196.85 (talk) 21:58, 11 April 2013 (UTC)

Thank you very much

not to be annoying, but had more question... (concerning hydrocarbons)
There are oil-eating microbes, and hydrocarbon seep communities, and studies that show that a large percentage of hydrocarbons on the surface are fossil-derived. I am casually investigating the possibility that we are negatively affecting our organic fertility by over-harvesting fossil fuels. The "hypothesis" is that fossil fuels, while in the ground, are part of the "microbial food chain", which is in turn responsible for such aspects as soil fertility. The proposed issue is that oil pumping, coal mining, and gas extraction (lately via "fracking") are causing a loss in the ecological capabilities of harvested regions. So fossil fuel harvesting might be contributing to desertification: as there is less food for life, there is less life. — Preceding unsigned comment added by 134.29.38.118 (talk) 20:14, 1 April 2013 (UTC)

An interesting proposition, but your entire post amounts to a statement. This Reference Desk is intended for asking questions to which responders can give (preferably referenced) factual answers: did you have a question? If so, please ask it explicitly. Conversely, this Reference Desk is not intended for hosting debates and discussions, or soliciting opinions, so if your post was an implied invitation for either of these, it is inappropriate here. {The poster formerly known as 87.81.230.195} 90.197.66.100 (talk) 21:23, 1 April 2013 (UTC)

I think the implicit question is reasonably clear, but I can't see how that could work out. Soil fertility is determined by the top few feet of soil (often much less), but hydrocarbons are found much farther down. Also some of the most fertile soils are found hundreds of miles from any known hydrocarbon deposits. Looie496 (talk) 21:46, 1 April 2013 (UTC)

Agreed. Also, fracking is a problem precisely because it contaminates the groundwater with hydrocarbons, which, under this theory, would be a good thing. StuRat (talk) 22:45, 1 April 2013 (UTC)

...assuming that the other chemicals associated with fracking are OK. Of course, having seen what happens to plants when they are exposed to crude oil, I have my doubts about this whole concept. --Guy Macon (talk) 00:09, 2 April 2013 (UTC)

In response: pollution is unnatural. Natural hydrogeological processes would run the hydrocarbons through bacteria, and each step would take them through some modification very slowly, but very steadily. Pollution just puts refined and broken-down hydrocarbons where they weren't before. There certainly are bacteria all the way up and down; we've got a lot of articles on it from the last decade or so. There are also hydrocarbon seep communities, which are unique but then become food for passing life. What I'm talking about is basically a hydrocarbon seep, but rather than through a hole or fault, just slowly via osmosis. — Preceding unsigned comment added by 66.188.222.186 (talk) 01:52, 2 April 2013 (UTC)

It takes a very special class of organisms to use hydrocarbons as their main source of food; most other organisms (including all multicellular plants and animals) not only cannot use hydrocarbons in this way, but are positively harmed by them, either directly (acute toxicity, sunlight deprivation for aquatic plants, clogging of gills in fish and of feathers in birds, etc.) or indirectly (increased BOD caused by proliferating extremophile bacteria, possible infection by same, etc.) So, in other words, an oil spill is a VERY bad thing -- and in fact, for some of the same reasons that phosphate runoff is a bad thing (which would have been a "good thing" by the OP's reasoning, too). 24.23.196.85 (talk) 01:02, 3 April 2013 (UTC)

start unanswered.... OK, so it does indeed take a very special class of organisms to use hydrocarbons. The place we originally found them was in the reserves or in hydrocarbon seeps; I'm ballparking. When there has been an oil spill, microbes probably (I recall) proliferated in those locales and utilized those hydrocarbons, and modern science has made strains that are more rapid in their use of hydrocarbons. Rewinding, and commenting also on soil fertility: it's true that we measure fertility in the first few feet of soil, but the hydrological cycle of the subterranean environment means that much more of the earth has an impact on the small region that is those few feet. If there is a water table 60 feet below the surface (depending on other geological formations), it will have an impact on the soil's conditions. I am suggesting that likewise, having gas shale (such as in Minnesota) 60 feet below the surface would also have an effect, and here's why. Naturally occurring within the shale are hydrocarbon-"eating" microbes, which convert the hydrocarbons into other forms of organic matter. Through osmosis and other hydrological activity, some of the contents of the shale would be released towards the surface. Although this would occur very slowly, and very little at any given time, it would be like the growth of any other permanent, fixed organic community: very continuous. My QUESTION is mostly: is this accepted as common understanding, why or why not, and what science supports our common understanding? By the way, I am the original poster... thanks! — Preceding unsigned comment added by 66.188.222.186 (talk) 20:03, 8 April 2013 (UTC)


 * Fertility is mainly determined by levels of nitrogen, phosphorus, and potassium, along with proper pH and absence of plant toxins; see NPK rating. None of those are present at significant levels in oil or natural gas. There is lots of carbon, but plants get their carbon from atmospheric CO2, not from the soil. Looie496 (talk) 20:33, 8 April 2013 (UTC)


 * To answer your question: No, this is not the common scientific understanding. Your ideas are vague and hard to understand, so I cannot say exactly where you go astray of the common consensus. Though bacteria that metabolize hydrocarbons are fascinating (and currently highly studied), there is no "textbook" support for the notion that they make meaningful contributions to the biogeochemistry of terrestrial ecosystems. See also soil fertility and soil biology. As the latter article points out, this is still a very active field of research, and there are certainly things we still don't understand about the ecosystem ecology of the deep earth. SemanticMantis (talk) 02:08, 9 April 2013 (UTC)


 * Also note that oil, gas, and coal reserves are millions of years old, so, if they were being eaten at a significant rate, they would be gone by now. The conclusion: they can't possibly be eaten by microbes at a significant rate, at least while sequestered underground.  Once on the surface, where sunlight and other forces can assist in the breakdown, the bacteria can go wild on it. StuRat (talk) 04:57, 9 April 2013 (UTC)
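StuRat's "it would be gone by now" argument can be made quantitative with a back-of-envelope decay calculation. The numbers below are my own, purely illustrative; the only claim is the arithmetic of steady exponential loss over geological time:

```python
# Sketch: even a tiny steady annual consumption rate destroys a deposit
# over geological timescales, so any in-ground microbial consumption
# must be extraordinarily slow for million-year-old reserves to survive.
import math

def fraction_left(annual_loss_rate, years):
    """Remaining fraction after `years` of continuous loss at the given rate."""
    return math.exp(-annual_loss_rate * years)

# 0.1% per year over 1 million years: e^-1000, effectively zero
print(fraction_left(1e-3, 1e6))
# one part per billion per year over 100 million years: ~90% still left
print(fraction_left(1e-9, 1e8))
```

So for hundred-million-year-old deposits to still exist, any underground consumption rate has to be down around parts per billion per year, which is what "not significant" means here.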

Orange dwarf stars left main sequence yet
Have any orange dwarf stars left their main-sequence stage yet? I thought 0.8-solar-mass stars might have left the main sequence. Is a 0.8-solar-mass star Sun-like, or is it more like an orange dwarf? I am a little confused, because the stellar classification article says orange main-sequence stars are between 0.6 and 0.9 solar masses, while the detailed orange dwarf article says 0.5 to 0.8 solar masses. When orange main-sequence stars leave the main sequence, do they become red giants? If so, will it be a smaller red giant, or about the same size as the one the Sun might become? When a red dwarf leaves the main sequence, will it become a red giant? Or do we not know what will happen to these red dwarfs, or will they more likely shrink straight to white dwarfs? I am also not sure what subdwarf B and A stars are. Is that how red dwarfs or orange dwarfs end their lives?--69.226.42.134 (talk) 23:27, 8 April 2013 (UTC)


 * Red dwarf says:
 * "According to computer simulations, the minimum mass a red dwarf must have in order to become a red giant is 0.25 solar masses; lesser massive objects, as they age, increase their surface temperatures and luminosities becoming blue dwarfs and from that finally become white dwarfs."
 * We only have computer simulations, since it will be tens of billions of years before the first red dwarfs start to die.
 * An orange dwarf is a main-sequence star - see K-type main-sequence star. The mass ranges are estimates, so different sources will give slightly different figures (although we should try to be consistent between Wikipedia articles...), but they are describing the same stars by a different name. That article says they remain on the main sequence for 15-30 billion years. The universe is about 14 billion years old, so presumably none of them have left the main sequence yet. When they do, I think they will become red giants, although our article doesn't say so (if yellow dwarfs do and large red dwarfs do, then orange dwarfs that are somewhere in between presumably will as well). --Tango (talk) 11:15, 9 April 2013 (UTC)
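Tango's "15-30 billion years" figure can be roughly reproduced with the textbook mass-lifetime scaling. This is my own sketch; the t ∝ M^-2.5 exponent is a common approximation (sources quote anywhere from -2 to -3), so treat the numbers as order-of-magnitude only:

```python
# Rough main-sequence lifetime from the common scaling
# t ~ 10 Gyr * (M / Msun)^-2.5 (approximate; exponent varies by source).
def main_sequence_lifetime_gyr(mass_solar):
    return 10.0 * mass_solar ** -2.5

AGE_OF_UNIVERSE_GYR = 13.8

for m in (1.0, 0.9, 0.8, 0.6):
    t = main_sequence_lifetime_gyr(m)
    note = "(outlives the universe so far)" if t > AGE_OF_UNIVERSE_GYR else ""
    print(f"{m} Msun: ~{t:.0f} Gyr {note}")
```

Under this scaling a 0.8-solar-mass star lasts roughly 17 Gyr and a 0.6-solar-mass star roughly 36 Gyr, consistent with the claim that no solar-metallicity orange dwarf has yet had time to leave the main sequence.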
 * The speed of evolution depends on metallicity. Population II stars, having low metallicity, evolve faster than Sun-like stars. Population II stars in the mass range 0.8–0.9 solar masses, which formed about 13 billion years ago, are now undergoing the red giant phase. See for instance HE0107-5240, which is a red giant. Ruslik_ Zero 18:55, 9 April 2013 (UTC)

Evolution of higher mass stars
I am confused about the white dwarf, neutron star, and black hole stuff. I thought stars between 1.5 and 3 solar masses were supposed to end up as neutron stars. When a larger white main-sequence star leaves the main sequence, does it become a bright giant, or only a red giant? I thought white main-sequence stars (2 or 3 solar masses) would eventually become red supergiants, or do they have to go through a red giant stage before becoming supergiants? I always learned that when large blue stars (3 to 8 solar masses) run out of hydrogen they are supposed to become red supergiants. How can blue main-sequence stars (3 to 8 solar masses) become just red giants? I thought they would eventually become red supergiants. I am also not sure how they would end up as white dwarfs. Or do they have to go through a white dwarf stage before becoming neutron stars or black holes? I always learned that white and blue main-sequence stars end their lives as neutron stars or black holes. --69.226.42.134 (talk) 23:38, 8 April 2013 (UTC)
 * Neutron stars are in that mass range, but they are the small remnants of massive stars after they go supernova. Stars which start out at that mass end up as white dwarfs. At least as far as I can tell from our articles, which could be clearer about mass ranges. Rmhermen (talk) 03:20, 9 April 2013 (UTC)
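Rmhermen's distinction (the 1.5-3 solar mass figure describes the remnant, not the star that produced it) can be summed up with approximate textbook thresholds. These cutoffs are my own rounded values; the real boundaries depend on metallicity and are still debated:

```python
# Sketch of final fate vs INITIAL mass (approximate thresholds; the
# famous ~1.4-3 solar mass range applies to the remnant, not the star).
def remnant(initial_mass_solar):
    if initial_mass_solar < 8:
        return "white dwarf"   # sheds its envelope as a planetary nebula
    elif initial_mass_solar < 20:
        return "neutron star"  # core-collapse supernova
    else:
        return "black hole"    # core collapse too massive to halt

for m in (1.0, 3.0, 15.0, 25.0):
    print(f"{m} Msun initially -> {remnant(m)}")
```

So a star that starts at 3 solar masses still ends as a white dwarf; a neutron star of 1.5-3 solar masses is what is left over from a star that began far heavier.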