Wikipedia:Reference desk/Archives/Science/2012 February 21

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 21

blocking unsolicited junk email

I'm tired of receiving unsolicited junk email from online dating services. Is there any way to block them? 24.90.204.234 (talk) 04:56, 21 February 2012 (UTC)[reply]

The Computer Ref Desk would be a better choice for this post. If they always use the same email address, it's simple to blacklist that address in most email systems. If not, then it gets trickier, and some type of spam filter must be used instead (which might also block other emails). StuRat (talk) 06:46, 21 February 2012 (UTC)[reply]

Single-Atom transistors—The end of "Moore's Law"?

Greetings!

For a few years, now, I've noticed that transistors in electronic devices keep getting smaller, and, correspondingly, said devices keep becoming cheaper, more robust, and more powerful. I've also noticed the current brouhaha over Intel's miniaturizing (desperately, it seems) its end-user transistors to 22 nanometers this year, 14 nanometers by 2014, and perhaps 8 nanometers by 2016.

Just two days ago, however, this Purdue University Report came out stating that some researchers over there have produced a functioning transistor consisting of a single phosphorus atom. I'm curious, what (if anything) may this mean for "Moore's Law," and the near future of end-user electronics?

At 100 picometers in diameter, a phosphorus atom is about 1/220 of what one now considers cutting-edge. May Intel—or one of its competitors—now drop its current roadmap, and instead pursue further research on this? Also, what does this mean for the possible advent of quantum computing?

And lastly, is this (as one of the sources in the article proclaims) really the lower limit? There are eight elements that are both smaller than phosphorus and solid at room temperature. Carbon, for instance, is less than half as massive as phosphorus—12 grams per mole and 31 grams per mole, respectively. Why not, one day, a carbon-atom transistor, or the like? Pine (talk) 06:59, 21 February 2012 (UTC)[reply]

This was reported in Australian media, in somewhat better prose - see for example http://www.abc.net.au/news/2012-02-20/team-designs-world27s-smallest-transistor/3839524. This article states that if it is ready for commercial exploitation by 2020, then Moore's Law will be matched. Researchers have been working on this for some time, and don't hold your breath expecting to see something in the shops. High-tech and a real achievement though it may be, with respect to large-scale commercial applications, it is not even equivalent to Bell Labs researchers figuring out the physics for point-contact transistors during World War 2, from which (just stretching a little bit) you can trace Intel's latest products. Just because the active (i.e. switching) element is based on a single atom does not mean that a usable transistor is that small, any more than the size of current transistors is just the size of the gate or PN junction. You have to surround it with essentially clear field space many times larger. And don't forget it is so small that quantum effects are significant (read: a single transistor is unreliable), and it only works at temperatures requiring liquid helium as a coolant. In this context, your second question about a carbon-atom transistor is not actually a useful question. In any case, phosphorus is a donor atom for silicon or germanium. Carbon, analogous to Si or Ge, would be the substrate. Keit124.182.138.90 (talk) 07:57, 21 February 2012 (UTC)[reply]
"drop its current roadmap, and instead pursue further research on this" is presenting a false dichotomy. It's not an either/or kind of thing. Intel has many researchers. At any given time, Intel is simultaneously looking at multiple ways of making advancements in the near term, the medium term and the long term. Intel isn't shortsighted enough to only focus on near-term advancements, nor foolhardy enough to risk everything by focusing entirely on one particular research result which may evolve into something commercially valuable some year down the road, but might not pan out, such as is very often the case with this type of thing. Red Act (talk) 18:54, 21 February 2012 (UTC)[reply]
I should point out that quantum computing holds out the idea of multiple calculations done simultaneously, and once you're at a single-atom scale you're getting into the realm where quantum computing seems close at hand. So I wouldn't say for sure that reaching that scale means coming to a dead end in development. Wnt (talk) 10:43, 25 February 2012 (UTC)[reply]

Accuracy of the speed of light in relation to FTL Neutrinos

So, we've all heard the headlines of the faster-than-light neutrinos. Everyone's crazy about checking the timing and distance against super-accurate atomic clocks, GPS coordinates, etc., all for a difference of 60 nanoseconds, yet no one has mentioned the possibility of c being off.

Why has no one discussed the possibility that c is just 60 ns faster than previously thought? Perhaps neutrinos travel at the TRUE, faster c, and the vacuum (or the quantum foam of said vacuum) slows light down by a very small amount?

I'm merely wondering why this hasn't been discussed at all. Humans measuring c as 60 ns off seems a lot more plausible than faster-than-light travel, with all the paradoxes that it introduces, like breaking causality. — Preceding unsigned comment added by Ehryk (talkcontribs) 09:06, 21 February 2012 (UTC)[reply]

I'll just say that any science relating to tachyons, right now, is embryonic at best; namely, we haven't even established their existence with any reliable certainty. It may be that c was miscalculated by 60 ns (highly unlikely), or that somebody at CERN made an error in the calculations (somewhat likelier, although the results were the same every time).
This is just an anecdote, but it may (partly) answer your question:
As far as I know, Albert Einstein himself never said—one way or the other—that FTL travel was impossible; rather, that as an object approached c, its mass would increase to the point that it could never achieve said speed, but only travel at a speed arbitrarily close to it. Also, Niels Bohr (to whom—along with Ernest Rutherford—we owe the current, subatomic model) even went so far as to suggest that c may not be so much an "upper limit," as much as an "asymptote." Viz., an object can travel either faster or more slowly than c, just not at c.
At any rate, FTL neutrinos, right now, remain highly speculative, so please take all this with a grain of salt. Pine (talk) 09:31, 21 February 2012 (UTC)[reply]


If c were only important for the speed of light, this might be an option. However, the quantity is interrelated with other physical constants and formulas. For example, E = mc², which is used in radioactive decay, and c² = 1/(ε₀μ₀), which influences electromagnetism. It's not that simple to redefine c; even our standard unit of length (the meter) is defined relative to the speed of light. I would think 'fixing' the neutrino problem in this way 'breaks' a whole lot of other scientific models. -- Lindert (talk) 09:36, 21 February 2012 (UTC)[reply]
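A quick numerical illustration of the c² = 1/(ε₀μ₀) relation mentioned above (just a sketch using approximate CODATA values for the vacuum constants; the numbers are not specific to the OPERA result):

    import math

    # Approximate CODATA values for the electromagnetic vacuum constants
    epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m
    mu_0      = 1.25663706212e-6   # vacuum permeability, H/m

    # Maxwell's equations give the speed of electromagnetic waves as
    # c = 1 / sqrt(epsilon_0 * mu_0)
    c_from_em = 1.0 / math.sqrt(epsilon_0 * mu_0)
    print(c_from_em)   # ~2.99792458e8 m/s, the defined value of c

Changing c by hand would therefore force a matching change somewhere in ε₀ or μ₀ (and in every formula that uses them), which is the point being made above.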
This possibility was actually discussed here in November. See item #1. 98.248.42.252 (talk) 09:52, 21 February 2012 (UTC)[reply]
  • According to Faster-than-light neutrino anomaly, "The particles were measured arriving at the detector faster than light by approximately one part per 40,000". According to Speed of light, "in 1975 the speed of light was known to be 299,792,458 m/s with a measurement uncertainty of 4 parts per billion". 86.176.209.243 (talk) 14:54, 21 February 2012 (UTC)[reply]
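Putting those two figures side by side (a rough sketch that assumes the approximately 730 km CERN-to-Gran Sasso baseline; exact values don't change the conclusion):

    c = 299_792_458.0      # m/s
    baseline = 730e3       # m, approximate CERN-to-Gran Sasso distance
    early_arrival = 60e-9  # s, the reported ~60 ns anomaly

    flight_time = baseline / c                  # about 2.4 milliseconds
    anomaly_fraction = early_arrival / flight_time
    print(anomaly_fraction)                     # ~2.5e-5, i.e. about 1 part in 40,000

    # The 1975 uncertainty in c was about 4 parts per billion,
    # thousands of times smaller than the anomaly.
    print(anomaly_fraction / 4e-9)              # roughly 6000

So a mismeasured c would have to be wrong by several thousand times its stated uncertainty to absorb the effect.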


The speed of light, like any dimensionful constant, isn't really a true physical constant. You can set all such constants equal to 1 by choosing appropriate units. So, these constants are nothing more than an artifact of the freedom to use inconsistent units for the same physical quantity. So, if you want to add up two distances, you are free to use different units for the two distances, but then you need to correct for that by multiplying one by some conversion factor. You are also free to declare on obscure philosophical grounds that the two distances are somehow fundamentally different, hence you need to assign different dimensions to them, which then leads to that conversion factor having to cancel out whatever dimensions you assign to the distances.

This is exactly how the speed of light has to be understood. There exists a four-dimensional space-time with time and space linked into the same structure. However, for historical reasons we have ended up measuring time in different units than spatial distances, so when in special relativity the two are brought back together and we insist on using the old historic units, conversion factors of powers of c appear in the equations. Count Iblis (talk) 16:12, 21 February 2012 (UTC)[reply]
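To make the units point concrete, here is a minimal sketch of measuring time in metres of light travel, so that the conversion factor c drops out of the spacetime interval (the particular numbers are arbitrary):

    c = 299_792_458.0   # m/s, exact by the current definition of the metre

    dt = 1.0e-6         # a time separation of 1 microsecond
    dx = 250.0          # a spatial separation of 250 m

    # Historic units: a factor of c appears in the interval.
    interval_si = (c * dt)**2 - dx**2

    # Geometric units: express the time separation in metres of light travel,
    # and the conversion factor disappears from the formula.
    dt_in_metres = c * dt
    interval_geom = dt_in_metres**2 - dx**2

    print(interval_si == interval_geom)   # True - same quantity, different bookkeeping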

Is that right? I know very little about this, but for some reason I thought that the time dimension was of a fundamentally different nature from the space dimensions (in which case different units would seem to be justified). Is that wrong? Are all four dimensions made "of the same stuff"? 86.176.209.243 (talk) —Preceding undated comment added 21:41, 21 February 2012 (UTC).[reply]
I'm very inclined to say "no, this isn't right;" but that would be unfair. Count Iblis is explaining a very commonly-held interpretation of modern physics, which is predicated on the idea that space and time are treated in a similar fashion. This isn't right or wrong: it's a way to interpret certain physical facts. You can interpret physics any way you like; the more you understand the physics, the more freedom you have in your mind to play with different interpretations, and select the ones that help you do what you want to do - whether you want to harness physical laws to improve an application in engineering, or if all you want is to pursue pure philosophical understanding of physics.
Count Iblis is hinting at a stylistic approach to physics: the interchangeability of time and space in certain equations. Personally, I don't find this approach very useful, at least not in my day-job, even when I need to deal with the intricacies of light physics.
But, when somebody says that the time axis is "identical" to the spatial axis, only measured using different units - this is just wrong. I've heard this explanation many times from many physicists. Nonetheless, it is incorrect: some physical and empirical laws are different with respect to the time and space variables. For example, consider the heat equation or the diffusion equation; these equations have an opposite sign on the differential operator with respect to time, and identical signs on each spatial differential operator. Consider the Schrödinger equation (and the Dirac equation), where there is an explicit derivative with respect to time, (not space), equated to a complex differential operator that defines the system's total energy. Consider Maxwell's equations; there is no time-and-space symmetry in the equation for the corrected Ampere's law. The differential operator with respect to time evolution is equal to a geometric operator on three spatial axes. (You can't swap time and space variables and "switch units" and get a correct answer!) Space and time are not interchangeable; they are not equivalent-but-measured-in-different-units; the speed of light c is not a scalar constant that magically turns time into space. While that makes for neat philosophical rumination, it is exactly wrong. It defies empirical measurements; and physics is not mathematics because we use empirical measurements to keep ourselves and our equations grounded in reality.
When somebody says "time and space are the same," they're glossing over a lot of empirical facts in order to make a hand-wavey metaphor that isn't even very helpful. It doesn't help you in applied situations; and it surely doesn't help you understand the complex interplays of energy and matter as they evolve in time and space - because the equations don't work that way. How can you apply relativity - the study of the relationships of relative motion - when you can't even define motion rigorously? How can you speak quantitatively about the speed of light, and entirely fail to consider the physics behind the defining equations for the speed of light?
With respect to the original question, the simplest way to explain the discrepancy is that the neutrino speed was measured "relative" to the speed of light; as has been explained above, the precision of the measurements, relative to measured speed for light, is very high. It is plausible that the accuracy of the measured neutrino speed is not very good. Nimur (talk) 22:14, 21 February 2012 (UTC)[reply]
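For reference, the textbook forms of the equations cited in the discussion above (standard statements only, quoted here so readers can see the time/space structure being argued about):

    % Heat/diffusion equation: first order in time, second order in space
    \frac{\partial u}{\partial t} = \alpha \nabla^2 u

    % Schrödinger equation: an explicit first derivative in time only
    i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi

    % Ampère's law with Maxwell's correction: the time derivative of E is tied to the curl of B
    \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}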

Just this afternoon it was announced that the whole FTL neutrino episode may have been caused by a loose fiber-optic connection between one of the GPS receivers and a computer. [1] Dragons flight (talk) 23:03, 22 February 2012 (UTC)[reply]

Announced by whom? I cannot find this announcement, or a publication, or a press release, on either the OPERA Gran Sasso web page or the CERN main news page. Have you found this announcement in a reliable source yet? Perhaps we should wait for a real review before jumping to conclusions. Nimur (talk) 23:15, 22 February 2012 (UTC)[reply]
I did find this report, Error Undoes Faster-Than-Light Neutrino Results, on the "blog" for Science, which tends to be a reputable publication; but it still sounds like an unofficial break. I can't say I'm surprised; I've suspected instrumentation error since the earliest reports. I'll still wait for a full write-up. I'm mostly baffled how so many reputable physicists could have failed to account for such a glaring instrumental accuracy anomaly while reporting such precise numerical results to peer-reviewed publications. Nimur (talk) 23:21, 22 February 2012 (UTC)[reply]
(edit conflict) It's on the website of Science (in a news / blog section) [2]. A blog on Nature's website [3] also reports the story and quotes from an "official OPERA statement". Dragons flight (talk) 23:25, 22 February 2012 (UTC)[reply]

Falling balls

Since gravity is always the same, would a ball dropped from a certain height hit the ground at the same time as a ball rolling down an inclined plane from that height? Whoop whoop pull up Bitching Betty | Averted crashes 13:44, 21 February 2012 (UTC)[reply]

No. The rolling ball has to deal with rolling resistance and with more wind resistance than the vertically plummeting ball. --Tagishsimon (talk) 13:49, 21 February 2012 (UTC)[reply]
What about a frictionless inclined plane? Whoop whoop pull up Bitching Betty | Averted crashes 13:50, 21 February 2012 (UTC)[reply]
Well, no: consider a plane "inclined" so that it was a kilometer long and dropped by a meter (ignoring Earth's curvature). How would your puck (as it's not rolling, no reason to use a ball) cover that distance in the 0.45 s it takes to fall a meter? (The mathematical answer is that your speed at any given height is the same between sliding and falling, but since you obviously slide sideways, you're not moving down as fast.) --Tardis (talk) 13:59, 21 February 2012 (UTC)[reply]
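Putting numbers on that thought experiment (a sketch that ignores air resistance and friction and takes g ≈ 9.81 m/s²):

    import math

    g = 9.81   # m/s^2
    h = 1.0    # m, the height dropped

    # Free fall through 1 m:
    t_fall = math.sqrt(2 * h / g)
    print(t_fall)   # ~0.45 s

    # Frictionless slide down a plane 1 km long that drops 1 m:
    length = 1000.0                      # m along the plane
    a = g * (h / length)                 # component of g along the plane, ~0.0098 m/s^2
    t_slide = math.sqrt(2 * length / a)
    print(t_slide)                       # ~450 s, a thousand times longer

The final speeds are the same in both cases (about 4.4 m/s), which is the point made in the parenthesis above.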
(edit conflict) No. For a ball dropping freely, the potential energy is converted into kinetic energy, so you can calculate the vertical velocity from the height dropped (if you neglect the air resistance). Rolling down the inclined plane you also need to provide rotational energy, and to get to the ground at the same time the speed down the plane would need to be faster than the vertical component, hence you'd need more energy to get there at the same time. - David Biddulph (talk) 13:55, 21 February 2012 (UTC)[reply]
Or in force terms the inclined plane is still always pushing upwards on the ball (has an upward component to its reaction) which slows the acceleration downwards. --BozMo talk 14:36, 21 February 2012 (UTC)[reply]
That only matters in the real world. In a frictionless world, it's irrelevant. StuRat (talk) 20:05, 21 February 2012 (UTC)[reply]

I see whoop whoop is back after a few days absence asking weird questions he probably already knows the answer to, or does he just have a giant book full of daft questions to ask at a geek party, and just trots out a couple each time to see if there is anyone out there to talk to? Stop it Sir. Wickwack60.230.221.101 (talk) 15:24, 21 February 2012 (UTC)[reply]

Do you mind?! Whoop whoop pull up Bitching Betty | Averted crashes 16:57, 21 February 2012 (UTC)[reply]
I like to think of the Wikipedia Refdesk as building up a huge database of (sometimes) answered questions, which someday your phone will be able to rummage through whenever you or your child decides to ask it something. So asking a lot of questions, if they're interesting and haven't been addressed before, is still useful. That said, I think frequent questioners should think about starting a project to better index the archive, in the process documenting more clearly that they've consulted it. For example Wikipedia:Reference_desk/Archives/Science/2008_May_3#Rolling discusses whether a ball would roll on an inclined plane, so we'd have been a bit better off if it had been linked right from the top of the question. Wnt (talk) 19:54, 21 February 2012 (UTC)[reply]
It's not for me to mind, whoop whoop, but I am disappointed. I agree with Wnt that RefDesk is a potential resource of answered questions, but doubt that it will ever serve a reference role anywhere near as good as Wikipedia itself, especially if it mostly comprises Dorothy Dix questions from a minimal number of OPs like whoop whoop. There are a number of internet alternatives to RefDesk, but they all seem to be aimed at high school students seeking help with assignments. There is a need for something aimed at people with good minds pursuing science as an advanced hobby. There is a need for something for people who have genuine questions on anything at any intellectual level. RefDesk is the best available for this purpose, as it is for people just seeking answers to anything that is not to do with high school syllabi. Unfortunately RefDesk is somewhat clogged with contrived weird questions, and a percentage of answers provided by persons who express opinions, as distinct from referenced facts. Both discourage people from submitting good questions that address a real need, and discourage subject experts from having a browse and maybe submitting an answer on something they know about. If people think that OPs have a real need, then respondents are likely to spend time helping. If potential respondents think the questions are submitted merely to provoke discussion, or just to bulk up the material, they won't want to waste their time on it. I think it more important to address this than worry about smart indexing. If whoop whoop or anybody else thinks some aspect of knowledge is missing, write an article and put it in Wikipedia. WP is fabulous, and where we look first. This is not to say RefDesk isn't valuable - it is. But it's nowhere near as good as it could be. High school students should be directed to sites that cater for them. Wickwack58.170.151.127 (talk) 01:10, 22 February 2012 (UTC)[reply]
If the plane is frictionless, the ball wouldn't roll. DMacks (talk) 16:22, 21 February 2012 (UTC)[reply]
Correct, it would then slide. In this case the block would still slide to the bottom last, since it would need to accelerate to a faster speed, which takes more time. However, it wouldn't be as slow as if it also had to spin up to speed, too. StuRat (talk) 20:09, 21 February 2012 (UTC)[reply]
Draw one vector pointing straight down from the inclined plane - that's gravity. Now draw another vector perpendicular to the inclined plane until you get back to it - that's the force the inclined plane has to deliver to keep the object from moving straight through it. The remaining line, equivalent in magnitude to the gravity times the cosine of the angle the plane makes from the vertical, is the effective force on the object. And the distance the object has to travel is increased by the inverse of this amount. So, unless I'm wrong, the effective time to reach the end should be increased by the inverse of the cosine of the angle from vertical (the reduced acceleration and the longer path each contribute a factor of the inverse cosine, but they sit under a square root). Wnt (talk) 20:18, 21 February 2012 (UTC)[reply]
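A minimal check of that geometry (a sketch; the 60° angle, the 1 m height, and the uniform solid sphere used for the rolling case are arbitrary choices made here, not figures from the thread):

    import math

    g = 9.81                  # m/s^2
    h = 1.0                   # m, height dropped
    theta = math.radians(60)  # angle of the plane measured from the vertical

    t_fall = math.sqrt(2 * h / g)           # straight drop

    # Frictionless slide: acceleration g*cos(theta) along a path of length h/cos(theta)
    d = h / math.cos(theta)
    a_slide = g * math.cos(theta)
    t_slide = math.sqrt(2 * d / a_slide)    # works out to t_fall / cos(theta)

    # Rolling without slipping (uniform solid sphere): acceleration is reduced by 1 + 2/5
    a_roll = a_slide / (1 + 2.0 / 5.0)
    t_roll = math.sqrt(2 * d / a_roll)

    print(t_fall, t_slide, t_slide / t_fall)  # the ratio comes out as 1/cos(theta)
    print(t_roll)                             # later still, because of the rotational energy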
To further complicate the question, what if a massless particle with an electric charge is either pulled directly to a location of opposite charge, or slides down a frictionless plane towards it, in a vacuum either way (and assuming the plane doesn't conduct the charge any better or worse than the vacuum, so that the attraction is the same at each step) ? I'm guessing that it would indeed arrive at the same time both ways, if such a set-up were actually possible. And, to take this thought experiment to the extreme, what would happen if the particle sliding down the incline had to go over the speed of light to arrive at the same time ? StuRat (talk) 20:13, 21 February 2012 (UTC)[reply]

If there are no non-conservative forces (like friction & air resistance, etc.) then energy would be conserved. If both start from the same height then they have the same gravitational potential energy. Therefore, they will have the same kinetic energy when they reach the ground, and therefore the same speed. So no, they will not arrive at the same time, but they will have the same speed when they reach the ground. The one on the inclined plane will arrive later. PhySusie (talk) 21:27, 21 February 2012 (UTC)[reply]

In the original post, yes, but not in the massless version I suggested, where there is no kinetic energy. StuRat (talk) 07:17, 22 February 2012 (UTC)[reply]
Yes - I was referring to the OP not your version StuRat (which is why I didn't indent) - thanks for the clarification. PhySusie (talk) 15:50, 22 February 2012 (UTC)[reply]

Discharge tubes

Source of Questions A and B
Source of Questions C and D

About discharge tubes:

A) Why is it that, for helium, neon, and argon, the central portion of the tube is brighter than the ends, but for krypton and xenon the ends are brighter than the central portion?

B) What does a discharge tube containing radon look like?

C) Why do hydrogen and deuterium have different spectra when their electron configuration is exactly the same?

D) Would the discharge tube for tritium look as different from the ones for hydrogen and deuterium as the deuterium tube is from the hydrogen tube? And what about the tubes for quadium and quintium?

Whoop whoop pull up Bitching Betty | Averted crashes 14:12, 21 February 2012 (UTC)[reply]

Answering (c): probably because those are two different pictures (H₂, D₂), and the camera settings are not the same. The hydrogen exposure time is shorter than the deuterium exposure time; thus, it appears dimmer. — Lomn 14:29, 21 February 2012 (UTC)[reply]
...Though for the sake of completeness I'll note that the atomic emission spectra of hydrogen-1 and deuterium really are slightly different (but you wouldn't expect to see it without sensitive instruments). The derivation of the Rydberg constant assumes a stationary nucleus—that is, that the center of the nucleus sits at the center of mass (barycenter) of the nucleus-electron system. In practice the nucleus is not infinitely massive with respect to the electron, and so there are (very small) adjustments to the apparent Rydberg constant made when using the Rydberg formula for each isotope of hydrogen. The lowest-energy Balmer line is centered at 656.3 nm for hydrogen, and 656.1 nm for deuterium, for instance. There are also appreciable differences in the ultraviolet spectra of the molecular H2 versus D2, though this contribution to the photographs should be small for discharges imaged through UV-opaque glass. TenOfAllTrades(talk) 15:40, 21 February 2012 (UTC)[reply]
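A rough numerical check of that isotope shift (a sketch using the infinite-mass Rydberg constant and approximate mass ratios; it yields vacuum wavelengths, so the absolute values sit slightly above the air-wavelength figures quoted above, but the shift is the same):

    # Reduced-mass correction to the Rydberg constant for H-1 and H-2,
    # applied to the lowest-energy Balmer line (n = 3 -> n = 2).
    R_inf = 1.0973731568e7     # infinite-nuclear-mass Rydberg constant, 1/m
    m_e_over_p = 1 / 1836.15   # electron-to-proton mass ratio
    m_e_over_d = 1 / 3670.48   # electron-to-deuteron mass ratio

    def balmer_alpha_nm(mass_ratio):
        R = R_inf / (1 + mass_ratio)            # finite-nuclear-mass Rydberg constant
        inv_wavelength = R * (1/2**2 - 1/3**2)  # Rydberg formula for n = 3 -> n = 2
        return 1e9 / inv_wavelength             # wavelength in nm

    h_line = balmer_alpha_nm(m_e_over_p)    # ~656.5 nm in vacuum
    d_line = balmer_alpha_nm(m_e_over_d)    # ~656.3 nm in vacuum
    print(h_line, d_line, h_line - d_line)  # shift of ~0.18 nm, matching 656.3 vs 656.1 in air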
Radon emission spectrum. I don't have a ref handy for the actual intensities of each. DMacks (talk) 16:18, 21 February 2012 (UTC)[reply]
I imagine that the answer to C) is that the central portion of the tube is narrower, making the discharge more intense. I am not sure that you are correct that Kr and Xe are brighter in the end sections than the centres, although the contrast between them is much less. They do have a bright area at the very ends of the tubes around the electrodes, but then so do all the other tubes. SpinningSpark 15:37, 22 February 2012 (UTC)[reply]
That's A). And I meant that for He, Ne, and Ar, the middle third of the tube is the brightest, but for Kr and Xe, the outer two thirds are the brightest. Whoop whoop pull up Bitching Betty | Averted crashes 16:45, 22 February 2012 (UTC)[reply]

event horizon

How far away would I have to be from the event horizon of a black hole with an event horizon of 100,000,000 light years so as not to get sucked into it? — Preceding unsigned comment added by 165.212.189.187 (talk) 15:47, 21 February 2012 (UTC)[reply]

That depends on how fast you're traveling and what direction you're going relative to the black hole. Whoop whoop pull up Bitching Betty | Averted crashes 16:46, 21 February 2012 (UTC)[reply]
...and on how much thrust your propulsion system can produce. (If you have a hyperdrive, leave this question blank). Gandalf61 (talk) 16:48, 21 February 2012 (UTC)[reply]

(edit conflict)See here. Whoop whoop pull up Bitching Betty | Averted crashes 16:50, 21 February 2012 (UTC)[reply]

If we make lots of simplifying assumptions (non-rotating black hole, ignoring relativistic corrections, etc.), the formula for the Schwarzschild radius tells us that the acceleration due to gravity at an event horizon of radius r is c²/(2r), where c is the speed of light. A quick back-of-an-envelope calculation shows that the acceleration due to gravity at an event horizon of radius 1 light year is half a light year per year squared, which is about 4.8 m/s², or roughly half g. As this falls off with 1/r, the equivalent figure for r = 100,000,000 light years will be very small indeed. Gandalf61 (talk) 17:06, 21 February 2012 (UTC)[reply]
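A quick check of those back-of-the-envelope figures (a sketch using the same Newtonian simplification g = c²/(2r) at the horizon; as the reply below notes, this breaks down close to the horizon):

    c = 299_792_458.0        # m/s
    light_year = 9.4607e15   # m

    def g_at_horizon(r_in_light_years):
        # Newtonian estimate of gravitational acceleration at a horizon of radius r
        r = r_in_light_years * light_year
        return c**2 / (2 * r)

    print(g_at_horizon(1))     # ~4.75 m/s^2, roughly half of Earth's g
    print(g_at_horizon(1e8))   # ~4.8e-8 m/s^2 for a 100,000,000 light year horizon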
I don't think you can ignore relativistic corrections in this case. :-) If your distance from the event horizon (as measured with a ruler) is d, and that distance is small compared to the Schwarzschild radius, then the acceleration needed to stay put is about c²/d. Note that this goes to infinity at the event horizon. -- BenRG (talk) 00:05, 22 February 2012 (UTC)[reply]

Wouldn't r=50,000,000 ly? So given the assumptions... how far away? — Preceding unsigned comment added by 165.212.189.187 (talk) 18:04, 21 February 2012 (UTC)[reply]

As noted, it depends on how fast you can accelerate and how far away from the black hole you are. Assuming two objects at rest with respect to each other (and handwaving everything else away), gravitational acceleration will always bring them together. Thus, there is no distance at which you can absolutely say "I will never fall into a black hole". — Lomn 19:01, 21 February 2012 (UTC)[reply]
Except infinity. Whoop whoop pull up Bitching Betty | Averted crashes 20:28, 21 February 2012 (UTC)[reply]
A diversion utterly irrelevant to answering the question. I will, however, note dark flow, a gravitational effect on objects we can see by still-more-distant objects which are forever beyond our light cone. You can't escape gravity by distance alone; the gravitational effects of any mass extend infinitely with no known or theorized means of artificially attenuating them. — Lomn 21:14, 21 February 2012 (UTC)[reply]
Dark flow is just an observed anomalous motion of galaxies. It may be systematic error or a fluke. The authors of the original paper speculated (probably incorrectly) that it's due to gravitational influence from matter beyond the source of the CMBR. That matter would be outside the visible universe by some definitions of that term, but it's still in our past light cone. Gravitation obeys causality just like everything else.
In a ΛCDM universe (where the expansion goes exponential and stays that way forever) it is possible to get so far away from a black hole that you can never fall in, even if you want to. The critical distance, given measured cosmological parameters, is (from memory) about 18 billion light years. -- BenRG (talk) 00:05, 22 February 2012 (UTC)[reply]
But you will fall towards the black hole very very slowly. If you start 1m away from the event horizon and not moving compared to it, it will take you over an hour to fall past the event horizon. Gandalf61 (talk) 19:15, 21 February 2012 (UTC)[reply]
No, it will take about 1/299792458 of a second. (Proper time. Of course, anyone who stays outside the event horizon will never see you cross it.) -- BenRG (talk) 00:05, 22 February 2012 (UTC)[reply]
What? If you start at rest, you aren't going to instantly accelerate to the speed of light... I haven't done the maths, but Gandalf's result looks plausible. --Tango (talk) 12:16, 22 February 2012 (UTC)[reply]
It's easy to see that Gandalf's answer isn't plausible with even a basic, qualitative understanding of black holes. If the event horizon was only very slowly accelerating or moving relative to the reference frame of an infalling object in freefall, then it would be very easy for a beam of light to travel from a point inside the event horizon to a point outside the event horizon, which of course isn't actually possible.
Ben's answer looks about right to me, at least in the right ballpark. In the reference frame of an infalling particle in freefall, the location of the event horizon is moving at the speed of light by the time the event horizon reaches the infalling particle. So yes, in this case it's 0 to c in 1m. You can't accelerate an object that fast, but if the infalling particle is in freefall, then nothing is actually accelerating. The infalling particle is moving at a constant velocity, which can be taken to be zero. It's actually the event horizon that's accelerating, and the event horizon is just a coordinate location with no physical object attached to it, so it doesn't take any energy for the event horizon to accelerate to c. Red Act (talk) 19:06, 22 February 2012 (UTC)[reply]
The event horizon is moving at the speed of light? Where do you get that from? In Schwarzschild coordinates, the event horizon is stationary unless mass is added to the black hole. If an object starts off at rest in Schwarzschild coordinates, just outside the event horizon, then it is at rest relative to the event horizon, it is not moving at the speed of light. --Tango (talk) 00:41, 23 February 2012 (UTC)[reply]
It's awkward at best to say that "in Schwarzschild coordinates, the event horizon is stationary", and just wrong to say "if an object starts off at rest in Schwarzschild coordinates, just outside the event horizon, then it is at rest relative to the event horizon". The problem is that near an event horizon, Schwarzschild coordinates are about as far as you can get from forming an inertial frame of reference. Schwarzschild coordinates have a coordinate singularity at the event horizon. Inside the event horizon, the roles of the Schwarzschild r and t parameters are reversed. Within a region of spacetime inside the event horizon that's small enough that it's approximately Minkowskian, two events that differ only in their Schwarzschild r coordinate are time-like separated, and two events that differ only in their Schwarzschild t coordinate are space-like separated. So it doesn't always work to just look at a curve such that dr/dt = 0 and say that it's describing the path of something that's stationary, because dr/dt isn't even a speed when r <= rs. If you'd like a reference, see section 31.3 of MTW.
Although quantities as measured in Schwarzschild coordinates become undefined at an event horizon, spacetime itself is actually perfectly well-behaved near an event horizon. I.e., spacetime in a small enough region around an event on an event horizon is approximately Minkowskian, and it's possible to define a local frame of reference that's approximately inertial there. Such a region could actually be fairly large in the case of a black hole with rs=100M light-years, because there'd be very little tidal force in such a case. But the event horizon isn't stationary in any such local inertial frame of reference, even momentarily, so you need to choose something else as defining the origin of the inertial frame of reference spatially. When dealing with an infalling object, the natural choice is to choose an inertial frame of reference in which the object is at rest, which is what I did.
If the speed of the event horizon remained subluminal as measured in any such local inertial frame of reference, then it would be possible for light to escape from inside the event horizon to outside of the event horizon. The local inertial frame of reference could be the proper frame of a small spacecraft whose forward end contains a laser that gets turned on very shortly after the front end of the spacecraft crosses the event horizon. If the event horizon just consisted of a set of points that's moving subluminally in that local inertial frame of reference, there'd be no reason why the light from that laser couldn't reach the back of the spacecraft before the back reached the event horizon, which would violate the whole idea of what an event horizon is. Red Act (talk) 05:57, 23 February 2012 (UTC)[reply]
The nearest an unpowered object can orbit a nonrotating black hole indefinitely is 1.5 times the event horizon radius. Closer than that, you need active propulsion to avoid falling in. -- BenRG (talk) 00:05, 22 February 2012 (UTC)[reply]

Wow! to every body else, Thanks to Ben.165.212.189.187 (talk) 15:29, 22 February 2012 (UTC)[reply]

Earth/Moon tidal locking - timescale

When did Earth's moon become tidally locked wrt the Earth? The tidal locking article gives a formula for calculating the time the locking will take, but without knowing the initial rotation speed of the Moon, that's not directly helpful. Given the different physical characteristics of the two hemispheres of the Moon, one might surmise that the locking is pretty ancient. Every source I've found simply describes the process whereby bodies come into lock, and doesn't give even the handwaviest guesstimate of when this happened. Given the somewhat uncertain origins of the Moon, does this simply mean that nobody really knows when they came into lock? I can't really think of a sensible experiment to resolve this, either. -- Finlay McWalterTalk 16:55, 21 February 2012 (UTC)[reply]

I'm always up for a bit of hand-wavy guessing. The article on tidal locking suggests a few simplifying assumptions we can make to reduce the formula to t_lock ≈ 6 a⁶ R μ / (m_s m_p²) × 10¹⁰ years (with the semi-major axis a and satellite radius R in metres, the masses in kilograms, and the rigidity μ in N/m²), thus eliminating the need for the initial rotation speed. Plugging in the values, I get a figure of somewhere in the region of 3.5 million years. This seems quite short to me, so please check my working, but it doesn't conflict with the claim here that "[The Moon] seems to have been tidally locked in its current position for more than 3.5 billion years" (if the Moon is assumed to be 3.5 to 4 bn years old), so it might be correct-ish. - Cucumber Mike (talk) 17:32, 21 February 2012 (UTC)[reply]
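A quick check of that figure (a sketch that plugs rounded Earth/Moon values into the simplified formula above, with the rigidity μ ≈ 3×10¹⁰ N/m² suggested for rocky bodies):

    # Simplified locking timescale: t ≈ 6 a^6 R mu / (m_s m_p^2) x 1e10 years
    a   = 3.844e8    # m, Earth-Moon semi-major axis
    R   = 1.737e6    # m, radius of the Moon
    mu  = 3e10       # N/m^2, rigidity assumed for a rocky body
    m_s = 7.35e22    # kg, mass of the Moon (the satellite)
    m_p = 5.97e24    # kg, mass of the Earth (the parent body)

    t_lock_years = 6 * a**6 * R * mu / (m_s * m_p**2) * 1e10
    print(t_lock_years)   # a few million years, consistent with the estimate above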
Good link! Just for future reference I'll snag the literature reference from that forum [4] which says that "We show that there is less than a 2% probability that the oldest lunar impact basins are randomly distributed across the lunar surface. Furthermore, these basins are preferentially located near the Moon's antapex of motion, and this configuration has less than a 0.3% probability of occurring by chance. We postulate that the current “near side” of the Moon was in fact its “far side” when the oldest basins formed. One basin with the required size and temporal characteristics to account for a 180° reorientation is the Smythii basin." Ai, redlinks! 4 million articles and still not enough! Wnt (talk) 20:01, 21 February 2012 (UTC)[reply]
Thanks Mike, that's just the kind of thing I was looking for. -- Finlay McWalterTalk 12:19, 23 February 2012 (UTC)[reply]
Perhaps the time since the last rotation of the Moon relative to Earth can be discerned from the amount of wobble remaining (presuming that the wobble attenuates over time) ? Of course, having been struck by meteors since then might have introduced new wobbles, complicating matters. StuRat (talk) 19:59, 21 February 2012 (UTC)[reply]
Libration#Lunar libration gives the causes of the "wobble" and they aren't things that I would expect to attenuate. Remember, there isn't friction and air resistance and things in space, so your usual intuition about damped oscillations doesn't apply. --Tango (talk) 22:39, 21 February 2012 (UTC)[reply]
Won't the same force that causes tidal locking (the heaviest part of the Moon wanting to always face the Earth) continue to attenuate the wobble over time ? StuRat (talk) 04:59, 22 February 2012 (UTC)[reply]
That's not what causes tidal locking. I suggest you read the articles. --Tango (talk) 12:11, 22 February 2012 (UTC)[reply]
Since both the Earth and Moon have a tidal bulge, meaning more gravitational attraction is experienced towards the bulge, any attempt by either body to rotate, even slightly, from the position where the two bulges point towards each other should be resisted. This should continue, although to a smaller degree, no matter how little the wobble is. Thus any wobble should be reduced until this force is countered by equal "pro-wobble" forces. Have we reached that equilibrium point yet ? StuRat (talk) 18:38, 22 February 2012 (UTC)[reply]

Living planet

Would a living planet be possible? Whoop whoop pull up Bitching Betty | Averted crashes 22:49, 21 February 2012 (UTC)[reply]

Isn't that what the Gaia hypothesis is all about? Viriditas (talk) 22:51, 21 February 2012 (UTC)[reply]
I can imagine a situation where one very successful organism (say a plant we accidentally introduce to a planet we visit) covers the entire surface of a planet, except the poles, perhaps. Not quite the same as the planet being alive, but close. StuRat (talk) 04:57, 22 February 2012 (UTC)[reply]
Microbial life already does that. Viriditas (talk) 05:00, 22 February 2012 (UTC)[reply]
I mean a single organism, like a giant plant, not trillions of separate organisms. StuRat (talk) 05:04, 22 February 2012 (UTC)[reply]
What about a colonial plant, like the quaking aspen? Plasmic Physics (talk) 09:06, 22 February 2012 (UTC)[reply]
I can see an extremely successful fungus-like organism spreading over an entire planet's surface. Seems unlikely that any organism could spread through a planet's core, though I suppose not impossible. Goodbye Galaxy (talk) 14:33, 22 February 2012 (UTC)[reply]
The Earth may well be described as a living planet per the Gaia hypothesis; but if we put that aside and consider something more akin to The Immunity Syndrome (Star Trek: The Original Series), the problems are more conceptual than physical. After all, ordinary comets, if low in rock content, are made of the right ingredients to compose familiar living organisms; if one possessed a method for maintaining a sturdy "membrane" around itself (perhaps aided by being particularly massive) it could retain its gasses and expel its waste ingredients. Solar energy could maintain a slow metabolism. It might even (very very slowly) propel itself by releasing gas, and have the intelligence to spot other comets which, every once in a very long while when it was very lucky, it might try to devour.
The problem, of course, is how does this come to exist? In something the size of a planet, if there are multiple genomes scattered around which can independently change, then they take up arms against one another and define themselves as independent organisms, i.e. cancer. But if there is only one central genome that controls everything, competing only against itself, how can it evolve? In any case behaviors like rocket jetting and solar collecting need to be established somehow, and trial and error is unlikely to be an option. So in the absence of evolution we have to turn to creationist explanations - somebody made the thing as a spaceship, habitat, etc. Such a thing is plausible as an advanced biotechnology, though not necessarily sensible. Wnt (talk) 15:52, 22 February 2012 (UTC)[reply]
Considering such an organism, probably a big fungus-like colony organism that's developed on an earth-like planet - it's going to have to get its energy from the system's star (just like earth's ecology is ultimately almost 100% solar powered), so it would make sense to cover the surface of a planet. However I'd doubt there'd be much point in the organism extending deep into the planet; there probably wouldn't be enough exploitable energy there to be worth the energy cost of penetrating the rock. LukeSurl t c 18:38, 22 February 2012 (UTC)[reply]
Another scenario that's even more sci-fi - an advanced civilisation engineers a living entity they place in a stellar orbit in a dusty region. It's designed to get energy from the star, and gain its nutrition from the accretion of the dust. Let's say our aliens did a really good job in the engineering/got super lucky/expertly tended to the lifeform for eons, such that at no point does the lifeform die. Theoretically you could have a living entity that's slowly growing and eventually becomes planet sized - though I'd imagine the inner parts would die for the reasons discussed above, and you'd have a living shell surrounding a non-living core. Crazy speculation I know, but hey, it's fun. LukeSurl t c 18:51, 22 February 2012 (UTC)[reply]
(ec) In the computer game Alpha Centauri, also known as Civ in Space, the colonists find large patches of pink fungus all over the planet. As the game progresses, the player eventually finds that (spoiler!) the fungus is a single sentient organism, so every time the player has been tearing up fungus, it's as though the player had been tearing out neurons from a living brain. This has been making the organism angry. Comet Tuttle (talk) 18:57, 22 February 2012 (UTC)[reply]
(ec)Note that the simpler colonial organisms described closely follow Armillaria solidipes and Pando (tree) - however, our colonial organisms inevitably reach a size and lifespan where they can no longer effectively evolve. I think that inevitably they must give way to, or at least hybridize with, their smaller cousins in order to maintain the genetic capability to survive, much less compete.
As for an entity feeding on dust, the main issue there is that anything it hits will be going very, very fast. Either it has to be made out of adamantium, or it has to be able to consciously match courses with its "prey" using a good knowledge of orbital mechanics. Wnt (talk) 19:00, 22 February 2012 (UTC)[reply]
Regarding the lack of evolution making it unable to compete, that's why I proposed a scenario where an already evolved life form is introduced to a new planet, with ideal conditions, but no competition. Just as rabbits overran Australia, you might find that one plant is able to take over the new planet. A changing climate might still be a problem, especially if the plant manages to change the climate itself. Perhaps it could spread more towards the poles or the tropics, as conditions change.
As for feeding on space dust, in an early solar system with an accretion disk, most of the dust will hopefully be going about the same speed and direction as the life form. StuRat (talk) 21:30, 22 February 2012 (UTC)[reply]

Mars analogue mission and food study

Why is this study being done?[5] I thought we already had most of this data. Mary Roach spends half her book talking about it in Packing for Mars: The Curious Science of Life in the Void (2010). Viriditas (talk) 22:49, 21 February 2012 (UTC)[reply]

It looks to be a sort of special internship program at the University of Hawaii NASA Astrobiology Institute. They've got a little bit of a sensationalized press release, and they're pitching it as a "Mars Mission" simulation, but if you look closely, their applicant prerequisites almost exclusively single out hard-sciences Ph.D. students who are about half-way through their degree program. The study's scientific objectives were made pretty clear in the link you sent. The same research group also runs a summer workshop in Iceland, "Water, Ice and the Origin of Life in the Universe". Nimur (talk) 02:07, 22 February 2012 (UTC)[reply]
They know the answers to most of these questions already, so it seems like they are repeating past experiments. In Roach's book, the previous experiments took place in hospital beds and labs, so this is more of an in situ experiment. However, one would also expect to see them test inflatable habs or modules like MARS-500 with aeroponic devices. After all, the long term solution to the problem described in the study is fresh food. And if they are going to send the habs in advance to areas of Mars that have subsurface water, why not send aeroponic habs staffed by agricultural robots, first? Viriditas (talk) 04:48, 22 February 2012 (UTC)[reply]