Wikipedia:Reference desk/Archives/Science/2015 September 15

= September 15 =

information re subject listed on wikipedia
I am researching drugs related to shallow angle glaucoma. One, xylometazoline, is marketed under many names in different countries, naturally, and by several different companies. A product, Otrivin, is a nasal spray, and seems to be the only one that carries a warning about the effects of this drug. This is not mentioned in the info on Wikipedia. My question is: is it appropriate for me to do research and then list it on the site, with the research history? I ask because I am researching a case where someone has permanent eye impairment due to a lack of appropriate/necessary information. thanks Suzanne — Preceding unsigned comment added by 115.188.152.155 (talk) 00:18, 15 September 2015 (UTC)


 * It depends on how you are doing your research. Wikipedia does not accept original research, for instance. But if by "research" you mean reading published literature, and compiling information, that would be acceptable if the sources are good. See the guideline on reliable sourcing for medical topics. Someguy1221 (talk) 00:42, 15 September 2015 (UTC)

can durex lube be used for an ultrasound?
or does it have to be a special lube?--Hijodetenerife (talk) 02:36, 15 September 2015 (UTC)
 * As far as I can tell there is no such thing as "durex lube"; Durex is a company that makes a wide range of lubricants, water- and non-water-based, with flavors, colors and other additives. Ultrasound gel is used to provide an air-free bond between the ultrasound probe and the skin. In my experience, personal lubricants are "waterier" than ultrasound gel and would spread and disperse more quickly. IMHO it would probably work, but not as well. Vespine (talk) 03:58, 15 September 2015 (UTC)

Data analysis: olive oil cuts risk of cancer by 62%
Does olive oil reduce the risk of cancer by 62%? What could the authors of the study (http://www.latimes.com/science/sciencenow/la-sci-sn-mediterranean-diet-olive-oil-breast-cancer-risk-20150914-story.html) have messed up? Disregarding fabrication of data (assuming good faith), and with a sample of 4,282 women and 35 cases of breast cancer, could that all just be a coincidence? --Scicurious (talk) 07:15, 15 September 2015 (UTC)


 * They concede that more trials are needed. "In particular, they wrote, future studies should include more women and more cases of breast cancer. All of the women in the PREDIMED study were white, between the ages of 60 and 80 and had Type 2 diabetes or at least three risk factors for cardiovascular disease, such as high blood pressure, too much 'bad' cholesterol or a history of smoking. That means these findings may not apply to women who are younger, in better health or from other racial groups." Bus stop (talk) 07:43, 15 September 2015 (UTC)


 * Interesting paper. They did control for weight, which was my first concern, among other factors such as smoking, drinking, age and family history. From the data itself, I think the most curious thing about it is that the participants in the olive oil group had the lowest incidence of cancer no matter how well they stuck to the diet. That's a thing that simply shouldn't happen, but I don't see any obvious reason for this. The authors claim a dose response, but I don't believe the data actually shows this with confidence. Regarding the chance of those 35 cases of breast cancer distributing as they did, it would be very unlikely for this to happen by chance. I do feel like a six year study period is too short. The development time of many cancers from formation of the primary tumor to the appearance of symptoms can be twice that long, which makes me wonder if they're looking at something else - not cancer risk, but detectability or something along those lines. Someguy1221 (talk) 07:44, 15 September 2015 (UTC)
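As a rough numerical illustration of that last point (a toy binomial check of my own, not the Cox proportional-hazards analysis the study actually used — the one-third group split and the figure of 8 cases in the olive-oil arm are assumptions for the sake of the example): if cancer risk were equal across arms, each of the 35 cases would land in a one-third-sized arm with probability 1/3, and the chance of seeing 8 or fewer there can be computed exactly.

```python
from math import comb

def binom_tail_le(k_max, n, p):
    """Exact P(X <= k_max) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_max + 1))

# Hypothetical numbers: 35 total cases, olive-oil arm holds ~1/3 of the
# 4,282 women, and (assumed) 8 of those cases occurred in that arm.
p_low = binom_tail_le(8, 35, 1/3)
print(f"P(8 or fewer of 35 cases in one arm by chance) = {p_low:.3f}")
```

A naive count comparison like this comes out around 0.13 — not decisive on its own, which is exactly why the study's survival analysis (accounting for person-years of follow-up and covariates) is the thing to read rather than raw case counts.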


 * I always want to ask "versus what ?". They just said "versus a regular low-fat diet".  But, they also said "The women in the extra virgin olive oil-heavy Mediterranean diet group got 22% of their total calories from the oil, on average".  This makes me want to know where the control group got the calories to replace that 22%.  For example, if they ate more animal meat, then that could have caused more breast cancer in that group. StuRat (talk) 16:14, 15 September 2015 (UTC)
 * These are all valid questions, and the answers are not clear if you just read the popsci coverage like the LA Times link above. If you want to know the answers, you have to be willing to do some additional reading. The research article in question is freely accessible, and even linked from the LA Times link above. Dietary information for the groups is described and summarized in the supplementary content "eTable", available here. If you want to know more than what is published in the article or the supplementary material, or the other published papers documenting the methods and results of the PREDIMED trial, then you'll be better off asking the researchers than anyone here. SemanticMantis (talk) 21:43, 15 September 2015 (UTC)


 * According to prof. dr. Martijn B. Katan (one of the most cited authors in his field), evidence about nutrition is messy and "one study means no studies", i.e. only many studies together provide enough evidence for health claims in nutrition. Source: his book, Wat is nu gezond? He said, e.g., that there are preliminary results suggesting that chocolate and red wine may be healthy, but we will know for sure only 20 or 30 years later. That's the time required for doing proper research on such claims. Tgeorgescu (talk) 01:32, 16 September 2015 (UTC)
 * Scicurious, maybe Keys was the one that messed up, "What if It's All Been a Big Fat Lie?" --Stefan-S talk 14:50, 16 September 2015 (UTC)


 * I saw a documentary about an experiment on two MDs (twins). One was forbidden to eat sugars and the other was forbidden to eat fats. At the end of the experiment, they were both worse off than before, with the fat-eating MD more dangerously so. The idea (supported by Katan's book) is to have diverse, balanced meals and avoid one-sided eating. Tgeorgescu (talk) 03:51, 17 September 2015 (UTC)

Static shock while cycling under a power line
Last Saturday, I cycled under a high-voltage power line, close to the pylon supporting it. As I passed under it, I heard a faint static crackling sound, and when I inadvertently touched the metal of the handlebar, I got a slight shock. I think I've received such a shock every time I've cycled under this particular power line, but never when I've cycled under any other power line. What would cause this (and why only from this particular power line)? (If it is relevant, the road was on a bridge). Iapetus (talk) 10:01, 15 September 2015 (UTC)
 * Power lines carry high-voltage AC, which means that they radiate quite strong electric/magnetic fields at 50 Hz. These move electrons around your body, so you get electric charges forming on different parts of your body (this is called electrostatic induction). Cyclists are surprisingly well electrically insulated - they sit on rubber seats, hold rubber grips and cycle on rubber pedals - so the charge has nowhere to go... until you touch some metal, and electrons rush away to balance the charges of the two objects. You can read about how the British National Grid tries to prevent so-called "microshocks" here, although ultimately the only answer is "keep the electrical wires high up". Presumably the bridge lifts you closer to the power lines than you would otherwise be. However, what they recommend is making sure you're touching the metal before going near the power lines. That way, the charge is continuously discharged in a very weak current, rather than in a single spark. (It's also worth noting that cyclists will occasionally get shocks anyway, even if they don't go anywhere near power lines, if they happen to be wearing clothes that easily generate a charge, like certain synthetics do.) Next time you go through in the dark, try taking a compact fluorescent light bulb with you. If the EM fields are especially strong here, the bulb should glow – you can then compare this to the glow seen under other power lines. Smurrayinchester 14:01, 15 September 2015 (UTC)
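For a sense of scale, here is a back-of-the-envelope estimate of the spark energy — the ~150 pF body capacitance and the few-kilovolt induced potential are assumed round numbers of my own, not measurements from the National Grid page:

```python
# E = (1/2) C V^2 for a capacitor discharge: the insulated cyclist acts
# roughly as one plate of a capacitor charged by induction under the line.
C_body = 150e-12    # farads; assumed typical human body capacitance
V_induced = 3000.0  # volts; assumed induced potential (varies widely)

energy_j = 0.5 * C_body * V_induced**2
print(f"Spark energy ~ {energy_j * 1e3:.2f} mJ")  # well under a joule
```

A fraction of a millijoule is enough to feel as a sting but far below any harmful level, which fits the "microshock" description above.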
 * I appreciate this question being asked. I've only a few times gotten minor shocks from the bicycle, and every time it's been while I was riding under a high-voltage power line.  I always suspected that the local atmosphere was electric, but I never bothered to research it; I just learnt not to touch the metal while under high-voltage power lines.  Nyttend (talk) 00:15, 16 September 2015 (UTC)
 * I think it's more likely that your head is closer to the power line than the bike is. The shock raises the bike's potential to that of your head. In some countries the voltage is high enough that birds cannot land on the lines, as the electric field is strong enough to kill them. Helicopters that bring linemen to inspect power lines use a pointed wand to reduce the field to zero. It draws a 3-foot arc as it's applied and removed. Watch this crazy job. --DHeyward (talk) 06:26, 16 September 2015 (UTC)

Warning markings mimicry - why don't all animals do it?
I have a question about warning colouration in animals. Some animals advertise their dangerousness (poisons etc) with bold colour markings. Other animals take advantage of this with bold colour markings of their own. My question: why haven't all of the animals in that environment adapted with their own fake warning colouration? It seems a very cheap way of avoiding predation (changing a few colour pigments vs manufacturing poisons, among other things). This would of course result in the signal being muddied in so much noise that the markings would no longer have any purpose. But evolution does not have that kind of foresight, so that's not the reason. Or is it possible that this does happen, and all the examples of warning coloration are an evolutionary 'snapshot', before all the other animals exploit the signal and it becomes useless? 129.96.86.179 (talk) 10:39, 15 September 2015 (UTC)
 * I think your last point is the answer, or close to it. If mimicry is useful, then (more) animals may evolve it.  But the more animals that do, the less meaningful the colours become.  If too many animals end up evolving warning colours, then one or more of the following are likely to happen:
 * a) intelligent predators (those capable of learning) will realise that warning colours are meaningless and ignore them. This will obviously result in some of them eating poisonous animals.  Depending on how damaging this is, they may either evolve resistance, evolve an ability to distinguish between the real and mimicked dangers, or decline due to too many of them getting killed or injured.
 * b) intelligent and observant predators will learn to distinguish real hazards (e.g. wasps) from fakes (e.g. hoverflies).
 * c) unintelligent predators (or those that are genetically hard-wired to avoid warning colours) will avoid everything (and therefore starve to death, go extinct, and be replaced by predators that follow a more useful strategy).
 * d) Any predator that currently ignores warning colours (whether due to stupidity or immunity) will carry on as before.
 * Also bear in mind that warning colours do have a cost - you stand out more, so are more likely to be eaten by predators that can resist your (alleged) poison, as well as those that haven't yet learned to avoid it.
 * The net result of all this is that there comes a point where evolving warning colours wouldn't help an organism to survive, and may even be counter-productive. At that point, any mutant individual that has gained warning colours will be less likely than its drab peers to pass on its mutant genes, so the species as a whole will not evolve mimicry.
 * I'm not sure what is a good reference for this, but Dawkins's The Selfish Gene has a lot about game theory and how this can affect what strategies evolve, how they can change, and particularly how what is useful depends on what everyone else is doing. On Wikipedia, Evolutionary game theory may be useful - and of course Mimicry. Iapetus (talk) 12:05, 15 September 2015 (UTC)


 * For general reference, this phenomenon is known as Batesian mimicry. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 13:16, 15 September 2015 (UTC)

The more mimic organisms (the harmless prey) there are, the less useful the markings are for the model organism (the organism with powerful defenses). In other words, it's cheap for the mimic, but it's harmful for the model organism. It's also not a question of intent. In apostatic selection, it's the predator that dictates how their prey evolves. Aposematism (warning coloration) depends on one very important thing: that the predator remembers that the prey tastes bad or is poisonous. The easier it is for predators to remember, the better. But when predators eat mimics that look like the model organism but don't taste bad, they get conflicting signals. The model organism loses the protection it gained from the warning coloration, because predators will start eating it too, due to previous experience from eating the mimics.

This is reflected in the other kind of mimicry, Müllerian, where two or more organisms that are BOTH unpalatable/poisonous/dangerous begin to look like each other, again due to the way predators choose prey. It's easier for predators to remember when the "things I should not eat" all look alike. The ones which look different get eaten; the ones which look like the ones they remember to be bad get ignored.

Prey organisms thus tend to form mimicry complexes, where bad-tasting prey evolve to look like each other, while they in turn are mimicked by other harmless organisms. The effectiveness of a mimicry complex relies on the ratio between the model organisms and the mimics. The more models, the more effective the complex is. So as soon as there are more mimics than models, they begin to evolve away from each other again, as predators once again learn to distinguish between them.-- O BSIDIAN  †  S OUL  13:47, 15 September 2015 (UTC)
 * Note that like Iapetus said, predators "remember"/"learn" etc. through either true memory in an individual (in higher animals) or through selective pressure (i.e. they begin to prefer prey with certain appearances after generations of trial and error get hardcoded into their genes) -- O BSIDIAN  †  S OUL  14:07, 15 September 2015 (UTC)
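The ratio effect described above can be sketched in a few lines — a toy model of my own, with the deliberate simplification that a predator's learned avoidance of a pattern equals the fraction of pattern-bearers that are actually defended:

```python
def avoidance(models, mimics):
    """Toy model: fraction of attacks the warning pattern deters,
    assumed equal to the share of pattern-bearers that taste bad."""
    return models / (models + mimics)

for mimics in (0, 50, 100, 200, 400):
    print(f"{mimics:3d} mimics per 100 models -> "
          f"avoidance {avoidance(100, mimics):.2f}")
```

Once mimics outnumber models (avoidance below 0.5), wearing the pattern stops paying, and selection starts favouring divergence — the dynamic described above for mimicry complexes.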


 * Note that evolution is a truism. It has no predictive value as every observation can/will be explained even if the observations are completely opposite.  --DHeyward (talk) 08:07, 16 September 2015 (UTC)


 * Seriously? Do you have any evidence whatsoever to support your utter bullshit? μηδείς (talk) 03:01, 17 September 2015 (UTC)


 * Seriously, use evolution to predict something. There is nothing.  Now, using evolution, describe something.  Pretty much every living organism can be described with evolution.  Evolution is not Mendelian genetics, though.  You can find many prey species that evolved different anti-predation strategies even though the predator is the same.   Zebras have stripes and we use evolution to describe why they have them.  Gazelles do not have stripes but again evolution explains their anti-predation adaptations. Evolution is like a breadcrumb trail: it shows the way back but gives very few clues to the future.  --DHeyward (talk) 21:28, 17 September 2015 (UTC)


 * Evolution is not random. The fact that we can explain the path of evolution as it applied in the past means that there are patterns. Ones we can recognize. It may not be as simplistic as Mendelian genetics, but it's there. And patterns mean that we can model them. That's exactly what the scientific method is in the first place. Modeling conditions to discover a pattern that can then be assumed to apply when similar conditions occur. Experiments and theories.


 * It's not a question of impossibility, it's a question of accuracy. How accurately we can model the conditions determines how accurate the prediction is for future patterns. And that's pretty much true for everything from discovering new celestial bodies in astronomy to forecasting the weather. We may not be presently capable of saying something like "zebras will develop horns in the next 15,000 years", true, but it doesn't mean it's impossible to predict the evolution of zebras. Nor does it mean that the evolutionary theory is only applicable to a kind of retconning.


 * As this New Scientist article puts it: It might not be possible to predict exactly what life will look like in a billion years, but what counts are the predictions that can be made.


 * Also see --  O BSIDIAN  †  S OUL  22:13, 17 September 2015 (UTC)


 * I didn't imply it is random or not science, just that any result always fits the theory. It's easy to show evolution exists.   The paper you cited "Eusociality in the Naked Mole Rat" is interesting because it highlights a number of things.  One is that not all vertebrates that meet all his criteria are eusocial, and therefore finding species that conform or don't conform doesn't void his hypothesis.  In fact his first 5 conditions seem to be applicable to modern humans.  Based on that, I can predict that humans may evolve into eusocial creatures, but if they don't, there will be an evolutionary reason to explain either course.  This is a lot like weather prediction.  Tropical cyclones, for example, are well understood in hindsight.  We know why they form, why they dissipate and a number of other things.  We don't know when they will form, though, or where or how many with any great skill beyond 5 days.  Evolution is a science with extremely low forecasting skill.  It tells a lot about how we are related to other organisms and how every organism has evolved and adapted.  Natural selection and adaptation provides lots of broad insight (i.e. that MRSA bacteria would emerge, etc).  It becomes a truism, though, because the forecast skill is so poor that any forecast fits the theory.  Zebras developing horns or antlers is a good example - whether they do or not - still fits the model.  Zebra stripes are explained, but spots would be just as explainable.  It doesn't make it less scientific, just that, by definition, every observable characteristic can be explained in hindsight.  But also the lack of that characteristic can be explained as well.  Its complexity, huge variety of life, and multitude of possibilities are what make forecasting so difficult and hindcasting much easier.  --DHeyward (talk) 06:03, 18 September 2015 (UTC)


 * Again, by that characterization, then all science is truism. To put it in the simplest terms possible: science (including evolutionary biology) is about both observation and prediction. It's the very nature of science to predict things. Not only about the future but about the unknown. Every single hypothesis is a prediction. Their likelihood to be true is what experiments are for.


 * The progress of science itself is measured by how accurate our models have become. It will never be 100% accurate in most cases, but the goal is to get as close as possible. Because again, predictions are not prophecies, they're likelihoods. It is not a question of whether something will happen, but the probability of it happening. So yes, "broad insights" are predictions, as they give you an idea of what is most likely to happen, whether or not it happens. For example, you have an 83.3% chance of not rolling a six with one die. But then you threw it and it came up a six. Does that invalidate the earlier likelihood? No it does not. The ability to explain things in hindsight is thus also irrelevant. Everything can be explained in hindsight, whether or not it followed the predicted most likely path. Just because you did roll a six doesn't mean the chance of not rolling a six was really 0% and not 83.3%.
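The dice point is easy to check numerically — a quick sketch (seeded so the run is reproducible; the only claim is that individual sixes coexist with a 5/6 long-run frequency):

```python
import random
from fractions import Fraction

p_not_six = Fraction(5, 6)   # the "83.3%" chance of not rolling a six
print(float(p_not_six))      # 0.8333...

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]
freq = sum(r != 6 for r in rolls) / len(rolls)
print(f"observed non-six frequency over 100,000 rolls: {freq:.3f}")
# sixes do occur, yet the overall frequency sits right at 5/6
```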


 * From what I can tell, most of your arguments seem to hinge on one thing: that evolution is too complex in comparison to other processes to ever be reliably predicted. While it's true that evolution is complicated, that comparison is also arbitrary. The difference between the accuracy of the predictions between, say, eclipses and earthquakes is not an unbridgeable gap. It's a reflection of what we have discovered and what we haven't... yet. While we may not be able to say with confidence that zebras will evolve horns in the future, we do know that zebras are far more likely to grow horns than they are to evolve photosynthetic capability (though again, even if they did evolve photosynthesis, it does not in any way invalidate the earlier predicted likelihood).


 * Not to mention, I think you underestimate the "predictive value" of evolution too much. One of the most widely used methods in evolutionary biology is phylogenetic bracketing, where unknown traits are inferred based on known traits from their closest relatives in the phylogenetic tree. It's not 100% accurate, but it gets things right more often than it gets them wrong. And that is what matters.
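Phylogenetic bracketing amounts to a simple decision rule, sketched below — the relatives and the trait here are made-up placeholders, not real data:

```python
def bracket_infer(trait_in_relative_a, trait_in_relative_b):
    """Infer an unobserved trait in a taxon from its two closest
    relatives in the tree.  Agreement between the brackets gives a
    confident inference; disagreement leaves it equivocal (None)."""
    if trait_in_relative_a == trait_in_relative_b:
        return trait_in_relative_a
    return None

# e.g. both bracketing relatives of a fossil share some soft-tissue trait:
print(bracket_infer(True, True))    # True -> infer the fossil had it too
print(bracket_infer(True, False))   # None -> the bracket alone can't decide
```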


 * I should have linked this more prominently before, but here — they may have explained it better: https://www.newscientist.com/article/dn13677-evolution-myths-evolution-is-not-predictive/


 * Anyway, this is getting too off-track for the OP's question, so I'll stop. :P -- O BSIDIAN  †  S OUL  10:58, 18 September 2015 (UTC)


 * Hi all, OP here. Thanks for the detailed answers. The stuff about mimicry complexes looks particularly interesting. Although DHeyward, I think you are quite wrong when you say 'every observation can/will be explained even if the the observations are completely opposite'. If (for example) there were animals with predation-deterring warning colouration, and there were NO animals at all that exploited this with mimicry, this would be VERY difficult to provide an evolutionary explanation for. Therefore, not all observations are equally explicable via evolution. "Just-so stories" are real, but (as a layperson) I do think those concerns can be exaggerated. 129.96.87.62 (talk) 03:03, 17 September 2015 (UTC)
 * There is plenty of literature regarding the dynamics of mimicry complexes/rings if it interests you. How they evolve and diverge is quite fascinating. For example, in some cases, Batesian mimics actually evolve their own defenses (though usually lesser than the original model organism) and thus "graduate" to becoming Müllerian mimics. In essence switching from a parasitic relationship to a mutualistic one. Thus gaining the protection of the mimicry complex, while not completely devaluing its usefulness. There are also alternative hypotheses on the evolutionary origins of aposematic coloration, including inducing predator neophobia rather than experience.-- O BSIDIAN  †  S OUL  03:55, 17 September 2015 (UTC)

Dopamine chemistry
I'm currently reworking the dopamine article with the aim of bringing it to GA and perhaps eventually FA status. I wrote a short section on the chemistry of dopamine, but it isn't satisfactory and I'm not sure I'm capable of making it so. The basic problem is that chemistry is perhaps my weakest area. More explicitly, there are two things I could use help with: (1) Just reading it over to make sure that the material makes sense to a person who understands chemistry. (2) Sourcing. I scraped up the information from various places, but I don't know of a good Wikipedia-quality source. I suspect that all the information there can be found in a good biochemistry textbook, but I don't have easy access to any. If anybody could assist me with this, I'd be grateful. (We probably don't want more detail than the stuff I wrote -- I just want to make sure it's correct and appropriately referenced.) Looie496 (talk) 13:39, 15 September 2015 (UTC)
 * I would suggest looking for an active member of Wikiproject Chemistry here: WikiProject Chemistry/Participants and asking if they would be willing/available to help you out. Somebody from the overlapping (in this case) project WikiProject Pharmacology/Participants may also be able to provide assistance.--William Thweatt TalkContribs 05:41, 17 September 2015 (UTC)


 * I'm not a chemist, but giving it a brief glance the statement "dopamine is the simplest possible catecholamine" doesn't seem correct. For example, 3,4-dihydroxybenzylamine has one less carbon atom in the amine chain and thus would be a "simpler" catecholamine. --Paul_012 (talk) 08:41, 17 September 2015 (UTC)


 * Thanks for the suggestions, William and Paul. Looie496 (talk) 11:57, 18 September 2015 (UTC)

Would bread spoil if it is placed in a 100% sterile environment?
Would it be able to last forever? Or would it just go stale? Is there a difference between staleness and spoilage? What do people mean when something "goes bad"? 140.254.70.25 (talk) 14:47, 15 September 2015 (UTC)


 * Staleness comes from crystallization of certain compounds within the bread. Bread requires no bacteria to go stale, just time.  Staling covers the process.  -- Jayron 32 14:51, 15 September 2015 (UTC)


 * Bread also contains oils, which will go rancid. Bacteria can cause oils to go rancid, but it's not the only way. StuRat (talk) 14:58, 15 September 2015 (UTC)


 * When DSV Alvin famously sank in 1968, the crew's bologna sandwiches were on board the craft. It sank to the seafloor, about 1500 meters deep, in the anoxic zone, where cold water and very low oxygen make for a very hostile environment.  The submarine was recovered about a year later... and the sandwiches were intact!  Here's an explanation from the SEAS "Ask-a-Scientist" program, sponsored by the National Science Foundation:
 * Why didn't the sandwiches decompose? Well, one reason is that they had been wrapped and packed inside a box, which prevented them from quickly being eaten by fish, crabs, or other animals. But why didn't bacteria start to decompose them? There could be several causes, including 1) there may not be many bacteria in areas of the deep sea away from hydrothermal vents, and 2) near-freezing temperatures in the deep sea may cause bacterial decomposition there to occur very, very slowly.
 * Nimur (talk) 15:36, 15 September 2015 (UTC)


 * Military-style Meals, Ready-to-Eat (MREs) often include sandwiches that last 3 years. However, Spock might be right when he said:  "It's bread, Jim, but not as we know it."--Aspro (talk) 15:44, 15 September 2015 (UTC)


 * You can tell anyone who's ever lived in Boston by the fact that they don't freak out at canned bread, which keeps virtually forever until you open the can. The origin story the tour guides like to tell is that so many of the stricter religious groups in Massachusetts forbade baking on religious holidays, enterprising bakers took to canning their bread to preserve it; I do not for one minute believe this, since even the most bone-headed puritan could have figured out "bake it the day before", and I strongly suspect it originated as a food for ships' crews. ‑ iridescent 16:28, 15 September 2015 (UTC)
 * Note that the Puritans rather strongly rejected religious holidays except for Sundays. The tour-guide story is further stupidified because there's a relevant biblical account (Exodus 16, beginning around verse 22), in which the Israelites were outright required to do more work on Friday to store extra food for the sabbath.  This is a well-known portion of the Torah/Pentateuch, and most of the children would have been familiar with it, as well as the adults; it's not some obscure statement hidden in an otherwise on-another-topic prophecy. Nyttend (talk) 17:05, 15 September 2015 (UTC)
 * Above the nutrition label at that link it says it "Contains Diary". Well, I certainly don't want bread made from shredded diary pages.  I might get diarrhea (or, if they shredded a ship's log instead of a diary, I might get logorrhea). StuRat (talk) 16:38, 15 September 2015 (UTC)

Once the yeast has done its work, it is killed by baking. It leaves behind food for mould, which contaminates the bread from the air around it. Because the moisture content of bread is high, the mould can grow; it finds good conditions in the bread. Cookies and pastry that are kept dry expire much later, and are also preserved by their sugar content. Burger buns are sweet as well. -- Hans Haase (有问题吗) 17:32, 15 September 2015 (UTC)
 * A neat trick is to take stale chocolate chip cookies and put them in a plastic bag with a piece of bread. Voila, overnight the cookies are magically healed.  --DHeyward (talk) 06:55, 16 September 2015 (UTC)


 * It certainly is a nice trick, but I have never known chocolate chip cookies to survive being eaten long enough to ever go stale. Do you have to lock them away in a safe vault first? Perhaps we ought to exchange homes if you suffer from stale cookies. You can also use my gym membership card, as I can't be bothered to walk there any more since I always come home to an empty fridge.--Aspro (talk) 17:07, 16 September 2015 (UTC)
 * I grew up doing things in the other order: put a few random-sized pieces of bread in a cookie container, and they can keep the cookies soft for at least as many days (often more than a week) as it took my family and me to eat the cookies. Nyttend (talk) 23:26, 17 September 2015 (UTC)

Total tooth extraction
I had a wisdom tooth pulled today, due to problems caused by improper eruption, so of course I'm temporarily prohibited from doing any chewing on that side and a bit restricted in what I can drink, but I'm allowed to chew food on the other side. I've heard of people having all teeth pulled in one sitting (apparently to facilitate the installation of dentures), which sounds rather inconvenient in the short term: how would you eat? Do you get put on IVs for a while? Do you just have to gorge yourself beforehand, so that you're not hungry later? Nyttend (talk) 16:56, 15 September 2015 (UTC)


 * From the lede of our article liquid diet: A liquid diet is a diet that mostly consists of liquids, or soft foods that melt at room temperature (such as gelatin and ice cream). A liquid diet usually helps provide sufficient hydration, helps maintain electrolyte balance, and is often prescribed for people when solid food diets are not recommended, such as for people who suffer with gastrointestinal illness or damage, or before or after certain types of medical tests or surgeries involving the mouth or the digestive tract. -- ToE 17:04, 15 September 2015 (UTC)


 * (ec) You would have to use a blender and a straw. Not pleasant if you try to eat a hamburger that way, but not bad if you stick to liquid meal replacements.  I would expect even that to sting at first, but soon you could handle liquids, while solids would hurt for much longer.  StuRat (talk) 17:08, 15 September 2015 (UTC)


 * If you search online for after-extraction care instructions you will find orders not to drink from a straw for from a few days to a couple of weeks, due to concerns that doing so will dislodge blood clots and interfere with healing. This is particularly important following the removal of upper wisdom teeth, as their nerve can extend up into the maxillary sinus, and inadequate healing may lead to a permanent oroantral fistula.  Closed mouth (or pinched nose) sneezing and vigorous blowing of the nose are similarly proscribed. -- ToE 18:01, 15 September 2015 (UTC)


 * I've bolded your important correction, hope you don't mind. Usually it doesn't matter so much when Stu gets something wrong because of his guesses, but in this case I felt extra attention was warranted. SemanticMantis (talk) 19:29, 15 September 2015 (UTC)


 * My dentist's generic how-to-act-after-extraction instructions sheet included a warning not to use a straw, but I figured that perhaps my dentist was more cautious than Stu's. Definitely obeying my dentist's instructions :-)  Nyttend (talk) 00:11, 16 September 2015 (UTC)


 * This is OR and 40 years ago, but when my mother had all her teeth removed, she had a set of false teeth implanted into the holes left. The instructions to her were, from memory, nil by mouth before the op (under a general anaesthetic), then tepid liquids for a day, soft foods for 2 or 3 weeks afterwards (mashed potato and gravy, soup, yogurt, that sort of thing). --TammyMoet (talk) 18:05, 15 September 2015 (UTC)