User:Boundarylayer/sandbox

Energy mix, World energy consumption
1 PWh = 1000 TWh

In October 2012 the IEA noted that coal accounted for half the increased energy use of the prior decade, growing faster than all renewable energy sources.

Meaning: since the massive growth of nuclear energy slowed practically to a stop in the late 1980s, renewable energy has done nothing to reduce the proportion of energy supplied by fossil fuels. If you compare the percentage of energy generated from fossil fuels in the 1990s with now, it has gone up, not down as environmentalists intend. Fossil fuels supplied 81.3% of all energy in 1990 and, according to the table above, now supply 81.4% of all energy use.

For newly installed and decommissioned US energy sources by nameplate capacity, see the EIA ES3 and ES4 tables here.

Decommissioned nuclear power plants

 * Shippingport Atomic Power Station
 * Big Rock Point, 67 MW. The fifth NPP ever commissioned in the US, in 1963; it was returned to greenfield status in 2006, with the land open to visitors.


 * Maine Yankee, ~800 MW. The first large commercial nuclear power station decommissioned in the USA. From 1972 to 1996 it produced 119 billion kilowatt-hours of electricity, before shutting down due to steam generator tube issues; decommissioning costs were about $568 million. Fuel is in casks, and the majority of the site was converted to greenfield status.
 * Yankee Rowe
 * Connecticut Yankee, 619 MW. Decommissioning finished in 2007. Great pictures of the site before and after! The Independent Spent Fuel Storage Installation (ISFSI) is "3/4 of a mile" off site, out of view. The 43 dry casks contain the reactor vessel internals as well as its 1,019 spent fuel assemblies.
 * Rancho Seco, 913 MW. Commercial power operation began in 1975; decommissioned in 2009.
 * La Crosse Boiling Water Reactor, 50 MW. Built in 1967; in SAFSTOR, decommissioning ongoing?

Almost all of the above have their spent fuel in dry cask storage on site.

What the rest of the world plans to do with its spent nuclear fuel, and the design requirements for dry cask storage vessels.

What MIT thinks should be done with SNF (spent nuclear fuel): "SNF is a significant potential source of energy; however, we do not know today if LWR SNF is a waste or a valuable national resource. Because of this uncertainty, we recommend a policy that maintains fuel cycle options of long storage of SNF," of about a century. "Storage is a viable option because the quantities of SNF are small and the costs of storage are small relative to the value of electricity produced. A typical reactor produces 20 tons of SNF per year. The U.S. generates ~2000 tons of SNF per year in the process of producing ~20% of the total U.S. electricity. Total waste management costs (including SNF storage) are between 1 and 2% of the cost of electricity"... "For new reactor sites, the economically preferred option would be shipment of SNF to centralized sites after the initial cooling period, as is done in countries such as Great Britain, France, Russia, and Sweden".
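The quantities in the MIT quote hang together arithmetically. A minimal back-of-envelope check; all input values are taken from the quote, and the reactor count is simply the implied derived number:

```python
# Figures as quoted from the MIT fuel cycle study above.
snf_per_reactor_t = 20    # tonnes of SNF per typical reactor per year
us_snf_total_t = 2000     # tonnes of SNF generated in the US per year

implied_reactors = us_snf_total_t / snf_per_reactor_t
print(f"Implied US reactor fleet: ~{implied_reactors:.0f} reactors")
# Consistent with the roughly 100 power reactors operating in the US
# at the time, which together supplied ~20% of US electricity.
```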

"The geology of nuclear waste disposal" in 1984 (http://www.nature.com/nature/journal/v310/n5978/abs/310537a0.html), "disposal in subduction trenches and ocean sediments deserves more attention."

The Waste Isolation Pilot Plant, operational since 1999, is an underground repository; transuranic (TRU) material from military production reactors is sent here in dry casks. 4% of the waste is remote-handled, while 96% is contact-handled (safe for human forklift handling). Video here.

"monitored retrievable storage" appears to be the same as interim storage. As opposed to permanent storage, which is what is occurring at WIPP. ~1000 year "permanent storage" will likely be needed for fission products such as Cs-137 no matter if reprocessing occurs or fast breeder reactors come on line.

All transuranic waste could be burnt up in breeder reactors and/or removed by reprocessing. Whether this happens depends on economics; it is not an issue of technical possibility.

(I have the 3 following vids in a list.) Nuclear flask and dry storage flask testing: impacts with trains, and US tests including a jet fuel fire.

Better video of the UK test, from reactor spent fuel pool to flask: Operation Smash Hit. These train tests were done to "enhance public confidence".

Better video, DOE, of spent fuel "shipping container casks" impact & burn tests.

In the USA "To cover decommissioning costs, NRC requires power plant owners to set aside cleanup funds. Every two years companies must submit reports to NRC, which the commission checks to ensure the funds are safe and adequate. Money is raised through a fee on electricity ratepayers, and funds are invested, like a person’s retirement fund."..."NRC regulations allow a combination of three decommissioning options: immediate dismantlement, a delay of up to 60 years before beginning dismantling, or permanent reactor entombment in which radioactive contaminants are permanently encased on-site. To date, none of the 29 reactors being decommissioned is being entombed, NRC notes"

Spent nuclear fuel/nuclear waste
Link to Spent nuclear fuel shipping cask, rather than Nuclear flask, for the movement of nuclear fuel in countries other than the UK.

In response to the Mirror story linked above about the lack of security at railway stations, where someone allegedly walked up and placed a package beside the SNF cask: a successful attack on spent nuclear fuel vessels is unlikely due to their hardened nature. Moreover, if they were encased in a stand-off spaced armor screen of double-walled polycarbonate, for example, they would be nigh impervious to attacks by individuals.

Moreover, even a successful attack would be unlikely to kill anyone directly; instead such an attack is regarded as a "weapon of mass disruption", a radiological dispersal device (RDD), most commonly termed a "dirty bomb".

Terrorists are far more likely to attack soft targets with a higher chance of success, such as train cars carrying industrial chemicals: chlorine, liquefied petroleum gas tanks, etc.

Speaking of trains, here's a blast-resistant 2013 passenger train car, with high-speed camera video of testing, part of the SecureMetro project.

The discharge isotopic composition of a fuel assembly with initial enrichment of 4.5 wt % U-235 that has accumulated 45 GWd/MTU burnup is: 93.16% uranium, 92.15% of which is U-238, 1.11% plutonium of which 0.56% is Pu-239, 0.09% minor actinides and 5.19% fission products.
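A quick sanity check on the quoted discharge composition: the headline fractions should sum to ~100 wt%, and the small shortfall presumably reflects rounding in the source.

```python
# Quoted composition at 45 GWd/MTU, 4.5 wt% initial enrichment.
composition = {
    "uranium":          93.16,  # of which 92.15 points are U-238
    "plutonium":         1.11,  # of which 0.56 points are Pu-239
    "minor_actinides":   0.09,
    "fission_products":  5.19,
}
total = sum(composition.values())
print(f"Total: {total:.2f} wt%")  # 99.55, i.e. ~0.45 wt% lost to rounding
```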

Contribution, in watts, of fission product and actinide decay heat in spent fuel from 10 years to 10,000. Note that after around 5–10 years the heat content is not enough to melt the fuel. So the question of which substances contribute to the high decay heat of recently scrammed nuclear fuel remains open: is it the short-lived actinides or the fission products?
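One way to compare candidate heat contributors is the specific decay power of a pure isotope, P = (ln 2 / t_half) x (N_A / M) x E_decay. The half-lives are textbook values; the average decay energy deposited per disintegration (including short-lived daughters) is the rough part of this estimate:

```python
import math

N_A = 6.022e23        # Avogadro's number, atoms/mol
MEV_TO_J = 1.602e-13  # joules per MeV
YEAR_S = 3.156e7      # seconds per year

def specific_power_w_per_g(half_life_years, molar_mass, e_decay_mev):
    """Decay heat of a pure isotope, in watts per gram."""
    decay_const = math.log(2) / (half_life_years * YEAR_S)  # 1/s
    atoms_per_gram = N_A / molar_mass
    return decay_const * atoms_per_gram * e_decay_mev * MEV_TO_J

# Pu-238 (actinide): 87.7 y half-life, ~5.59 MeV alpha per decay
print(f"Pu-238: {specific_power_w_per_g(87.7, 238, 5.59):.2f} W/g")
# Cs-137 + Ba-137m daughter (fission product): 30.1 y, ~0.8 MeV per decay
print(f"Cs-137: {specific_power_w_per_g(30.1, 137, 0.8):.2f} W/g")
```

The formula shows why short half-lives dominate early decay heat: power scales inversely with half-life, so in freshly scrammed fuel the days-to-months fission products swamp both of the isotopes sketched here.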

Increasingly, reactors use fuel enriched to over 4% U-235 and burn it longer, ending up with less than 0.5% U-235 in the used fuel. This provides less incentive to reprocess. Although less of an incentive according to World Nuclear, an automated reprocessing plant could theoretically remove the now-high Pu-238 fraction of the spent fuel and use it for the likes of space probes and vehicles such as the Curiosity rover, although probably at a higher cost than the Np-237 + neutron route by which it is presently produced for spacecraft. All the plutonium could be separated in a PUREX-type reprocessing method and blended into new fuel (MOX fuel) or used in a breeder reactor, thus eliminating the proliferation problem of spent nuclear fuel (for tens of thousands of years at least) from the once-through fuel cycle of light water reactors, as the spent fuel would contain comparable amounts of U-235 to natural uranium ore and very low amounts of the fissile plutonium isotopes, Pu-239 and Pu-241.

However, the question then arises: fine, the once-through nuclear fuel cycle is now sufficiently proliferation resistant, but if reprocessing is done to get the (energetically) attractive plutonium out and made into MOX fuel, what is the discharge isotopic composition of that fuel? "During the burning of MOX the ratio of fissile (odd numbered) isotopes to non-fissile (even) drops from around 65% to 20%, depending on burn up. This makes any attempt to recover the fissile isotopes difficult" - MOX fuel. So it seems you end up with even more Pu-238 & Pu-240 and much less Pu-239 & Pu-241, making the spent nuclear fuel even more proliferation resistant, to proliferation-proof, at this stage. Efficient burning of all the plutonium, however, can only be achieved with fast breeder reactors. So spent MOX fuel --> fast breeders is a possibility. However spent LEU fuel --> fast breeders is likely more economical.
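The quoted 65% to 20% fissile-fraction drop is just a ratio over the plutonium isotope vector. A minimal sketch; note the two vectors below are purely illustrative placeholders chosen to reproduce the quoted endpoints, not measured fuel data:

```python
FISSILE = {"Pu-239", "Pu-241"}  # the odd-numbered, fissile isotopes

def fissile_fraction(pu_vector):
    """Fraction of a plutonium isotope vector (in wt%) that is fissile."""
    total = sum(pu_vector.values())
    return sum(v for k, v in pu_vector.items() if k in FISSILE) / total

# Illustrative fresh-MOX-like vector: ~65% fissile
fresh = {"Pu-238": 2, "Pu-239": 55, "Pu-240": 25, "Pu-241": 10, "Pu-242": 8}
# Illustrative spent-MOX-like vector: ~20% fissile
spent = {"Pu-238": 6, "Pu-239": 13, "Pu-240": 44, "Pu-241": 7, "Pu-242": 30}

print(f"fresh: {fissile_fraction(fresh):.0%}, spent: {fissile_fraction(spent):.0%}")
```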

Either way, the transuranium waste streams from presently operating LWR fuel cycles, both MOX and LEU, can be burnt up in a breeder reactor operating in burner mode (as opposed to breeder, U-238 to Pu-239, mode), leaving the nuclear waste content primarily fission products rather than nuclear-weapons-usable isotopes. A major need for the future of nuclear fission power.

The fissile plutonium isotopes present in spent fuel, although dependent on a number of factors, generally do not increase significantly in percentage composition at higher burnup, while the non-fissile plutonium isotopes, Pu-238, Pu-240 and Pu-242, all increase significantly. Note that anything over 6 to 18 GWd/MTU burnup results in plutonium with a considerable and increasing amount of these non-fissile, and often hot, isotopes in the discharged plutonium fraction of the spent fuel, making nuclear weapon manufacture considerably difficult if not impractical. That is, high-burnup spent fuel is increasingly proliferation resistant.

A typical pressurized water reactor (PWR) fuel assembly is a grid array ("assembly lattice size") of 17x17 fuel rods, approximately 14 ft long with a mass of 700 kg. This arrangement is what is seen in the AP1000, the US-EPR and the US-APWR, the reason being that it decreases the need for special handling: all fuel assemblies will be identical, thus facilitating reprocessing.

Some fuel rods are clad with stainless steel (SS), specifically SS-304 and SS-348H, in place of the more common and problematic zirconium-containing alloys. For example, the Haddam Neck power station could be fuelled with XHN15MS and XHN15B fuel rods (page 72), which have SS and SS-304 cladding and which produced burnups of 28,324 and 33,776 MWd/MTU respectively. Further on, the document (pg 94) states that either SS-316L or Hastelloy produced fair or good cladding results for high-enriched uranium (HEU), which, as you know, was previously widely used in research reactors to produce Mo-99, and perhaps in submarines.

Why don't they always use stainless steel as fuel rod cladding instead of zirconium? To answer that you need to find out whether SS also catalyzes water electrolysis at high temperatures, and how well it serves the neutron economy. I believe, for example, that Cobalt-60 will form in any SS containing natural Cobalt-59 upon neutron bombardment, which adds to the radiological handling issues of the fuel rod. Also an issue is neutron embrittlement of the material.

Oddly, the fuel rod burnups tabled on page 77 onwards have a massive range, from 4,907 up to 50,000 MWd/MTU. This appears to be somewhat explained by the low enrichment (2.23%) of the XDR06A fuel rod, although XDR06G had lower enrichment again but achieved a much higher burnup. XDR06A was used in the Dresden-1 nuclear reactor, and both XDR06A and XDR06G were Zircaloy-2 clad. What were they doing with XDR06A; perhaps it was faulty, bursting before achieving the optimum designed burnup? Whatever the reason, the vast majority of fuel rods achieve ~30,000 MWd/MTU. I am somewhat surprised that this document is available to the public; nefarious-minded individuals could use the information to selectively target dry cask storage vessels to achieve the greatest purity of plutonium-239. Getting back to power production, the overall trend seems to be that as more modern fuel rods are made, the burnup is higher, and therefore the purity of fissile plutonium is expected to be lower.

Sovacool biographies of living persons
https://en.wikipedia.org/w/index.php?title=Wikipedia:Biographies_of_living_persons/Noticeboard This noticeboard is intended for editors who are repeatedly adding defamatory or libelous material to articles about living people over an extended period. However, none of the edits to his page have included anything but the status quo.


 * What issues does this user have, exactly, with his page? Going by the talk page of the subject's article -


 * http://en.wikipedia.org/wiki/Talk:Benjamin_K._Sovacool This IP address, talk, originally attempted to portray themselves as somebody who knows [Sovacool] well. Now they are claiming here to actually be Sovacool himself?


 * Is it, or is it not, WP policy to try to include references for a person other than their own personal webpage? At present, the Wikipedia page for this person is almost entirely a mirror of the person's personal page. Why bother even having a Wikipedia article if it is just going to be a mirror?


 * WP:BALANCE dictates that if there is a controversy surrounding a person, then this should be reflected in the article. The Yale University paper states emphatically that "life cycle GHG emissions from nuclear power are...comparable to renewable technologies." (Life Cycle Greenhouse Gas Emissions of Nuclear Electricity Generation) Sovacool, however, has instead claimed that renewable energy is "7" times more effective than nuclear power at combating climate change, and now, in their most recent paper, that wind power is "96" times more effective. This completely goes against the exhaustive findings of researchers at Yale University, the IPCC and the National Renewable Energy Laboratory: collectively, life cycle assessment literature shows that nuclear power is similar to other renewables and much lower than fossil fuels in total life cycle GHG emissions.



 * The IP user on Sovacool's talk page declares that the authoritative Intergovernmental Panel on Climate Change values are "unfair" and goes even further, suggesting that the IPCC's values are inferior to Sovacool's; they write "Sovacool's number is probably more complete, and accurate". This might be the IP user's opinion; however, Wikipedia policy has a preference for authoritative references, and you can't really get more authoritative than the IPCC when it comes to climate change, except for the UNCEFF. Yet this IP user talks down this authoritative organization, and also wishes to remove this material? I think the IP user also fails to understand that it is not so much the actual value Sovacool arrived at that is up for debate; a careful reading of the rebuttal of their work highlights that Sovacool did not apply his same methodology to other energy sources, yet despite this omission declared "Renewable energy is 7 times more effective at combating climate change than nuclear power". The criticism by Berten et al is precisely that Sovacool made this declaration without a like-for-like comparison. On the Sovacool talk page, they state that "there has been new research published in Environmental Science & Technology confirming Sovacool's numbers that should be acknowledged"; however, this paper is not "new research", and what the IP user presents as evidence that "Sovacool is right" was printed with the assistance of well-known, and highly public, anti-nuclear advocates: Benjamin K. Sovacool is the principal author, along with M. V. Ramana, Mark Z. Jacobson, Mark A. Delucchi, and Mark Diesendorf.


 * http://pubs.acs.org/doi/abs/10.1021/es401667h


 * Johnfos routinely edits Wikipedia using Sovacool as their prime source. However, Johnfos is entirely misleading when they suggest that it is only myself who has found Sovacool, and Johnfos' promotion of his statements, to be unsuited for Wikipedia. Indeed, Johnfos has encountered another editor on Wikipedia who has found Sovacool's publications to be "silly" and unreliable: User:NPGuy. An encounter that can be read here -
 * https://en.wikipedia.org/wiki/Talk:Treaty_on_the_Non-Proliferation_of_Nuclear_Weapons


 * For full disclosure, something that Johnfos has not done, it must be stated that Johnfos has in the past been forced into a "retirement" for copying and pasting material from anti-nuclear websites directly into Wikipedia. They have also previously attempted, and failed, to get me banned. I began editing nuclear-related pages of Wikipedia because something seriously needs to be done about the sheer amount of disinformation being spread. A quick example is here, Nuclear power: in 2011 Sovacool stated that "In the U.S. there are 13 reactors that have permanently shut down and are in some phase of decommissioning, and none of them have completed the process". A quick check of the Nuclear Regulatory Commission website, however, states exactly the contrary - https://forms.nrc.gov/reading-rm/doc-collections/fact-sheets/decommissioning.html
 * Maine Yankee was completely decommissioned in 2005 for example.


 * Johnfos also falsely suggests that Sovacool is not an anti-nuclear advocate. On Sovacool's own site, he details that he supplied "legal advocacy" in India against a proposed power station. Sovacool also wrote Contesting the Future of Nuclear Power.
 * Boundarylayer (talk) 02:47, 29 June 2013 (UTC)

Anti-nuclear movement.
Having once been anti-nuclear, seemingly by default, as people always tend to fear what they do not understand, I was then sucked in by the movement's sound bites and misleading statements. Now, armed with an education, I've come to see them wholesale as spreaders of nothing but exaggerated, misleading fear, uncertainty and doubt. Some are more cunning in their anti-nuclear message than others, being selective or economical with the truth to fit their world-view. Frank von Hippel & Amory Lovins are two controversial proliferation talking heads.

"Plutonium demilitarization, despite its intrinsic arms-control and nonproliferation value, has not gained traction; it has been caught up in an ongoing ideological dispute over nuclear power. Part of the resistance can be traced to nuclear obstructionists, who are opposed to nuclear power regardless of compensatory benefits. Others averse to nuclear power might be called nuclear abstractionists because they tend to accept some nuclear applications, while objecting to others."..."Frank von Hippel and Amory Lovins are two prominent outspoken opponents of plutonium demilitarization. Examination of their papers and presentations reveals that both tend to omit evidence and citations that contradict their position on the supposed weaponization qualities of reactor and demilitarized grades of plutonium. While short in relevant credentials, each has been actively impeding arms-control and nonproliferation measures described below."

A plutonium nuclear weapon, which I will define as something that can create an explosive yield of at least 1 kiloton: it is practically impossible to make such a weapon from high-burnup reactor plutonium, as the amount of Pu-240 and Pu-238 makes constructing one impractical.

However, it is possible to make an explosion with high-burnup plutonium; the explosion will just have a low yield of less than 1 kiloton. Fortunately, the technical hurdles (dealing with heat generation from Pu-238, premature initiation from the spontaneous fission of Pu-240, gamma emissions, etc.) are "daunting". Therefore the concern is "overblown" according to the authors of a paper in an American Physical Society publication (the authors were members of the Federation of American Scientists); in it a rebuttal to Mark Carson's musings is presented, along with criticism of the Union of Concerned Scientists for being part of the problem with their campaign against reprocessing.

The United States, for example, in 1988 considered spending a billion dollars on a laser-enrichment Special Isotope Separation facility at Idaho National Laboratory, to enrich low-grade plutonium from Hanford tailings to weapons grade. Low-grade plutonium, with all its isotopic impurities, is just not capable of being crafted into a weapon without undergoing isotopic enrichment.

Human Research at the Bomb Tests
For continued research in this area. See the DOE Openness: Human Radiation Experiment Related Site listing.

For example: The "DOE Office of International Health Programs - Marshall Islands Program homepage (http://tis.eh.doe.gov/ihp/marsh/marshall.htm) was created to provide access to documents which tell the story of U.S. nuclear weapons testing in the Marshall Islands from 1946 to 1958 and its impact on the lives of the Marshallese people, specifically the consequences of radioactive fallout on their environment and their health."

The following summary of the CONUS tests that involved humans was made after reading ''DOE Openness: Human Radiation Experiments: Roadmap to the Project. ACHRE (Advisory Committee on Human Radiation Experiments) Report. The Defense Department's Medical Experts: Advocates of Troop Maneuvers and Human Experimentation, chapter 10.''

"The [Desert Rock] exercise was designed primarily to train and educate troops in the fighting of atomic wars. The exercise also provided an opportunity for psychological and physiological testing of the effects of the experience on the troops.". Bear in mind that the Desert Rock exercises exactly coincides with, and was likely highly influenced by, the ongoing Korean War at the time. Where there was a discussion of using tactical nuclear weapons if in the event the Chinese and Soviets began to successfully push the UN completely out of the Korean peninsula.

Not letting the nuclear tests go to waste, those interested in the biomedical effects of nuclear weapon detonations conducted a number of experiments while the nuclear warfare orientation training was being conducted for the majority of military participants. One of the experiments: "Twelve subjects witnessed the detonation from a darkened trailer about sixteen kilometers from the point of detonation. Each of the human 'observers' placed his face in a hood; half wore protective goggles, while the other half had both eyes exposed. A fraction of a second before the explosion, a shutter opened, exposing the left eye to the flash. Two subjects incurred retinal burns, at which point the project for that test series was terminated. The final report recorded that both subjects had 'completely recovered.'" The flashblindness experiments were the only human experiments conducted under the biomedical part of the bomb-test program, and the only human experiments where immediate injury was recorded.

Consent was obtained from at least some of the flashblindness subjects. In a 1994 interview, Colonel John Pickering, who in the 1950s was an Air Force researcher with the School of Aviation Medicine, recalled participating as a subject in one of the first tests where the bomb was observed from a trailer, and his written consent was required. "When the time came for ophthalmologists to describe what they thought could or could not happen, and we were asked to sign a consent form, just as you do now in the hospital for surgery, I signed one."

Later, in non-Desert Rock exercises, Operation Jangle: the Jangle I shot included 8 volunteers moving into the area under a low air burst 4 hours after detonation to assess the ability of military protective uniforms (early NBC suits) to prevent fallout contacting human tissue; these volunteers were accompanied by radiation safety monitors. The Jangle II shot included volunteers doing military maneuvers, crawling, again with protective clothing, over contaminated soil 5 days after the sub-surface detonation, along with armored tanks' air filters being tested by driving around ground zero.

Aircraft crew experiments
Flashblindness experiments were conducted on airborne crews during the Desert Rock exercises, seemingly in tandem with the ground trailer experiments. The flashblindness experiments began at the 1951 Operation Buster-Jangle, the series that included Desert Rock I, with the testing of subjects who "orbit[ed] at an altitude of 15,000 feet in an Air Force C-54 approximately 9 miles from the atomic detonation." The test subjects were exposed to three detonations during the operation, after which changes in their visual acuity were measured. Although these experiments were conducted at bomb tests that potentially exposed the subjects to ionizing radiation, the purpose of the experiment was to measure the thermal effects of the visible light flash, not the effects of ionizing radiation.

Later, Operation Teapot and Operation Redwing experiments included flying aircraft through the mushroom cloud: cloud-penetration experiments minutes to hours after detonation. This work was supported by data from drone flights through the clouds, but the Air Force wanted to be completely sure, and probably to put aircrews' worries to rest, that humans would be fine as long as they just dashed through the cloud at high speed. These experiments were led by Pinson, who also flew a number of the experiments himself; General Pinson was still alive in 1995.

"What are the dangers to be encountered by the personnel who fly through the cloud?--How much radiation can they stand?--How much heat can the aircraft take?--Can the ground crews immediately service the aircraft for another flight?--If so, what precautions are necessary to insure their safety?"

Why was the Air Force interested in showing that atomic clouds could be penetrated soon after a detonation? Most important, the military wanted to assure itself that it was safe for combat pilots to fly through atomic clouds, if need arose during atomic war. But the research did not make much of a scientific contribution.

To read more go to the site and navigate to the next sections.


Nuclear power for sustainable energy
Horrible page. Low-carbon power is much cleaner and better laid out; best to link to that, although I have not added anything to that page.

Time to improve it.

A report was published in 2011 by the World Energy Council in association with Oliver Wyman, entitled Policies for the future: 2011 Assessment of country energy and climate policies, which ranks country performance according to an energy sustainability index. The best performers were Switzerland, Sweden and France.

Nuclear fission fuel is inexhaustible. http://www.mcgill.ca/files/gec3/NuclearFissionFuelisInexhaustibleIEEE.pdf

Sustainable energy is the sustainable provision of energy that meets the needs of the present without compromising the ability of future generations to meet their needs. Technologies that promote sustainable energy include renewable energy sources, such as hydroelectricity, solar energy, wind energy, wave power, geothermal energy, tidal power, and nuclear power, and to a lesser degree technologies designed to improve energy efficiency.

Moreover, newer nuclear reactor designs are capable of reusing what is commonly deemed "nuclear waste"/spent nuclear fuel until it is no longer (or dramatically less) dangerous, and have design features that greatly minimize the possibility of a nuclear accident. (See: Integral Fast Reactor and Passive nuclear safety)

Such fuels can be produced by the electrolysis of water or, more efficiently, by the thermochemical sulfur-iodine cycle, to make hydrogen that is then fed into the Sabatier reaction to produce methane, which may then be stored and later burned in power plants as synthetic natural gas.
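As a rough illustration of the chemistry described above, the overall mass balance of the Sabatier step (CO2 + 4 H2 → CH4 + 2 H2O) can be sketched in a few lines; the 1 kg hydrogen scenario is purely illustrative and not taken from any source above:

```python
# Illustrative stoichiometry for the Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O.
# Molar masses (g/mol) are standard values; the 1 kg H2 scenario is made up.
M_CO2, M_H2, M_CH4 = 44.01, 2.016, 16.04

def sabatier_masses(kg_h2):
    """Given a mass of hydrogen feedstock, return (CO2 consumed, CH4 produced) in kg."""
    mol_h2 = kg_h2 * 1000 / M_H2
    mol_co2 = mol_h2 / 4          # 4 mol H2 consumed per mol CO2
    mol_ch4 = mol_co2             # 1 mol CH4 produced per mol CO2
    return mol_co2 * M_CO2 / 1000, mol_ch4 * M_CH4 / 1000

co2_kg, ch4_kg = sabatier_masses(1.0)   # 1 kg of electrolytic hydrogen
print(f"1 kg H2 consumes {co2_kg:.2f} kg CO2 and yields {ch4_kg:.2f} kg CH4")
```

So every kilogram of electrolytic hydrogen fixes several kilograms of CO2 into roughly two kilograms of storable methane, which is the attraction of the scheme.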

Geothermal energy is produced primarily by tapping the thermal energy created in the earth as uranium and thorium atoms emit heat while undergoing nuclear decay. It is considered sustainable because the thermal energy is constantly replenished, and because there is an abundance of uranium and thorium in the earth's crust.

Approximately 7% of the heat energy produced in a nuclear fission reactor comes from the nuclear decay of fission products and nuclear transmutation products, e.g. Plutonium-238; this is known as the decay heat of the reactor. The majority of the energy in a reactor, around 80%, comes from the kinetic energy of the charged fission product nuclei. Immediately after a fission event, the two highly charged fragments are driven apart by their mutual electrostatic repulsion, and their kinetic energy is rapidly converted into heat through countless collisions with the surrounding atoms of the nuclear fuel. The effect is much like standing outside a hollow spherical steel tank (an empty liquid petroleum gas cylinder of the kind used for cooking, say) that is being machine-gunned from within: the skin of the tank heats where each bullet strikes; multiply this by an enormous number of internal impacts and the tank becomes red hot and incandescent; increase the rate of impacts further and it becomes white hot and molten as the kinetic energy of the bullets is converted into ever more heat. Faster again - a prompt supercriticality chain reaction, in nuclear fission's case - and the internal energy of the metal shell quickly climbs through the liquid and gas phases to a plasma that radiates mostly in the X-ray region of the electromagnetic spectrum, far past the brightest visible light; its energy density equals, and then surpasses, that of high-explosive reaction products, e.g. following a TNT detonation. See relative effectiveness factor to get a feel for just how far past it can go, depending on how long the fissile fuel can be kept from blowing itself apart once it reaches the power density of TNT: a feat achieved to some degree by self-tamping in the case of the low-yield Davy Crockett, while for Fat Man yields, heavy metal tampers and explosive detonation pressure are primarily what keep the core squeezed tight as long as possible to reach much hotter internal temperatures. Finally, for even hotter temperatures, explosively driven imploding shells of fissile fuel are what keep the more efficient designs from blowing themselves apart once they achieve the internal temperature of Fat Man-type devices.
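A minimal sketch of the fission energy budget behind these percentages, using standard textbook values of roughly 200 MeV per U-235 fission (the exact split varies by source, and the fragment share comes out slightly above the ~80% quoted above, depending on what one counts as recoverable):

```python
# Approximate energy release per U-235 thermal fission, in MeV (standard textbook values).
budget = {
    "fission fragment kinetic energy": 168,
    "prompt neutrons":                   5,
    "prompt gamma rays":                 7,
    "delayed betas (decay heat)":        7,
    "delayed gammas (decay heat)":       6,
}
recoverable = sum(budget.values())   # antineutrinos (~12 MeV) escape and are excluded
frag_share  = budget["fission fragment kinetic energy"] / recoverable
decay_share = (budget["delayed betas (decay heat)"]
               + budget["delayed gammas (decay heat)"]) / recoverable
print(f"fragments: {frag_share:.0%}, decay heat: {decay_share:.0%} of {recoverable} MeV recoverable")
```

The delayed beta and gamma terms are what keep emitting heat after shutdown, which is why decay heat removal dominates post-accident cooling requirements.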

In a nuclear reactor operating at steady power, a much more subdued chain reaction occurs - a delayed criticality - in which, on average, each fission triggers exactly one further fission, and so on; there is no net increase in the number of fissions occurring, unlike in a supercriticality event. The "delayed" part of the critical condition refers to the reliance on delayed neutrons, emitted some time after fission by certain fission products and transmutation products, to keep the chain reaction from ending; it is this delay that makes the reaction controllable. To start a nuclear reactor, and to raise its power from zero up to, say, 4000 MW of thermal energy, I believe operators use the inherent spontaneous fission rate and then a slight, delayed-neutron-governed supercriticality to begin, and then to increase, the rate of fissions from zero up to the desired amount.

Many people, including Greenpeace founder and first member Patrick Moore, George Monbiot, Bill Gates, Richard Branson, environmentalist authors Stewart Brand, Gwyneth Cravens and James Lovelock, NASA climate scientist James Hansen - recipient of the 2010 Sophie Prize for Environmental and Sustainable Development - and David J. C. MacKay, the chief scientific adviser of the UK Department of Energy and Climate Change, have all either specifically classified nuclear power as sustainable energy, or described conventional nuclear power as safer than the alternatives, faster at addressing dangerous climate change than other technologies, and essential to power a prosperous modern world. However, critics of nuclear power - namely Greenpeace - disagree.

Nuclear power, which as of 2007 had a 20% share of U.S. electricity production, is the largest deployed technology among current low-carbon energy sources.

Nuclear power, as of 2010, also provides two-thirds (2/3) of the twenty-seven-nation European Union's low-carbon power.

According to a publication by the National Academy of Sciences, presently operating nuclear power plants have a 20% share of U.S. electricity production, and nuclear power is therefore the single largest deployed low-carbon electricity generating technology in the country.

In 2007, nuclear power supplied around one-seventh of the world's electricity. Numerous studies and assessments (e.g., by the Intergovernmental Panel on Climate Change, International Atomic Energy Agency, and International Energy Agency) suggest that as part of a portfolio of low-carbon energy technologies, nuclear power will continue to play a role in reducing greenhouse gas emissions.

Nuclear power

There are two sources of nuclear power: nuclear fission, which is used in all current nuclear power plants and is fueled by the metal uranium (or sometimes thorium), and nuclear fusion, the energy source that makes the stars shine, including our sun, which is fueled by hydrogen isotopes such as deuterium. There are also many conceptual combinations combining the advantages of each, such as using fusion to burn up conventional spent nuclear fuel from fission reactors in nuclear fusion-fission hybrid reactors. However, only nuclear fission has as yet provided a steady-state, high capacity factor output with a high energy return on energy invested.

As of 2013 it remains impractical to generate electricity economically from fusion reactions for use on earth, as better-than-break-even fusion reactors are not yet available, with the possible sole caveat of PACER technology. However, a steady-state, break-even, proof-of-concept fusion reactor - the International Thermonuclear Experimental Reactor, ITER - is presently being built, and is expected to begin producing more output energy than input by approximately 2025.

Moreover, pure nuclear fusion, if achieved, would become the most sustainable energy source known to humankind, instantly more sustainable than all other sustainable technologies, which ultimately require absorbing solar radiation from the sun as their input energy source. There is enough economically extractable fusion fuel, in the form of deuterium naturally found in sea water, to potentially power the entire earth at the present energy demand level for longer than the predicted remaining life span of our parent star, the sun. Furthermore, since fusion energy emitted from the sun is ultimately the primary driver of most of what are commonly considered conventional sustainable power sources - from the solar heating of air that makes wind power possible, to the evaporation of sea water that lifts water to the great heights eventually tapped by hydro power - pure fusion energy, when achieved, would easily be more efficient than all of these indirect routes.
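The back-of-envelope arithmetic behind the "longer than the sun's life span" claim can be checked with commonly cited round numbers (all figures below are rough assumptions, not taken from the text above):

```python
# Rough check of the deuterium-in-seawater claim (commonly cited round numbers).
ocean_mass_kg      = 1.4e21     # approximate mass of the world's oceans
deuterium_fraction = 33e-6      # ~33 g of deuterium per tonne of seawater
energy_per_kg_D    = 3.45e14    # J per kg of deuterium in a complete D-D fusion burn
world_use_per_year = 6e20       # J, rough world primary energy consumption per year

deuterium_kg = ocean_mass_kg * deuterium_fraction
years = deuterium_kg * energy_per_kg_D / world_use_per_year
print(f"~{years:.1e} years of supply at present demand")
```

The result, on the order of tens of billions of years, comfortably exceeds the ~5 billion years commonly quoted for the sun's remaining lifetime, which is the basis of the claim.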

In terms of presently operating nuclear power technology, conventional fission power is sometimes also referred to as sustainable and renewable, as it emits total life cycle greenhouse gas emissions comparable to those of wind power. Classifying nuclear power as sustainable is nonetheless controversial amongst some commentators, often for reasons irrelevant to the question of whether or not it is a sustainable energy source. As Navid Chowdhury of Stanford University noted: "The IRENA (International Renewable Energy Agency), decision that it will not support nuclear energy programs because its a long, complicated process, it produces waste and is relatively risky, proves that their decision has nothing to do with having a sustainable supply of fuel."

The technology to economically extract uranium from the single largest known source of uranium in the world, the oceans, is as of 2012 approaching cost parity with conventional uranium resources: sea water extraction at 2012 technological levels is predicted to cost ~$300/kg, compared with conventionally mined uranium that routinely sells for ~$60–100/kg. Until seawater-extracted uranium can directly compete, the lower cost of conventionally mined uranium will continue to ensure it remains the primary source of the world's uranium supply. That supply fuels not only the worldwide fleet of 437 fission power reactors, but also the fleet of research reactors, which continue to produce the majority of the world's medical radiopharmaceuticals, and the fleet of nuclear marine propulsion reactors.
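One way to see why a jump from ~$100/kg to ~$300/kg matters so little to the economics of nuclear electricity is to spread the ore price over the electricity it yields; the ~45,000 kWh of electricity per kg of natural uranium figure below is a rough once-through light water reactor assumption, not a number from the text:

```python
# Fuel-cost sensitivity to the uranium ore price (rough once-through LWR figures).
kwh_per_kg_natural_u = 45_000   # ~45 MWh(e) per kg of natural uranium, once-through (assumption)

def ore_cost_per_kwh(usd_per_kg):
    """Raw-ore contribution to the cost of a kWh of electricity, in USD."""
    return usd_per_kg / kwh_per_kg_natural_u

for price in (60, 100, 300):    # mined low, mined high, seawater estimate
    print(f"${price}/kg  ->  {100 * ore_cost_per_kwh(price):.2f} cents/kWh")
```

Even at the seawater price estimate, the raw ore contributes well under one cent per kilowatt-hour, which is why ore price plays only a minor role in the overall cost of nuclear electricity.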

However, despite not yet being commercialized, the continued advancement in the economics of extracting uranium from sea water is largely removing previous concerns about the long-term sustainability of conventional fission power. Such a sustainability issue might have been encountered after a few centuries of continued consumption of the known land reserves of uranium - but only in a scenario where one assumes humankind would neither continue to find more conventional uranium reserves in the earth's crust, adding to the inventory of known reserves as the century unfolds, nor improve upon the efficiency of present fission reactor technology.

Moreover, an appraisal of the sustainability of conventional fission nuclear power by a team at MIT in 2003, and updated in 2009,(which is notably before the 2012 advancement in uranium extraction from sea water) stated that: "Most commentators conclude that a half century of unimpeded growth is possible, especially since resources costing several hundred dollars per kilogram($/kg) would also be economically usable...We believe that the world-wide supply of uranium ore is sufficient to fuel the deployment of 1000 reactors over the next half century."

According to the OECD in 2008 there is, excluding the oceans, enough uranium in known conventional resources to power the current world fleet of 437+ nuclear reactors - which provide ~15% of the world's electricity - for 670 years at the present rate of consumption. This figure combines all the known uranium in conventional mine resources and phosphate ores, and assumes that the uranium is burnt in the most common present reactor designs rather than in the more efficient generation III or generation IV reactors. Furthermore, the OECD noted that all 670 years' worth is recoverable at 2008 prices, i.e. between 60 and 100 US$/kg of uranium.
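The 670-year figure is simply a resources-over-consumption ratio. With illustrative round numbers (the OECD's own inputs may differ) the arithmetic looks like this:

```python
# Years-of-supply is known resources divided by annual consumption.
# Both inputs below are illustrative round numbers, not the OECD's own figures.
resources_tonnes_u   = 45_000_000   # conventional resources plus phosphate ores (assumption)
consumption_t_per_yr = 67_000       # rough annual uptake of the current reactor fleet (assumption)

years_of_supply = resources_tonnes_u / consumption_t_per_yr
print(f"~{years_of_supply:.0f} years at the current burn rate")
```

Any revision to either input scales the answer linearly, which is why resource discoveries and reactor efficiency improvements both push the horizon out.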

The OECD have further noted in 2008 that, with and without the likely improvements in reactor uranium burn up efficiency: "Even if the nuclear industry expands significantly, sufficient fuel is available for centuries. If advanced breeder reactors could be designed in the future to efficiently utilize recycled or depleted uranium and all actinides, then the resource utilization efficiency would be further improved by an additional factor of eight."

Finally, the OECD have determined that with a pure fast reactor fuel cycle - burning up and recycling all the uranium and waste actinides - there is 160,000 years' worth of uranium in known conventional resources and phosphate ore. Again, this excludes the largest, but presently unconventional, source of uranium: seawater.

As noted by the OECD, nuclear power also has the potential to significantly expand its renewability from a fuel and waste perspective, such as through greater use of breeder reactors, which breed more fissile fuel than they consume while also producing power (the breeding process is possible because the reactor design uses its excess neutrons to burn fertile fuel, much as fissile fuel is burnt in present light water reactors).

However, significant challenges presently stand in the way of expanding the role of breeder reactors, and therefore of nuclear fission becoming truly renewable. Chief among them is competing economically with reactor designs that do not breed more fuel than they consume, but instead are dedicated solely to burning fuel. For this reason, although they have been successfully built and run, uranium breeder reactors have not yet become economically competitive with light water reactor designs, and they may stay that way until, or unless, uranium prices increase and uranium scarcity occurs.

See talk page history of sustainable energy to see what I was replying to -


 * Good catch man, predictably, Wikipedia has a lot of anti-science types editing and removing material, especially anything possibly construed as pro-nuclear. I recently added a lot of detail to the nuclear power section, to give it its proper amount of weight (as worldwide it is the largest sustainable energy source currently fielded). However, I did not see the thorium material you reference (someone probably removed it again); if it was good stuff, could you add it again? I mean, what sort of sustainable energy page on an encyclopedia that regards itself as respectable doesn't include a proper treatise on fusion? It is, after all, going to be the most sustainable of energy sources once it is achieved and commercialized. It'll outshine the sun in life span! So it is the epitome of sustainable - how much more sustainable do you need?

BoundaryLayer edit blanked by kim


 * Kim as other editors here are aware, the nuclear power section has been under consistent vandalism by numerous anti-nuclear elements. (1) Changing 'some' into 'many' is entirely consistent with the list of people who regard nuclear power as sustainable. They are not some fringe group but include NASA climate scientist James Hansen, the only fringe group in respect to nuclear power and the only 'controversy' you speak of comes from biased anti-scientific organizations - Greenpeace.


 * (2) Nuclear power is subject to more half-truths and propaganda (again largely due to the likes of Greenpeace and their disinformation campaigns) than any other source of power. For this reason it requires a substantial amount of scientific weight in the article to put their anti-science propaganda to rest, and to address the reality of whether fission and fusion are sustainable. Another reason for the nuclear power section is that it is the most sustainable form of power humanity presently knows of: long after the sun dies, fusion and even some fission technologies will still have more than ample amounts of fuel to keep going - i.e. long after the wind stops blowing and your solar panels stop working. So are you really arguing that the most sustainable form of power shouldn't get more than a few measly lines? If this encyclopedia wasn't infected with so much anti-nuclear bias, and the sustainability technologies were ranked in descending order of sustainability, nuclear power would be on top. A fact that is inescapable, and you know it. Another reason for having a fleshed-out nuclear power section is that, as there is plenty of uranium and thorium abound, fission is sustainable for thousands of years; not many people are aware of this, and believe the opposite, so it is necessary to detail just exactly how much is available, which depends on which reactor technology is used.


 * (3) You remove the sulfur-iodine cycle for reasons that are inconsistent, as nanotechnology solar panels feature in the article - and similarly, they're not really in use either, Kim. So you have just demonstrated a monumental bias that has resulted in yet more censorship vandalism on this article.
 * Please desist from this biased censorship conduct.
 * Thank youBoundarylayer


 * Kim, please provide one scientific source that supports your POV that it is controversial to regard nuclear power as sustainable. I think you'll find that btw it is you who is not mainstream.
 * MIT have stated that there is plenty of fuel for 1000 reactors to be built over the next half century. -Which I reference in the nuclear power section, a section which you have consistently censored.
 * Patrick Moore more than once regards nuclear power as sustainable in the video that I provided in the section, so yes I can use that video.
 * Richard Branson - The construction of modern nuclear reactors was a step that was already agreed upon in the effort to build a new system powered by sustainable energy.  http://www.entrepreneur.com/article/220496
 * James Hansen says it is very unfortunate that “a number of nations have indicated that they’re going to phase out nuclear power… The truth is, what we should do is use the more advanced nuclear power. Even the old nuclear power is much safer than the alternatives.”
 * http://theenergycollective.com/jcwinnie/60103/social-and-decision-sciences-and-engineering-and-public-policy
 * More than once he has described nuclear power as sustainable without specifically saying the word. You know this, but you're just filibustering for the sake of it at this stage.
 * James Hansen is also a recipient of the 2010 Sophie Prize for Environmental and Sustainable Development. So you should ask yourself, why they would give him a sustainable development prize if he was not for sustainable energy sources.
 * Read about the prize here - https://researchfunding.duke.edu/detail.asp?OppID=5241
 * You ramble on quite a bit here - but you fail to realize this simple fact: this article is about sustainable energy! Yes Kim, that means that the article should include long-term, after-the-sun-dies sustainable energy. Kim, which power source would we still have running if the sun were to stop heating the earth tomorrow? Say, from a volcanic winter (like the 1800s Krakatoa, etc.), or indeed when the sun dies naturally a few billion years from now? If you answer nuclear power, as any logically thinking person would, you just proved my point. Nuclear fusion is the most sustainable source of power.
 * As for Solar energy, I think I'll let Mr. Gates respond to this one. - Gates argues that nuclear power is still safer than all other energy options, rich countries aren’t spending enough on R&D, and installing solar panels on your roof is not helping to reduce CO2 emissions. It’s merely “cute.” http://www.wired.com/magazine/2011/06/mf_qagates/
 * Oh, and solar energy is not sustainable - PV definitely is not. - http://www.newscientist.com/article/dn16550-why-sustainable-power-is-unsustainable.html
 * So once again you're just spreading your own misunderstandings.
 * Boundarylayer (talk) 20:40, 6 March 2013 (UTC)


 * You are abusing your position and you know it. There are many contradictions and falsehoods now stated in the article, once again, thanks only to you.
 * Boundarylayer


 * It is due to their own interpretation, and the lens through which they see the world, that they have assigned scare quotes to nuclear power as sustainable. Moreover, they have still yet to supply a single scientific reference stating that nuclear power is unsustainable. They suggest they know what Branson meant and have the gist, etc., yet claim I'm the one reading into what he says. Finally, if someone defines nuclear power as sustainable (see the definitions section in the article if you don't know what the word means) without actually saying the word, then they have described a sustainable energy source. As for WP:WEIGHT: tidal power, solar PV, etc. all get far too much weight in the article seeing as they supply negligible amounts of energy, and solar PV isn't sustainable at all. I will supply some peer-reviewed papers in the coming days that state nuclear power is more sustainable than solar PV.


 * http://gabe.web.psi.ch/pdfs/lca/Dones_EcoBalance_2006.pdf Sustainability of energy sources, Figure 2. Note that nuclear power's overall combined economic, environmental & social sustainability score is higher than solar PV's, and its score would be higher still - comparable to hydro power - if the hypothetical nuclear proliferation (social sustainability) concern did not count against nuclear power's overall sustainability value. As nuclear power is ranked higher than solar PV in sustainability, it logically follows by the WP:WEIGHT criteria that nuclear power (fission) should get a greater amount of weight in the article than solar PV, and the nuclear power section should also be positioned closer to the top of the article, rather than pushed down to the very end in the very anti-nuclear POV manner that it currently is. As you wrote yourself - Write about nuclear proportionally to how the literature of sustainable energy is focusing on it. (WP:WEIGHT)


 * Another source is http://www.withouthotair.com/c24/page_161.shtml - a book titled Sustainable Energy by David J. C. MacKay, the chief scientific adviser to the Department of Energy and Climate Change. Page 162 spends a great deal of time discussing the quantity of nuclear fuel - fuel, after all, being the major requirement for sustainable energy considerations. The book is a little old, and many advances in uranium extraction from sea water have occurred since publication, which is why I spent the majority of the nuclear power section discussing fuel supply.


 * I have also noted that this entire Sustainable energy article spends most of its time talking about renewable energy (an entirely separate topic) rather than sustainable energy. If I had the time I would fix this too, but as the nuclear power section was under-represented - and indeed entirely misrepresented, with, need I remind you, constant censorship blank-outs by anti-nuclear editors - the nuclear power section demanded a greater degree of urgent attention. So it has nothing to do with advocacy, but everything to do with writing an encyclopedia.


 * Nothing but opinion there, friend, and therefore nothing but anti-scientific reasons. I asked for scientific sources, with scientific reasoning to back them up. Instead you have linked me to a plethora of opinion pieces, none of which offers a single quantifiable reason why nuclear fission power should be classified as unsustainable.


 * Moreover, some of those 'references' are truly laughable - the 'Socialist Register', an anti-capitalist opinion publication, and others with titles such as 'industry propaganda'... are you serious? On the contrary, my sources have scientifically crunched the numbers and ranked energy sources by their sustainability: http://gabe.web.psi.ch/pdfs/lca/Dones_EcoBalance_2006.pdf Sustainability of energy sources, Figure 2; and http://www.withouthotair.com/c24/page_161.shtml, the book Sustainable Energy by David J. C. MacKay, which also discusses the quantity of nuclear waste - all of it fitting into a few swimming pools. I'd like to see the wind or solar industry manage to produce such a comparably small amount of waste per unit of energy generated.


 * So I'm still waiting. Where is the science to back up the opinion that nuclear power is unsustainable? All I can find is that nuclear power is sustainable - more so than solar PV, for that matter.


 * Indeed, even some concentrated solar power (CSP) technologies are presently unsustainable due to high water usage in areas where fresh water is scarce. Costs of reducing water use of concentrating solar power to sustainable levels: Scenarios for North Africa http://www.sciencedirect.com/science/article/pii/S0301421511003429 - scaling up CSP with wet cooling from ground water would be unsustainable in North Africa, and the paper generally discusses how dry cooling technology produces lower efficiencies, so CSP in Africa won't be economical, even under optimistic calculations, until ~2030.
 * Solar energy is not all that sustainable, and PV is definitely questionable. - http://www.newscientist.com/article/dn16550-why-sustainable-power-is-unsustainable.html
 * Boundarylayer (talk) 14:49, 14 March 2013 (UTC)

A report was published in 2011 by the World Energy Council in association with Oliver Wyman, entitled Policies for the future: 2011 Assessment of country energy and climate policies, which ranks country performance according to an energy sustainability index. The best performers were Switzerland, Sweden and France - all of which produce electricity with ~50% to 80% nuclear power in their electricity grids. There is no mention of (100% renewable) Iceland, nor of Brazil, in the top three countries.

Renewable energy
Renewable energy sources, that derive their energy from the sun, either directly or indirectly, are expected to be capable of supplying humanity energy for almost another 1 billion years, at which point the predicted increase in heat from the sun is expected to make the surface of the Earth too hot for liquid water to exist.

26 March 2013 edit that was removed. ''added some info on the average rate of supply from the OECD, taken from their factbook. 4.6% in 1971 and 7.6% in 2010)''

According to the OECD Factbook 2011-2012, worldwide, Iceland (85.6%) and Brazil (45.8%) exploit the greatest proportion of renewable energy to supply their total energy requirements (including electricity and other energy needs), with the world average percentage at 13.1%. Other countries in the OECD with a high total energy supply from renewable sources are New Zealand (38.6%), Norway (37.3%), Sweden (32.7%), Austria (26%), Portugal (24%), Finland (24.9%), Chile (22.7%), Switzerland (18.8%), Denmark (18.8%), Canada (16.5%) and Estonia (14.4%). Worldwide, other non-OECD nations with a higher percentage of renewable energy in their total energy needs than the average for OECD countries (7.6%) are Brazil (45.8%), Indonesia (34.4%), India (26.1%) and China (11.9%).

For all OECD countries taken as a whole, the contribution of renewables to total energy supply increased from 4.8% in 1971 to 7.6% in 2010. In general the contribution of renewables to the energy supply in non-OECD countries is higher than in OECD countries, with the world average percentage of total energy supplied from renewable energy (OECD and non-OECD countries combined) at 13.1% in 2010.

debate
There is an ongoing renewable energy debate. The debate on biofuels includes the food vs. fuel dilemma and the indirect land use change impacts of biofuels. The debate on hydropower installations usually revolves around the footprint of dam floodplains; for example the Three Gorges Dam, the largest renewable electricity source in the world, displaced 1.3 million people and garnered environmental criticism. The debate on new renewables, such as wind and solar, is usually much less intense, and focuses on their intermittent supply of electricity, higher cost of electricity, and a small but growing community opposition to the industrialized footprint of large solar and wind farm installations. Renewable energy accidents with large losses of life include the Banqiao dam failure and the inhalation of particulate matter from the burning of biomass.

Biomass on the renewable energy page
The proportion of truly renewable biomass in use is uncertain. For example peat, one of the largest sources of biomass, is sometimes regarded as a renewable source of energy; however, because peat's extraction rate in industrialized countries far exceeds its slow regrowth rate of 1 mm per year, and because regrowth reportedly takes place in only 30-40% of peatlands, there is considerable controversy over this renewable classification. Organizations tasked with assessing climate change mitigation methods differ on the subject: the UNFCCC classifies peat as a fossil fuel due to the thousand-plus years it takes peat to re-accumulate after harvesting, and another organization affiliated with the United Nations has also classified peat as a fossil fuel. The Intergovernmental Panel on Climate Change (IPCC), however, has begun to classify peat as a "slow-renewable" fuel, the classification also used by many in the peat industry. Further controversy surrounding the classification of all biomass as "renewable" centers on the fact that, depending on the plant source, it can take from 2 to 100 years for different sources of plant energy to regrow - compare fast-growing switchgrass with slow-growing trees. Because of the high emission intensity of plant material, researchers have therefore suggested that if a biomass source takes longer than 20 years to regrow, it should not be regarded as renewable from a climate change mitigation standpoint.

Solar power edit
Thermoelectric, or "thermovoltaic", devices convert a temperature difference between dissimilar materials into an electric current. First proposed as a method of harnessing solar energy by solar pioneer Mouchot in the 1800s, thermoelectrics reemerged in the Soviet Union during the 1930s: under the direction of Soviet scientist Abram Ioffe, a concentrating system was used to thermoelectrically generate power for a 1 hp engine. Thermogenerators - in the following cases powered instead by the decay heat of plutonium-238 in radioisotope thermoelectric generators - are used in the US space program as an energy conversion technology for powering deep space missions such as the Mars Curiosity rover, Cassini, Galileo and Viking. Research on thermogenerators, which can use any heat source, is focused on raising the efficiency of these devices from 7–8% to 15–20%.

Wind power, birds, material intensity & actual CO2 savings
Birds. To start with, here's a sad story that sadly also featured in the Telegraph "paper" in June 2013. The White-throated Needletail - the world's fastest-flying bird - drew a big crowd of ornithologists in Britain as an opportunity to see the rare bird, which had only just arrived on the island; they then allegedly saw the bird killed by a wind turbine.

Page 81 has public questions about proposed wind farms; some are more plausible than others. Worth taking a look at.

Nuclear safety


originally for the article but not now, intro, if you reinsert.

To frame nuclear power together with other low carbon sources of dependable power: catastrophic terrorist attacks are also conceivable in hydroelectric dam scenarios and, depending on location, could result in a death toll comparable to the worst conceivable nuclear attack. Furthermore, no terrorist attack on a nuclear plant has ever succeeded in breaching the reactor. Attacks on hydroelectric plants, on the other hand, have succeeded in compromising dams, with the 2010 Baksan hydroelectric power station attack being the most recent successful terrorist attack on a hydroelectric power plant.

nuclear power development and safety
The Three Mile Island accident's molten core got exactly 15 millimeters on its way to "China" before the core froze at the bottom of the reactor pressure vessel.

Unlike the 1979 Three Mile Island accident, the much more serious 1986 Chernobyl accident did not increase regulations affecting Western reactors, because the RBMK reactor that caused the accident was of a design incomparable to all Western reactors: graphite moderated and water cooled, with the water also acting somewhat as a neutron poison. This combination of graphite moderation and water cooling is not found in any other reactor design in the world. Due largely to these material selections, the Chernobyl RBMK design, used only in the former Soviet Union, is unlike all Western designs, which as of 2013 are primarily light water reactors. Precipitating the Chernobyl accident were the reactor design and specific material selections, which resulted in what is known as a large positive void coefficient: a nuclear engineering term meaning that fission reactions, and therefore the heat they produce, increase rapidly when coolant is lost. This is in direct contrast to the negative void coefficient of most Western designs, where fission reactions cease when coolant is lost. Along with these reactor design flaws, the RBMK also had a slow SCRAM speed, lacked the "robust" containment buildings built as standard in all U.S. designs, and lacked a 6-inch-thick steel reactor vessel. Approximately 10 of these RBMK reactors are still in use as of 2013. However, changes were made in the former Soviet Union following the accident and the end of the Cold War: improvements both in the reactors themselves (use of a safer enrichment of uranium) and in the control system (prevention of disabling safety systems), among other things, to reduce the possibility of a duplicate accident, with some RBMKs now in operation reportedly achieving a negative void coefficient.

to be added.

Lancet. 1988 Nov 19;2(8621):1185-6. International Physicians for the Prevention of Nuclear War: Messiahs of the nuclear age

Leaders of the Nobel Peace Prize winning group International Physicians for the Prevention of Nuclear War (IPPNW) claim that their struggle against the nuclear threat may be ‘one of the significant contributions of our profession to the survival of humankind’ (Lown, B., ‘Looking back, seeing ahead’, Lancet, 1988; ii: 203-4). Citing their ‘unique knowledge and expertise’ as qualifications for working for the abolition of nuclear weapons, IPPNW urges physicians to educate the public about nuclear war and to offer sound prescriptions for nuclear war prevention (Lown, B., ‘Looking back, seeing ahead’, Lancet, 1988; ii: 203-4).

In science, good intentions and noble sentiments do not exempt one's work from critical scrutiny. Because the advocacy of IPPNW is cloaked in scientific authority, it should be (but rarely is) subjected to the usual rigors of scientific criticism.

IPPNW has indeed played a major role in educating the public about nuclear war, and consequently in gaining widespread acceptance of fallacious beliefs, some of which are repeated in the Lancet (Lown, B., ‘Looking back, seeing ahead’, Lancet, 1988; ii: 203-4). For example, Lown speaks of nuclear winter as a “discovery” rather than as a hypothesis. IPPNW has pointedly ignored the criticism (Penner, J. E., ‘Uncertainties in the smoke source term for “nuclear winter” studies’, Nature, 1986; 324: 222-226; Seitz, R., ‘Siberian fire as “nuclear winter” guide’, Nature, 1986; 323: 116-117; Seitz, R., ‘In from the cold: “nuclear winter” melts down’, National Interest, 1986; 2(1): 3-17; Chester, C. V., et al., ‘A preliminary review of the TTAPS nuclear winter scenario’, Oak Ridge, TN: Oak Ridge National Laboratory, 1984, report ORNL/TM-9223) of the original nuclear winter report, as well as the later, more sophisticated studies that have debunked the doomsday scenario ...

In referring to the Chernobyl disaster, Lown (Lown, B., ‘Looking back, seeing ahead’, Lancet, 1988; ii: 203-4) states that the odds of a meltdown were estimated to be 1 in 10,000 years, according to Soviet Life. (A mere meltdown would have been a trivial event in comparison with the graphite-fueled fire that actually occurred.) Yet American engineers recognized the danger of reactors with a positive void coefficient (like the Chernobyl reactor) as early as 1950 (Teller, E., ‘Better a shield than a sword: perspectives on defense and technology’, New York: Free Press, 1987). '''Why did the Soviets choose an unsafe design for a reactor built quite recently? One possible explanation is that such reactors can be refueled while in operation, permitting the production of weapons-grade plutonium as a byproduct (Cohen, B. L., ‘The nuclear reactor accident at Chernobyl, USSR’, Am. J. Phys, 1987; 55: 1076-1083).'''

uranium mining and nuclear spill comparable to three mile island in curies released.
Check the references.

Conventional Uranium mining via opencast mining and underground mining is largely being replaced by In-situ leaching technology, a method of extraction that does not produce the same occupational hazards, or mine tailings, as conventional mining.

Although the nuclear industry did rely on poorly ventilated uranium mining practices in times past - essentially before the advent of commercial nuclear power in the late 1960s - with a non-zero number of accidents and fatalities, primarily due to radon inhalation, physical mining of uranium has increasingly been replaced with in-situ leaching extraction technology, and regulation is in place to ensure the use of high-volume ventilation in confined-space uranium mining, with both largely eliminating occupational exposure and mining deaths. Moreover, all historic uranium mining deaths are negligible contributors to nuclear power's fatality rate per unit of energy generated, as this is dominated by a single event, the Chernobyl disaster.

Reactor-grade plutonium
Recent edits by I.
 * Reactor-grade plutonium is found in spent nuclear fuel that a nuclear reactor has irradiated (burned up) for years before removal, in contrast to the low burnup of weeks or months commonly required to produce weapons-grade plutonium. The long time in the reactor (high burnup) that yields reactor-grade plutonium leads to transmutation of much of the fissile, relatively long half-life isotope 239Pu into a number of other isotopes of plutonium that are less fissile or more radioactive.


 * Thermal-neutron reactors (today's nuclear power stations) can reuse reactor-grade plutonium only to a limited degree, as MOX fuel, and only for a second cycle; fast-neutron reactors, of which fewer than a handful operate today, can use reactor-grade plutonium, or any other actinide material, indefinitely, as a means to reduce the transuranium content of spent nuclear fuel.


 * The degree to which typical high-burnup reactor-grade plutonium is less useful than weapons-grade plutonium for building nuclear weapons is somewhat debated. Many sources argue that the maximum probable yield would border on a fizzle, in the range of 0.2 to 2 kilotons in a Fat Man type device - and that assumes the non-trivial issue of heat generation from the higher content of non-weapons-usable Pu-238 could be overcome, as premature initiation from the spontaneous fission of Pu-240 would ensure a low explosive yield in such a device. Surmounting both issues has been described as a "daunting" hurdle for a Fat Man era implosion design.
 * Others disagree on theoretical grounds, arguing that it would be "relatively easy" for a well funded entity with access to fusion-boosting tritium and expertise to overcome the problem of predetonation created by Pu-240, and that a remote manipulation facility could be used to assemble the highly radioactive, gamma-ray-emitting bomb components, coupled with a means of cooling the pit during storage to prevent the plutonium charge it contains from melting, and a design that keeps the implosion mechanism's high explosives from being degraded by the pit's heat.


 * No information in the public domain suggests that any well funded entity has ever achieved, or seriously pursued, a nuclear weapon with the same isotopic composition as modern high-burnup reactor-grade plutonium. All nuclear weapon states have taken the more conventional path to nuclear weapons: uranium enrichment, and production of low-burnup weapons-grade plutonium in reactors capable of operating as production reactors. The isotopic content of reactor-grade plutonium created by the most common commercial power reactor design, the pressurized water reactor, has never been directly considered for weapons use.


 * As of April 2012 there were thirty-one countries with civil nuclear power plants, nine of which have nuclear weapons, and almost every nuclear weapons state began by producing weapons rather than commercial nuclear power plants. The re-purposing of civilian nuclear industries for military purposes would be a breach of the Non-Proliferation Treaty.


 * (break in text not altered by I) For example, a generic pressurized water reactor's spent nuclear fuel isotopic composition, following a typical Generation II reactor burnup of 45 GWd/MTU, is 1.11% plutonium, of which 0.56 percentage points are Pu-239 - corresponding to a Pu-239 content of 50.5% of the plutonium.


 * The odd-numbered fissile plutonium isotopes present in spent nuclear fuel, such as Pu-239, decrease significantly as a percentage of the total composition of all plutonium isotopes (which was 1.11% of the fuel in the above example) as higher and higher burnups take place, while the non-fissile plutonium isotopes, e.g. Pu-238 and Pu-240, all increase in percentage.
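The percentage arithmetic in the burnup example above can be checked in a few lines. This is a minimal sketch: the `fissile_fraction` helper is my own, and the composition figures are the 45 GWd/MTU values quoted above.

```python
# Illustrative check of the quoted spent-fuel composition figures:
# a generic PWR at 45 GWd/MTU burnup, where 1.11% of the fuel is
# plutonium and 0.56 percentage points of that are Pu-239.

def fissile_fraction(total_pu_pct, pu239_pct):
    """Return Pu-239 as a percentage of all plutonium isotopes."""
    return 100.0 * pu239_pct / total_pu_pct

pu239_share = fissile_fraction(total_pu_pct=1.11, pu239_pct=0.56)
print(f"Pu-239 fraction of total plutonium: {pu239_share:.1f}%")  # ~50.5%
```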

Design and construction of nuclear explosives based on normal reactor-grade plutonium is difficult and unreliable, but was demonstrated in 1962 with plutonium from Magnox reactors.
 * nuclear terrorism target (of high-burnup plutonium).
 * Aum Shinrikyo, which succeeded in developing Sarin and VX nerve gas, is regarded as having lacked the technical expertise to develop, or steal, a nuclear weapon. Similarly, Al Qaeda was exposed to numerous scams involving the sale of radiological waste and other non-weapons-grade material, with this experience possibly leading terrorists to conclude that nuclear acquisition is too difficult and too costly to be worth pursuing.

Much popular concern about possible weapons proliferation arises from considering the fissile materials themselves. For instance, in relation to the plutonium contained in spent fuel discharged each year from the world's commercial nuclear power reactors, it is correctly but misleadingly asserted that "only a few kilograms of plutonium are required to make a bomb". Furthermore, no nation is without enough indigenous uranium to construct a few weapons (however, that uranium would have to be enriched).

Plutonium is a substance of varying properties depending on its source. It consists of several different isotopes, including Pu-238, Pu-239, Pu-240, and Pu-241. All of these are plutonium but not all are fissile – only Pu-239 and Pu-241 can undergo fission in a normal reactor. Plutonium-239 by itself is an excellent nuclear fuel. It has also been used extensively for nuclear weapons because it has a relatively low spontaneous fission rate and a low critical mass. Consequently, plutonium-239, with only a few percent of the other isotopes present, is often called "weapons-grade" plutonium. This was used in the Nagasaki bomb in 1945 and in many other nuclear weapons.

On the other hand, "reactor-grade" plutonium, as routinely produced in all commercial nuclear power reactors and separable by reprocessing their spent fuel, is not the same thing at all. It contains a large proportion - up to 40% - of the heavier plutonium isotopes, especially Pu-240, because it has remained in the reactor for a relatively long time. This is not a particular problem for re-use of the plutonium in mixed oxide (MOX) fuel for reactors, but it seriously affects the suitability of the material for nuclear weapons. Because of the spontaneous fission of Pu-240, only a very low level of it is tolerable in material for making weapons. Design and construction of nuclear explosives based on normal (i.e. routinely discharged) reactor-grade plutonium would be difficult and unreliable, and has not so far been done. A nuclear device has, however, been made from low-burnup plutonium from a Magnox reactor; it was tested in 1962. Its composition was never officially released but was evidently around 80 to 90% fissile Pu-239. This method of production was very expensive, unreliable and easily detectable (the fuel has to stay in the reactor for a relatively short period, a few weeks, as opposed to a few years in normal use), and gave a relatively small yield. All these factors contributed to the fact that, apart from the test device used in 1962, no new ones were created.

Weapons-usable definition change: after the Magnox-sourced plutonium test of 1962, weapons-usable plutonium now has to have a purity of more than ~80% Pu-239. http://permanent.access.gpo.gov/websites/osti.gov/www.osti.gov/html/osti/opennet/document/press/pc29.html

See also Nth Country Experiment which was concerned with basic weapon design rather than procuring fissile material.

So can reactor-grade plutonium be used in an explosive device? Theoretically, the answer appears to be yes - but it depends on the isotopic quality of the 'reactor-grade plutonium' in question, and on whether you regard an explosive yield of just 1 kiloton as worthy of the same classification as real nuclear explosives. Los Alamos weighs in on the possibility of achieving a 1 kiloton nuclear explosive with reactor-grade plutonium - http://www.nci.org/NEW/NT/rgpu-mark-90.pdf - J. Carson Mark of Los Alamos presents a technical breakdown of alpha rate and predetonation (due to Pu-240 contamination) in reactor-grade plutonium. Conclusion: reactor-grade plutonium is theoretically capable of producing a low-yield device of ~1 kiloton. The paper also discusses the probability of such a yield event occurring.

Similarly, Richard Garwin has stated that it is theoretically possible to make a fizzle weapon with 'reactor-grade plutonium' of ~70% Pu-239. http://www.fas.org/rlg/980826-pu.htm The critical mass of plutonium sourced from highly irradiated spent fuel from a normal pressurized water reactor or boiling water reactor, operating at 43,000 megawatt-days per tonne of fuel, is 13 kg - only 30% greater than the 10 kg critical mass of weapon-grade plutonium (not the 100 kg that would be a factor of 10, or "an order of magnitude", greater). I wonder how much higher the burnup is in an EPR Gen III power reactor, as it reportedly achieves a higher burnup than the Gen II reactors Garwin was referring to. A higher burnup, in megawatt-days per tonne of fuel, would translate into an even lower quality of plutonium for weapons needs - that is, an even lower percentage of Pu-239 in the plutonium contained in the spent fuel. This is good from a nuclear power non-proliferation standpoint, and a further headache for any would-be terrorist, as the plutonium would have an even higher amount of undesirable isotopes - Pu-240 etc.

Having read all the above, it must be said that (1) no one has apparently ever succeeded in turning even Gen II PWR or BWR spent fuel into a nuclear explosive, and (2) one major thing I think all of the above well-experienced weapons designers are overlooking is the 'how' and 'why' of a terrorist group (A) getting away with, and (B) wanting to steal, tens of tons of spent nuclear fuel - which (C), let's not forget, is a high-profile event, with the payoff being nothing more than a ~1000 tons of TNT equivalent explosion. For example, if I were a terrorist and I wanted a 1 kiloton yield device, I'd simply manufacture (or steal from a factory or trade ship) 1000 tons of TNT, or ~600 tons of dynamite, or ~1200 tons of ANFO (see Relative effectiveness factor). That would probably be more reliable, easier, and more clandestine than trying my hand at stealing the required tons of spent fuel (most definitely getting caught in the process and irradiating myself), and/or killing myself reprocessing reactor-grade spent fuel and then fabricating cores from hot reactor-grade plutonium. So yes, it's theoretically possible to make a fizzle-producing explosive from reactor-grade plutonium, but why anyone would try it is anyone's guess, when there are more reliable alternatives for the low-yield payoff in question. There are easier ways to get a meager 1 kiloton device; simple cost-benefit analysis suggests they'd take the conventional explosive route.
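The conventional-explosive equivalences above follow from the relative effectiveness (RE) factor: mass needed = TNT-equivalent yield / RE. A minimal sketch, assuming an approximate handbook RE of ~0.82 for ANFO (the helper function is my own):

```python
# Back-of-envelope conversion: tons of an explosive needed to match a
# given TNT-equivalent yield, via its relative effectiveness (RE) factor.
# The RE value for ANFO (~0.82) is an approximate handbook figure.

def mass_for_tnt_equivalent(yield_tnt_tons, re_factor):
    """Tons of explosive needed to match `yield_tnt_tons` of TNT."""
    return yield_tnt_tons / re_factor

anfo_tons = mass_for_tnt_equivalent(1000, re_factor=0.82)
print(f"ANFO for a 1 kt TNT-equivalent blast: ~{anfo_tons:.0f} tons")  # ~1220
```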

Some may retort: well, the reactor-grade plutonium weapon could be boosted with tritium and deuterium, and the yield could then be something you could reliably shake a stick at - 10+ kilotons. But how is a terrorist organization going to both successfully steal reactor-grade plutonium and then also get hold of enough tritium to make this possible? They would need to know in advance when to assault a military reactor (or CANDU) while the reactor operators are in the process of moving the tritium they produce. [These reactors usually irradiate lithium targets or deuterium to produce tritium; as this production process takes a while to build up amounts of tritium worth extracting, assaulting the reactor at the wrong time will leave the terrorists pretty empty-handed.] In this scenario the terrorists would also have to successfully assault a military reactor, or CANDU, at just the right moment, when the tritium is being moved off site. As you think about it, this whole thing becomes one giant Rube Goldberg fantasy scenario. Every step in the chain of events necessary for this to succeed is so unlikely that it's like Alice peering through the looking glass. Safeguards against all these things are in place. Even the relatively sophisticated, massive, and well funded Aum Shinrikyo group took the 'road most traveled' in its attempt at attaining a nuclear weapon for this reason: they recognized that trying to steal reactor-grade plutonium etc. was just not likely to succeed, and shifted focus.

The whole scenario is pretty far-fetched, to say the least. Reenactments of the following are far more likely, and need more safeguards against them: the Texas City disaster, and LNG supertanker attacks. http://www.cfr.org/port-security/liquefied-natural-gas-potential-terrorist-target/p9810 & Ocala Star-Banner, June 5, 1975, supertanker explosion discussion. http://news.google.com/newspapers?nid=1356&dat=19750608&id=WMwwAAAAIBAJ&sjid=CAYEAAAAIBAJ&pg=4802,1459295
 * http://www.rand.org/pubs/research_briefs/RB165/index1.html

Spontaneous fission rates

In practice 239Plutonium will invariably contain a certain amount of 240Plutonium due to the tendency of 239Plutonium to absorb an additional neutron during production. 240Plutonium's high rate of spontaneous fission events makes it an undesirable contaminant. Weapons-grade plutonium contains no more than 7.0% 240Plutonium.

The rarely used gun-type atomic bomb has a critical insertion time of about one millisecond, and the probability of a fission during this time interval must be small; therefore, only 235Uranium is suitable. Almost all nuclear bombs use some kind of implosion method.
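The insertion-time problem can be made concrete with some rough arithmetic. All figures here are illustrative assumptions on my part - a ~6 kg pit at the ~7% Pu-240 weapons-grade limit noted above, a Pu-240 spontaneous-fission neutron rate of very roughly 1,000 neutrons per gram-second, and a ~1 ms insertion time - not sourced values:

```python
# Rough illustration of why gun-type assembly is unsuitable for plutonium.
# All numbers are approximate assumptions for illustration only.

pit_mass_g = 6000          # assumed pit mass, grams
pu240_fraction = 0.07      # weapons-grade upper limit for Pu-240
sf_neutrons_per_g_s = 1e3  # very rough Pu-240 spontaneous-fission neutron rate
insertion_time_s = 1e-3    # gun-type critical insertion time, ~1 ms

pu240_g = pit_mass_g * pu240_fraction
expected_neutrons = pu240_g * sf_neutrons_per_g_s * insertion_time_s
print(f"Expected stray neutrons during insertion: ~{expected_neutrons:.0f}")
# Hundreds of stray neutrons during the slow insertion virtually guarantee
# predetonation, which is why plutonium weapons use implosion instead.
```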

Spontaneous fission can occur much more rapidly when the nucleus of an atom undergoes superdeformation.

Spontaneous fissions release neutrons as all fissions do, so if a critical mass is present, a spontaneous fission can initiate a self-sustaining chain reaction. Also, radioisotopes for which spontaneous fission is not negligible can be used as neutron sources. For example, californium-252 (half-life 2.645 years, SF branch ratio about 3.1 percent) can be used for this purpose. The neutrons released can be used to inspect airline luggage for hidden explosives; to gauge the moisture content of soil in highway and building construction; or to measure the moisture of materials stored in silos, for example.

As long as the spontaneous fission gives a negligible reduction of the number of nuclei that can undergo such fission, this process can be approximated closely as a Poisson process. In this situation, for short time intervals the probability of a spontaneous fission is directly proportional to the length of time.
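A minimal sketch of the Poisson approximation just described, with an arbitrary illustrative rate: the exact probability of at least one fission in time t is 1 - exp(-lam*t), which reduces to lam*t when the interval is short.

```python
import math

# For a constant spontaneous-fission rate lam (events per second), the
# probability of at least one fission in an interval t is 1 - exp(-lam*t),
# approximately lam*t when lam*t is small. The rate here is arbitrary.

lam = 0.5  # spontaneous fissions per second (illustrative)
t = 0.01   # interval length, seconds

p_exact = 1 - math.exp(-lam * t)
p_linear = lam * t
print(f"exact: {p_exact:.6f}, linear approx: {p_linear:.6f}")
# For lam*t = 0.005 the two agree to within about 0.25%.
```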

The spontaneous fission of uranium-238 and uranium-235 does leave trails of damage in the crystal structure of uranium-containing minerals when the fission fragments recoil through them. These trails, or fission tracks, are the foundation of the radiometric dating method called fission track dating.

Nuclear Waste Policy Act & Price-Anderson Act
The Price–Anderson Act (Price–Anderson Nuclear Industries Indemnity Act) is a big pot of money, paid by reactor operators from each unit of energy sold into the market, that goes into a government/insurance pool to cover claims in the event of an accident. "The costs of this insurance, like many costs of nuclear generated electricity, are borne by the industry, unlike the corresponding costs of some other power sources. Costs from hydropower mishaps, such as dam failure and resultant flooding, for example, are borne directly by the public. The 1977 failure of the Teton Dam in Idaho caused $500 million in property damage, but the only compensation provided to those affected was about $200 million in low-cost government loans.

The public has paid nothing under the Price-Anderson framework, while insurance pools have paid roughly $200 million in claims, and the nuclear power industry has paid $21 million to the federal government in indemnity fees.

The act has proven so successful that Congress has used it as a model for legislation to protect the public against potential losses or harm from other hazards, including faulty vaccinations, medical malpractice and toxic waste."


 * The Nuclear Waste Policy Act article has a slant, and needs work. The Nuclear Waste Fund is there to pay for the eventual recycling, or permanent to interim storage, of spent nuclear fuel. "The Fund receives almost $750 million in fee revenues each year (from reactor operators) and has an unspent balance of $25 billion." However, according to the Draft Report by the Blue Ribbon Commission on America's Nuclear Future, actions by both Congress and the Executive Branch have made the money in the fund effectively inaccessible for serving its original purpose. The commission made several recommendations on how this situation may be corrected.

nuclear weapon
Antimatter, which consists of particles resembling ordinary matter particles in most of their properties but having opposite electric charge, has been considered as a trigger mechanism for nuclear weapons. A major obstacle is the difficulty of producing antimatter in large enough quantities, and there is no evidence that it is feasible beyond the military domain. However, the U.S. Air Force funded studies of the physics of antimatter in the Cold War, and began considering its possible use in weapons, not just as a trigger, but as the explosive itself. A fourth generation nuclear weapon design is related to, and relies upon, the same principle as Antimatter-catalyzed nuclear pulse propulsion.

Nuclear strategy
Another deterrence position in nuclear strategy is that nuclear proliferation can be desirable. This view argues that, unlike conventional weapons, nuclear weapons successfully deter all-out war between states, and that they succeeded in doing this during the Cold War between the U.S. and the Soviet Union. In the late 1950s and early 1960s, Gen. Pierre Marie Gallois of France, an adviser to Charles de Gaulle, argued in books like The Balance of Terror: Strategy for the Nuclear Age (1961) that mere possession of a nuclear arsenal, what the French called the force de frappe, was enough to ensure deterrence, and thus concluded that the spread of nuclear weapons could increase international stability. Some very prominent neo-realist scholars, such as the late Kenneth Waltz, formerly a professor of Political Science at UC Berkeley and Adjunct Senior Research Scholar at Columbia University, and John Mearsheimer of the University of Chicago, have also argued along the lines of Gallois. Specifically, these scholars have advocated some forms of nuclear proliferation, arguing that it would decrease the likelihood of total war, especially in troubled regions of the world where a single, unipolar nuclear weapon state exists. Aside from public opinion, which opposes proliferation in any form, there are two schools of thought on the matter: those, like Mearsheimer, who favor selective proliferation, and those, like Kenneth Waltz, who were somewhat more non-interventionist.

The threat of potentially suicidal terrorists possessing nuclear weapons (a form of nuclear terrorism) complicates the decision process. The prospect of mutually assured destruction may not deter an enemy who expects to die in the confrontation. Further, if the initial act comes from a stateless terrorist instead of a sovereign nation, there is no fixed nation and there are no fixed military targets to retaliate against. It has been argued, by the New York Times among others, especially after the September 11, 2001 attacks, that this complication is the sign of the next age of nuclear strategy, distinct from the relative stability of the Cold War. In 1996, the United States adopted a policy of allowing the targeting of its nuclear weapons at terrorists armed with weapons of mass destruction.

Robert Gallucci, president of the John D. and Catherine T. MacArthur Foundation, argues that although traditional deterrence is not an effective approach toward terrorist groups bent on causing a nuclear catastrophe, "the United States should instead consider a policy of expanded deterrence, which focuses not solely on the would-be nuclear terrorists but on those states that may deliberately transfer or inadvertently leak nuclear weapons and materials to them. By threatening retaliation against those states, the United States may be able to deter that which it cannot physically prevent."

Graham Allison makes a similar case, arguing that the key to expanded deterrence is coming up with ways of tracing nuclear material to the country that forged the fissile material. "After a nuclear bomb detonates, nuclear forensics cops would collect debris samples and send them to a laboratory for radiological analysis. By identifying unique attributes of the fissile material, including its impurities and contaminants, one could trace the path back to its origin." The process is analogous to identifying a criminal by fingerprints. "The goal would be twofold: first, to deter leaders of nuclear states from selling weapons to terrorists by holding them accountable for any use of their own weapons; second, to give leaders every incentive to tightly secure their nuclear weapons and materials."

Costs and technology spin-offs


According to an audit by the Brookings Institution, between 1940 and 1996 the U.S. spent $ in present-day terms on nuclear weapons programs, 57 percent of which went to building nuclear weapons delivery systems. 6.3 percent of the total, in present-day terms, was spent on environmental remediation and nuclear waste management (for example, cleaning up the Hanford site), and 7 percent of the total was spent on making the nuclear weapons themselves.



Strictly speaking, however, not all of this 57 percent was spent solely on weapons-program delivery systems. For example, two such delivery systems, the Atlas ICBM and the Titan II, were re-purposed as launch vehicles for manned spaceflight, used in the civilian Project Mercury and Project Gemini programs respectively, which are regarded as stepping stones in the evolution of US manned spaceflight. The Atlas vehicle sent John Glenn, the first American in orbit, into space. Similarly, in the Soviet Union it was the R-7 ICBM/launch vehicle that placed the first artificial satellite, Sputnik, in space on 4 October 1957, and the first human spaceflight in history was accomplished on a derivative of the R-7, the Vostok, on 12 April 1961, by cosmonaut Yuri Gagarin. A modernized version of the R-7 is still in use as the launch vehicle of the Russian Federation, launching the Soyuz spacecraft.

The first true weather satellite, TIROS-1, was launched on a Thor-Able launch vehicle on April 1, 1960. The PGM-17 Thor was the first operational IRBM (intermediate-range ballistic missile) deployed by the U.S. Air Force (USAF). The Soviet Union's first fully operational weather satellite, Meteor 1, was launched on 26 March 1969 on the Vostok rocket, a derivative of the R-7 ICBM.

WD-40 was first used by Convair to protect the outer skin, and more importantly, the paper thin "balloon tanks" of the Atlas missile from rust and corrosion. These stainless steel fuel tanks were so thin that, when empty, they had to be kept inflated with nitrogen gas to prevent their collapse.

During the development of the submarine-launched Polaris missile, the submarine's location needed to be known accurately to ensure a high circular-error-probable warhead target accuracy. This led the US to develop the Transit system. In 1959, ARPA (renamed DARPA in 1972) also played a role in Transit.



The first satellite navigation system, Transit, used by the United States Navy, was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the ability to place accurate clocks in space, a technology required by the later Global Positioning System. In the 1970s, the ground-based Omega Navigation System, based on phase comparison of signal transmissions from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.

While there were wide needs for accurate navigation in military and civilian sectors, almost none of those needs was seen as justification for the billions of dollars it would cost to research, develop, deploy, and operate a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear-deterrence posture, accurate determination of the SLBM launch position was a force multiplier.

Precise navigation would enable United States submarines to get an accurate fix of their positions before they launched their SLBMs. The USAF, with two thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The Navy and Air Force were developing their own technologies in parallel to solve what was essentially the same problem. To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (such as the Russian SS-24 and SS-25), so the need to fix the launch position was similar to the SLBM situation.

In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN. A follow-on study, Project 57, was conducted in 1963, and it was "in this study that the GPS concept was born". That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS" and promised increased accuracy for Air Force bombers as well as ICBMs. Updates from the Navy's Transit system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory continued advancements with its Timation (Time Navigation) satellites, first launched in 1967, with the third one in 1974 carrying the first atomic clock into orbit.

Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying. The SECOR system included three ground-based transmitters from known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969. Decades later, during the early years of GPS, civilian surveying became one of the first fields to make use of the new technology, because surveyors could reap benefits of signals from the less-than-complete GPS constellation years before it was declared operational. GPS can be thought of as an evolution of the SECOR system where the ground-based transmitters have been migrated into orbit.

Civil engineering and energy production
Apart from their use as weapons, nuclear explosives have been tested and used for various non-military purposes, including large-scale earth moving and the creation of artificial bays. However, because physicists were unable to reduce the fission fraction of the small (approximately 1 kiloton) yield devices that many civil engineering projects would have required, once long-term health and clean-up costs from fission products were included there was virtually no economic advantage over conventional explosives, except potentially for the very largest of projects. In the United States, this work was done under the Operation Plowshare program, which included 27 nuclear tests designed to investigate these possible uses from 1961 through 1973.

The Qattara Depression Project was developed by Professor Friedrich Bassler, who during his appointment to the West German ministry of economics in 1968 put forth a plan to create a Saharan lake and hydroelectric power station by blasting a tunnel between the Mediterranean Sea and the Qattara Depression in Egypt, an area which lies below sea level. The core problem of the entire project was the water supply to the depression. Calculations by Bassler showed that digging a canal or tunnel would be too expensive, so he determined that the use of nuclear explosive devices to excavate the canal or tunnel would be the most economical approach. The Egyptian government declined to pursue the idea.

Nuclear Explosions for the National Economy was a Soviet program that investigated non-weapons uses of nuclear explosions. These included a 30 kiloton explosion used in 1966 to close the Uzbekistani Urtabulak gas well, which had been blowing since 1963, and, a few months later, a 47 kiloton explosive used to seal a higher-pressure blowout at the nearby Pamuk gas field.

Among publicly documented devices, the highest proportion of yield produced via fusion-only reactions was possibly achieved by the Soviet peaceful nuclear explosions of the 1970s, with 98% of their 15 kiloton explosive yield derived from fusion reactions, i.e., a total fission fraction of only 0.3 kilotons in a 15 kt device.

The repeated detonation of nuclear devices underground in salt domes, in a manner somewhat analogous to the explosions that power a car's internal combustion engine (in that both are heat engines), has also been proposed as a means of fusion power, in what is termed PACER. Other investigated uses for peaceful nuclear explosions were underground detonations to stimulate, by a process analogous to fracking, the flow of petroleum and natural gas in tight formations; this was most developed in the Soviet Union, with an increase in the production of many well heads being reported.

Physics
The discovery and synthesis of new chemical elements by nuclear transmutation, and their production in quantities sufficient to study their properties, was carried out in nuclear explosive device testing. For example, the short-lived einsteinium and fermium, both created under the intense neutron flux within thermonuclear explosions, were discovered following the first Teller-Ulam thermonuclear device test, Ivy Mike. The rapid capture of so many neutrons required in the synthesis of einsteinium provided direct experimental confirmation of the so-called r-process: the multiple neutron absorptions, followed by beta decay, needed to explain the cosmic nucleosynthesis of chemical elements heavier than nickel in supernova explosions. The r-process explains the existence of many stable heavy elements in the universe.

In 2008 the worldwide presence of new isotopes from atmospheric testing, which began in the 1950s, was developed into a reliable way of detecting art forgeries, as all paintings created after that period may contain traces of caesium-137 and strontium-90, isotopes that were not present in nature before 1945. (Fission products like Cs-137 and Sr-90 did occur in nature long before 1945, being produced in the natural nuclear fission reactor at Oklo, but almost all traces of them had decayed away before the rise of even the earliest known human painting.)

Both climatology and particularly aerosol science, a subfield of atmospheric science, were largely created to answer the question of how far and wide fallout would travel. Similar to radioactive tracers used in hydrology and materials testing, fallout and the neutron activation of nitrogen gas served as a radioactive tracer that was used to measure and then help model global circulations in the atmosphere by following the movements of fallout aerosols.

After the Van Allen Belts surrounding Earth were reported in 1958, James Van Allen suggested that a nuclear detonation would be one way of probing the magnetic phenomenon. Data obtained from the August 1958 Project Argus test shots, a high-altitude nuclear explosion investigation, were vital to the early understanding of the Earth's magnetosphere. Soviet nuclear physicist and Nobel Peace Prize recipient Andrei Sakharov also proposed that earthquakes could be mitigated and particle accelerators could be made by utilizing nuclear explosions, the latter by connecting a nuclear explosive device with another of his inventions, the explosively pumped flux compression generator, to accelerate protons to collide with each other and probe their inner workings, an endeavor now pursued at much lower energy levels with non-explosive superconducting magnets at CERN. Sakharov suggested replacing the copper coil in his MK generators with a large superconducting solenoid to magnetically compress and focus underground nuclear explosions into a shaped-charge effect. He theorized this could focus 10²³ positively charged protons per second on a 1 mm² surface, and envisaged making two such beams collide in the form of a supercollider.

Underground nuclear explosive data from Peaceful nuclear explosion test shots have been used to investigate the composition of the earth's mantle, analogous to the exploration geophysics practice of mineral prospecting with chemical explosives in "deep seismic sounding" reflection seismology.

Project A119, proposed in the 1960s, would, as Apollo scientist Gary Latham explained, have involved detonating a "smallish" nuclear device on the Moon in order to facilitate research into its geologic make-up. It was analogous in concept to the comparatively low yield explosion created by the water-prospecting Lunar Crater Observation and Sensing Satellite (LCROSS) mission, which launched in 2009 and released the "Centaur" kinetic energy impactor, with a mass of 2,305 kg (5,081 lb) and an impact velocity of about 9,000 km/h, releasing the kinetic energy equivalent of detonating approximately 2 tons of TNT (8.86 GJ).

Propulsion use
Although it likely never achieved orbit due to aerodynamic drag, the first macroscopic object propelled to Earth orbital velocity was a steel "manhole cover" launched by the underground test shot Pascal-B in August 1957, before Sputnik, which did obtain orbital velocity and successfully became the first satellite in October 1957. The use of a subterranean shaft and nuclear device to propel an object to escape velocity has since been termed a "thunder well".

The direct use of nuclear explosives, by using the impact of propellant plasma from a nuclear shaped charge acting on a pusher plate, has also been seriously studied as a potential propulsion mechanism for space travel (see Project Orion).

In the United States, Edward Teller proposed the use of a nuclear detonation to power an explosively pumped soft X-ray laser as a component of a ballistic missile defense shield; it would destroy missile components by transferring momentum to the vehicle's surface through laser ablation. This ablation process is one of the damage mechanisms of a laser weapon, but it is also the basis of pulsed laser propulsion for spacecraft.

In 2000, ground flight testing by Professor Leik Myrabo, using a non-nuclear, conventionally powered pulsed-laser test-bed, successfully lifted a lightcraft 72 meters in altitude by a method similar to ablative laser propulsion.

A powerful solar-system-based soft X-ray to ultraviolet laser system has been calculated to be capable of propelling an interstellar spacecraft, by the light sail principle, to 11% of the speed of light. In 1972 it was also calculated that a 1-terawatt, 1-km-diameter X-ray laser with a 1-angstrom wavelength, impinging on a 1-km-diameter sail, could propel a spacecraft to Alpha Centauri in 10 years.
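The photon-pressure arithmetic behind such laser-sail proposals is straightforward: a perfectly reflecting sail intercepting beam power P feels a force of 2P/c. A minimal sketch (my own illustration; the reflectivity parameter is an assumption, not from the source):

```python
# Photon-pressure thrust on a light sail: F = (1 + R) * P / c,
# where R is the sail reflectivity (R = 1 for a perfect reflector).
C = 299_792_458.0  # speed of light, m/s

def sail_thrust(power_w: float, reflectivity: float = 1.0) -> float:
    """Thrust in newtons on a sail intercepting power_w watts of beam power."""
    return (1.0 + reflectivity) * power_w / C

# The 1-terawatt laser mentioned above exerts only a few kilonewtons of thrust,
# which is why such schemes require years of continuous illumination.
thrust = sail_thrust(1e12)
print(f"{thrust / 1e3:.1f} kN")  # ~6.7 kN
```

The tiny force relative to the enormous beam power is the basic reason light-sail concepts demand either huge lasers or very long acceleration times.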

Asteroid impact avoidance
A proposed means of averting an asteroid impact with Earth, assuming short lead times between detection and impact, is to detonate one, or a series of, nuclear explosive devices on, in, or in stand-off proximity to the asteroid. The stand-off method detonates far enough away from the incoming threat to prevent the potential fracturing of the near-Earth object, but still close enough to generate a high-thrust ablation effect, analogous to laser ablation.

A 2007 NASA analysis of impact avoidance strategies using various technologies stated: "Nuclear stand-off explosions are assessed to be 10-100 times more effective than the non-nuclear alternatives analyzed in this study. Other techniques involving the surface or subsurface use of nuclear explosives may be more efficient, but they run an increased risk of fracturing the target NEO [near-Earth object]. They also carry higher development and operations risks."

Analysis of the uncertainty involved in nuclear device asteroid deflection shows that the ability to protect the planet does not imply the ability to also target the planet, an implication that does hold for all non-nuclear alternatives, such as the controversial gravity tractor technology. A nuclear explosion that changed an asteroid's velocity by 10 meters/second (plus or minus 20%) would be adequate to push it out of an Earth-impacting orbit. However, if the uncertainty of the velocity change were more than a few percent, there would be no chance of directing the asteroid to a particular target.
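As a rough illustration of why a 10 m/s change suffices, a back-of-the-envelope model (my own, not from the cited analysis; it ignores orbital mechanics, which generally amplifies the displacement) treats the miss distance as simply the velocity change multiplied by the lead time:

```python
# Crude asteroid-deflection sketch: miss distance ~ delta-v * lead time.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
EARTH_RADIUS_M = 6.371e6

def miss_distance_m(delta_v_mps: float, lead_time_years: float) -> float:
    """Along-track displacement accumulated from a velocity change."""
    return delta_v_mps * lead_time_years * SECONDS_PER_YEAR

# 10 m/s applied 10 years before impact shifts the asteroid by ~3.2e9 m,
# hundreds of Earth radii -- but a +/-20% uncertainty in delta-v smears the
# outcome over a comparable distance, hence "able to deflect, unable to aim".
d = miss_distance_m(10, 10)
print(d / EARTH_RADIUS_M)  # ~495 Earth radii
```

The same arithmetic shows why long lead times relax the required impulse: halving the lead time doubles the delta-v needed for the same miss distance.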

However, if the need arises to use nuclear explosive devices to prevent an asteroid impact event, the effort may face a legal obstacle: the United Nations Committee on the Peaceful Uses of Outer Space and the 1996 Comprehensive Nuclear-Test-Ban Treaty presently, technically, ban nuclear weapons in space.

Non-nuclear explosives in lunar science. For Project A119 page
Project A119, also known as "A Study of Lunar Research Flights", was a top-secret plan developed in 1958 by the United States Air Force. The aim of the project was to detonate a nuclear bomb on the Moon, which would help in answering some of the mysteries of planetary astronomy and astrogeology. Had the explosive device not descended into a lunar crater, the flash of explosive light would have been faintly visible to people on Earth with the naked eye: a show of force intended to boost domestic morale in the capabilities of the United States, a boost that was needed after the Soviet Union, which was working on a similar project of its own, took an early lead in the Space Race.

Neither the Soviet project nor the US Project A119 was ever carried out. They were cancelled primarily out of fear of a negative public reaction and the potential militarization of space that they would have signified, and because a moon landing would undoubtedly be a more popular achievement in the eyes of the American and international public alike.

The existence of the US project was revealed in 2000 by a former executive at the National Aeronautics and Space Administration (NASA), Leonard Reiffel, who led the project in 1958. A young Carl Sagan was part of the team responsible for predicting the effects of a nuclear explosion in low gravity and in evaluating the scientific value of the project. The project documents remained secret for nearly 45 years, and despite Reiffel's revelations, the United States government has never officially recognized its involvement in the study.



A vacuum-stable chemical explosive filled the thumper mortar ammunition canisters used as part of the Apollo Lunar Active Seismic Experiments. These explosive experiments investigated the composition of the lunar mantle during the Apollo Program, analogous to the exploration geophysics practice of mineral prospecting with chemical explosives in "deep seismic sounding" reflection seismology.

The scientific objectives of Project A119, which as Apollo scientist Gary Latham explained would have involved detonating a "smallish" nuclear device (1,700+ tons of TNT equivalent) on the Moon in order to facilitate research into its geologic make-up, have since been attempted by non-nuclear means. For example, the comparatively much lower yield explosion created by the water-prospecting Lunar Crater Observation and Sensing Satellite (LCROSS) mission, which launched in 2009 and released the "Centaur" kinetic energy impactor, with a mass of 2,305 kg (5,081 lb) and an impact velocity of about 9,000 km/h, released the kinetic energy equivalent of detonating approximately 2 tons of TNT (8.86 GJ) on impact.
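The impactor energy figures can be sanity-checked with the standard kinetic energy formula (my arithmetic, not from the source; note that the rounded 9,000 km/h figure yields about 7.2 GJ, so the quoted 8.86 GJ corresponds to an impact speed nearer 2.8 km/s, i.e. roughly 10,000 km/h):

```python
# Sanity check of the LCROSS impactor energy: KE = (1/2) * m * v^2.
TNT_J_PER_TON = 4.184e9  # joules per ton of TNT equivalent

def kinetic_energy_j(mass_kg: float, speed_mps: float) -> float:
    """Kinetic energy in joules of a mass moving at the given speed."""
    return 0.5 * mass_kg * speed_mps ** 2

ke = kinetic_energy_j(2305, 9000 / 3.6)  # 9,000 km/h -> 2,500 m/s
print(ke / 1e9, ke / TNT_J_PER_TON)      # ~7.2 GJ, ~1.7 t TNT equivalent
```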

The question of whether LCROSS would find water had been stated to be influential in whether or not the United States government would pursue creating a Moon base. On November 13, 2009, NASA confirmed that water was detected after the Centaur impacted the crater. The LCROSS "Centaur" kinetic energy impactor was, however, underpowered and therefore only partially successful, having neither produced the expected Earth-visible flash nor succeeded in excavating and vaporizing enough subsurface material for a complete lunar soil spectral analysis, an analysis that would identify the lunar soil composition to a greater depth.

Meltdown
The Three Mile Island accident's molten core got only about 15 millimeters "on its way to China" before the core froze at the bottom of the reactor pressure vessel.

Actual and predicted deaths due to the Chernobyl disaster, the role of the media
http://hyperphysics.phy-astr.gsu.edu/hbase/NucEne/radexp.html See press overreaction to radiation

Following the accident, journalists mistrusted many medical professionals (such as the spokesman from the UK National Radiological Protection Board), and in turn encouraged the public to mistrust them.

Throughout the European continent, in nations where abortion is legal, many requests for induced abortions of otherwise normal pregnancies were made out of fear of radiation from Chernobyl, including an excess number of abortions of healthy human fetuses in Denmark in the months following the accident.

"As the increase in radiation in Denmark was so low that almost no increased risk of birth defects was expected, the public debate and anxiety among the pregnant women and their husbands 'caused' more fetal deaths in Denmark than the accident. This underlines the importance of public debate, the role of the mass media and of the way in which National Health authorities participate in this debate."

In Greece, following the accident, panic and false rumors led many obstetricians initially to think it prudent to interrupt otherwise wanted pregnancies, or left them unable to resist requests from worried pregnant mothers fearing radiation. Within a few weeks misconceptions within the medical profession were largely cleared up, although worries persisted in the general population. Although it was determined that the effective dose to Greeks would not exceed 1 mSv (100 mrem), a dose much lower than that which could induce embryonic abnormalities or other non-stochastic effects, an estimated 2,500 excess terminations of otherwise wanted pregnancies were observed, probably out of maternal fear of some perceived radiation risk.

A "slighty" above the expected number of requested induced abortions occurred in Italy, were upon request, "a week of reflection" and then a 2 to 3 week "health system" delay usually occur before the procedure.

Other conditions (section already present in the article and not by me)
According to Kenneth Mossman, a professor of Health Physics and member of the U.S. Nuclear Regulatory Commission advisory committee, the "LNT philosophy is overly conservative, and low-level radiation may be less dangerous than commonly believed". Yoshihisa Matsumoto, a radiation biologist at the Tokyo Institute of Technology, cites laboratory experiments on animals to suggest there must be a threshold dose below which DNA repair mechanisms can completely repair any radiation damage. Mossman suggests that the proponents of the current model believe that being conservative is justified due to the uncertainties surrounding low level doses and it is better to have a "prudent public health policy".

Another significant issue is establishing consistent data on which to base the analysis of the impact of the Chernobyl accident. Since 1991 large social and political changes have occurred within the affected regions and these changes have had significant impact on the administration of health care, on socio-economic stability, and the manner in which statistical data is collected. Ronald Chesser, a radiation biologist at Texas Tech University, says that "the subsequent Soviet collapse, scarce funding, imprecise dosimetry, and difficulties tracking people over the years have limited the number of studies and their reliability."

By me, Work in progress, Studies on animal and plant deformities


Two studies published in 2010, both with contributions from Wladimir Wertelecki, are credited with creating a new impetus to analyze birth defect rates since 1986 as they relate to the concentration of contaminated areas. Wertelecki suggested that the rise could be linked to continuing exposure to low-level radiation doses, but he has also been quick to point out that the studies do not claim that radiation exposure is definitively the cause of the defects, as they lacked data about prenatal alcohol consumption and the diets of mothers in the region. Both factors are regarded as key to understanding the causes of the defects, since fetal exposure to alcohol and a lack of folates during pregnancy can lead to both of the types of birth defects observed.

Wertelecki et al. followed 344 women in Ukraine and concluded that low-dose radiation exposure "might contribute", in a synergistic manner with alcohol consumption and micronutrient deficiencies, to a higher prevalence of birth defects in areas of Ukraine where the levels of radiation contamination are above those found in other contaminated parts of the country. In Wertelecki's other study, analyzing 96,438 births that occurred in Rivne Province (Oblast) in Ukraine between 2000 and 2006, he concluded that the overall rate of neural tube defects in Rivne is one of the highest found in Europe (22.2 per 10,000 live births).

Igor Pokanevych, of the WHO office in Ukraine, has said that although the WHO welcomes and supports efforts to undertake new studies like Wertelecki's, it notes that studies need to follow the same methodology for any meaningful comparison to be made, and that it stands by the conclusions of the 2005 Chernobyl Forum.

A paper published in 2004 analyzed the prevalence of cleft lip and palate (CLP), which naturally occurs with a frequency of between 1 and 2 cases per 1,000 live births and is among the most frequent congenital anomalies. Records from the former German Democratic Republic (GDR) covering 1967-1989 suggest that the prevalence of CLP in newborns increased by 9.4% (p=0.10) from 1987 to 1989, in comparison to the rate observed prior to 1986, "possibly" caused by Chernobyl.

In contrast, a study published in 1992 which appeared in the American Journal of Obstetrics and Gynecology found that in the 66,743 human births between 1985 and 1989 which were monitored in Austria, no significant changes in the incidence of birth defects were observed. The author also stressed that to observe the teratology potential of low-dose radiation, sufficient baseline data, and highly reliable registries are a necessity.

However, the only large-scale human study, conducted by the World Health Organization, states that "children conceived before or after their father's exposure showed no statistically significant differences in mutation frequencies." This is also supported by the now over half-a-century-long Life Span Study, which has followed the children conceived by parents who were alive at, and survived, Hiroshima and Nagasaki in 1945. Here too no differences were found in the frequencies of birth defects or stillbirths among the later-conceived children of the survivors.

ftp://71.43.176.186/Sort%20Literature%20Review.Data/Kalter_2003_Neurotoxicology-and-Teratology-4229581825/Kalter_2003_Neurotoxicology-and-Teratology.pdf 5.2.4 on page 151 is particularly good, as the authors note that ~1 Gy doses of radiation, such as from being in utero 1 to 2 km from the atomic bombings, are teratogenic, while no nuclear power industry source has been determined to be a teratogen. The pg 175 "no-effect" discussion is also informative about the shape of teratogen dose-response curves in general.

Atomic bomb survivors and the history of teratology of various substances, including ionizing radiation.

From the above teratology paper, I was directed to a 2000 study by Doyle et al. in the Lancet - Fetal death and congenital malformation in babies born to nuclear industry employees: report from the nuclear industry family study. "We found no evidence of a link between exposure to low-level ionising radiation before conception and increased risk of adverse reproductive outcome in men working in the nuclear industry. Similarly for women there was no evidence of an association between monitoring before conception and malformation in offspring." Although I think they are suggesting that stillbirths and miscarriages were higher in women "monitored". http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(00)02812-9/abstract

In terms of radioecology studies in non-mammals, a 1998 paper by Anders Pape Moller on plants concluded that radiation from Chernobyl has reduced the ability of plants to control their developmental processes. Moller et al. in 2007 reported the observance of 11 morphological abnormalities in barn swallow populations around Chernobyl, abnormalities which are less frequently observed in uncontaminated Ukrainian control populations.

However, the barn swallow work and results by Anders Moller and Timothy Mousseau have been questioned by their colleague Sergey Gaschak, and by Smith in 2008, who raised doubts due to the lack of consideration of confounding factors, unclear dosimetry reporting, and what he regarded as an inappropriate grouping of sites for data interpretation; it has also been questioned by Wickliffe and Baker in 2011.

On farms in Narodychi Raion of Ukraine, it is reported that between 1986 and 1990 nearly 350 animals were born with deformities such as missing or extra limbs, missing eyes, heads or ribs, or deformed skulls; in comparison, 3 abnormal births had been registered in the five years prior.

Cover of Nature magazine on the 10-year anniversary of Chernobyl: base-pair substitution rates for the mitochondrial cytochrome b gene of free-living, native populations of voles collected next to reactor 4 at Chernobyl, Ukraine, were estimated by two independent methods to be in excess of 10⁻⁴ nucleotides per site per generation. These estimates are hundreds of times greater than those typically found in mitochondria of vertebrates, suggesting that the environment resulting from this nuclear power plant disaster is having a measurable genetic impact on the organisms of that region. Despite these DNA changes, vole populations thrive and reproduce in the radioactive regions around the Chernobyl reactor.

A 1996 paper in Nature reported an elevated mutational rate in two species of voles living in the Red Forest. Its authors later recounted: "Because we had archived the clones that were sequenced from the gene used in those studies, we were able to exactly replicate our initial studies using automated DNA sequencers that became available to us immediately after the Nature paper was published. When this more accurate sequencing was completed, the data were no longer significant when comparing highly exposed and unexposed samples. After sequencing all of the clones multiple times and repeatedly finding no statistically significant differences, we retracted the Nature paper. We experienced this first-hand. Extensive television and major newspaper coverage attended our 1996 paper that showed a deleterious effect of radiation exposure, but scant attention was paid to our 1997 retraction."

98 to 99 percent of the radiation released has now decayed – good news. The 1 or 2 percent left has long decay rates and will be there for decades yet to come. "After analysis, decoding and comparing results, I was rather surprised to find no difference in the DNA damage between those animals living in the clean area and those living in the exclusion zone."

Baker said the team calculated this particular animal's radiation dose and found it to be the highest of any animals in the area. If any mammal would show signs of mutation or genetic disruption, the bank vole would be the one.

A review of the literature published by Frank P. Castronovo of the Harvard Medical School in 1999 states that there is no substantive proof of radiation-induced teratogenic effects from the Chernobyl accident. THIS IS A GREAT REFERENCE BY THE WAY. It has a load of top notch info that answers a lot of questions.

International spread of radioactive substances
Move picture, and make readers aware that approximately four hundred times more radioactive material was released than by the atomic bombing of Hiroshima. However, a direct comparison between the two events in biological terms, based solely on the quantity of material involved, can be misleading, as nuclear detonations also produce a burst of prompt radiation coinciding with the explosion's flash. The disaster released 1/100 to 1/1000 of the total amount of radioactivity released by nuclear weapons testing during the 1950s and 1960s.

Also this reference, http://world-nuclear.org/info/chernobyl/inf07.html should not be used to push, as it is presently done, the idea that 1 million people have been "affected" by contamination, as it is not clearly stating that. - "In the absence of pre-1986 data(to compare against), they compared a control population with those exposed to radiation. Significant health disorders were evident in both control and exposed groups, but, at that stage, none was radiation related. Subsequent studies in the Ukraine, Russia and Belarus were based on national registers of over one million people possibly affected by radiation. By 2000, about 4000 cases of thyroid cancer had been diagnosed in exposed children. However, the rapid increase in thyroid cancers detected suggests that some of it at least is an artifact of the screening process."

Radiation, fallout and the long term effects of nuclear explosions
See phytoremediation for plants that selectively bioaccumulate radioisotopes like Sr-90 & Cs-137 for remediation.

The release of radioisotopes from the nuclear fuel was largely controlled by their boiling points, and the majority of the radioactivity present in the core was retained in the reactor.
 * All of the noble gases, including krypton and xenon, contained within the reactor were released immediately into the atmosphere by the first steam explosion.
 * 50 to 60% of all core radio-iodine in the reactor, about 1760 PBq (1.76 × 10¹⁸ becquerels), which in mass units is roughly 0.4 kg of iodine-131, was released as a mixture of sublimed vapor, solid particles, and organic iodine compounds. Iodine-131 has a half-life of 8 days; the activity of any radioisotope, and therefore the quantity of it remaining, falls below 1% of its initial magnitude after 7 half-lives have passed, and below 0.1% after 10 half-lives.
 * 20 to 40% of all core caesium-137 was released, 85 PBq in all, in aerosol form. Caesium-137, along with isotopes of strontium, are the two primary contaminants preventing the Chernobyl exclusion zone from being re-inhabited. An activity of 8.5 × 10¹⁶ Bq would be produced by about 24 kilograms of caesium-137, which has a half-life of 30 years.
 * Tellurium-132, half life 78 hours, an estimated 1150 PBq was released.
 * Xenon-133, the total radioactivity atmospheric release is estimated at 5200 PBq, Xe-133 has a half-life of 5 days.
 * An early estimate for total nuclear fuel material released to the environment was 3 ± 1.5%; this was later revised to 3.5 ± 0.5%. This corresponds to the atmospheric emission of 6 t of fragmented fuel.
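The activity-to-mass conversions and half-life arithmetic in the list above follow from the standard relations A = λN and λ = ln 2 / t½. The sketch below is my own check, with half-life values assumed from standard tables: it reproduces the caesium-137 figure to within rounding and gives roughly 0.4 kg for the quoted iodine-131 activity.

```python
import math

N_A = 6.02214e23  # Avogadro's number, atoms per mole

def mass_from_activity_kg(activity_bq: float, half_life_s: float,
                          molar_mass_g: float) -> float:
    """Mass corresponding to a given activity.

    A = lambda * N with lambda = ln2 / t_half, so N = A * t_half / ln2."""
    atoms = activity_bq * half_life_s / math.log(2)
    return atoms / N_A * molar_mass_g / 1000.0

# Cs-137: 8.5e16 Bq, half-life ~30.1 years, molar mass ~137 g/mol
cs137_kg = mass_from_activity_kg(8.5e16, 30.1 * 365.25 * 86400, 137)
# I-131: 1.76e18 Bq, half-life ~8.02 days, molar mass ~131 g/mol
i131_kg = mass_from_activity_kg(1.76e18, 8.02 * 86400, 131)
print(round(cs137_kg, 1), "kg Cs-137;", round(i131_kg * 1000), "g I-131")

def remaining_fraction(n_half_lives: float) -> float:
    """Fraction of a radioisotope's activity left after n half-lives."""
    return 0.5 ** n_half_lives

print(remaining_fraction(7), remaining_fraction(10))  # <1% and <0.1%
```

Note the enormous mass asymmetry: a short-lived isotope like I-131 packs far more activity per gram than a long-lived one like Cs-137, which is why the iodine activity dominates early doses while caesium dominates long-term contamination.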

Two sizes of particles were released: small particles of 0.3 to 1.5 micrometers (aerodynamic diameter) and large particles of 10 micrometers. The large particles contained about 80% to 90% of the released nonvolatile radioisotopes zirconium-95, niobium-95, lanthanum-140, cerium-144 and the transuranic elements, including neptunium, plutonium and the minor actinides, embedded in a uranium oxide matrix.







The dose that was calculated is the relative external gamma dose for a person standing in the open. The exact dose to a real person, who would typically spend much of their time sleeping indoors in a shelter and would also receive an internal dose from the inhalation or ingestion of radioisotopes while outside, requires a person-specific radiation dose reconstruction analysis.

Counterforce and energy accident argument.
Moreover, in some interpretations of the labor theory of value applied to the cost of an accident, no event, activity or piece of property has any intrinsic value except the cost in human sweat and blood that must be shed in its pursuit, nothing being more valuable than a human life. Applied to attempts to quantify the economic cost of an accident, the value of the loss of a human life, whether from an immediate energy accident death, such as is common in fossil fuel explosions and oil wars, or from latent deaths and lost hours of work productivity resulting from inhaling smog or other energy-related pollutants, should be included in any serious attempt to quantify the cost of an energy accident. Therefore, the full economic cost of the use of an energy source should not be based merely upon property damage, but should include the externality cost in lost hours of work and the public health resources expended to ameliorate the physiological impact of that energy source's use.

Fatalities
Energy accident fatalities include both immediate deaths caused by an accident, those that occur in the seconds to weeks after the event, and latent, or predicted, fatalities that show up in epidemiology studies years after the fact, due to pollutant exposure related to the use of that energy source.

Activities at the Benxihu Colliery resulted in a coal dust explosion that immediately killed about 1,500 miners; it is the single worst mining accident in history.

G gravitational constant how to DIY measure it
http://www.fourmilab.ch/gravitation/foobar/ "Bending Spacetime in the Basement", John Walker, 1997. A DIY repeat of the Henry Cavendish experiment.

Myopia, short sightedness, does it get worse?
http://www.fourmilab.ch/documents/health/myopia/ Ask to be corrected for reading distance the next time you're getting your glasses, to reduce eye strain; also enquire about getting this done in laser eye surgery. This is said to reduce eye strain, halt the worsening of myopia and eliminate the need for 'reading glasses' when you hit 50.

Trumped up latent negative effects of thermonuclear war
For US nuclear weapons policy see OPLAN and Essentials of Post-Cold War Deterrence.



Why is there a consistent effort to make thermonuclear war out to be even more deadly than it already appears from the definite initial blast and thermal fatalities that would result in cities following a surprise attack? Talk of the disruption of food supplies (when very few people would remain anyway), nuclear winter fantasies, an ozone hole causing a godzillion deaths, etc. Why, from a psychological point of view, has it been necessary for so many people to dedicate their time to attempting to trump up the effects of something that would ultimately be the most devastating war ever from initial deaths alone? Shouldn't that be enough to deter people from thinking a nuclear victory would be anything but a pyrrhic one?

Radioisotope fallout, number of deaths from a Cold war style TNW
As nuclear weapons are primarily intended to be used as a deterrent, it is natural that the countries possessing them wish to make their effects appear even more lethal than they already are, which allows for greater deterrence value from a smaller total weapons stockpile. A global thermonuclear war would result in an average worldwide dose of some 60 to 100 mSv to each individual.

From https://www.fas.org/irp/threat/mctl98-2/ Militarily Critical Technologies List (MCTL) Part II: Weapons of Mass Destruction Technologies SECTION VI NUCLEAR WEAPONS EFFECTS TECHNOLOGY.

The contamination from Chernobyl was significantly larger than would have been expected from a nuclear detonation of about 20 kT at ground level, but was comparable in extent to what might result from a “small” nuclear war in which a dozen or so weapons of nominal yield were exploded at altitudes intended to maximize blast damage. Hence, for those nations which are concerned about being the victims of a nuclear attack, the requirement for understanding and implementing ways of mitigating NWE is important...



http://hyperphysics.phy-astr.gsu.edu/hbase/NucEne/cherno2.html#c6 "Levi gives an estimated long term total exposure is 29 million person rems with an excess of 3000 cancer deaths above the 9.5 million cancer deaths projected in the same population. Largest effect from cesium. The later estimates by Anspaugh, et al. suggest 93 million person rem and a projection of 17000 additional fatal radiogenic cancers out of a total of 123 million cancer deaths. 97% of the health effects are projected to be in the Soviet Union and Europe."

Levi, B. G., "Soviets assess cause of Chernobyl accident", Physics Today, Dec 1986, vol. 39, p. 17. Excellent description of the accident sequence in a block on pp. 8–9.

Anspaugh, L. R., Catlin, R. J., Goldman, M., "The Global Impact of the Chernobyl Reactor Accident", Science 242, Dec 16, 1988, p. 1513.

Both the above 1986 and 1988 cancer death toll predictions for Chernobyl are surprisingly in tune with present-day calculations.

A retrospective view of the Chernobyl accident of Apr 26, 1986 assesses the total radiation release at about 100 megacuries, or roughly 4 × 10^18 becquerels, including some 2.5 MCi of caesium-137. The caesium is the most serious release in terms of long-term consequences.

A Centers for Disease Control and Prevention / National Cancer Institute study claims that CONUS nuclear fallout might have led to approximately 11,000 excess deaths, most caused by thyroid cancer linked to exposure to iodine-131. The U.S. is, however, the only nation that has compensated nuclear test victims. The money is going to people who took part in these tests, notably at the Nevada Test Site, and to others exposed to fallout radiation. What governments offer to victims of nuclear tests

Normal and abnormal conditions
The nuclear chemistry associated with the nuclear fuel cycle can be divided into two main areas: one is concerned with operation under the intended conditions, while the other is concerned with maloperation conditions, where some alteration from the normal operating conditions has occurred or (more rarely) an accident is occurring.

The releases of radioactivity from normal operations are the small planned releases from uranium ore processing, enrichment, power reactors, reprocessing plants and waste stores. These can be in a different chemical/physical form to the releases which could occur under accident conditions. In addition the isotope signature of a hypothetical accident may be very different from that of a planned normal operational discharge of radioactivity to the environment.

Just because a radioisotope is released it does not mean it will enter a human and then cause harm. For instance, the migration of radioactivity can be altered by the binding of the radioisotope to the surfaces of soil particles. For example, caesium (Cs) binds tightly to clay minerals such as illite and montmorillonite, hence it remains in the upper layers of soil where it can be accessed by plants with shallow roots (such as grass). Hence grass and mushrooms can carry a considerable amount of 137Cs which can be transferred to humans through the food chain. But 137Cs is not able to migrate quickly through most soils and thus is unlikely to contaminate well water. Colloids of soil minerals can migrate through soil so simple binding of a metal to the surfaces of soil particles does not fix the metal totally.

According to Jiří Hála's textbook, the distribution coefficient Kd is the ratio of the soil's radioactivity (Bq g−1) to that of the soil water (Bq ml−1). If the radioisotope is tightly bound to the minerals in the soil (a high Kd), then less radioactivity can be absorbed by crops and grass growing on the soil.


 * Cs-137 Kd = 1000
 * Pu-239 Kd = 10000 to 100000
 * Sr-90 Kd = 80 to 150
 * I-131 Kd = 0.007 to 50
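As a sketch of what these Kd values imply, the fraction of a radioisotope that stays dissolved in soil water can be estimated from a simple equilibrium mass balance (a simplified two-compartment model; the soil mass and water volume chosen below are hypothetical):

```python
def dissolved_fraction(kd_ml_per_g, soil_g, water_ml):
    """Fraction of total activity remaining in soil water at equilibrium.

    At equilibrium the soil activity concentration (Bq/g) is Kd times
    the water activity concentration (Bq/ml), so the dissolved fraction
    is V / (V + Kd * m).
    """
    return water_ml / (water_ml + kd_ml_per_g * soil_g)

# Hypothetical 1 kg of soil in contact with 1 litre of water:
cs137 = dissolved_fraction(1000, soil_g=1000, water_ml=1000)  # ~0.1% dissolved
sr90 = dissolved_fraction(100, soil_g=1000, water_ml=1000)    # ~1% dissolved
```

With Cs-137 almost entirely bound to the soil particles, this is consistent with the statement above that 137Cs is unlikely to contaminate well water, while the lower Kd of Sr-90 leaves roughly ten times more of it mobile.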

One of the best countermeasures in dairy farming against 137Cs is to mix up the soil by deep ploughing. This has the effect of putting the 137Cs out of reach of the shallow roots of the grass, hence the level of radioactivity in the grass will be lowered. Also, after a nuclear war or serious accident, the removal of the top few cm of soil and its burial in a shallow trench will reduce the long-term gamma dose to humans from 137Cs, as the gamma photons will be attenuated by their passage through the soil.
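The shielding effect of burying the contaminated layer can be sketched with the exponential attenuation law I = I0·exp(−(μ/ρ)·ρ·x); the mass attenuation coefficient (~0.078 cm²/g for the 0.662 MeV gamma of 137Cs in light materials such as soil) and the bulk soil density used here are assumed typical values, not measured ones:

```python
import math

MU_RHO = 0.078      # cm^2/g, assumed mass attenuation coefficient at 0.662 MeV
SOIL_DENSITY = 1.6  # g/cm^3, assumed bulk soil density

def transmitted_fraction(depth_cm):
    """Uncollided fraction of 0.662 MeV gamma photons after depth_cm of soil."""
    return math.exp(-MU_RHO * SOIL_DENSITY * depth_cm)

# Burying the contaminated layer under ~30 cm of soil:
f30 = transmitted_fraction(30)  # only a few percent of photons get through
```

This counts only uncollided photons and ignores buildup from scattered ones, so it is indicative rather than a dose calculation, but it shows why even a shallow trench sharply cuts the gamma dose rate at the surface.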

Even after the radioactive element arrives at the roots of the plant, the metal may be rejected by the biochemistry of the plant. The details of the uptake of 90Sr and 137Cs into sunflowers grown under hydroponic conditions has been reported. The caesium was found in the leaf veins, in the stem and in the apical leaves. It was found that 12% of the caesium entered the plant, and 20% of the strontium. This paper also reports details of the effect of potassium, ammonium and calcium ions on the uptake of the radioisotopes.

In livestock farming, an important countermeasure against 137Cs is to feed animals a small amount of Prussian blue. This iron potassium cyanide compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to eat several grams of Prussian blue per day. The Prussian blue reduces the biological half-life (different from the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years. This is a constant which can not be changed but the biological half-life is not a constant. It will change according to the nature and habits of the organism for which it is expressed. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of the Prussian blue is that the caesium which is stripped from the animal in the droppings is in a form which is not available to plants. Hence it prevents the caesium from being recycled. The form of Prussian blue required for the treatment of humans or animals is a special grade. Attempts to use the pigment grade used in paints have not been successful. Note that a good source of data on the subject of caesium in Chernobyl fallout exists at (Ukrainian Research Institute for Agricultural Radiology).
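The interplay of physical decay and biological elimination described above is usually expressed through the effective half-life, 1/T_eff = 1/T_phys + 1/T_bio; in the sketch below, the ~110-day untreated biological half-life sits within the one-to-four-month range quoted above, and the 30-day treated value is a hypothetical illustration of the Prussian blue effect, not a measured figure:

```python
def effective_half_life(t_physical, t_biological):
    """Effective half-life: rate constants (1/T) add for independent removal paths."""
    return 1.0 / (1.0 / t_physical + 1.0 / t_biological)

T_PHYS_CS137_DAYS = 30 * 365.25  # ~30-year physical half-life of Cs-137, in days

untreated = effective_half_life(T_PHYS_CS137_DAYS, 110)  # ~110-day biological half-life
treated = effective_half_life(T_PHYS_CS137_DAYS, 30)     # hypothetical, with Prussian blue
```

Because the physical half-life dwarfs the biological one, the effective half-life of caesium in a human is essentially the biological value, which is why shortening the biological half-life is such an effective countermeasure.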

Release of radioactivity from fuel during normal use and accidents
The IAEA assumes that under normal operation the coolant of a water-cooled reactor will contain some radioactivity, but during a reactor accident the coolant radioactivity level may rise. The IAEA states that, under a series of different conditions, different amounts of the core inventory can be released from the fuel. The four conditions the IAEA considers are: normal operation; a spike in coolant activity due to a sudden shutdown/loss of pressure (the core remains covered with water); a cladding failure resulting in the release of the activity in the fuel/cladding gap (this could be due to the fuel being uncovered by the loss of water for 15–30 minutes, with the cladding reaching a temperature of 650–1250 °C); or a melting of the core (the fuel would have to be uncovered for at least 30 minutes, and the cladding would reach a temperature in excess of 1650 °C).

Based upon the assumption that a Pressurized water reactor contains 300 tons of water, and that the activity of the fuel of a 1 GWe reactor is as the IAEA predicts, then the coolant activity after an accident such as the Three Mile Island accident (where a core is uncovered and then recovered with water) can be predicted.
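As a sketch of that prediction, the coolant activity concentration is simply the released activity diluted into the coolant mass; the core inventory and release fraction below are hypothetical placeholders for illustration, not the IAEA's figures:

```python
# Hypothetical example: dilution of released fuel activity into PWR coolant.
COOLANT_MASS_G = 300e6       # 300 tonnes of water, as assumed above

core_inventory_bq = 3.0e18   # hypothetical activity of one fission product group
release_fraction = 0.03      # hypothetical gap-release fraction

coolant_conc = core_inventory_bq * release_fraction / COOLANT_MASS_G  # Bq per gram
```

Measuring the coolant concentration and inverting this relation is, in outline, how the extent of fuel damage was inferred after Three Mile Island.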

Releases from reprocessing under normal conditions
It is normal to allow used fuel to stand after irradiation to allow the short-lived and radiotoxic iodine isotopes to decay away. In one experiment in the US, fresh fuel which had not been allowed to decay was reprocessed (the Green Run) to investigate the effects of a large iodine release from the reprocessing of short-cooled fuel. It is normal in reprocessing plants to scrub the off-gases from the dissolver to prevent the emission of iodine. In addition to the emission of iodine, the noble gases and tritium are released from the fuel when it is dissolved. It has been proposed that by voloxidation (heating the fuel in a furnace under oxidizing conditions) the majority of the tritium can be recovered from the fuel.

A paper was written on the radioactivity in oysters found in the Irish Sea. These were found by gamma spectroscopy to contain 141Ce, 144Ce, 103Ru, 106Ru, 137Cs, 95Zr and 95Nb. Additionally, a zinc activation product (65Zn) was found, which is thought to be due to the corrosion of magnox fuel cladding in spent fuel pools. It is likely that the modern releases of all these isotopes from Windscale are smaller.

Nuclear winter, and surface bursts in general complicate TNW fallout
As the implications of nuclear winter began to be taken seriously in the late 1980s, military analysts turned their attention to the development of nuclear warheads that would explode at low altitudes and cause fewer thermal-radiation-ignited fires, thus reducing the likelihood of a nuclear winter. The TTAPS paper had described a 3000 MT counterforce attack on ICBM sites; Michael Altfeld of Michigan State University and political scientist Stephen Cimbala of Pennsylvania State University argued that smaller, more accurate warheads and lower detonation heights could produce the same counterforce strike with only 3 MT and lesser climatic effects, even if cities were targeted, as lower fuzing heights, such as surface bursts, would limit the range of the burning thermal rays due to terrain masking and shadowing, while also temporarily lofting far more radioactive soil into the atmosphere. Therefore, attempting to limit the target fire hazard by fuzing for surface bursts results in a scenario where the far more concentrated, and therefore deadlier, local fallout of a surface burst is generated, as opposed to the comparatively dilute global fallout created when nuclear weapons are fuzed in air burst mode. Altfeld and Cimbala also suggested that belief in the possibility of nuclear winter has actually made nuclear war more likely, contrary to the views of Sagan and others, because it has played a part in inspiring the development of newer, more accurate, and comparatively lower explosive yield nuclear weapons.

Castle Bravo
Navy man John Bianco was on board the Philip off the coast of the Bikini Atoll at zero hour.

As he discusses in the November 2012 issue of the NAAV newsletter, a document found on NAAV.com, the flash was so bright he could make out the blood vessels in his arm, much like how you see red when you cover your eyes with your eyelids and stare at the sun, only times 10. He also describes how the fallout appeared like snow, and that he and his crew safely "buttoned up" in the inner decks of the ship after observing the fallout. Buttoning up means closing all the ship's hatches and attempting to seal the ship against the ingress of fallout. None of his crew died from acute effects, nor did he report having any acute radiation effects himself. Contrast this experience with the crew of the Lucky Dragon No. 5, who did not button up, and who also described the fallout as like snow, with one of the crew dying a few months later, possibly from acute radiation syndrome.

http://www.naav.com/assets/2012_11_NAAV_Newsletter.pdf If the doc doesn't work just go to NAAV.com. There are a great deal of exaggerations in the doc; consider that any health claims are suspect, as the organization gets its money from making health claims. That's not to take away from the very real fact that there are "atomic veterans" with causally linked health problems, but it seems like everyone involved now wants to get on the money gravy train. For example, the doc also states on pg 7 that the nuclear flash can burn its way into your "skull" if you're far enough away to get 1st degree burns, which is just one example of the silly exaggerations in the doc. You get chorioretinal burns, not skull burning! See eye injuries.

Toba catastrophe theory, cooling criticism.
Archaeologists who in 2013 found a microscopic layer of glassy volcanic ash in sediments of Lake Malawi, and definitively linked the ash to the 75,000-year-old Toba super-eruption, went on to note a complete absence of the change in fossil type close to the ash layer that would be expected following a severe volcanic winter. This result led the archaeologists to conclude that the largest known volcanic eruption in the history of the human species did not significantly alter the climate of East Africa.

Other research has cast doubt on the genetic bottleneck theory. For example, ancient stone tools in southern India were found above and below a thick layer of ash from the Toba eruption and were very similar across these layers, suggesting that the dust clouds from the eruption did not wipe out this local population. Additional archaeological evidence from southern and northern India also suggests a lack of evidence for effects of the eruption on local populations, leading the authors of the study to conclude, "many forms of life survived the supereruption, contrary to other research which has suggested significant animal extinctions and genetic bottlenecks".

Radiation units
The moment magnitude scale is to the gray as the Mercalli intensity scale is to the sievert.

http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/radrisk.html#c5 Explanation of old and new units.

http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/radrisk.html#c4 nuclear winter talk page roentgen intensity answered.

http://www.neimagazine.com/opinion/opinionnuclear-in-china---now-back-on-track/ China has 29, not 28, of the world total of 68 nuclear power reactors being built. They have also changed to building only Gen III reactors post-Fukushima.

effects of TNW
https://de.wikipedia.org/wiki/Kernwaffenexplosion The German article on nuclear weapon explosions has some good data copied from Glasstone, including equations for use in my 'effects of nuclear weapons'. An entire page of equations is at https://de.wikibooks.org/wiki/Formelsammlung_Physik/_Kernwaffenexplosion

https://en.wikipedia.org/wiki/Effects_of_nuclear_weapons#See_also See the media images, including the RERF images here, to add.

Nuclear weapons tests and cancer, an indication of TNW cancer incidence
Childhood leukemia rates between 1944 and 1975 in Utah doubled in the heavy fallout zones, from a rate half the US average before nuclear testing to just over the US rate after testing ended. No other cancer types were found to have increased. The increase could be due to fallout or "some other unexplained factor".

"This paper primarily discusses health effects that have resulted from exposures received as a result of above-ground nuclear tests, with emphasis on thyroid disease from exposure to 131I and leukemia and solid cancers from low dose rate external and internal exposure. Results of epidemiological studies of fallout exposures in the Marshall Islands and from the Nevada Test Site are summarized, and studies of persons with exposures similar to those from fallout are briefly reviewed (including patients exposed to 131I for medical reasons and workers exposed externally at low doses and low dose rates). Promising new studies of populations exposed in countries of the former Soviet Union are also discussed and include persons living near the Semipalatinsk Test Site in Kazakhstan, persons exposed as a result of the Chernobyl accident, and persons exposed as a result of operations of the Mayak Nuclear Plant in the Russian Federation. Very preliminary estimates of cancer risks from fallout doses received by the United States population are presented."

Nuclear weapons and eye injuries
There is a paucity of recorded cases of flash-induced eye injuries at Hiroshima. A single bilateral central scotoma was recorded; the explanation for the low rate was that it was daylight at the time of the bombing, so everyone's pupils were already constricted/un-dilated. This study also included the results of over 700 rabbit experiments conducted concurrently with 6 nuclear detonations at the Nevada test site in 1953.

If an eyeball were really close to the fireball, say 100 m, then at that distance the eyeball would likely be rapidly ablated. Further away, but not so far away that retinal burns are the only eye injuries observed, perhaps the eye would get some internal vitreous humor heating that could cause an eye explosion? You could test all these distances out; all you need to do is buy a pig's eyeball from the local butcher and put it at the focus of a solar thermal furnace for a couple of seconds. A Fresnel lens would suffice.

Earthquakes generated by Nuclear explosions
The energy released by nuclear weapons is traditionally expressed in terms of the energy stored in a kiloton or megaton of the conventional explosive trinitrotoluene (TNT).

A rule of thumb equivalence from seismology used in the study of nuclear proliferation asserts that a one kiloton nuclear explosion creates a seismic signal with a moment magnitude of approximately 4.0. This in turn leads to the equation


 * $$M_n = \textstyle\frac{2}{3}\displaystyle\log_{10} \frac{m_{\mathrm{TNT}}}{\mbox{Mt}} + 6,$$

where $$m_{\mathrm{TNT}}$$ is the mass of the explosive TNT that is quoted for comparison (relative to megatons Mt).

Such comparison figures are not very meaningful. As with earthquakes, during an underground explosion of a nuclear weapon, only a small fraction of the total amount of energy transformed ends up being radiated as seismic waves. Therefore, a seismic efficiency has to be chosen for a bomb that is quoted as a comparison. Using the conventional specific energy of TNT (4.184 MJ/kg), the above formula implies the assumption that about 0.5% of the bomb's energy is converted into radiated seismic energy $$E_s$$. For real underground nuclear tests, the actual seismic efficiency achieved varies significantly and depends on the site and design parameters of the test.
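The rule of thumb above can be written as a small helper (a sketch that simply implements the formula given, with the yield expressed in megatons of TNT):

```python
import math

def seismic_magnitude(yield_mt):
    """Approximate moment magnitude of an underground nuclear burst,
    via M_n = (2/3) * log10(m_TNT / Mt) + 6."""
    return (2.0 / 3.0) * math.log10(yield_mt) + 6.0

m_1kt = seismic_magnitude(0.001)  # 1 kt -> magnitude ~4.0, matching the rule of thumb
m_1mt = seismic_magnitude(1.0)    # 1 Mt -> magnitude ~6.0
```

Note the compressed scale: a thousandfold increase in yield raises the apparent magnitude by only two units, which, together with the variable seismic efficiency discussed above, is why such comparison figures are of limited use.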

Atomic bombing of Hiroshima and Nagasaki, including debate
All my edits so far. Japanese nuclear weapons program During the war, and 1945 in particular, due to state secrecy, very little was known about the progress, or lack thereof, of the Japanese nuclear weapons program outside Japan. The US state of knowledge of the Japanese nuclear weapons program in that year was that, amongst other things, Japan had requested materials from their German allies and 560 kg of unprocessed uranium oxide was dispatched to Japan in April 1945 aboard the submarine U-234, which however surrendered to US forces in the Atlantic following Germany's surrender. The uranium oxide was reportedly labeled as "U-235", which may have been a mislabeling of the submarine's name and its exact characteristics remain unknown; some sources believe that it was not weapons-grade material and was intended for use as a catalyst in the production of synthetic methanol to be used for aviation fuel.

Post-war analysis of whether the Japanese nuclear weapons research and development efforts were near completion, and therefore had, in a revisionist sense, justified an attack, has determined that Japan was considerably behind US developments in 1945. Historians have found that the Japanese nuclear program was comparatively undeveloped, even when compared to the German nuclear energy project of WWII.

See talk page.


 * I agree with Ghostofnemo that it is important material to give readers a feel for the overall context of the event, that is, to understand the state of warfare at that time. However it clearly has no place in the intro, and although it should be in the article, it should be placed further down in its own category, or in the Debate of the Bombings page, rather than shoehorned into the introduction. In respect to Nick-D's point and analogy, it's on eggshells; the Nanking massacre came before any major allied attacks on Japan, therefore did not affect it unless someone in Japan built a time machine. However, I think I know what he is driving at: he opposes the material on the grounds that it may appear more suited to a list of civilian deaths in war page. Although it would undoubtedly also be a good contribution to such a page, because the majority of these democide deaths perpetrated by Japan occurred before the nuclear bombings, it is a piece of the puzzle to quickly convey to a reader how someone could justify ordering such a thing (a nuclear attack).
 * Boundarylayer (talk) 16:00, 29 May 2013 (UTC)


 * After some replies ->
 * I agree Ghostofnemo, it is a routinely downplayed, or uncared-for, fact that the Japanese committed a large amount of civilian killing, perhaps not downplayed by the Japanese themselves, but by the general public; very few people have even heard of Unit 731, by comparison, or of the approx 30 million deaths from Japanese war crimes in general. Therefore a brief mention of what you had in mind is suited for the article, with of course the larger discussion of it being more fitted to the Debate over the atomic bombings of Hiroshima and Nagasaki page, something everyone here appears to be in agreement on.


 * To Nick-D and Binksternet, who have argued "there was no concern about past deaths caused by Japanese militarism", you both really need to listen to Truman's quote again. - I realize the tragic significance of the atomic bomb...having found the bomb, we have used it. We have used it against those who attacked us without warning at Pearl Harbor, against those who have starved and beaten and executed American prisoners of war, against those who have abandoned all pretense of obeying international laws of warfare. We have used it in order to shorten the agony of young Americans. We shall continue to use it until we completely destroy Japan's power to make war. Only a Japanese surrender will stop us.


 * Yet you're trying to argue, that past actions by the Japanese had no bearing on the decision to drop the bombs, that not only was there - "no concern about the past deaths caused by Japanese militarism", but that it is irrelevant to even mention Japanese atrocities as being an important component in the rationale of making the decision to conduct the bombings?


 * As James Kenneth Bowen of Counsel states, Truman would have been aware of the brutality, racism and fanaticism routinely displayed by the Japanese military; the cruelty towards, and frequent murder of prisoners of war and non-combatants; the raping and looting; the mass slaughter of Chinese civilians....
 * http://www.pacificwar.org.au/AtomBomb_Japan.html


 * As an aside to Nick, I must disagree, and issue a reply to your suggestion that "the great majority of people in the Japanese cities had nothing to do with the war crimes and killings committed by Japanese forces, so to imply a connection is false". If you are helping your country in a total war, which is a war where almost every single able-bodied "civilian" contributes to the war effort, then you cannot claim that the majority of Japanese people were mere bystanders to what their country was doing. To absolve them of wrongdoing, and simply defer blame up the ranks to the "government", takes away from the fact that it was the "majority of people in Japan" who were greasing the wheels of the war machine by arming and feeding the imperialist Japanese military as it went around setting up colonies in Oceania.


 * Boundarylayer (talk) 06:23, 21 June 2013 (UTC)

Warnings given, leaflets. & do they only have themselves to blame
http://nuclearsecrecy.com/blog/2013/04/26/a-day-too-late/

For several months, the US had dropped more than 63 million leaflets across Japan, warning civilians of air raids. Many Japanese cities suffered terrible damage from aerial bombings, some even 97% destruction. In general, the Japanese regarded the leaflet messages as truthful, however, anyone who was caught in possession of a leaflet was arrested by the Japanese government. Leaflet texts were prepared by recent Japanese prisoners of war because they were thought to be the best choice "to appeal to their compatriots."

In preparation for dropping an atomic bomb on Hiroshima, US military leaders had decided against a demonstration bomb, and they also decided against a special leaflet warning, in both cases because of the uncertainty of a successful detonation, and the wish to maximize psychological shock. No warning was given to Hiroshima that a new and much more destructive bomb was going to be dropped. Various history books give conflicting information about when the last leaflets were dropped on Hiroshima prior to the atomic bomb: Robert Jay Lifton writes that it was 27 July and Theodore H. McNelly writes that it was 30 July but the USAAF history notes 11 cities targeted with leaflets on 27 July, none being Hiroshima, and no leaflet sorties on 30 July. Other leaflet sorties were undertaken on 1 and 4 August, according to the official USAAF chronology. It is very likely that Hiroshima was leafleted in late July or early August, as survivor accounts talk about a delivery of leaflets a few days before the atomic bomb was dropped. One such leaflet lists 12 cities targeted for firebombing: Otaru, Akita, Hachinohe, Fukushima, Urawa, Takayama, Iwakuni, Tottori, Imabari, Yawata, Miyakonojo, and Saga. Hiroshima was not listed.

A wiki editor contacted me on YouTube about the above paragraph. He was Japanese and was canvassing for my support to suggest that Hiroshima did not get leafletted and that if it did it made no sense to do so. As I didn't write the above paragraph I didn't help him all that much. What I did do was present him with the Machiavellian reasons for the leaflets. Copied from my sent folder on YouTube. ->

Although none of us can be 100% if Hiroshima received a leaflet or not, the issue you seem to be having is that it seems like if Hiroshima did receive a leaflet then it DOES NOT make sense at all. However, It would not have been contradictory to leaflet Hiroshima before the bombing, as the leaflets do say something to the effect that - Cities not named specifically here on this leaflet, may also be bombed.

You may think this makes no sense and is contradictory, but militarily it makes complete sense.

Try and consider it from the point of view of the USAAF - Are you really going to tell your enemy all your bombing plans in advance - that you plan to firebomb Tokyo and drop new weapons on Hiroshima and Nagasaki? Of course you're not, not unless you have a death wish! If you do tell your enemy (the Japanese) all your bombing plans, then your enemy will logically put all their anti-aircraft guns in those cities to protect them. Militarily, what you do is tell your enemy - Listen Mr. Enemy, we are going to bomb over here - target Z. But it's just a ruse, all smoke and mirrors, or subterfuge; what you're really going to do is bomb target Y while the enemy falls for the misinformation and has all their defenses at city Z.

Here read this, the leafleting of cities of Japan in 1945 was an example of Military deception, a 'ruse'. http://www.armchairgeneral.com/tactics-101-056-military-deception-means-and-techniques.htm

''During a ruse, you will deliberately expose false information so that your enemy can collect upon it. It is hoped that this information will be analyzed by the enemy as truthful.''

Military deception is designed not to make sense; that is the point.

So while I sympathize with the children who died in all the cities of Japan (all 66-68 cities which were firebombed and atom bombed to various degrees), the responsibility for their deaths, primarily, does not lie with the US, but with the Japanese parents who did not have sense enough (if they cared for their children) to get out of all major cities, as the USA was telling them to do!

The Japanese were given more than enough warning prior to Hiroshima and Nagasaki (such as the firebombing of Tokyo in March 1945, the most devastating air raid in history).

Hirohito was given numerous chances to surrender. Yet the Japanese, from the Japanese 'civilian' right up to the Emperor, did not listen.

War is hell my friend. The Japanese could have ended it sooner and saved thousands of lives. But they did not listen.

Why is it so hard for you to accept that the USAAF dropped leaflets on numerous cities, including those not listed specifically on the leaflet in question?

I'm finding it hard to understand why you so strongly believe that the USAAF would never drop leaflets on a city not listed for bombing.

What evidence do you have that the USAAF dropped leaflets only on cities that were listed specifically on the leaflet?

Allegations of censorship not supported.
The book Hiroshima, written by Pulitzer Prize winner John Hersey, was originally published as an article in The New Yorker on 31 August 1946. Although the article was originally to be published over four issues, "Hiroshima" made up the entire contents of one issue. Hiroshima narrates the stories of six bomb survivors immediately prior to, and for months after, the dropping of the first atomic bomb.

The US understandably did not want agents of the Soviet Union getting hold of video footage, or taking soil samples, to learn the effects and composition of the nuclear bombings (see Nuclear forensics), as this would have been helpful to the Soviet nuclear weapons effort. Other than that, I have not encountered a single case of the US censoring books; the highly popular 1946 book above attests to that.

Hibakusha


According to the US Department of Energy, the immediate effects of the blast and thermal radiation killed approximately 70,000 people in Hiroshima. Estimates of total deaths by the end of 1945, from initially non-lethal burn and blast injuries, acute radiation syndrome and related disease (the effects of which were aggravated by a lack of medical resources), range from 90,000 to 166,000. One speculative estimate suggests that up to, or in excess of, 200,000 had died as a consequence of Little Boy in the "five-year death toll", as cancer and other long-term effects took hold. An epidemiology study by the Japanese Radiation Effects Research Foundation states that from 1950 to 2000, 46% of leukemia deaths and 11% of solid cancer deaths among the bomb survivors were due to radiation from the bombs, the statistical excess being estimated at 200 leukemias and 1,700 solid cancers.

Hibakusha and their children were (and still are) victims of severe discrimination due to public ignorance about the consequences of radiation sickness, with much of the public believing it to be hereditary or even contagious. This is despite the fact that no statistically demonstrable increase in birth defects/congenital malformations was found among the later-conceived children born to survivors of the nuclear weapons used at Hiroshima and Nagasaki, nor in the later-conceived children of cancer survivors who had previously received radiotherapy. The surviving women of Hiroshima and Nagasaki who could conceive, even those exposed to substantial amounts of radiation, went on to have children with no higher incidence of abnormalities/birth defects than the rate observed in the Japanese average.

https://www.radiology.wisc.edu/sections/msk/files/forReferringClinicians/PregInfo.pdf . For microcephaly, the period at risk is 2 to 15 weeks post-conception (4 to 17 weeks post-LMP). For severe mental retardation and intellectual deficit, the period at risk is 8 to 15 weeks post-conception (10 to 17 weeks post-LMP).

Studs Terkel's book The Good War includes a conversation with two hibakusha. The postscript observes:

"There is considerable discrimination in Japan against the hibakusha. It is frequently extended toward their children as well: socially as well as economically. 'Not only hibakusha, but their children, are refused employment,' says Mr. Kito. 'There are many among them who do not want it known that they are hibakusha.'"

The Japan Confederation of A- and H-Bomb Sufferers Organization (日本被団協) is a group formed by hibakusha in 1956 with the goals of pressuring the Japanese government to improve support of the victims and lobbying governments for the abolition of nuclear weapons.

Life Span Study and its effect on the Linear no-threshold theory
As of 2012, 93,741 hibakusha have taken part in what is probably the largest and longest-running epidemiological study ever conducted, beginning with the work of the Atomic Bomb Casualty Commission almost immediately after the two bombings occurred, and continuing under the Radiation Effects Research Foundation. This Life Span Study (LSS) has determined that a 22% increase in deaths, over a control group, is observed in those few who survived the blast and thermal effects of the bombs but nonetheless received 1 Gray (unit) of radiation.

From the online weapons-effect calculator based on the 1962 Glasstone & Dolan ''The Effects of Nuclear Weapons'', supplied by http://www.fourmilab.ch/bombcalc/.

You would need to be about ~1.5 km away from a 20 kiloton weapon to receive 100 rem of radiation, as you receive 3000 rem at 1 km and 30 rem at 2 km (the calculator doesn't allow 1.5 to be entered). 100 rem is 1 sievert, assuming you are conservative and don't correct for the types of radiation with relative biological effectiveness, and 1 sievert ~ 1 gray. Complicating matters is that the Japanese DS02 dosimetry figures differ from those used to make the 1962 calculator, so you should work in gray and rad as much as possible.
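As a sketch of the unit bookkeeping above (the helper names are illustrative, not from any real library; a relative biological effectiveness of 1 is assumed for gamma rays, per the conservative treatment described):

```python
# Hedged sketch of the rem/sievert/gray conversions discussed above.
# Assumptions: 100 rem = 1 Sv, 100 rad = 1 Gy, and an RBE (quality factor)
# of 1 for gamma rays, so 1 Sv ~ 1 Gy.

def rem_to_sievert(rem: float) -> float:
    return rem / 100.0

def rad_to_gray(rad: float) -> float:
    return rad / 100.0

def sievert_to_gray(sv: float, rbe: float = 1.0) -> float:
    # Absorbed dose (Gy) = equivalent dose (Sv) / RBE; with RBE = 1 the two coincide.
    return sv / rbe

# The calculator's figures for a 20 kt weapon:
print(rem_to_sievert(3000))  # 30.0 Sv at 1 km
print(rem_to_sievert(30))    # 0.3 Sv at 2 km
print(sievert_to_gray(rem_to_sievert(100)))  # 1.0 Gy, the LSS reference dose
```

Note that naively interpolating the dose between 1 km and 2 km is unreliable, since prompt radiation falls off roughly exponentially with distance (atmospheric attenuation) on top of the inverse-square law.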

At a distance of ~1.5 km, you can safely assume pretty much no one survives the blast and heat effects alone.

Life Span Study Report 8. Mortality experience of atomic bomb survivors, 1950-74 - http://www.rerf.or.jp/library/archives_e/lsstitle.html

Since the last report, covering the experience of the 82,000 A-bomb survivors for the period 1950-1972, there have been 1,704 deaths (by 30 September 1974) and total deaths now stand at 20,230 since 1 October 1950. For cancer the increase was 390 and the new total 3,957...

Evidence of radiation carcinogenesis is much stronger in Hiroshima than in Nagasaki, and for many sites the Nagasaki data alone would not suffice to show a radiation effect. It seems clear also that absolute risks per rad are higher in Hiroshima where neutrons contribute substantially to the total dose, than in Nagasaki where they make almost no contribution...

At the end of 1974 excess deaths numbered about 85 for leukemia and 100 for other forms of cancer among the 82,000 A-bomb survivors under study. Sites of cancer that seemed especially involved in the continued increase in absolute risk estimates for all forms of cancer except leukemia were the respiratory organs, and the digestive organs. Incidence data suggest that breast cancer is also on the rise, but this does not show in the mortality analysis.

Under the linear hypothesis, which is far from proved for any form of cancer, except perhaps leukemia in Hiroshima, the estimated absolute risk for all forms of cancer, including leukemia, would suggest that the A-bomb survivor population of 285,000 registrants at the time of the 1950 census may have experienced 400 or 500 deaths from cancer induced by radiation in addition to perhaps 69,000 naturally occurring deaths in the interval 1950-1974.

Studies of mortality of atomic bomb survivors. Report 13: Solid cancer and noncancer disease mortality: 1950–1997.

"There have been 9,335 deaths from solid cancer and 31,881 deaths from noncancer diseases during the 47-year follow-up. Of these, 19% of the solid cancer and 15% of the noncancer deaths occurred during the latest 7 years. We estimate that about 440 (5%) of the solid cancer deaths and 250 (0.8%) of the noncancer deaths were associated with the radiation exposure"
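As a quick sanity check, the bracketed percentages in the Report 13 quote follow directly from the raw counts:

```python
# Verifying the internal consistency of the quoted Report 13 figures.
solid_cancer_deaths = 9335
noncancer_deaths = 31881
radiation_associated_solid = 440
radiation_associated_noncancer = 250

print(round(100 * radiation_associated_solid / solid_cancer_deaths, 1))   # 4.7, quoted as ~5%
print(round(100 * radiation_associated_noncancer / noncancer_deaths, 1))  # 0.8, as quoted
```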

The following is often said, but note the word "possible" and the lack of quantification of the risk in hard numbers; sentiments like "radiation increases cancer" are often parroted without stating how much radiation, or what evidence the claim is built on. - "Late or delayed effects of radiation occur following a wide range of doses and dose rates. Delayed effects may appear months to years after irradiation and include a wide variety of effects involving almost all tissues or organs. Some of the possible delayed consequences of radiation injury are life shortening, carcinogenesis, cataract formation, chronic radiodermatitis, decreased fertility, and genetic mutations."

Presently, the only statistically observable congenital abnormality or teratological effect observed in humans following nuclear attacks on highly populated areas is microcephaly, found in the human fetuses developing in utero during the Hiroshima and Nagasaki bombings. Of all the thousands of pregnant women exposed in the two cities, the number of children born with microcephaly was below 50, and in these approximately 50 cases all victims were in the first trimester of pregnancy and close to ground zero. That is, the teratological effect was dependent on the dose received from the nuclear weapons' prompt radiation (high levels of gamma and neutron exposure) and not from fallout, that is, not from chronic exposure to the comparatively lower levels of radiation from bomb-debris fallout/fission products. No statistically demonstrable increase in congenital malformations was found among the later-conceived children born to survivors of the nuclear detonations at Hiroshima and Nagasaki. Likewise, no heritable genetic mutations have been observed in follow-up studies of the children of the surviving women of Hiroshima and Nagasaki, that is, women who could conceive despite being exposed to substantial amounts of prompt radiation and local fallout. These women went on to have children with no higher incidence of abnormalities than the Japanese average.

Survivors with the greatest proximity to the bombing centers
Some of the reinforced concrete buildings in Hiroshima had been very strongly constructed because of the earthquake danger in Japan, and their framework did not collapse even though they were fairly close to the blast center. Eizo Nomura (野村 英三) was the closest known survivor, who was in the basement of a reinforced concrete building (it remained as the Rest House after the war) only 170 m from ground zero (the hypocenter) at the time of the attack. He lived into his 80s. Akiko Takakura (高蔵 信子) was among the closest survivors to the hypocenter of the blast. She had been in the solidly built Bank of Hiroshima only 300 m from ground-zero at the time of the attack. Since the bomb detonated in the air, the blast was directed more downward than sideways, which was largely responsible for the survival of the Prefectural Industrial Promotional Hall, now commonly known as the Genbaku, or A-bomb Dome. This building was designed and built by the Czech architect Jan Letzel, and was only 150 m from ground zero.

Health studies on Cumbria and Seascale
In 1983, the Medical Officer of West Cumbria is said by Paul Foot to have announced that cancer fatality rates were lower around the nuclear plant than elsewhere in Great Britain. In the early 1990s, concern was raised in the UK about apparent clusters of leukaemia near nuclear facilities.

A 1997 Ministry of Health report stated that children living close to Sellafield had twice as much plutonium in their teeth as children living more than 100 mi away. Health Minister Melanie Johnson said the quantities were minute and "presented no risk to public health". This claim, according to a book written by Stephanie Cooke, was challenged by Professor Eric Wright, an expert on blood disorders at the University of Dundee, who said that even microscopic amounts of the man-made element might cause cancer.

Studies carried out by the Committee on Medical Aspects of Radiation in the Environment (COMARE) in 2003 reported no evidence of raised childhood cancer around nuclear power plants in general, but did report an excess of leukaemia (a cancer of the blood or bone marrow) and non-Hodgkin's lymphoma (NHL) near other nuclear installations: Sellafield, the Atomic Weapons Establishment Burghfield and UKAEA Dounreay. COMARE's conclusion was that "the excesses around Sellafield and Dounreay are unlikely to be due to chance, although there is not at present a convincing explanation for them". In earlier reports COMARE had suggested that "..no single factor could account for the excess of leukaemia and NHL but that a mechanism involving infection may be a significant factor affecting the risk of leukaemia and NHL in young people in Seascale."

In a study published in the British Journal of Cancer, which likewise found no increase in any cancers other than leukaemia, the authors attempted to quantify the effect population mixing might have on the Seascale leukaemia cluster. In their analysis of childhood leukaemia/NHL in Cumbria, excluding Seascale, they noted that if both parents were born outside the Cumbrian area (incomers), there was a significantly higher rate of leukaemia/NHL in their children. 1,181 children were born in the village of Seascale between 1950 and 1989; among children aged 1-14 during this period, the Seascale cluster comprised 6 observed cases of NHL. Two similarly aged children, born between 1950 and 1989 outside Seascale (in greater Cumbria), were also diagnosed with ALL/NHL before the end of 1992. The birthplaces of 11 of the 16 parents of these eight children were established: 3 of the children had both parents born outside Cumbria and 3 had one parent born outside the UK. The study's authors strongly supported the hypothesis that the risk of ALL/NHL, particularly in the younger age group, increases with increased exposure to population mixing during gestation or early in life. Although they determined that the exact mechanism by which it causes these malignancies, apart from Kinlen's infection aetiology mentioned earlier, remained unknown, they concluded that the possibility of additional risk factors in Seascale remains.

In an examination of all causes of stillbirth and infant mortality in Cumbria taken as a whole between 1950 and 1993, 4,325 stillbirths, 3,430 neonatal deaths and 1,569 lethal congenital anomalies occurred among 287,993 births. Overall, the results did not indicate an increased risk of stillbirth or neonatal death in Cumbria, the rate of these adverse outcomes being in line with the British baseline rate. A cautioned connection was noted between a small excess risk of death from lethal congenital anomalies and proximity to municipal waste incinerators and chemical-waste crematoriums, two examples of the latter having operated at Barrow-in-Furness and further afield at Carlisle, crematoriums which may have emitted various dioxins during their operation.

Acute myeloid leukemia
A preliminary study of adults in two German towns that hosted a WWII TNT factory found an increased risk of AML compared to adults in neighboring areas, although it has been noted that the study's case numbers were small.

Weapons effects technology, simulants of nuclear explosions
http://www.globalsecurity.org/wmd/ops/testing-effects-intro.htm In 1964, Operation SNOWBALL, a 500-ton HE event, was conducted in Alberta, Canada. ''MILL RACE, and DISTANT RUNNER. As a result of these tests, an extensive array of pressure-distance and impulse-distance data was accumulated. There were sufficient data now available to ask and answer the question, How reproducible are ANFO detonations? Composite pressure-distance and impulse-distance curves have been generated along with their associated error bands. In addition, the absolute yield of each event has been determined. Similar computations are made for several test series utilizing TNT in a target sphere configuration. Comparisons are made between the reproducibility of the ANFO and TNT sources. ANFO appeared to be slightly more reproducible than TNT.''

Operation Snowball should not be confused with the Soviet Totskoye nuclear exercise of the same name.

HE simulants. Minor Scale, Misty Picture, Operation Sailor Hat, Operation Blowdown.

From https://www.fas.org/irp/threat/mctl98-2/ Militarily Critical Technologies List (MCTL) Part II: Weapons of Mass Destruction Technologies SECTION VI NUCLEAR WEAPONS EFFECTS TECHNOLOGY.

Civilian interest is in the survivability of similar systems and structures subjected to storm winds. The two are not completely distinct interests because the dynamic pressure from strong hurricanes may be comparable to that from nuclear blasts. As a rule of thumb, a 30 kPa pressure threshold corresponding to a 60 m/s particle velocity in the shock, or a drag force equivalent to that produced by about 210 km/hr (130 mph) steady winds, distinguishes the military and civilian applications. A frequently used design objective for civil structures is survivability in 190 km/hr (120 mph) winds....

The fireball from a nuclear explosion reaches blackbody temperatures greater than 10^7 K, so that the energy at which most photons are emitted corresponds to the x-ray region of the electromagnetic spectrum. For detonations occurring below 30,000 m (100,000 ft) these x-rays are quickly absorbed in the atmosphere, and the energy is reradiated at blackbody temperatures below 10,000 K. Both of these temperatures are well above that reached in conventional chemical explosions, about 5,000 K....

In addition to the high temperature of the nuclear fireball, the blackbody radiation is emitted in a characteristic two-peaked pulse, with the first peak being due to the radiating surface of the outrunning shock. As the shock front temperature drops below 6,000 K, thermal radiation decreases when the shock front becomes transparent to radiation from the interior. This occurs between 10^-5 and 10^-2 seconds after detonation. At about 0.1 second after detonation, the shock front becomes sufficiently transparent that radiation from the innermost, hottest regions becomes visible, producing a second thermal peak. Before the second peak begins the fireball has radiated only about one quarter of its total energy. About 99 percent of the total thermal energy is contained in the second pulse. The duration of this pulse depends on the yield of the weapon and the height of burst (HOB); it ranges from only about 0.4 s for a 1 kT airburst to more than 20 s for a 10 MT explosion. Both theory and experiment indicate that the dominant thermal pulse can be adequately represented by a blackbody at a temperature between 6,000 and 7,000 K, which places the peak of the spectrum near the boundary between the ultraviolet and the visible regions of the spectrum. The shape of the Planck spectrum is such that most of the radiation is contained in the visible and infrared regions...
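The quoted claim that a 6,000 to 7,000 K blackbody peaks near the ultraviolet/visible boundary can be checked with Wien's displacement law (a standard result, not taken from the MCTL text itself):

```python
# Wien's displacement law: lambda_peak = b / T, with b the Wien constant.
WIEN_B = 2.898e-3  # Wien displacement constant, in metre-kelvins

def peak_wavelength_nm(temp_k: float) -> float:
    """Peak blackbody emission wavelength in nanometres for a temperature in kelvin."""
    return WIEN_B / temp_k * 1e9

print(peak_wavelength_nm(1e7))   # ~0.29 nm: x-rays, matching the early fireball description
print(peak_wavelength_nm(7000))  # ~414 nm: just inside the visible, near the UV boundary
print(peak_wavelength_nm(6000))  # ~483 nm: visible blue-green
```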

Facilities with surface emittance rates on the order of 150 cal/cm2-s at blackbody temperatures of ≥3,000 K are critical to some simulators. Such mixer facilities should mix fuel and oxidizer [e.g. LOX + Al] before ignition to avoid the production of smokes and particulate clouds. Instrumentation designed to function at flux levels above about 150 cal/cm2-s is specialized to the nuclear simulation role; this intense radiation environment can easily melt all known materials over the duration of a full thermal pulse. Technologies such as plasma discharges with arc diameters >1.0 cm and arc lengths >10 cm for currents greater than 1,000 A and more than 300 kW input power are unique to nuclear simulation...

The new U.S. Large Blast/Thermal Simulator (LBTS) is the most advanced facility of its type in the West. [Also see page II-6-18, which lists all solar power towers, parabolic mirrors, solar furnaces etc. as "Nuclear Thermal Radiation Effects Technology Parameters".] [On the next few pages, coating RVs in silver acetylide.] Explosively driven flyer plates that simulate thermally generated pressures and impulses at the surface of generic shaped space platforms of moderate size (e.g., RVs) with pressures <1 kbar to 70 kbar (7 GPa) for fiber-reinforced organic ablators and up to 13 GPa for metal targets; and impulses ranging from several hundred taps to >7,000 taps (700 Pa-s).

Methods for concurrent simulation of peak pressure, impulse, and angular distribution of shock waves produced by soft x-rays on moderate to large space platforms or segments of space platforms using a combination of the: Sheet-Explosive Loading Technique (SELT), Light-Initiated High Explosive (LIHE) technique, and methods for spraying explosive on complex targets such as the Spray Lead at Target (SPLAT) technique. Specific issues are: SELT—accounting for finite velocity and oblique shock wave instead of uniform detonation time over surface and nonperpendicular shock, especially at low stress, reducing the minimum explosive thickness to permit reduction of impulse to threat levels, and adjusting the peak pressure and impulse using attenuators; LIHE—produce impulses <1,000 taps (100 Pa-s) using short-duration blast waves, reduce sensitivity of explosives and improve handling capabilities, and apply to complex target shapes; SPLAT—generate low-impulse simulation for large test objects.

Ballistic missile defense, countermeasures
Maneuverable reentry vehicle & Anti-ballistic missile defense countermeasures

Read here first on the bomb-pumped laser, which discusses, with pics, the Excalibur X-ray lasing system. It also states: "The particle beam weapons postulated for Star Wars missile defense were to disable missiles by damaging the sensitive electronics via radiation, not by carving the missiles into pieces". http://www.projectrho.com/public_html/rocket/spacegunconvent.php

It then segues into a discussion of one paper, which it summarizes, and on which I have found another paper; however, it appears it would be less efficient than the X-ray laser at converting detonation energy into laser energy, though it would be longer ranged, with less beam divergence over distance. -

"The Nevada experiment described herein sounds suspiciously like the bomb pumped XRASER (xray laser) experiments in the 70s/80s codenamed Excalibur that started the chain of events that got Teller in so much trouble. Thing I cannot figure is that the device described herein seems to produce GAMMA RAYS in the 6-8 MeV range (~0.002 Ångström) which is 10000 times higher photon energy than the stuff I've found in the literature that is available on Excalibur (which was in the ~14 Ångström range).

I've never heard if this worked or not... but there you go."

GRASER, a gamma-ray laser: "gamma-ray laser initiated by the neutron flux from a nuclear explosive...We show how such a device is plausible within our understanding of Mossbauer technology and the kinetics of superradiant systems".

The design uses the Mössbauer effect, the recoil-free emission and absorption of gamma ray photons by atoms bound in a solid form. This is important. Laser light is coherent light, where all the photons are in perfect lock-step. The trouble with x-ray and gamma-ray emission is that they are powerful enough to make the excited atom recoil in reaction. This throws off the synchronization, so that the beam is not coherent, and thus not a laser beam. The Mössbauer effect prevents this by locking the lasing atoms in a matrix of anchor atoms, thus dealing with the recoil.

It was estimated that the grasing transition energy densities were on the order of tens of kilojoules per cubic centimeter. This means a one-megajoule graser could fit in a breadbox, sans bomb of course. '''A laser beam composed of gamma rays impacting on, say, an incoming Soviet nuclear warhead would produce a flood of neutrons generated by gamma-ray/neutron reactions, burning a nice hole. And the high-energy Compton-scattered electrons would create an enormous EMP, frying the warhead's electronics'''.
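A rough back-of-envelope check on the breadbox claim, taking "tens of kilojoules per cubic centimeter" to mean roughly 20 kJ/cm3 (an assumed value):

```python
# Volume needed to store one megajoule at the quoted grasing energy density.
energy_density = 20e3   # J per cm^3 -- assumed reading of "tens of kilojoules"
target_energy = 1e6     # J, i.e. one megajoule
volume_cm3 = target_energy / energy_density
print(volume_cm3)  # 50.0 cm^3, comfortably smaller than a breadbox
```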

"On The Feasibility of an Impulsively Driven Gamma-ray Laser" (1979) http://www.projectrho.com/public_html/rocket/supplement/Feasibility_of_an_Impulsively_Driven_Gamma-Ray_Laser.pdf & effects. A Theoretical Assessment of Radiation Damage Effects in a Proposed Gamma-Ray Laser System. UC-34 Issued: Jan 1978. LA-7099-T. Harold R. Schwenn. http://www.fas.org/sgp/othergov/doe/lanl/lib-www/la-pubs/00311808.pdf

As it relates to technology close to Project Excalibur and the Strategic Defense Initiative as a whole.

On the overkill myth that there are enough nuclear weapons to kill everyone on earth, and on the 1982 imperative for SDI: a rebuttal to arguments made against SDI, written in the face of Soviet technological prowess, which includes a list of firsts by the Soviets. "The proposed satellites could be countered by means of decoys, electronic jamming, and/or a proliferation of missiles. Furthermore, an expensive laser station in space would itself become the first target of an enemy nation planning an attack. However, it is difficult to see why an extensive system of directed-energy weapons in space would not be able to destroy missiles or satellites sent to attack it. Such an attack would also prompt an immediate nuclear first strike against the attacking nation. As for the general argument, it is in the nature of war for each new weapon to produce countermeasures, against which other countermeasures are developed, and so on in a never-ending cycle"..."The consequence of such an invulnerability might be a shift of strategic emphasis to low-flying cruise missiles or to other weapons against which directed-energy weapons would be largely ineffective".

General George J. Keegan, head of Air Force intelligence, had been monitoring Soviet developments since 1973, and had been attempting since 1975 to convince the American military and intelligence community of an impending directed-energy-weapons gap. With his retirement in January 1977, he went public with his case. The following FAS page goes on to describe how the general was misled, but it also notes that the Soviet Union was developing a laser system of its own, similar to the US FALCON laser concept: a nuclear-reactor-powered system.

The "FALCON concept" was a Fission Assisted Laser Concept.

Deadly cool drawings. Nuclear pumped laser, the DOE reactor-pumped laser program, James R. Felty: "FALCON is a high-power, steady-state, nuclear reactor-pumped laser (RPL) concept that is being developed by the Department of Energy. The FALCON program has experimentally demonstrated reactor-pumped lasing in various mixtures of xenon, argon, neon, and helium at wavelengths of 585, 703, 725, 1271, 1733, 1792, 2032, 2630, 2650, and 3370 nm with intrinsic efficiency as high as 2.5%. The major strengths of a reactor-pumped laser are continuous high-power operation, modular construction, self-contained power, compact size, and a variety of wavelengths (from visible to infrared). These characteristics suggest numerous applications not easily accessible to other laser types"

"From the 1970's, the USSR was involved in an extensive, multi-faceted program to develop high-powered, ground-based lasers and microwave weapons. The centers of this activity, with potential ASAT applications, were at Sary Shagan and at Troitsk near Moscow. At least two major facilities were constructed at Sary Shagan: one a 0.7 µm ruby laser and one a 10.6 µm pulsed CO2 laser. Both lasers shared a common one-meter diameter beam director. Although Soviet officials admitted the facilities had been used to track satellites prior to 1988, no lethal capability was said to exist"

The Soviet progress probably spurred the development of SDI from the Carter administration into the 1980 Reagan administration - "Soviets Push for Beam Weapon." Aviation Week and Space Technology, May 2, 1977; "Space-Based Laser Battle Stations Seen." Aviation Week and Space Technology, December 8, 1980, v. 113, n. 23, p. 36.

As an aside, for someone interested in Induced gamma emission, ballotechnic and Pure fusion weapon myths or concepts: "Progress in the Production of Samples of Gamma Ray Laser Candidate Materials." Annual Technical Report, 1 April 1993 – 31 March 1994. Richardson, TX: General Coherent Technology Inc., 15 April 1994. 21p. Abstract: Studies of the 29 possible candidates to use as the working medium of a gamma ray laser have identified the 31-year isomer of Hafnium-178 as the best. It is a natural exawatt material capable of emitting 0.05 exawatt per gram if triggered. The problem being addressed in this work is the development of a production cycle for this rare substance."

"The susceptibility of some types of commercially available fiber optic cable to optical darkening (and hence increased signal loss and bit error rate) from exposure to ionizing radiation [from fallout] raises serious questions about the survivability of such systems in the reconstitution phase of a nuclear conflict."

"POSSIBLE SOVIET RESPONSES TO THE US STRATEGIC DEFENSE INITIATIVE"
http://www.fas.org/spp/starwars/offdocs/m8310017.htm some redacted info. Political pressure. The CIA expected the Soviets to:

"Mobilize existing resources for a targeted peace offensive, aimed at exerting domestic political pressure in the United States and NATO countries to forgo advanced ballistic missile defense (BMD) technologies completely, or at least to postpone their development indefinitely. Moscow will make use of the peace movement, the scientific community, and appeals to the defense and arms control concerns of NATO opinion leaders. As part of the campaign, the Soviets will be likely to extol the virtues of the Antiballistic Missile (ABM) Treaty and accuse the United States of undercutting its provisions." Yet Moscow was the only place in the world with an active ABM system in operation. They also had the "Perimetr"/"Dead Hand" system.

Active measures "We believe that the Soviets will employ measures to cope with the President's BMD initiative that could include:

Attempts to cause divisiveness and unrest among the US allies by arguing that the US initiative is an attempt to abandon them and that the United States is reverting to a "Fortress America" policy. Attempts to force the administration to withdraw or step down the BMD initiative by trying to convince the American people that implementation of the President's proposal would seriously curtail US social programs. Claims that the United States is upsetting the strategic balance and planning for a nuclear war-winning capability, disrupting the peaceful coexistence between East and West which has been so successful in maintaining peace since the end of World War II, and is starting a dangerous new spiral in the arms race. Veiled threats of Soviet response, including statements implying that undefined countermeasures are already under way."

22. In the near term,(7) the Soviets could seek to increase the survivability of their ICBMs or the number of weapons surviving by:

Deploying larger numbers of boosters, decoys, and penetration aids. Continuing or quickening the present trend to solid-propellant missiles, which tend to be structurally less vulnerable to continuous-wave (CW) laser damage(8) and have higher acceleration than liquid-propellant ICBMs. Further fractionating (increasing the number of reentry vehicles) systems currently deployed or in development. [Information has been deleted] [Information has been deleted]

23. In the midterm (1995-2005), the Soviets could undertake more radical measures to harden present types of missiles. Toward the end of the period, they could begin to deploy new missiles which have been designed to reduce the effectiveness of US defensive measures as the Soviets understood them in the middle to late 1980s:

'''New guidance systems that would allow continuously rolling airframes could probably be developed, tested, and installed in existing missiles toward the beginning of the period. Continuous roll is estimated to be two to three times as effective as oscillatory roll in increasing CW laser burnthrough times'''. Also in the first half of this period, current missile types could be modified with an ablative coating to further protect against CW laser attack. Studies indicate that for missiles of the SS-17, -18, and -19 classes a continuously rolling airframe with a thin ablative coating will increase burnthrough times by a factor of about 15 over the present systems. It is possible that some degree of X-ray hardening could be incorporated with an ablative coating on the postboost vehicle (PBV), thus reducing vulnerability [Information has been deleted](9) to other lasers. Other laser hardening measures, such as a smoke-screen, could be installed. First-generation boost-phase decoys could be deployed. Measures to reduce or alter the infrared (IR) signatures of booster plumes would also be possible. New PBVs that dispense RVs earlier than at present could be put on existing missiles.

24. New missiles appearing in the 15- to 20-year time frame might incorporate the following features:

 * High-acceleration boosters that burn out below 100 kilometers, thus eliminating boost-phase vulnerability to X-ray and neutral particle beam weapons.(10)
 * Airframes designed to minimize vulnerability to spot heating from CW lasers and impulsive loads from pulsed lasers.
 * Multiple high-acceleration PBVs to minimize RV dispensing times and proliferate PBV targets.
 * Maneuvering RVs to reduce accuracy degradation caused by early, rapid RV dispensing.
 * PBVs designed to dispense many decoys per RV.
 * [Information has been deleted]

25. In the far term, from 2005 onward, the Soviets will be able not only to refine responsive measures taken earlier, but also will have had time to perform research and development on radically new concepts and to deploy those which prove out:

 * Highly fractionated, hardened, high-acceleration ICBMs could be developed as evolutionary follow-ons to first-generation responses of the late 1990s.
 * Further development of boost-phase decoys and signature reduction/alteration techniques could make early characterization of an attack and weapon targeting very difficult.
 * New means of launching RVs, such as railguns and other electromagnetic devices, could eliminate boosters entirely.
 * Nuclear rockets could be used in boosters capable of depressed (height less than 100 km) trajectory, perhaps very fast (greater than circular velocity) attacks.
 * Missiles could be put in high Earth or solar orbit to be deorbited on enemy targets.

Sea-Based Ballistic Missile Systems

26. Many of the measures to make the Soviet land-based ICBM force less vulnerable to defensive systems would be applicable as well to the strategic SLBM force. [Information has been deleted]

Ablative coatings of the airframes and continuously rolling airframes would be possible by the end of the century, as would initial measures to reduce or mask visible, infrared, and radar signatures of boosters, PBVs, and RVs. By the end of the midterm, new SLBMs designed specifically against currently proposed US defensive systems could be in test or the early stages of deployment. These, like the ICBMs of that time, could incorporate airframes designed to minimize vulnerability to CW and pulsed laser effects, high-acceleration boosters, and multiple PBVs that could rapidly dispense RVs and decoys. Advanced signature reduction techniques for boosters and RVs could also become available at this time.

27. SLBMs do possess peculiarities that both restrict possibilities for some responsive measures that are available to ICBMs and, conversely, offer some unique opportunities:

 * By the early years of the next century, the Soviets could design, develop, and deploy depressed-trajectory SLBMs that would not exit the Earth's atmosphere and would have very short times of flight if launched from 3,000 to 4,000 km from their targets. The fact that such systems never leave the atmosphere would stress the capabilities of the defensive systems even more severely: X-ray and neutral particle beam weapons would be of little use against them. Moreover, such SLBMs would, by necessity for their own survival in flight, employ hardening measures which would also be effective against CW lasers. In addition to attacking time-urgent counterforce and countervalue targets, these weapons would be very useful for attacking ground-based components of the US BMD system, particularly command, control, and communications elements and interceptor missile launch sites.
 * Boost-phase decoys would be more difficult to develop and might not be worth the space they would take on a submarine. (But decoys could be launched from cooperative surface vessels.)
 * A submarine-mounted electromagnetic RV launcher is probably not a practical prospect for the next quarter of a century. Thus, submarine-based ballistic systems will continue to depend on rocket engines (perhaps nuclear in the far term) for accelerating their payload.

Cruise Missiles

28. If the United States develops a ballistic missile defense, an obvious way for the Soviets to try to defeat it is to place greater emphasis on nonballistic strategic offensive systems. One of these is '''long-range cruise missiles, which remain in the atmosphere and are susceptible to further reduction of their IR, visible, and radar signatures, which are already small. In addition to attacking the target sets formerly allotted to ballistic missiles, cruise missiles could be potent defense suppression weapons'''. Using combinations of speed, stealth, and launch points near the United States, they could attack ground-based elements of the US BMD system, clearing the way for a subsequent ballistic missile attack.

29. A major disadvantage of cruise missiles is, of course, that if they can be detected, they can be brought under attack by fairly conventional air defense systems. This, however, might not be seen as a completely negative point by the Soviets, since the enormous expense needed to provide the United States with an effective air defense system would mean that these resources would not be available for other programs: In the longer term, visible and IR space-based lasers of the sort that might be incorporated in the endoatmospheric boost-phase segment of a United States BMD system would also be effective weapons against cruise missiles, again assuming they could be located. In the near term, submerged-launched versions of two Soviet long-range cruise missiles now under test, if deployed on submarines off the coast of the United States, could provide an initial circumvention of a BMD system.

Weapon station platforms: underground, underwater, in the air, etc.
Shallow Underwater Missile (SUM) staging concept for the MX/Peacekeeper missile. It also contains a nice insight into the staging-location selection process, including the problem of the tsunami-like Van Dorn or surf-zone effect, a phenomenon in which a wave from a massive nuclear explosion, or a coordinated series of explosions, in deep water steepens and increases in height as it reaches the shallow waters of the continental shelf off the East Coast of the United States. & ''we planned to use Navigation System Using Timing and Ranging (NAVSTAR) GPS guidance to the MX during boost phase and ground beacons if NAVSTAR is destroyed. There is no doubt that the SUM-MX will have as good accuracy as the MX/MPS, and this too is borne out by SPC 554 and by specific agreement of defense officials'' & a great rebuttal which includes discussion of time-sensitive hard targets.

http://www.au.af.mil/au/cadre/aspj/airchronicles/aureview/1981/may-jun/garwin.htm

http://sci.tech-archive.net/Archive/sci.space.history/2006-01/msg01521.html ''Encapsulated missiles fastened to small submarines patrolling off US coast'' & ''Horizontal tunnels on south side of mesas''.
 * ''Shallow Underwater Missile''
 * ''Mesa Basing''

Ground alert missile launching from 747 or C-5 class aircraft.
 * ''Wide Body Jet''

References: HAC, FY 1982 DOD, Part 2, pp. 254–255; SASC, FY 1982 DOD, Part 6, pp. 3745–3747; DOD, "ICBM Basing Options: A Summary of Major Studies to Define a Survivable Basing Concept for ICBMs," December 1980.

Design and Construction of Deep Underground Basing Facilities for Strategic Missiles: Report of a Workshop Conducted by the U.S. National Committee on Tunneling Technology, Commission on Engineering and Technical Systems, National Research Council 1982. http://books.google.co.uk/books/about/Design_and_construction_of_deep_undergro.html?id=VD8rAAAAYAAJ

https://www.princeton.edu/~ota/disk3/1981/8116/811611.PDF Deep underground basing. Including diagrams of the Mesa tunnel concept and dig out post attack with tunnel boring machines.

Air launching a Minuteman ICBM (an AL-ICBM): on October 24, 1974, one was air-dropped out the payload bay doors of a C-5 Galaxy (with video). Much more powerful and faster-traveling than an ALCM for hitting time-sensitive hard targets. http://defensetech.org/2012/02/17/video-a-c-5-galaxy-air-launches-an-icbm-what/

A further explanation is here: a 78,000 lb Minuteman missile launched from a C-5 Galaxy. http://www.liveleak.com/view?i=0c1_1245265480#YGuJDSiVVl6FLuSt.99

The concept of air launched orbital vehicles has also been looked at for cheap access to space, to deliver the DARPA "falcon" of 1000 lb. http://holderaerospace.com/downloads/In_the_News/Aviation%20Week%20QuickReach%20(2006).pdf

The Pegasus (rocket), in use from the 1990s to the present, launches from a higher altitude, 10 km, and with a 1000 lb payload to low Earth orbit.

http://up-ship.com/blog/?p=11404 ''Flying ICBM launcher. This is not a new idea… the Skybolt ICBM was flying around under the wings of several bombers back in the 1960′s, and both the Minuteman and Peacekeeper ICBMs were proposed to be made air mobile at various times… typically by the relatively simple expedient of carrying them in cargo planes and shoving them out the back door. But what sets this concept apart is that the ICBMs are carried *vertically* in silos, just like on ballistic missile submarines. But here there is no compressed gas charge to blow them out; they come out hot.''

Patent - Air based vertical launch ballistic missile defense 7540227

Patent - Air-based vertical launch ballistic missile defense 7849778

Neutron bomb
http://www.athenalab.com/#_Shame:_Confessions_of_the_Father_of Samuel Cohen's book.

New Scientist 12 Jun 1986. “You published an article ‘Armour defuses the neutron bomb’ by John Harris and Andre Gsponer (13 March, p 44). To support their contention that the neutron bomb is of no military value against tanks, the authors make a number of statements about the effects of nuclear weapons. Most of these statements are false ... Do the authors not realise that at 280 metres the thermal fluence is about 20 calories per square centimetre – a level which would leave a good proportion of infantrymen, dressed for NBC conditions, fit to fight on? ... Perhaps they are unaware of the fact that a tank exposed to a nuclear burst with 30 times the blast output of their weapon, and at a range about 30 per cent greater than their 280 metres, was only moderately damaged, and was usable straight afterwards. ... we find that Harris and Gsponer's conclusion that the ‘special effectiveness of the neutron bomb against tanks is illusory’ does not even stand up to this rather cursory scrutiny. They appear to be ignorant of the nature and effects of the blast and heat outputs of nuclear weapons, and unaware of the constraints under which the tank designer must operate.”

Natural nuclear fission reactor
Fission product retention in newly discovered organic-rich natural fission reactors at Oklo and Bangombe, Gabon http://www.osti.gov/energycitations/product.biblio.jsp?osti_id=5854065

nuclear winter
Following TTAPS in 1983, their figures were corrected: in 1985, Dr R. D. Small and Dr B. W. Bush of Pacific-Sierra Research Corp assessed the smoke from 4,100 megatons distributed as 2 warheads per target on 3,459 counterforce targets in forests and grassland areas (Science, v229, p465). They found the smoke output was 300,000 tons for a January attack and 3,000,000 tons for an August attack. These figures are 100-1,000 times lower than the guesses made by the "nuclear winter" hype of 1982–3, because the smoke is only 3% of the mass of vegetation burned (the rest is CO2 gas and cinders): "The amount varies seasonally and at its peak is less by an order of magnitude than the estimated threshold level necessary for a major attenuation of solar radiation."

One of the original errors was overestimating the soot production by fire. The fraction of the mass burned that becomes smoke is only 1% for wood, 3% for vegetation, 6% for oil and 8% for plastic.

Underground nuclear testing
http://www.bibliotecapleyades.net/ciencia/ciencia_uranium27.htm by Milo D. Nordyke. Science & Global Security, 1998, Volume 7, pp. 1–11

OR ALTERNATIVELY http://www.princeton.edu/sgs/publications/sgs/pdf/7_1nordyke.pdf The Soviet Program for Peaceful Uses of Nuclear Explosions, Milo D. Nordyke

The Taiga explosion is of most interest from a Project Orion perspective. - "Three explosives with yields of 15 kt each were emplaced at depths of about 127 m, roughly at the base of the alluvial deposits, to be fired simultaneously. The scaled depths of burial were about 57 m/kt^(1/3.4), which placed them somewhat deeper than optimum. The spacing between the explosives was about 165 m, a spacing expected to enhance crater width by about 10 percent compared to a single crater diameter.

The explosives used for the Taiga experiment were of a special design in which the fission yield had been significantly reduced over that used for the "Chagan" event in January 1965. The design used was tested in Hole 125 at the Sary-Uzen' portion of the STS on November 4, 1970,(40,41) several months before the Taiga event. Although specific details of the explosives used for Taiga have not been provided, MinAtom has reported that special nuclear explosives for excavation were developed in the 1970s in which the fission contribution was reduced to about 0.3 kt with the remainder of the energy coming from thermonuclear reactions.(42)"
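As a quick sanity check on the quoted figure, the scaled depth of burial follows from the quote's own numbers using the standard W^(1/3.4) cratering scaling (a sketch, not from the Nordyke paper itself):

```python
# Scaled depth of burial = emplacement depth / W^(1/3.4), W in kilotons.
# Figures taken from the Taiga quote above.
depth_m = 127.0   # emplacement depth of each charge, m
yield_kt = 15.0   # yield of each charge, kt

scaled_depth = depth_m / yield_kt ** (1 / 3.4)
print(round(scaled_depth))  # 57, i.e. ~57 m/kt^(1/3.4), matching the text
```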

Reference 42 is History of Soviet Nuclear Weapons, op. cit., p. 46 - The History of Soviet Nuclear Weapons, Draft Outline, VNIIEF-VNIITF, Moscow, 1992; U.S.S.R. Nuclear Weapons Tests and Peaceful Nuclear Explosions, 1949 Through 1990, RFNC-VNIIEF, Sarov, ISBN 5-85165-062-1

Another decent doc on the subject. http://geology.er.usgs.gov/eespteam/pdf/USGSOFR01312.pdf U.S. DEPARTMENT OF THE INTERIOR GEOLOGICAL SURVEY OPEN FILE REPORT 01-312 The Containment of Soviet Underground Nuclear Explosions

Underground tests & experiments
It appears the below document is available on the nepis.EPA.gov website. Understandably, the tests on livestock were part of something broader: the desire to do fracking with nuclear devices. As there were concerns about the effect this would have on topside livestock, they tested it out. No injuries were reported following the tests, and indeed cows produced milk at normal rates afterwards.

Accession Number: NERC-LV-539-24
Title: Observations on Wildlife and Domestic Animals Exposed to the Ground Motion Effects of Underground Nuclear Detonations
Publication Date: Oct 1973
Media Count: 19p
Personal Author: Donald D. Smith
Abstract: For abstract, see NSA 29 02, number 02646.

High altitude nuclear explosions
Operation Plumbbob test shot John: an AIR-2 Genie detonated at a burst height of ~13,000 ft altitude, yield ~1.7 kt, launched by a Northrop F-89 Scorpion; the target drone was destroyed. Video of the five men standing at ground zero: www.YouTube.com/watch?v=BlE1BdOAfVc

http://www.naav.com/assets/2012_11_NAAV_Newsletter.pdf (pg 9) includes some good pics, including a pic of the destroyed target drone, but one pic is misleading: the "five men at ground zero, directly below!" pic is obviously a pose, as their normal midday-sun shadows are seen on the desert floor. The doc has a fair few errors; see eye injuries discussed elsewhere above. www.NAAV.com Nov 2012.

Cuban missile crisis, known to the Soviets as the Caribbean crisis
https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol46no1/article06.html CIA analysis and summary some 50 years later.

''Soviet Deception in the Cuban Missile Crisis Learning from the Past James H. Hansen''

Deterrence, Nuclear peace, Global Zero (campaign), Nuclear disarmament
“For the good of mankind and to end all world wars.”

“A peace enforced through fear is a poor substitute for a peace maintained through international cooperation based upon agreement and understanding. But until such a peace is brought about, this nation can hope only that an effective deterrent to global war will be a universal fear of the atomic bomb as the ultimate horror in war.” Report of the Joint Chiefs of Staff, Operations Crossroads, June 30, 1947

Marshal Nikolai V. Ogarkov, Chief of the Soviet General Staff, 1979 (the year the Soviet Union invaded Afghanistan): ‘The Soviet Union has superiority over the United States. Henceforth it will be the United States who will be threatened. It had better get used to it.’

Aleksandr Solzhenitsyn interviewed in the Wall Street Journal, 23 June 1983: ‘There are two Soviet Unions. The people - millions of them - dream of an end to wars, to armaments. The government, to the contrary, does not contemplate that idea even for a minute. It does, of course, want the WEST to disarm. But not one item of Soviet military equipment will ever be given up. ... It is normal to be afraid of nuclear weapons. I would condemn no one for that. But the generation now coming out of Western schools is unable to distinguish good from evil. Even those words are unacceptable. This results in impaired thinking ability. Isaac Newton, for example, would never have been taken in by communism! These young people will soon look back on photographs of their own demonstrations and cry. But it will be too late. I say to them: You are protesting nuclear arms. But are you prepared to try to defend your homeland with NON-nuclear arms? No: These young people are unprepared for ANY kind of struggle.’

Calcium carbide edit to do on ripening
http://www.academia.edu/2321590/Eating_artificially_ripened_fruits_is_harmful

Ireland at risk from British nuclear facilities?
http://www.neimagazine.com/news/newsvery-low-threat-from-uk-nuclear-plants-says-irish-radiation-protection-body

Ireland gets nuclear energy from Britain, Wylfa
http://www.irishexaminer.com/opinion/letters/that-nukes-that-argument-233440.html "There is no need for this consideration as the electricity, which has been flowing into Irish homes for the past year, has been partly sourced from nuclear plants in Britain, some less than 100 miles from Dublin.

The East-West interconnector integrates the Irish and UK electrical grids and a significant portion of the UK grid is derived from nuclear energy. Ireland develops wind energy while Britain develops nuclear energy and both states operate off a single grid.

Heed not the Irish councillors as they do not understand what they are talking about! "

sulfuric acid aka sulphuric. now on SO2 page
Sulfur dioxide can also be a by-product in the manufacture of calcium silicate cement: CaSO4 is heated with coke and sand in this process:
 * 2 CaSO4 + 2 SiO2 + C → 2 CaSiO3 + 2 SO2 + CO2

Up until the 1970s, commercial quantities of sulfuric acid and calcium silicate were produced by mixing anhydrous gypsum with shale or marl and roasting the mixture in a rotary kiln. The sulfate liberates sulfur dioxide gas, used in sulfuric acid production, while the burnt lime (CaO) reacts with the silica in the shale to produce calcium silicate.
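The overall reaction given above can be checked for atom balance mechanically; a minimal sketch:

```python
# Verify the atom balance of:
#   2 CaSO4 + 2 SiO2 + C -> 2 CaSiO3 + 2 SO2 + CO2
from collections import Counter

def atoms(species):
    """Sum element counts over (coefficient, composition) pairs."""
    total = Counter()
    for coeff, comp in species:
        for elem, n in comp.items():
            total[elem] += coeff * n
    return total

reactants = [(2, {"Ca": 1, "S": 1, "O": 4}),   # CaSO4 (anhydrous gypsum)
             (2, {"Si": 1, "O": 2}),           # SiO2 (sand/silica)
             (1, {"C": 1})]                    # C (coke)
products  = [(2, {"Ca": 1, "Si": 1, "O": 3}),  # CaSiO3 (calcium silicate)
             (2, {"S": 1, "O": 2}),            # SO2
             (1, {"C": 1, "O": 2})]            # CO2

assert atoms(reactants) == atoms(products)  # balanced: Ca2 S2 Si2 O12 C1
```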

History
The field of study is largely credited with being pioneered by Charles A. S. Hall, a systems ecology and biophysical economics professor at the State University of New York, who took his initial work, done at the Ecosystems Center of the Marine Biological Laboratory, and focused his methodology on examining the sustainability of human industrial civilization. The concept would have its greatest exposure in 1984, with a paper by Hall that appeared on the cover of the journal Science.

Competing methodology
In a 2010 paper, Murphy and Hall detailed the extended ("Ext") boundary protocol they advise for all future research on EROI, intended to produce what they consider a more realistic assessment, and to generate greater consistency in comparisons, than what Hall and others view as the "weak points" of a competing methodology. In more recent years, however, a source of continued controversy has been a different methodology endorsed by certain members of the IEA, which, most notably in the case of photovoltaic solar panels, controversially generates more favorable values.

In the case of photovoltaic solar panels, the IEA method tends to focus on the energy used in the factory process alone. In 2016, Hall observed that much of the published work in this field is produced by advocates or persons with a connection to business interests among the competing technologies, and that government agencies had not yet provided adequate funding for rigorous analysis by more neutral observers. However, when comparing two energy sources, a standard practice for the supply-chain energy input can be adopted: for example, count the energy embodied in the steel, but not the energy invested in the factories deeper than the first level of the supply chain. It is in part for these fully-encompassed-systems reasons that, in the conclusions of Murphy and Hall's 2010 paper, an EROI of 5 by their extended methodology is considered necessary to reach the minimum threshold of sustainability, while a value of 12-13 by Hall's methodology is considered the minimum necessary for technological progress and a society supporting high art.
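The boundary question can be made concrete with a toy calculation (all numbers below are hypothetical, chosen only for illustration, not taken from Murphy and Hall): widening the analysis boundary adds more energy-input terms to the denominator, so the extended EROI is always less than or equal to the narrow one.

```python
# EROI = lifetime energy delivered / energy invested within the chosen boundary.
energy_delivered = 1000.0  # lifetime output, arbitrary units

inputs_narrow = {"on-site process energy": 60.0}  # factory-process-only boundary
inputs_extended = {**inputs_narrow,
                   "embodied energy of steel and other materials": 30.0,
                   "transport and labor support": 10.0}  # extended boundary

eroi_narrow = energy_delivered / sum(inputs_narrow.values())
eroi_extended = energy_delivered / sum(inputs_extended.values())
print(round(eroi_narrow, 1), eroi_extended)  # 16.7 10.0 -- boundary choice matters
```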

EROI effects within a grid
In an influential 2013 analysis of the energy return on energy invested for common energy sources, the "unbuffered" (uncorrected for intermittency) EROEI for each energy source analyzed is as depicted in the table at right. With the exception of the only two non-fossil baseload energy-supplying systems analyzed, biomass and nuclear, the "buffered" EROEI values stated in the paper (corrected for intermittency, to approximate the steady electrical supply characteristics of a biomass or nuclear facility) were considerably lower for all low-carbon power sources, owing to weather variations; the reduction in EROEI is proportional to how reliant each source is on the manufacture and use of back-up energy systems. Given these lowered values, in more recent years other analysts have questioned whether an industrial society reliant on new renewables would be capable of sustainable progress. Charles Hall and Prieto would later largely corroborate the solar PV value with an empirical real-world analysis of utility-scale solar PV installation projects in Spain.
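The "buffering" correction described above can be sketched in one line of arithmetic (hypothetical numbers, not the paper's): the embodied energy of the required storage or back-up capacity is added to the denominator, lowering an intermittent source's EROEI.

```python
# Buffered vs unbuffered EROEI for an intermittent source (illustrative only).
e_out = 4000.0       # lifetime electricity delivered
e_in_plant = 1000.0  # embodied energy of the generating plant itself
e_in_buffer = 1000.0 # embodied energy of storage/back-up needed for steady supply

unbuffered = e_out / e_in_plant
buffered = e_out / (e_in_plant + e_in_buffer)
print(unbuffered, buffered)  # 4.0 2.0 -- buffering halves EROEI in this example
```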

ESOEI
ESOEI (or ESOIe) is used when EROEI is below 1. "ESOIe is the ratio of electrical energy stored over the lifetime of a storage device to the amount of embodied electrical energy required to build the device."

One of the notable outcomes of the Stanford University team's assessment on ESOI, was that if pumped storage was not available, the combination of wind energy and the commonly suggested pairing with battery technology as it presently exists, would not be sufficiently worth the investment, suggesting instead curtailment.
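The ESOIe definition quoted above is a simple ratio; a minimal sketch with hypothetical numbers (not the Stanford team's figures):

```python
# ESOIe = lifetime electrical energy stored / embodied electrical energy
#         required to build the storage device.
cycles = 3000             # charge/discharge cycles over the device's life
energy_per_cycle = 10.0   # kWh delivered per cycle
embodied_energy = 3000.0  # kWh(e) required to build the device

esoi = cycles * energy_per_cycle / embodied_energy
print(esoi)  # 10.0 -- the device returns 10x its embodied energy over its life
```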

An Gorta Mór
In Ireland, the Great Famine was a period of mass starvation, disease and emigration between 1845 and 1852. It is also known, mostly outside Ireland, as the Irish Potato Famine. In the Irish language it is called an Gorta Mór (meaning "the Great Hunger") or an Drochshaol (meaning "the bad life").

During the famine approximately 1 million people died and a million more emigrated from Ireland, causing the island's population to fall by between 20% and 25%. The proximate cause of famine was a potato disease commonly known as potato blight. Although blight ravaged potato crops throughout Europe during the 1840s, the impact and human cost in Ireland, where one-third of the population was entirely dependent on the potato as the staple food (all other crops having to be sold as cash crops to pay landlord rents or face eviction, an economic situation created by the preceding British penal laws), was exacerbated by a host of political, ethnic, religious, social and economic factors which remain the subject of historical debate.

The famine was a watershed in the history of Ireland. Its effects permanently changed the island's demographic, political and cultural landscape. For both the Irish people and those in the resulting diaspora, the famine entered folk memory.{{#tag:ref|The Famine that affected Ireland from 1845 to 1852 became another rallying call for various Irish emancipation and independence efforts, from the Land League to the Celtic Revival, Home Rule and the Irish republicanism movement, as the whole island was without its own Irish Parliament, dissolved by the Act of Union 1800, and the lasting effect of the penal laws was still fresh in the minds of the survivors.}}

The Great Hunger contrasts with the previous Irish Famine (1740–1741), the "Year of Slaughter", which coincided with the longest period of extreme cold in modern European history and created a Europe-wide grain scarcity, including in the normally surplus area of the southern Baltic. That scarcity slowed the inter-European food trade, resulting in far less food being drained out of Ireland during that period. In juxtaposition, by 1845 there was a far greater government and military presence, and with that the volume of Irish food exports conducted, many times under armed Dragoon guard, during the Great Hunger was substantially higher. While nobody saw a scandal or faulted government per se for what had happened in the 1740 famine, as the weather was obviously to blame and the power and size of the government was then small to almost non-existent, the Great Hunger hit a much-altered Ireland: one possessed of far greater administrative and economic resources that had been entirely absent a century previous.

Despite being riven by divisive internal politics and religion over the desired fate of the country's future, nearly all Irishmen in 1847 agreed that it was an outrage that their country should be brought to its knees by famine in an era of peace and relative plenty, both within Ireland and in Europe – in direct contrast to the famine of 1740, where poor and rich alike had felt impotent against nature. All these factors, and more, finally broke the already strained relations between the Irish people and the economic, political and social arms of the British Government within Ireland. The Great Hunger rekindled the desire in Irish people to once more have direct democratic representation, a desire finally granted expression in the all-island Irish elections of 1918, which, with the events that transpired, culminated in the last major Irish rebellion, the Irish War of Independence of 1919. Modern historians regard the Great Hunger as a dividing line in the Irish historical narrative, referring to the preceding period of Irish history as "pre-Famine".

The starvation of the Great Hunger, and the workhouse living conditions that were breeding grounds for dysentery, resulted in what remains the single worst death toll in either Irish or British history.

In 2009 the British Prime Minister apologised for the famine.

Humor and comedy
The 1984 "We begin bombing in five minutes" incident is an example of Cold War dark humor: a microphone gaffe in which a personal joke between Ronald Reagan, his White House staff and the radio technicians was accidentally leaked to the US populace. For context, Reagan was well known before this incident for telling Soviet jokes in televised debates, many of which have since been uploaded to video hosting websites.


 * My fellow Americans, I'm pleased to tell you today that I've signed legislation that will outlaw Russia forever. We begin bombing in five minutes.

The joke was a parody of the opening line of that day's speech:


 * My fellow Americans, I'm pleased to tell you that today I signed legislation that will allow student religious groups to begin enjoying a right they've too long been denied &mdash; the freedom to meet in public high schools during nonschool hours, just as other student groups are allowed to do.

Following his trip to Los Angeles in 1959 and being refused entry into Disneyland on security grounds, a dejected Soviet Premier Khrushchev joked, "...just now I was told that I could not go to Disneyland. I asked, 'Why not?' What is it, do you have rocket-launching pads there?"

Daisies and mushroom clouds
Daisy was the most famous campaign commercial of the Cold War. Aired only once, on 7 September 1964, it was a factor in Lyndon B. Johnson's defeat of Barry Goldwater in the 1964 presidential election. The contents of the commercial were controversial, and their emotional impact was searing.

The commercial opens with a very young girl standing in a meadow with chirping birds, slowly counting the petals of a daisy as she picks them one by one. Her sweet innocence, along with mistakes in her counting, endears her to the viewer. When she reaches "9", an ominous-sounding male voice is suddenly heard intoning the countdown of a rocket launch. As the girl's eyes turn toward something she sees in the sky, the camera zooms in until one of her pupils fills the screen, blacking it out. The countdown reaches zero, and the blackness is instantly replaced by a simultaneous bright flash and thunderous sound (which in reality, due to the low speed of sound in air, would not have reached the camera until a number of seconds after the flash was recorded; see thunder for further information), followed by footage of a nuclear explosion similar in appearance to the near-surface-burst Trinity test of 1945, followed by another cut to footage of a billowing mushroom cloud.
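The flash-to-sound delay mentioned parenthetically above is easy to estimate; a sketch assuming a hypothetical 10 km camera distance (no distance is given for the stock footage):

```python
# Sound lags the flash by distance / speed of sound; light arrives
# essentially instantly by comparison.
speed_of_sound = 343.0  # m/s in air at ~20 C
distance_m = 10_000.0   # hypothetical camera distance, 10 km

delay_s = distance_m / speed_of_sound
print(round(delay_s, 1))  # 29.2 -- roughly half a minute of lag at 10 km
```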

As the fireball ascends, an edit cut is made, this time to a close-up section of incandescence in the mushroom cloud, over which a voiceover from Johnson is played, which states emphatically, "These are the stakes! To make a world in which all of God's children can live, or to go into the dark. We must either love each other, or we must die." Another voiceover then says, "Vote for President Johnson on November 3. The stakes are too high for you to stay home." (Two months later, Johnson won the election in an electoral landslide.)

Archive get
Most available from the Prelinger archive. http://archive.org/details/MedicalA1950 Medical Aspects of Nuclear Radiation, USAF Special Weapons Project (1950)

Duck and Cover, Federal Civil Defense Administration (1951)

Radioactive Fallout and Shelter - 1965 American Civil Defense Educational Film Home Preparedness Workshops Family Fallout Shelters - 1960 American Civil Defense Educational Film

Transuranium Elements - 1963 Chemistry Educational Documentary that actually has Glenn Seaborg narrating most of this!

Stay Safe, Stay Strong: The Facts About Nuclear Weapons, USAF (1960) Radiological Defense, U.S. Office of Civil Defense (1961)

Effects 1977 Glasstone & Dolan
Best online scan of the 1977 book, better than Fermilab's; PDF downloadable. http://babel.hathitrust.org/cgi/pt?id=uc1.31822004829784;view=1up;seq=165

First edition 1957; second edition 1962, with an "update"/correction in 1964; and the final, third edition in 1977.

As for the 1962-1964 2nd edition, the cheapest version appears to be offered by ultrabooks on Amazon for $25: The Effects of Nuclear Weapons, 1962 (with[out] Nuclear Bomb Effect Computer). ''This is the revised edition reprinted in Feb 1964. The outside is somewhat yellowed, a little soiled and aged. The inside is clean, tight, and unmarked. No writing, underlining, highlighting, etc. 730 pages. Does not contain the nuclear bomb effects "computer" at the end. I pack well and ship fast.''

Capabilities of nuclear weapons
An originally classified series of books by Dolan and others, it was used to write all the qualitative "Effects..." books, the last released in 1977 by Glasstone & Dolan. The more quantitative Capabilities book, then termed the Defense Nuclear Agency's Effects Manual EM-1, was released to officials on July 1, 1972 (Change 1: July 1, 1978; Change 2: August 1, 1981) and declassified on 13 February 1989.

http://glasstone.blogspot.ie/2009/09/capabilities-of-nuclear-weapons-us-dod.html

The above has the 1972 versions in pdf, broken into parts. Part I,II etc. On docstoc. Each part is ~800 pages long.

John Northrop's 736-page Handbook of Nuclear Weapon Effects: Calculational Tools Abstracted from DSWA's Effects Manual One (EM-1), of September 1996, briefly summarized the formulas from the multi-thousand-page, 22-volume Capabilities of Nuclear Weapons, DNA-EM-1 (it had grown to this length by about 1985).

Charles Bridgman's Introduction to the Physics of Nuclear Weapons Effects summarized the physics behind the formulae in Northrop's book. This is currently not available for public release. Defense Threat Reduction Agency (2001). Language: English. ASIN: B0006S2HWK.

Declassified films
Declassified nuclear test films #1 to 100 perhaps. All found on archive, alternatively on their website.

Peter Kuran's "Atomic Filmmakers" might also be worth getting for its digitally remastered quality and the rapatronic info it no doubt contains.

Neutron bomb working on
New Scientist, 12 Jun 1986. Neutron bombs would still be effective after 1990, but only marginally so, as tank armor thickness increased. http://books.google.co.uk/books?id=RYH7o-4ykmMC&pg=PA62&lpg=PA62#v=onepage&q&f=false

- C. S. Grace, Royal Military College of Science, Shrivenham, Wiltshire, New Scientist, 12 June 1986, p. 62.

'The neutron bomb, so-called because of the deliberate effort to maximize the effectiveness of the neutrons, would necessarily be limited to rather small yields - yields at which the neutron absorption in air does not reduce the doses to a point at which blast and thermal effects are dominant. The use of small yields against large-area targets again runs into the delivery problems faced by chemical agents and explosives, and larger yields in fewer packages pose a less stringent problem for delivery systems in most applications. In the unlikely event that an enemy desired to minimize blast and thermal damage and to create little fallout but still kill the populace, it would be necessary to use large numbers of carefully placed neutron-producing weapons burst high enough to avoid blast damage on the ground [500 metres altitude for a neutron bomb of 1 kt total yield], but low enough to get the neutrons down. In this case, however, adequate radiation shielding for the people would leave the city unscathed and demonstrate the attack to be futile.'

- Dr Harold L. Brode, RAND Corporation, Blast and Other Threats, pp. 5–6 in Proceedings of the Symposium on Protective Structures for Civilian Populations, U.S. National Academy of Sciences, National Research Council, Symposium held at Washington, D.C., April 19–23, 1965.

Samuel Cohen's Book. READ. http://www.athenalab.com/Confessions_Sam_Cohen_2006_Third_Edition.pdf

Neutron bomb creator speaks. http://www.youtube.com/watch?feature=player_embedded&v=z_QFXGxw6Tk

Saturn V. done
After Apollo, the Saturn V was planned to be the prime launch vehicle for Prospector, intended to deliver a 330 kg robotic rover to the Moon similar to Lunokhod, and for the Voyager Mars probes, as well as an upscaled version of the Voyager interplanetary probes. It was also to have been the launch vehicle for the nuclear rocket stage RIFT test program and the later NERVA. All of these planned uses of the Saturn V were cancelled, with cost being a major factor. Edgar Cortright, who had been director of NASA Langley, stated decades later that "JPL never liked the big approach. They always argued against it. I probably was the leading proponent in using the Saturn V, and I lost. Probably very wise that I lost."

The (canceled) second production run of Saturn Vs would very likely have used the F-1A engine in its first stage, providing a substantial performance boost. Other likely changes would have been the removal of the fins (which turned out to provide little benefit compared to their weight); a stretched S-IC first stage to support the more powerful F-1As; and uprated J-2s or an M-1 (rocket engine) for the upper stages.

A number of alternate Saturn vehicles were proposed based on the Saturn V, ranging from the Saturn INT-20 with an S-IVB stage and interstage mounted directly onto an S-IC stage, through to the Saturn V-23(L) which would not only have five F-1 engines in the first stage, but also four strap-on boosters with two F-1 engines each: giving a total of thirteen F-1 engines firing at launch.

The Space Shuttle was initially conceived of as a cargo transport to be used in concert with the Saturn V, even to the point that a Saturn-Shuttle was proposed, using the winged shuttle orbiter and external tank, but with the tank mounted on a modified, fly-back version of the S-IC. The S-IC stage would power the Shuttle during the first two minutes of flight, after which it would be jettisoned (then flying back to KSC for refurbishment) and the Space Shuttle Main Engines would fire and place the orbiter into orbit. The Shuttle would handle space station logistics, while the Saturn V would launch components. Lack of a second Saturn V production run killed this plan and has left the United States without a heavy-lift launch vehicle. Some in the U.S. space community have come to lament this situation, as continued production would have allowed the International Space Station, using a Skylab or Mir configuration with both U.S. and Russian docking ports, to have been lifted with just a handful of launches. The Saturn-Shuttle concept also would have eliminated the Space Shuttle Solid Rocket Boosters that ultimately precipitated the Challenger accident in 1986.

Proposed successors
[Image: Maximum payload (LEO). (Left to right) Space Shuttle payload includes 7 crew and cargo. Ares I payload includes 4 crew and inherent craft. Saturn V payload includes 3 crew, inherent craft and cargo. Ares V payload includes only cargo and inherent craft.]

U.S. proposals for a rocket larger than the Saturn V from the late 1950s through the early 1980s were generally called Nova. Over thirty different large rocket proposals carried the Nova name, but none were developed.

Wernher von Braun and others also had plans for a rocket that would have featured eight F-1 engines, like the Saturn C-8, in its first stage, allowing it to launch a manned spacecraft on a direct ascent flight to the Moon. Other plans for the Saturn V called for using a Centaur as an upper stage or adding strap-on boosters. These enhancements would have increased its ability to send large unmanned spacecraft to the outer planets or manned spacecraft to Mars. Other Saturn V derivatives analyzed included the Saturn MLV concept family of "Modified Launch Vehicles", which would have almost doubled the payload lift capability of the standard Saturn V and was intended for use in a proposed manned mission to Mars by 1980.

Another Saturn V derivative analyzed involved the use of a solid-core nuclear thermal rocket engine, a number of which were successfully ground tested, as the in-orbit upper (third) stage of the vehicle. This vehicle, designated Saturn C-5N, was a baseline Saturn C-5 with a nuclear Saturn S-N engine in place of the standard chemical J-2 (rocket engine), studied by Boeing in 1968. It would have carried a considerably higher payload into deep space than the all-chemical version of the Saturn V, and was also being studied for a manned mission to Mars by 1980; however, President Nixon ended work on the engines in 1973, along with all Saturn V ELVs, in government budget cuts. In 2006, as part of the cancelled Constellation Program that would have replaced the Space Shuttle, NASA unveiled plans to construct the heavy-lift Ares V rocket, a Shuttle-Derived Launch Vehicle using some existing Space Shuttle and Saturn V infrastructure. Named in homage to the Saturn V, the original design, based on the Space Shuttle External Tank, was 360 ft tall and powered by five Space Shuttle Main Engines (SSMEs) and two uprated five-segment Space Shuttle Solid Rocket Boosters, a modified variation of which would be used for the crew-launched Ares I rocket. As the design evolved, the Ares V was slightly modified, with the same 33 ft diameter as the Saturn V's S-IC and S-II stages, and in place of the five SSMEs, five RS-68 rocket engines, the same engines used on the Delta IV EELV, would be used. The switch from the SSME to the RS-68 was due to the steep price of the SSME, as it would be thrown away along with the Ares V core stage after each use, while the expendable RS-68 engine is cheaper, simpler to manufacture, and more powerful than the SSME.

In 2008, NASA again redesigned the Ares V, lengthening and widening the core stage and adding an extra RS-68 engine, giving the first-stage core of the launch vehicle a total of six engines. The six RS-68B engines, during launch, would have been augmented by two "5.5-segment" SRBs instead of the original five-segment designs, although no decision was made on the number of segments NASA would have used in the final design. If the six RS-68B/5.5-segment SRB variant had been used, the vehicle would have had a total of approximately 8,900,000 lbf of thrust at liftoff, making it more powerful than the Saturn V or the Soviet/Russian Energia boosters, but less than the 43–50 MN of the Soviet N-1. An upper stage, known as the Earth Departure Stage and based on the S-IVB, would have utilized a more advanced version of the J-2 engine known as the "J-2X", and would have placed the Altair lunar landing vehicle into a low Earth orbit. At 381 ft tall and with the capability of placing 180 metric tons into low Earth orbit, the Ares V would have surpassed the Saturn V and the two Soviet/Russian super-heavy lift vehicles in height, lift, and launch capability. The RS-68B engines, based on the current RS-68 and RS-68A engines built by the Rocketdyne Division of Pratt and Whitney (formerly under the ownerships of Boeing and Rockwell International), produce less than half the thrust per engine of the Saturn V's F-1 engines, but are more efficient and can be throttled up or down, much like the SSMEs on the Shuttle. The J-2 engine used on the S-II and S-IVB would have been modified into the improved J-2X engine for use both on the Earth Departure Stage (EDS) and on the second stage of the proposed Ares I. Both the EDS and the Ares I second stage would have used a single J-2X motor, although the EDS was originally designed to use two motors until the redesign employing the five (later six) RS-68Bs in place of the five SSMEs.
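Since the paragraph above mixes pounds-force and meganewtons, a small conversion (a sketch using the standard lbf-to-newton factor; the thrust figure is the one quoted in the text) makes the N-1 comparison explicit:

```python
# Put the quoted liftoff thrust on the same scale (meganewtons) as the N-1 figure.
# 1 lbf = 4.4482216 N is the standard conversion factor.
LBF_TO_N = 4.4482216152605

def lbf_to_mn(lbf: float) -> float:
    """Convert pounds-force to meganewtons."""
    return lbf * LBF_TO_N / 1e6

# Ares V (2008 six-engine RS-68B / 5.5-segment SRB variant), as quoted:
ares_v_mn = lbf_to_mn(8_900_000)
print(f"Ares V liftoff thrust: {ares_v_mn:.1f} MN")  # ~39.6 MN, below the N-1's quoted 43-50 MN
```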

In September 2011, NASA announced the Space Launch System (SLS) as the United States' new heavy-lift rocket for manned deep-space exploration, which will be comparable in size and capabilities to the Saturn V. The new SLS, similar to the Saturn V and the Ares IV concept that directly preceded it, is both a cargo and crew carrier. It has an upper stage powered by a J-2X engine derived from the Saturn V launch vehicle, a first stage powered by five liquid-fueled rocket engines derived from the Space Shuttle's main engines, and two strap-on SRBs also derived from the Shuttle program. The initial configuration of the new booster as proposed by NASA could lift approximately 70 metric tons to low Earth orbit, with later "Block II" variants with the SRBs possibly lifting up to 130 metric tons.

In 2012, it was noted that if a derivative of the F-1 (rocket engine) were integrated as a liquid-fueled booster in the SLS Block II, the payload could be 150 metric tons to low Earth orbit, 20 metric tons higher than that of the planned SLS Block II lifted by SRBs. In 2013, it was reported that the F-1B engine is to have improved efficiency and thrust, be more cost effective with a simplified combustion chamber, and have fewer engine parts. Each F-1B is to produce 1,800,000 lbf of thrust at sea level, an increase over the approximately 1,550,000 lbf of thrust achieved by the mature Apollo 15 F-1 engine.

NASA SLS deputy project manager Jody Singer of the Marshall Space Flight Center in Huntsville, in 2012 stated that the vehicle will have a launch cost of approximately $500 million per launch, with a relatively minor dependence of costs on launch capability.

^^Half the price of the Saturn V^^

F-1B booster for the Space Launch System
In 2012, PWR proposed using a derivative of the F-1 engine in NASA's Advanced Booster Competition for the Space Launch System (SLS), a competition which is anticipated to end in 2015 with the selection of a winning booster configuration. In 2013, engineers at the Marshall Space Flight Center began tests with an original F-1, serial number F-6049, an engine which was removed from Apollo 11 due to a glitch and never used; for many years it was at the Smithsonian Institution. The tests are designed to refamiliarize NASA with the design and propellants in light of interest in using an evolved version of the F-1 in future deep space flight applications.

Pratt & Whitney Rocketdyne and Dynetics, Inc. have presented a competitor to the 5-segment Space Shuttle Solid Rocket Booster intended for the Space Launch System, using two increased-thrust, heavily modified F-1B engines. In 2012 it was noted that, due to the engine's potential advantage in terms of specific impulse (a quantity analogous to car fuel efficiency), if this F-1B configuration were integrated in the SLS Block II, the overall vehicle would have a payload lift capability of 150 metric tons to low Earth orbit, 20 metric tons higher than what is achievable with the currently planned solid boosters.

In 2013, it was reported that the F-1B engine in development has the design goal of being at least as powerful as the never-flown F-1A, while also being more cost effective: incorporating a greatly simplified combustion chamber and a reduced number of engine parts, including the removal of the previously mentioned F-1 exhaust recycling system, that is, the removal of the turbopump exhaust mid-nozzle "curtain" cooling manifold. The resulting F-1B configuration is intended to result in each engine producing 1,800,000 lbf of thrust at sea level, an increase over the approximately 1,550,000 lbf of thrust that the mature Apollo 15 F-1 engines produced.
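The two sea-level thrust figures above imply roughly a 16% increase; a one-line arithmetic check (figures as quoted in the text):

```python
# Relative sea-level thrust increase of the proposed F-1B over the
# mature Apollo 15 F-1, using the figures quoted above.
f1_apollo15_lbf = 1_550_000   # approximate, mature Apollo 15 F-1
f1b_goal_lbf = 1_800_000      # F-1B design goal

increase = (f1b_goal_lbf - f1_apollo15_lbf) / f1_apollo15_lbf
print(f"F-1B over F-1: +{increase:.1%}")  # +16.1%
```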

Space travel on the cheap
"Using a ΔV of 9.5 km/sec as an example, the energy required to reach a low-Earth orbit is approximately 45 MJ/kg or 12.5 kW-hr/kg. This equates to $1.75/kg at current peak-hour electricity rates and about $0.48/kg at offhour rates. The energy requirement roughly doubles for placing payload mass into geosynchronous orbit. Current rates for access to space range from several thousand to well over $10,000 US dollars per kilogram. Clearly there is also room for improvement in the area of cost for current chemical launch vehicles. High Energy Density Materials, "If all else is held constant, in order to achieve higher specific impulse with higher propellant density, the chamber temperature must be increased. Currently, materials considerations limit chamber temperature to about 7000ºR if the chamber is actively cooled. Any new propellants with higher energy densities would likely require higher combustion temperatures than current materials allow"

So the document includes mentions of, and quick summaries of: Quadricyclane, polynitrogen, metallic hydrogen, SHARP, nuclear electric propulsion tugboat, and beamed propulsion. And for laser ablation: "Laser ablation involves the removal and subsequent acceleration of atoms or molecules from a solid surface through laser irradiation...Scaling this technology to higher liftoff masses will require large amounts of laser power and presumably equally large-scale development programs. In general, laser ablation propulsion is capable of providing much higher thrust levels than the PLT [Photonic laser thruster];"

Von Braun
"I'm glad they gave props to our humble Dr. VB. You don't see that too often. Out of respect to the man who is more than anyone else responsible for me not being a truck driver, and actually having a college education, here are some fun, yet little known facts about Dr. Von Braun (for those interested Space Geeks out there):

VB was good friends with Walt Disney which led to a couple of neat things:

Remember Dr. Ludwig Von Duck (I think that was his name. The Disney duck that always wore a mortarboard and spoke with a thick German accent)? Believed to be primarily based on VB.

Disney World's underground system of tunnels/accesses for logistics was based on, and almost identical to the tunnel systems under Cape Canaveral's launch complex due to VB's help and suggestions.

VB was a lieutenant in Hitler's SS, but not so much because he wanted to be. Himmler, seeing the devastating potential of VB's work, approached him personally and asked him to join the SS. At that time in Germany, one did not say no to Himmler.

Hitler personally threw VB in jail several times because he was pissed that VB kept telling everyone who would listen about sending men to explore space with rockets instead of, as Hitler wished, telling them about using the tech to blow the shit out of enemies of the Fatherland. He was eventually let go each time because his work was deemed too vital to the Nazi cause.

The Shoa Foundation (not sure if I spelled it correctly) maintains VB should posthumously be brought up on charges of war crimes for his role at Peenemunde (this was the location in which Germany built the V2 rockets lobbed at Britain using enslaved Jewish labor, and sadly was overseen by Dr. VB).

Enough sad stuff...

VB's first venture into rocketry was when he was a nine-year-old child and strapped a shit-load of fireworks to his wagon and lit them. Said wagon went careening into town, with the young VB far behind on foot. When he happened upon it he was so enthralled in studying the wreckage that he didn't notice the angry police officer... (this was also, as far as I know, his first trip to jail).

When he and his team came to America after the war, they were sent to White Sands (Fort Bliss). And in the middle of the desert, mostly working in air-conditioner-less shacks at the range, the men (including VB) would wear proper suits to work every damn day, and never seemed to take their jackets off. Eventually they bitched to the Army enough that they sent the team here, to Huntsville, AL.

VB was huge on assimilation, and was almost ashamed of his accent. When Disney proposed he do a series of shorts describing his ideas for space exploration to the general public (that Disney would produce and distribute) he initially refused to be on camera, and nearly fought Disney tooth and nail to not do them. Disney, however, wouldn't back down and insisted that it must be VB because no one else could properly convey the true awesomeness of the projects. Eventually he gave in, and these shorts became his first and most memorable entrance into American households.

Again with assimilation; he was big into using American adages at the time to better assimilate, (think "put on your dancing shoes") and relied quite heavily on the Americans he worked with to give him examples he could put into his speeches to come across better to the American people. However, the generals he worked with loved to fuck with him. One of his most memorable speeches, and most quoted phrases, ends with "put on your dancing slippers".

Also in an effort to better assimilate to the American way of life, Dr. VB would often be seen around Huntsville, shopping for groceries at Kroger as an example, and seeing as how he was more than a celebrity here, was often bombarded by crazy ass people (like my great uncle Lloyd), but he would always talk to them, always listen to them, and always thank them. Sorry, I know that's not really a fact, but that is what I've always respected most about him.

And finally, back to the story, his consulting is evident in the design of that OTRAG rock. When you look at the Saturn I (at least it's earliest incarnations), all it amounts to is a bunch of Redstone rockets bolted together.

Sorry to gush, but he is one of my heroes, and I like talking about him. He was, deep down a good man, sometimes put in horrible situations, who when finally given the opportunity, achieved great great things. So, the next time you read an article about something NASA keeps fucking up, do as I do, and silently think "We miss you Dr." Also, this year is the 100th anniversary of his birth, so wish him a happy birthday, too."

SR-71, the Habu snake. Project Oxcart
"This Blackbird accrued about 2,800 hours of flight time during 24 years of active service with the U.S. Air Force. On its last flight, March 6, 1990, Lt. Col. Ed Yielding and Lt. Col. Joseph Vida set a speed record by flying from Los Angeles to Washington, D.C., in 1 hour, 4 minutes, and 20 seconds, averaging 3,418 kilometers (2,124 miles) per hour."

CIA A-12 Archangel program. It has some awesome info, including a picture of a test bed engine glowing red hot.

Tires had Al mixed into the latex to prevent melting.

Historical Society
http://www.jbhall.freeservers.com/journal_of_the_old_drogheda_soci.htm Pillboxes on the Boyne [1939 - 1945] - Dr Geraldine Stout and Michael Reilly

http://forum.irishmilitaryonline.com/archive/index.php/t-9447.html More related to pillboxes

http://www.boards.ie/vbulletin/showthread.php?t=2055683847&page=1 Forum discussion on pillboxes in Ireland (tír na hÉireann)

http://irishpillbox.blogspot.ie/ Guy in Navan looking to do a pillbox photo project

Newgrange and the Bend of the Boyne (Irish Rural Landscapes, V. 1). Get this book and use it for research. On sale on Amazon.

http://ballincollig.wordpress.com/limekilns/ Lime Kilns of Cork

http://www.cementkilns.co.uk/early_rotary_kilns.html rotary cement kilns probably like the one in Platin.

http://www.oracleireland.com/Ireland/history/lime-kiln.htm Batch Lime kilns, short history of use.

Mythology and storytelling
Seumas MacManus has some good books on the matter.

Peerage of Ireland
Or how people still claim to be "lords" in a republican country; claiming to be a lord in a country that promotes equality is entirely incompatible.

For the Marquess Conyngham page. - The Marquess Conyngham, of the County of Donegal, was a title in the Peerage of Ireland. It was created in 1816 for Henry Conyngham, 1st Earl Conyngham. He was the great-nephew of another Henry Conyngham, 1st Earl Conyngham. The family's beginnings in Ireland originate with Charles Conyngham from Scotland, who attained land under the Plantation of Ulster, probably during the Cromwellian land confiscation (1652), in what was then known as Tamhnach an tSalainn; as he was the landlord, he renamed it after himself as Mountcharles, in County Donegal, Ireland.

...emotionally charged. - It is of particular importance that peerage titles have not been recognised by law in the Republic of Ireland since the establishment of the Free State in 1921, so any attempts at claiming titles of nobility are void and unconstitutional (see maybe edits in article Irish nobility) and therefore illegal. The Attorney General, in 2003, reconfirmed the state's stance on equality; however, the Conynghams continue to unconstitutionally claim nobility. The nobility title is said, by the family, to be simply a courtesy title; however, this courtesy of nobility peerage could equally be extended to all members of the people of Ireland, as there is no mention of nobility peerage out of courtesy in the Constitution of Ireland.

Or more scholarly - Titles of nobility have not been recognised by law since the inception of the constitution of Ireland, Bunreacht na hÉireann. Any subsequent attempts at claiming titles of nobility are considered void and anachronistic. The Attorney General, in 2003, confirmed the state's stance on equality.

Similarly, the constitution should be mentioned in:
 * "Henry Conyngham, 8th Marquess Conyngham"
 * and watch out on Peerage of Ireland, Irish nobility, Slane Castle, Slane Concert, Tawnaghtallan AKA Mountcharles.

IP users
User:109.125.17.161 - mostly nuclear energy in france, July 2016.

Is it notable that the hospital remained below the "national standard"?
An Ernst and Young assessment of the hospital in 2014, following the local HIQA recommendations that were made in the wake of Savita's death, returned the finding that the hospital was improving but still not compliant with creating an "action plan" to bring it up to the "national standards". In 2014, the hospital where Savita died was improving, although not quickly enough in some areas.

Secondly, to User:RexxS on the lack of heating in the room: the broken radiator which caused the cold room was indeed a confounding factor in her death, in the timeline presented in all the inquests. At 4:15 AM on 24 October, Savita unequivocally reported being cold and was shivering, an indicator of sepsis. However, as the room was noted to be cold and Savita's low-grade temperature of "37.7 C" was not a smoking gun of septicemia, everyone missed the diagnosis and just threw her some blankets instead. Everyone failed to make the necessary adjustment that this would indeed be a high/septicemia-range temperature for someone who had just spent the last day inside a cold room. Indeed, even 37.5 C is deemed to be well within the domain of pyrexia.

If, on the other hand, the room had been equipped with a modicum of heating and the higher recommended ambient temperature for all hospital rooms had been met, instead of it being a complete barn, then her elevated temperature would have been higher, ergo they would likely have recognized and diagnosed sepsis earlier. It was only upon her temperature being recognized as very much in the septic range, some hours later, that they began to act. Early detection is vital for treating septicemia, in case you're unfamiliar. So the broken heaters/"a lack of basic care" were pretty important actually.

User:Ebelular. Only 5 of 19 maternity hospitals implemented the 2007 sepsis guidelines in the wake of Tania McCabe's unnecessary death; she, unsurprisingly, was also a public patient who died. So I can indeed say that 5 Irish hospitals are very much in a different universe (not "excellent", which I never actually said) while simultaneously speaking about how the other hospitals are worlds apart. "Irish hospitals" are not a monolithic entity with across-the-board similarities in professionalism or funding, so you cannot in fact say "Irish hospitals are terrible/OK" as a blanket statement. It's really not hypocrisy to distinguish between them.

It is well known in Ireland that University Hospital Galway is a curiously managed hospital, to say the least. A gander at the controversy list in that article and the triage system circa 2012 is all that need be done to see that. So the fact that Savita was admitted as a public patient into this very same low-ranking hospital, which is below the "national standard", is notable.

Indeed, many nurses and doctors have commented on the very public standard of care Savita received. User:RexxS & User:Bastun, you asked for WP:RS on this factor being at play; in the latter case, Bastun, you should already know that we discussed this exact reference weeks ago. Here for example: [http://www.irishexaminer.com/viewpoints/columnists/victoria-white/savitas-death-is-not-about-abortion-it-is-about-medical-negligence-247993.html In the Irish Examiner: "Why do we throw consultants at mothers if they pay for them privately, even in public maternity hospitals?

The most depressing part of the Savita tragedy came home to me when a nurse friend said: “Was she a public patient? That figures."]


 * Wait, what? You're seriously suggesting that an opinion piece by a pro-life columnist - quoting a "nurse friend" is a reliable source for a condemnation of Ireland's public health service. Woah!


 * Were you aware of this hot mess of synthesis, OR, and personal opinion? Bastun Ėġáḍβáś₮ŭŃ! 10:51, 1 November 2017 (UTC)

"HIQA PG 133. "The regions fall significantly short of the one consultant per 350 births as recommended by The Future of Maternity and Gynaecology Services in Ireland 2006 – 2016 report as being necessary for the provision of dedicated consultant cover on the labour ward for 40 hours per week, a figure supported by international evidence."

Head OBGYN at the wealthier Rotunda Hospital in Dublin, Dr. Coulter-Smith, has said both that Savita's death had nothing to do with abortion laws and that he was operating just fine under the legal situation on abortion, more open to professional discretion, that existed in the same year Savita died. He conducted 4 life-saving abortions in that same year, 2012. So did Savita really die from this red herring of abortion laws? Or a Catholic ethos? Or did she actually die from a lack of basic care, exactly as the inquests highlight, just like Tania McCabe: with her primary caregiver operating in a hospital without the sepsis guidelines that should have been in place, a caregiver who was overstretched, and a caregiver who did not have an understanding of the national OBGYN guidelines or laws?

The HSE inquest writes: "The interpretation of the law related to lawful termination in Ireland is considered to have been a material contributory factor." - It does not say the actual law was considered to be a material contributory factor, nor does it say it played a "major role" either way.

Arulkumaran et al. continue: "The consultant clearly thought that the risk to the mother had not crossed the point where termination was allowable in Irish law."

Indeed, the 15 local and 19 national recommendations made by HIQA in regard to Savita's death include pretty specific reference to the caregivers in the hospital not being aware of the existing Irish OBGYN guidelines/laws. - http://www.thejournal.ie/changes-savita-hospital-abortion-service-1816464-Dec2014/ Saolta chief: "We want to embed this continuous learning into the way we work every day."

National recommendation 19 reads in part: "This learning should actively inform the respective Clinical Care Programmes and relevant guidelines and guidance."

So if Dr Astbury didn't know the laws, with words like "interpreted" and "thought", how exactly is the law to blame?

[https://www.rte.ie/news/2012/1116/345877-savita-halappanavar/ Medical Council President Prof Kieran Murphy has said that its current guidelines on abortion were decided in 2009 and reflect the current legal position. Prof Murphy said the guidelines are as accessible and as straightforward as possible and had received a plain English recognition mark.]

That's pretty notable and should be included.

The recommendation 4b to clarify the law to super-clear levels does not mean that the law wasn't clear already, or that the law played any role in her death. Not knowing the law or OBGYN guidelines did, however, delay the provision of care.

So we have to be careful when writing this article, to correctly communicate to readers that it was the interpretation of the law and not actually the law itself, which slowed treatment.

......break, 2nd reply, since superseded.


 * Though that is not really what is contained in the inquest hearing. The curious Arulkumaran report uses a selection and presentation of events & quotes here to emphasise some things and not others, as this inquest is about the failures in care, not the cases where the best practices of care are the focus. On the very page 41 that you link to, it is in actual fact Dr. Katherine Astbury saying unequivocally, in the timeframe of Oct 24: "I also informed Ms Halappanavar that if we did not identify another source of infection or if she did not continue to improve we might have no option but to consider a termination regardless of the foetal heart." This isn't SYNTH, as it is obviously the same thing. Page 41 reads: "During interview, the patient's consultant obstetrician/gynaecologist stated that (s)he advised the patient and her husband that if the source of infection could not be found, a termination of the pregnancy might have to be considered".


 * To put some other things in the clear light: all the remarks that the reports make with respect to the "need for legal clarity" and to train staff on the laws are not so much in reference to Oct 22, where their lack of understanding didn't matter so much, as standard international best practice is to actively "wait and see" when there is premature rupture of membranes; instead, they pertain to Oct 24, when the consultant wasted valuable time following the diagnosis of a clear threat to the life of Savita. Specifically discussed on page 44: the consultant's particular faulty interpretation of Irish law, with the consultant erroneously believing, as they put it, that they needed to wait until the "heartbeat stopped...hands tied". In actual reality, that is and was revealed to be a fundamentally erroneous belief, not supported by the Irish OBGYN guidelines, or Irish law, or any established interpretation of the law. A law which is merely a constitutional Eighth Amendment that is all of one sentence long.


 * All inquests pretty squarely communicate that there was an utter and complete lack of clarity on the law within the mind of this particular consultant, not anyone else.


 * At the hearing "Dr Astbury told Mr Gleeson it did not occur to her to consult her colleagues about the legal position."


 * Dr. Astbury doesn't appear to be from Ireland originally? Or that is at least what a nurse-friend suggested as the explanation for this very strange misunderstanding. So while she was obviously familiar with the internationally accepted Royal College of Obstetrics guidelines on how to manage miscarriage, specifically "watchful waiting" being the internationally recognized best practice until such time as infection of the membranes is diagnosed, and while cognizant of this "protocol guideline" per WP:MEDASSESS, the consultant nonetheless lacked a very basic understanding/clarity/memory of the one-sentence Eighth Amendment, which doesn't mention "heart beats". She also appears to have lacked knowledge of the Irish OBGYN guidelines that specifically greenlight abortion in the very scenario Savita had been diagnosed in. There was no need for the unnecessary sign-off that the HSE page 44 recounts: "OK, Savita's life is in danger, now where is the other consultant to give me signed permission, as I actually don't know the latitude I have in this legal scenario." There was no need for that.


 * This awareness, that OBGYN consultants in Irish hospitals (or at least this one in particular) weren't clear on the law, was a major revelation at the inquests. It meant time was wasted on Oct 23, with time always being suggested as perhaps pivotal, all due to a consultant not being clear on either the established interpretation of the law or the existing Irish OBGYN guidelines.


 * So shall we get back to discussing best practices on Oct 22, specifically "watchful waiting" actually being the international best practice? A misinterpretation of the law that inappropriately delayed treatment is not cause for criticizing the law itself. It is a basic assumption that OBGYNs need to be knowledgeable of Irish OBGYN guidelines, which were and continue to be already signed and uncontroversially endorsed by the HSE.


 * Management of Miscarriage in Early Pregnancy, published by the Institute of Obstetricians and Gynecologists of the Royal College of Physicians of Ireland, clearly states: “Surgical uterine evacuation (ERPC) should be offered to women that prefer that option. Clinical indications for offering ERPC include persistent excessive bleeding, haemodynamic instability, evidence of infected retained tissue.”


 * Page 58 of the HSE/hand-picked Arulkumaran et al. report states that the RCOG guidelines were being followed.


 * Recommendation 4b


 * There is immediate and urgent requirement for a clear statement of the legal context in which clinical professional judgement can be exercised in the best medical welfare interests of patients.


 * The press release also summarizes the over-emphasized Recommendation 4b: "There is immediate and urgent requirement for a clear statement of the legal context in which clinical professional judgement can be exercised in the best medical welfare interests of patients. The implementation of this recommendation is beyond the role of the HSE."


 * The MedWorm reference doesn't seem to be an actual one, despite my hopes for a medical best-practice discussion free of politics. MedWorm is just an RSS feed of new publications on a topic.
 * Boundarylayer (talk) 19:55, 31 October 2017 (UTC)


=to add when we progress on finding out what "view" they're talking about first=

Self-published sources as supplementary to pay-walled books
I'm going to start the civil discussion that the "Dr" should have started, if they genuinely thought that there was some edit war over reference tags. We are in agreement that self-published sources are lacking, as you saw from the edit summaries. However, what we have here in the article are, in both instances, sentences with multiple scholarly references that are merely propped up by supplementary, lesser, open-access sources. My motivation for adding both scholarly and these lesser online sources wherever possible is to keep readers in mind, like students who may not have the money to buy books. Mainly WP:PAYWALL & WP:SOURCEACCESS. In fact, this is essentially what I stated already as my clear motivation for doing so, in one such edit summary that you have read.

Though having looked into them, I have found this for us to mull over: self-published doesn't automatically mean invalid, something that, since we're on the topic, you seem to have an issue with. "A self-published source can be independent, authoritative, high-quality, accurate, fact-checked, and expert-approved."

Seeing as none of the self-published references make any outlandish or questionable claims (again, they are all supported with actual book and journal references), I don't see them as problematic. Is there something in one that you find questionable? In which case, I would agree that we can and should remove it. Otherwise, again, I don't see the problem.

They should be tagged as self-published in some way, I agree. Though what issue do you have with the content they contain... is there anything, to use the policy wording, inaccurate or suggestive of a lack of quality?

Here is a ping (for [[User:DrKay|DrKay]]) and what I would consider to be the WP:Civil way of settling "revert disputes" over tags, whether they be real or, in this case, imaginary.

Boundarylayer (talk) 03:50, 4 September 2018 (UTC)

Life cycle greenhouse gas emissions
Measurement of life-cycle greenhouse gas emissions involves calculating the global-warming potential of electrical energy sources through life-cycle assessment of each energy source. The findings are presented in units of global warming potential per unit of electrical energy generated by that source. The scale uses the global warming potential unit, the carbon dioxide equivalent (CO2e), and the unit of electrical energy, the kilowatt hour (kWh). The goal of such assessments is to cover the full life of the source, from material and fuel mining through construction to operation and waste management.
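The unit arithmetic described above can be sketched as follows. Every figure in this snippet is a hypothetical placeholder invented for illustration, not data from any study:

```python
# Illustrative sketch of the life-cycle emissions-intensity arithmetic.
# All figures below are hypothetical placeholders, not data from any study.

# Hypothetical life-cycle emissions per phase, in tonnes of CO2-equivalent,
# covering the full life of the source as described above.
phase_emissions_tCO2e = {
    "materials_and_fuel_mining": 40_000,
    "construction": 60_000,
    "operation": 5_000,
    "waste_management": 15_000,
}

# Hypothetical lifetime electricity output of the plant, in kWh.
lifetime_output_kWh = 2_000_000_000

total_tCO2e = sum(phase_emissions_tCO2e.values())

# Convert tonnes to grams (1 t = 1e6 g) and divide by lifetime output
# to obtain the conventional intensity unit, g CO2e per kWh.
intensity_gCO2e_per_kWh = total_tCO2e * 1e6 / lifetime_output_kWh

print(f"{intensity_gCO2e_per_kWh:.1f} g CO2e/kWh")  # 60.0 g CO2e/kWh
```

The point is only that the headline number is total life-cycle emissions divided by total electricity delivered, which is why missing life-cycle phases (discussed below) directly bias the result downward.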

In 2014, the Intergovernmental Panel on Climate Change harmonized the carbon dioxide equivalent (CO2e) findings of the major electricity generating sources in use worldwide. This was done by analyzing the findings of hundreds of individual scientific papers assessing each energy source.
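Reduced to its summary arithmetic, harmonizing many per-study estimates into one central value might be sketched as below. The study figures are invented, and the IPCC's actual harmonization method is considerably more involved (it also adjusts individual studies to common system boundaries before summarizing):

```python
import statistics

# Hypothetical per-study life-cycle estimates for one technology,
# in g CO2e/kWh. These numbers are invented for illustration only.
study_estimates = [9, 11, 12, 14, 18, 25, 40]

# Report a central (median) value alongside the spread across the
# literature; this mirrors only the final summary step.
median = statistics.median(study_estimates)
low, high = min(study_estimates), max(study_estimates)

print(f"median {median} g CO2e/kWh (range {low}-{high})")
```

A median is preferred over a mean for this kind of summary because a single outlier study with unusually wide system boundaries would otherwise dominate the result.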

For all technologies, advances in efficiency, and therefore reductions in CO2e since the time of publication, have not been included. For example, the total life cycle emissions from wind power may have lessened since publication. Similarly, due to the time frame over which the studies were conducted, the CO2e results for nuclear Generation II reactors are presented, and not the global warming potential of Generation III reactors. Other limitations of the data include: (a) missing life cycle phases, and (b) uncertainty as to where to define the cut-off point in the global warming potential of an energy source. The latter is important in assessing a combined electrical grid in the real world, rather than the established practice of simply assessing the energy source in isolation.