User:MaryMO (AR)/sandbox/notes on energy

Energy
<!-- Notes on limitations
 * Primary energy has a page but final energy does not, useful energy redirects to exergy, and energy services redirects to Energy service company
 * Energy conversion efficiency (units in = units out, <1) discusses efficacy (units differ in and out, no upper bound, multiple possible concerns to be measured) somewhat but the page for Efficacy is limited to pharmaceuticals and not energy,
 * Sankey diagram needs sources.
 * Energy transition needs work. Transitions tend to be driven by quality of product and cost.
 * So overall, you expect that electricity prices must be significantly higher than the prices of the commodities from which it's made, i.e. gas, coal, and oil.
 * There's an international market for oil, and only a small variation between prices in different countries, because oil is efficient to transport.
 * there isn't an equivalent international market for gas, so prices vary with location; gas prices in the US have been systematically lower than gas prices in Europe and Asia.
 * coal prices are driven by local cost and demand factors, but the cost of the core mining and extraction hardware reflects a global industry.
 * Energy in the United States needs work and updating. Note 2020 Sankey diagram.
 * Levelized cost refers to an average cost for a single alternative that combines different costs put together across time-- for example, the initial cost of a power plant and then the operating cost of it over time.

Now, let's talk about some limits of levelized cost. We already mentioned that it leaves out many important qualitative factors, like the appearance of a refrigerator or the customer service of an insurance company, which reveals that it's really only one part of a more holistic analysis of an issue. But levelized cost also has some important technical limitations. For example, the levelized cost of electricity only gives the average cost to produce the electricity, or the minimum average price you'd have to charge for the electricity to break even on your investment. In the real world, prices are not the same as costs. Prices are negotiated depending on many factors, like what other suppliers there are, how much demand there is, and whether there are regulations or subsidies. Also, the average cost of production is not the value of the electricity. An unpredictable kilowatt-hour of electricity from a solar panel is simply less valuable than a predictable kilowatt-hour from a natural gas plant. Levelized cost also doesn't automatically include externalities. When we computed the levelized cost of that natural gas plant, for example, we completely ignored air pollution and climate change, two major themes of this course that have huge impacts on the rest of the world and real costs. In this course, we'll start trying to include these externalities by adding a carbon tax on the fuel, which you'll do next in the upcoming problems. Another limitation of levelized cost is that it usually uses simplistic exponential discounting. David already mentioned some of the difficulties in doing this, and it's especially hard for energy systems, where we're trying to compare revenue and cost streams over a 20- or 30-year period on some current-value basis. That is not easy to do, and it can depend hugely on the specific investor and how long they can afford to wait for far-off profits.
Another shortcoming you might have noticed while we were doing the problems is that there's a lot of uncertainty in the inputs. How much is gas really going to cost today, or 20 or even 30 years from now? How much is labor going to cost? These things are difficult or impossible to predict, but we can at least try to estimate them. If, for example, we're calculating the levelized cost of electricity, meaning the minimum price we would charge to make sure our investment was profitable, we could try to ameliorate this effect by being conservative and choosing relatively high values. We can also account for this problem by doing what's called sensitivity analysis, where we vary each of the inputs and see how much they affect the final levelized cost, revealing factors like, for example, whether a natural gas plant would be unprofitable if the cost of gas doubled over the next 20 years. You'll try some examples of this in the problems coming up. The last of the limitations relates to the fact that levelized cost is essentially a static analysis, assuming a one-time decision that's going to last for decades. This doesn't account for important things like unexpected events. For example, the plant could have a serious accident, or some kind of earthquake or other disaster could irreparably damage it. It could also be shut down by new regulations on carbon pollution. Even if the chances of these things are slight, in principle the levelized cost should be slightly higher to account for them. You could account for these things by doing a more sophisticated year-by-year or month-by-month analysis, including the possibilities of these various disasters at those different points in time, but that's a bit beyond the scope of this course. A related limitation is that levelized cost ignores what's known as option value, that is, the value of being able to do something different or make a different decision later.
For example, a simple LCOE analysis could tell you that it's cheaper to build a bigger plant right now, because economies of scale allow a bigger plant to provide a lower levelized cost of electricity than a smaller one. But that ignores important factors. What if technology dramatically improves in the next five years and a better option becomes available than exists today? If you chose to build that bigger plant today, you would have locked yourself into that technology and given up the option to take advantage of a better one in the future. That was a big list of limitations and complications, but don't let it deter you. Levelized cost is still a useful tool, and a good start for quantitatively estimating and comparing costs, both for energy systems and for everyday purchasing decisions. It's a key component of this course, and we hope you'll find it useful, whether you're deciding which light bulb to buy or using the understanding you gained here to build more sophisticated models of energy systems.
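The sensitivity-analysis idea above can be sketched in a few lines of Python. The cost model is deliberately minimal and all plant numbers are hypothetical:

```python
# One-at-a-time sensitivity analysis on a deliberately minimal LCOE model:
# vary each input -50% / +50% and see how much the output moves.
def simple_lcoe(capital, annual_cost, annual_kwh, cap_charge=0.08):
    """$/kWh: annualized capital (via a capital charge factor) plus operating cost."""
    return (capital * cap_charge + annual_cost) / annual_kwh

base = dict(capital=1e9, annual_cost=160e6, annual_kwh=4e9)  # hypothetical plant
for name in base:
    for factor in (0.5, 1.5):
        inputs = dict(base, **{name: base[name] * factor})
        print(f"{name} x{factor}: {simple_lcoe(**inputs):.3f} $/kWh")
```

An input whose halving or doubling barely moves the result can be estimated roughly; one that swings the result (here, annual output) deserves the most careful forecasting.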

-->
 * Cost of mitigation (sometimes called marginal abatement cost) is always about comparing two alternatives to see what the cost is of one rather than another, e.g. one power plant that costs a lot but pollutes little, and another which costs less but pollutes more.
 * Formula: The cost of the cleaner option minus the cost of the dirty option divided by the pollution from the dirty option minus the pollution from the clean option. This convention means that for most calculations where the cleaner option is more expensive, the mitigation cost will be positive. When the cleaner option is cheaper, then the cost of mitigation will be negative.
 * A mitigation cost curve plots options graphically with their cost of mitigation on the vertical axis and their mitigation potential, or how much total pollution could be prevented with the strategy, on the horizontal axis.
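As a minimal sketch (the plant numbers are made up), the sign convention above looks like this in Python:

```python
def mitigation_cost(cost_clean, cost_dirty, pollution_dirty, pollution_clean):
    """Marginal abatement cost in $ per ton of pollution avoided.

    Positive when the cleaner option is more expensive;
    negative when the cleaner option is actually cheaper.
    """
    return (cost_clean - cost_dirty) / (pollution_dirty - pollution_clean)

# Hypothetical plants: clean plant costs $120M and emits 1 Mt of CO2;
# dirty plant costs $100M and emits 5 Mt of CO2.
print(mitigation_cost(120e6, 100e6, 5e6, 1e6))  # 5.0 dollars per ton avoided
```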


 * Discount rate (energy) is the amount that future costs or benefits are discounted compared to current costs.
 * hedonic time preference -- we tend to choose a present known good, ignoring the future
 * future uncertainty -- we choose a present we know in preference to a future with unknowns involved
 * opportunity cost means we can't do something else with our money if we commit to using it now
 * We need to think analytically about what the trade-offs are. Be skeptical about "right" discount rates: they reflect conditions affecting the decider (e.g. am I working? unemployed?) and can push decisions in a particular direction. Discount rates have enormous effect on energy policy.
 * LCOE calculations use a formula with costs in future years and converts them to an equivalent current-year cost. In our formula, the discount rate is a fixed percentage per year, and it compounds. So the value of a future dollar decays exponentially in the future. Choosing a discount rate sets costs today against future harm.
 * these are issues around determining the "real" costs of choices
 * Technique for Levelized Cost:
 * 1) what are the fixed costs
 * 2) what are the variable costs
 * 3) identify your metric, e.g. dollars per kilowatt hour (should be useful for comparison)
 * 4) figure out how much the technology is used,
 * 5) combine fixed costs plus variable cost (per unit time) x usage (units of time) to get an average cost for an alternative
 * capital charge factor
 * fixed operations and maintenance
 * variable operations and maintenance
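The technique above, with the exponential discounting from the discount-rate notes, can be sketched as follows. All plant numbers are hypothetical, and this is a simplified model, not a full LCOE methodology:

```python
def lcoe(capital_cost, fixed_om, variable_om, fuel_cost, output_kwh,
         discount_rate, years):
    """Levelized cost of electricity in $/kWh (a simplified sketch).

    capital_cost: up-front plant cost in $ (paid in year 0)
    fixed_om: fixed O&M in $/year
    variable_om, fuel_cost: $/kWh produced
    output_kwh: electricity produced per year
    discount_rate: per-year rate; the value of a future dollar decays exponentially
    """
    disc_costs = capital_cost
    disc_output = 0.0
    for t in range(1, years + 1):
        d = (1 + discount_rate) ** t  # compounding discount factor
        disc_costs += (fixed_om + (variable_om + fuel_cost) * output_kwh) / d
        disc_output += output_kwh / d
    return disc_costs / disc_output

# Hypothetical gas plant: $1B capital, $20M/yr fixed O&M,
# $0.035/kWh fuel + variable, 4 TWh/yr, 7% discount rate, 30 years.
print(round(lcoe(1e9, 20e6, 0.005, 0.03, 4e9, 0.07, 30), 3))  # about $0.06/kWh
```

Note how the chosen discount rate drives the answer: a higher rate shrinks both far-off costs and far-off output, tilting the comparison toward options with low up-front costs.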

Environmental Impacts

 * Lifespans of Air Pollutants and CO2
 * collectively, air pollution is a local and short-lived issue
 * unlike pollutants, carbon has an extremely long and global impact
 * Climate and air pollution are public goods. It's hard for societies to coordinate to supply public goods, harder when the benefits are further away in space and time, and hardest when the costs are high and the responsibilities diffuse as they are with climate.
 * the biggest way that energy use now impacts human health is air pollution
 * the biggest way in which energy use alters the natural world, is climate change,
 * landscape impacts of energy extraction
 * oil and gas infrastructure in fragile natural environments like the Arctic
 * oil leaks from pipelines affecting both humans and natural environments
 * climate change worldwide due to combustion of fossil fuels, coal, gas, and oil
 * ozone pollution affecting human health and ozone hole
 * outdoor air pollution
 * indoor air pollution (e.g. use of wood stoves affecting indoor air quality and people's health, especially women)
 * internal environment (e.g. due to off-gassing of construction materials)


 * 1) Be able to contrast the magnitude and relative importance of different energy system impacts on the environment.
 * 2)    Regarding air pollution:
 * 3)       Know the major air pollutants associated with energy use and be able to describe their origins.
 * 4)       Be able to contrast the difficulty of regulating different air pollutants.
 * 5)       Know the health effects of the two most impactful air pollutants, ozone and particulate matter; be able to look up particulate matter levels in many parts of the world and estimate the reduction in life expectancy at given pollution levels.
 * 6)       Be able to compare costs and benefits of air pollution control in the USA.
 * 7)   Regarding climate change:
 * 8)       Know that climate change depends on accumulated emissions over very long time scales.
 * 9)       Know global yearly CO2 emissions, and approximate contributions from different countries today; be able to use good sources to look up data on these issues.
 * 10)       Know the concentration of CO2 in the atmosphere today and how it has been increasing over time; be able to use good sources to look up data for the past 60 years.
 * 11)      Understand the chain of causation from Greenhouse Gas Emissions -> Green House Gas Concentrations in the Atmosphere -> Climate Changes -> Climate Change Impacts.
 * 12)          Know some of the time delays between stages.
 * 13)         Be familiar with the uncertainty of our estimates of each stage in the chain.
 * 14)     Be able to describe and quantify some major consequences of climate change over the next century; know and be able to use reputable sources for this information.
 * 15)       Know the rough magnitude of the cost of mitigating climate change.

Pollutants
^ It is easier to control single pollutants than it is to control PM and ozone
 * Six criteria air pollutants as defined by the US Environmental Protection Agency:
 * CO air quality
 * NO2 air quality, NO2
 * Ozone air quality
 * Lead air quality
 * SO2 air quality, SO2
 * PM2.5 air quality, particulate matter
 * direct pollutants are emitted directly as a result of industrial activities, including the combustion of fossil fuels. e.g.
 * lead -- neurotoxin, previously used in gasoline and released into the environment through combustion
 * nitrogen oxides or NOx -- inert diatomic nitrogen is found in the atmosphere; reactive forms of nitrogen like NOx can be formed during combustion -- oxygen reacts with atmospheric nitrogen at high temperatures and pressures
 * sulfur dioxide or SOx -- Sulfur oxides come from sulfur contained in the fossil fuels (Coal, gas, and oil) as they are extracted and produced. SOx are released when the fuels are burned and convert to sulfuric acid in the atmosphere
 * carbon monoxide -- carbon monoxide is a product of incomplete combustion.
 * secondary pollutants form when other pollutants react together. e.g.
 * ozone is formed in the atmosphere from NOx reacting in the presence of sunlight
 * particulate matter can be of either type
 * The difference between sources of NOx (mostly nitrogen from the air) and SOx (from the fuel) has important implications for control.
 * Sulfur emissions can be controlled by removing it from the fuel before it's sent to the consumer and/or by using cleanup technologies
 * Control of NOx depends on many different post-combustion control devices which makes regulation and enforcement harder
 * Technologies used to reduce pollutants:
 * scrubbers remove sulfur oxides from exhaust gases after combustion
 * catalytic converters reduce carbon monoxide and NOx by changing NOx back into regular diatomic nitrogen
 * Ozone
 * stratospheric ozone -- good -- protects the planet from UV
 * lower atmospheric ozone, surface ozone, ground level ozone, smog -- bad -- breaks down organic compounds (e.g. people's lungs)
 * nitrogen oxides, carbon monoxide, and volatile organic compounds (VOCs) combine in the presence of sunlight to create ozone; the process is highly complicated and difficult to influence
 * Particulate Matter
 * most common measure of PM concentration in air is PM2.5, the total mass of particles less than 2.5 microns in diameter per volume of air.
 * correlations between measured PM levels and hospital admissions or mortality can be used as measures of the immediate impact of high levels of PM
 * correlations between PM and the long-term health of people living in areas with different amounts of PM pollution, can be used as measures of the impact of chronic low-level exposure to PM.
 * The United States Clean Air Act

"The World Health Organization estimates nearly 7 million premature deaths per year due to air pollution (WHO 2014), making air pollution the third largest risk factor contributing to premature deaths globally, and almost all of these deaths are directly related to energy. Approximately half of the mortality is due to indoor air pollution, largely from indoor cooking fires in Asia and Africa. The other half is caused by “ambient” air pollution, mostly from vehicles and electricity generation."


 * yearly premature death rate, premature mortality

Health effects of ionizing radiation

 * high-energy particles that have enough energy to knock electrons off atoms can break and re-form chemical bonds, changing the chemistry in your body and causing harm.
 * such radiation includes alpha rays (helium nuclei), gamma and x-rays (photons), fast electrons (beta rays), and neutrons

Measurement

 * different types of radiation have slightly different biological effects
 * standard units of measuring the health impacts of ionizing radiation
 * the gray is a simple physical unit for direct exposure, 1 gray = 1 joule per kilogram of body mass; this can be compared within a particular type of exposure
 * the sievert is a gray times a quality factor for the type of ionizing radiation; this allows comparison across types of exposure, e.g. 1 Gray of high-energy neutrons with a factor of 10 = 10 Sieverts

Risk assessment

 * the long-term risk of exposure to a 1 sievert dose is about a 5% chance of death from cancer over 30 years
 * the average American gets a dose of around 5 millisieverts a year, from natural background (80%) and man made sources (20%)
 * Of the man-made sources, about 80% is medical (e.g. radiology testing and cancer treatment), 15% is from consumer products, 1% is from the nuclear fuel cycle, and 1% from nuclear testing
 * The linear no-threshold assumption is used to generate a curve using limited information from certain parts of the curve (e.g. very high doses in accidents/war), assuming the relationship is linear all the way down. This may overestimate risk, but we don't know enough to get better estimates.
 * actual levels of risk may be much lower than those from harder-to-track industrial chemicals of which we are less aware (salience)
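The units and risk model above can be combined in a short sketch. The neutron quality factor of 10 comes from the example above; the other factors (alpha = 20, photons and electrons = 1) are standard values included for illustration:

```python
# Dose in sieverts = gray x quality factor; long-term cancer-death risk
# estimated with the linear no-threshold assumption (~5% per sievert).
QUALITY_FACTOR = {"x-ray": 1, "gamma": 1, "beta": 1, "fast neutron": 10, "alpha": 20}
RISK_PER_SIEVERT = 0.05  # ~5% chance of eventual death from cancer per Sv

def dose_sieverts(gray, radiation_type):
    return gray * QUALITY_FACTOR[radiation_type]

def cancer_risk(sieverts):
    # Linear no-threshold: risk scales linearly all the way down to zero dose.
    return sieverts * RISK_PER_SIEVERT

print(dose_sieverts(1.0, "fast neutron"))  # 1 Gy of neutrons -> 10 Sv
print(cancer_risk(0.005))  # one year of typical 5 mSv exposure -> ~0.025% risk
```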

Climate change

 * the amount of human-caused climate change in a given year or decade is mostly driven by the gradual accumulation of carbon dioxide from the last century, around the world
 * Today, the amount of carbon dioxide in the atmosphere is 25% higher than in 1963 and 50% higher than in 1769, when James Watt improved the steam engine, starting the Industrial Revolution.
 * This is a stock and flow problem where levels depend on the difference between inflow and outflow.
 * carbon stored deep underground (the geosphere) is moved to the active biosphere due to the production and use of fossil fuels,
 * carbon in the active biosphere is distributed between atmosphere, land, and ocean.
 * human-based carbon flux is 100 times larger than the sum of all the analogous natural processes such as volcanoes

the Emission of greenhouse gases

 * Emissions are the human-caused flow of greenhouse gases to the atmosphere.
 * They're typically measured in tons of carbon dioxide equivalent per year.
 * We see a pattern of accelerating growth with a few pauses where the global economy was doing poorly
 * There is no evidence of a slowdown from 1992-2002, the first decade after world leaders committed to cutting emissions

the Concentrations of greenhouse gases that result in the atmosphere

 * The amount of carbon dioxide gas in the atmosphere at any one time is cumulative, based on emissions
 * This is the most direct cause of climate change at any given time.
 * Concentrations show very steady growth with a slow, even acceleration

the Climate response to these changing concentrations (mostly in terms of how much the climate warms, and how long it takes to warm after concentrations rise)
 * many factors push the climate one way or another, so the climate's temperature record looks much more bumpy
 * it's very hard if not impossible to predict exactly what the climate will do from one decade to the next.

the Climate impacts of those responses (e.g. changes in precipitation and sea level rise)
 * however, we can observe ways in which these large-scale climate changes are influencing human welfare and the natural world.
 * climate pushes to extremes
 * regional productivity
 * dry places get dryer
 * wet places wetter
 * storms become more intense
 * Arctic sea ice melts
 * coral reefs acidify due to CO2 in ocean waters and temperature increases

uncertainty varies greatly over the four steps in the carbon climate chain
 * Current emissions are known with less than 15% error.
 * The uncertainty in future emissions is immense as it depends on assumptions about what the world will look like a century from now.
 * Ask what kind of future do we want and how can we get there given our choices?
 * Atmospheric concentrations are known with very high accuracy (e.g. CO2 to better than 1%)
 * the carbon budget is well understood in great detail,
 * the overall uncertainty in predicting CO2 concentrations in a given year would be small (about 30%) if we knew emissions
 * It's much harder to predict the amount of climate change for a given increase in CO2 concentration.
 * the overall uncertainty could be a factor of two or even three.
 * it's harder still to predict many of the impacts of climate change on humans or the natural world.
 * e.g. Farmers will change crops and practices, so the uncertainty is big and cascades from step to step.
 * Reality could be better or worse than our best guess.
 * We'd like to have less chance of damaging climate change.
 * But what should we do, what will it cost to cut emissions and who will pay?
 * most people depending on the natural world will be worse off if climate changes fast.

anthropogenic greenhouse gases

 * CO2
 * methane (CH4)
 * nitrous oxide (N2O),
 * accounting for the importance of each type of anthropogenic greenhouse gases is tricky because they contribute different amounts of warming while in the atmosphere and stay in the atmosphere for different lengths of time
 * they can be compared using a metric of "CO2-equivalents" (or "CO2e")
 * CH4 and N2O are much more potent than CO2 while in the atmosphere; over a 20 year period 1 ton of CH4 would contribute roughly 80x as much warming as 1 ton of CO2, and 1 ton of N2O would contribute around 270x as much. However, the atmospheric lifetimes of CH4 and N2O are only around 12 and 100 years, respectively, while a large fraction of CO2 emissions stay in the atmosphere over 1000 years, so even though CO2 is less potent it also stays in the atmosphere and contributes warming for much longer. Thus, the relative importance of CH4 and N2O depends on what timescale you look at. Over 20 years, CH4 is around 80x as potent as CO2, but over 100 years it is only around 30x as potent; over 500 years it is only about 10x as potent.
 * the most common way to compare these gases is through global warming potential, which adds up the total radiative forcing for a gas over some timescale, usually 100 years. This allows CH4 or N2O emissions to be expressed as "CO2-equivalents", e.g. representing 1 ton of CH4 emissions as ~30 tons of CO2e. This practice is convenient, and lets us make quick analyses like this one claiming that CO2 accounts for 76% of our emissions and CH4 accounts for 16%, and it's even the basis for counting reductions for international climate agreements. But this accounting neglects all warming after 100 years and glosses over hard tradeoffs between the rapid, short-term warming from CH4 and N2O and the way that CO2 locks us into warming for millennia. The choice of this kind of metric can have big implications for policy decisions around high-CH4 or N2O emission sources, and can affect which emissions we prioritize for reduction, a hotly debated topic.
 * different greenhouse gases have different effects, and when you encounter (or use) GWP's and CO2-equivalents, remember that the equivalence is loose.
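The timescale dependence above can be made concrete with a small lookup table, using the approximate GWP values from these notes:

```python
# Approximate, timescale-dependent GWP values from the notes above:
# tons of CO2 producing the same warming as 1 ton of the gas over that horizon.
GWP = {
    "CO2": {20: 1, 100: 1, 500: 1},
    "CH4": {20: 80, 100: 30, 500: 10},
    "N2O": {20: 270},
}

def co2_equivalent(tons, gas, horizon_years=100):
    """Convert emissions of a gas to tons of CO2e over a chosen time horizon."""
    return tons * GWP[gas][horizon_years]

# The same methane emission looks very different on different timescales:
print(co2_equivalent(1, "CH4", 20))   # 80 tons CO2e over 20 years
print(co2_equivalent(1, "CH4", 100))  # 30 tons CO2e over 100 years
```

The choice of `horizon_years` is exactly the policy choice discussed above: it determines whether short-lived, potent gases look important or negligible.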

Projected impacts

 * read Climate Change 2014 Synthesis Report Summary for Policymakers SPM 1.3 and 1.4 (page 6- ) and SPM2 (pages 8-16)
 * Representative concentration pathways (RCP's) are used to project possible impacts. These are trajectories of greenhouse gas concentrations over time that correspond to possible futures, ranging from a future with aggressive emissions reductions (RCP2.6) to a "business as usual" future with unconstrained emissions and economic growth (RCP8.5).
 * read American Climate Prospectus 2014
 * this focuses on four major climate changes (temperature, precipitation & humidity, sea level, and extreme weather) and six major impacts (coastal damages, temperature-related mortality, labor productivity, agricultural productivity, crime, and energy demand).
 * two methods to translate human lives lost (mortality) into dollar values.
 * the income method involves estimating the (discounted) income the deceased person would have earned over their lifetime (a pretty limited estimate of what human lives are worth!)
 * The VSL method multiplies all mortalities by the "value of a statistical life (VSL)," or around $7 million.
 * The researchers argue that the income method is probably the minimum that anyone would say a life is worth, and the VSL method is probably near the maximum (especially since it counts the life of a teenager as equally valuable as a 95-year-old), so both are often used to show a plausible range of impacts without suggesting either as the best method.

Land Use
We can evaluate and compare the diverse land use footprints of energy technologies using the metric of power density, the rate at which energy is extracted per unit of land. A common measure of power density is watts of primary energy per square meter of land surface.
 * Power density tells you nothing about how long the activity can go on for, or the total size of the energy resource.
 * Quantifying land use impacts is inherently more ambiguous than quantifying air emissions, toxics, or greenhouse gases
 * Configuration of land use matters: considering the entire area of a wind farm, the power density could compare to biofuels; counting only land directly occupied, wind farms have a power density 100 times larger.
 * fossil fuels have had a high cost in terms of land use
 * biofuels are not feasible as a sole alternative due to extremely high land requirements: they are often <1 W/m^2. In contrast, solar power has a power density at least ten times larger and therefore requires less than a tenth the land to provide a given amount of power.
 * The typical power density of current solar power systems in reasonably good locations is about 10 watts per square meter, with a total land requirement of 2% for an all-solar system.
 * Nuclear uses very little land: only nuclear power and solar power can plausibly be scaled to meet late century energy demands of a rich high energy civilization, with minimal carbon emissions and a reasonable land footprint.
 * energy systems need to consider carbon emissions from land cover change, as well as changes to surface albedo, water runoff, and other factors, including the diversion of local ecosystem resources from the rural poor and indigenous cultures that rely on them.
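The power-density metric turns directly into a land-area estimate. A sketch, with an illustrative 3 TW demand figure and the ~10 W/m^2 solar and ~1 W/m^2 biofuel densities from the notes above:

```python
# Land needed to supply a given average power at a given power density.
def land_area_km2(average_power_watts, power_density_w_per_m2):
    return average_power_watts / power_density_w_per_m2 / 1e6  # m^2 -> km^2

# Illustrative: supplying 3 TW of primary power from solar at ~10 W/m^2
# versus biofuels at ~1 W/m^2.
print(land_area_km2(3e12, 10))  # 300,000 km^2 for solar
print(land_area_km2(3e12, 1))   # 3,000,000 km^2 for biofuels -- 10x more
```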

Electric power grid
Generators, generally turbines-- steam, gas, water, or wind-- turn synchronous, three-phase alternating-current generators that make power. Transformers then convert that power from low-voltage, high-current to high-voltage, low-current. Power is moved over high-voltage, long-distance transmission lines, where other transformers step it back down to lower-voltage, higher-current power, which is distributed on intermediate lines. Further sets of transformers step it down to still lower voltage for the final distribution lines to end users.

Power is voltage times current. The loss of power in a conductive line, like a transmission line, is proportional to I squared R-- the current squared times the resistance. If you double the current, you lose four times as much power. Running at higher voltage allows us to transmit power over long distances efficiently. The system goes from high voltages for transmission down to lower voltages that deliver power to final consumers at a safer, usable level.
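The voltage/loss tradeoff can be checked numerically. The 100 MW load and 10-ohm line resistance here are illustrative, not data from any real line:

```python
# P = V * I, so for a fixed power delivered, higher voltage means lower
# current, and line loss I^2 * R falls with the square of the voltage.
def line_loss_watts(power_w, voltage_v, resistance_ohms):
    current = power_w / voltage_v
    return current ** 2 * resistance_ohms

# Illustrative 100 MW delivered over a line with 10 ohms of resistance:
print(line_loss_watts(100e6, 115e3, 10))  # ~7.6 MW lost at 115 kV
print(line_loss_watts(100e6, 500e3, 10))  # 0.4 MW lost at 500 kV
```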

Alternating current allows a transformer to convert between low-voltage, high-current and high-voltage, low-current power, in either direction. Large transformers can be more than 98% efficient, but they work only on AC power.


 * one of humanity's most amazing inventions
 * in NA there are 4 independent grids: east, west, Texas, Quebec
 * 1) a dense, interconnected network going between generators and final consumers.
 * 2) there is no storage: supply and demand must be instantaneously matched.
 * 3) demand is unresponsive: it does not respond to price in the short term. Electric power operators work by forecasting demand, building the capacity-- generators and transmission systems-- they need to meet that demand, and then operating the system in real time to match demand (down to the second)
 * 4) alternating power grids are dynamic, with both elasticity and inertia, which combine to produce instability-- oscillations
 * 5) electricity is, in many senses, a natural monopoly not easily affected by traditional free market mechanisms
 * 6) the electrification of the global energy system has increased steadily over the last century, and will likely continue
 * 7) right of access -- electricity is such a necessity for modern life that it is regulated as a necessity (e.g. we make it harder for an electric utility to cut off your power if you don't pay your bill)
 * 8) the electricity system depends on a stock of capital infrastructure built up over decades: more than half of the generating capacity in the US is more than 30 years old, and more than 2/3 of the power poles are also more than 30 years old.
 * 9) electricity transmission and distribution in the United States is 94% efficient, and the global average is 92%.
 * 10) the system is amazingly reliable: In the United States, consumers get power more than 99.95% of the time.

Dispatch Curve
 * an electricity planner determines what generating capacity to build and how to dispatch that capacity, how to turn it on and off in response to changing loads.
 * Ask what is the cost of each extra hour of power (assume it's built and running).
 * Turn on the lowest cost units first until you have enough power to meet demand.
 * A dispatch curve orders types of generators based on their marginal cost of generation and the amount of power they can generate.
 * x-axis shows the total amount of power generated.
 * y-axis shows the marginal cost of generation.
 * ranking from lowest marginal generation cost to highest, you get solar and wind renewables, then nuclear, then coal, then a steep curve up to oil and gas
 * The highest marginal cost plant sets the overall marginal power price on an electric grid.
 * All operators need to make enough to keep running
 * In addition, power transmission costs along a high-voltage DC line tend to be less than 1.5 cents per kilowatt-hour.
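The merit-order logic above can be sketched in a few lines. The unit names, costs, and capacities are invented for illustration:

```python
# Merit-order dispatch sketch: turn on the cheapest units first until demand
# is met; the most expensive unit running sets the marginal power price.
def dispatch(units, demand_mw):
    """units: list of (name, marginal_cost_per_mwh, capacity_mw) tuples."""
    dispatched, remaining = [], demand_mw
    for name, cost, capacity in sorted(units, key=lambda u: u[1]):
        if remaining <= 0:
            break
        used = min(capacity, remaining)
        dispatched.append((name, used, cost))
        remaining -= used
    marginal_price = dispatched[-1][2]  # highest-cost unit running sets price
    return dispatched, marginal_price

units = [("wind", 0, 300), ("nuclear", 10, 500), ("coal", 25, 400), ("gas", 60, 600)]
plan, price = dispatch(units, demand_mw=1000)
print(plan, price)  # wind 300 + nuclear 500 + coal 200; coal sets price at $25/MWh
```

Note that every dispatched unit is paid the marginal price, which is why low-marginal-cost plants can earn enough to cover their fixed costs.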

Reliability and Resiliency of the Electric Grid
 * read Appendices A.1 and A.2 on pages 235-238.
 * system reliability is the capability of the system overall to operate under normal circumstances
 * system resiliency is how well the system resists failures under extraordinary circumstances such as weather, natural disasters, or intentional sabotage, including cyber-attack.
 * figures of merit for quantifying the frequency and duration of electricity interruptions
 * System Average Interruption Frequency Index (SAIFI) is the number of outages per customer per year.
 * System Average Interruption Duration Index (SAIDI) is the average outage time per customer per year.
 * SAIDI can be graphed for different countries as a function of their per capita GDP and average energy use.
 * severe weather is by far the leading cause of outages in the US, by number of events and even more by total time of outages where severe weather causes well over 90% of total time.
 * Countries with lower population densities are likely to have higher SAIDI’s because of the difficulty of repairing damage in remote areas
 * The age of the US grid and chronic under-investment are often cited; however, failures related to the equipment and operation of the grid under normal conditions account for quite a small share of outages
 * “equipment failures” (mostly transmission and distribution equipment, e.g. transformers)
 * “load shedding” occurs when operators intentionally shut down power to some customers due to demand exceeding supply (rare in developed countries, common elsewhere)
 * Islanding occurs when distributed power sources (like rooftop PV) are shut down during a grid outage (for the safety of those doing repairs), even when they could keep operating.
 * geomagnetic storms (see Carrington Event)
 * AC, DC, and Transformation costs
 * AC - transformers only work with AC, and most power is generated from the turning shafts (steam turbines, water turbines, combustion turbines, and wind turbines) of electric generators, which produce AC
 * DC - can connect AC grids that are not synchronized (e.g. Texas and West); used for long-distance high voltage (HV) transmission, due to lower cost and transmission losses per unit distance of DC lines; smaller footprint of DC towers
 * transmission costs involve both
 * costs per unit of length in the middle of the transmission line
 * “end point costs” at each end of the line to step voltage up and down with a transformer for AC, or the much larger costs to convert back and forth between AC and DC for a DC line.
 * AC is less expensive for short-distance transmission, where the end-of-line costs really matter, and DC lines become cheaper when lines get long
 * Break-even distances may vary depending on capacity, voltage, and right-of-way costs. For undersea or buried transmission, the break-even distance is much shorter
 * as more end-use devices require DC power (computers, smartphones, and LEDs), and as solar panels generate DC power, local DC "micro-grids" (autonomous electrical systems) are being developed for large buildings or communities where all connected devices can accept one standard voltage (for now)
 * micro-grids might also be usable in rural communities that are off the traditional grid. Would this delay traditional grid expansion to rural areas?
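The AC-vs-DC cost trade-off above can be sketched with a simple linear cost model: total cost = end-point cost + per-km cost × length, where DC has expensive converter stations but cheaper line per km. All dollar figures below are hypothetical, chosen only to show the structure of the break-even calculation:

```python
# Illustrative break-even distance between AC and DC transmission.
# Cost model: total = endpoint_cost + per_km_cost * length.
# Dollar figures are hypothetical, not real project costs.

ac_endpoint, ac_per_km = 50e6, 1.2e6   # transformers are cheap; AC lines cost more per km
dc_endpoint, dc_per_km = 250e6, 0.9e6  # converter stations cost more; DC lines cost less per km

def total_cost(endpoint, per_km, km):
    return endpoint + per_km * km

# Break-even distance: where the two cost lines cross
break_even_km = (dc_endpoint - ac_endpoint) / (ac_per_km - dc_per_km)
print(f"break-even ~ {break_even_km:.0f} km")

# Below break-even AC is cheaper; above it DC wins
assert total_cost(ac_endpoint, ac_per_km, 300) < total_cost(dc_endpoint, dc_per_km, 300)
assert total_cost(ac_endpoint, ac_per_km, 1000) > total_cost(dc_endpoint, dc_per_km, 1000)
```

With these assumed numbers the crossover lands in the several-hundred-km range; as the notes say, real break-even distances vary with capacity, voltage, and right-of-way, and are much shorter for undersea or buried lines (where AC per-km costs rise sharply).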
 * Factors that create a "natural monopoly" for electrical systems
 * very high fixed costs and low marginal cost, factors that can make a monopoly in any industry.
 * need to balance supply and demand second-by-second in a very tightly coupled system.
 * free-market model problem: regulation is needed to prevent "gaming" abuses, such as withholding a lower-cost plant from the market to profit from higher prices set by a high-cost one
 * classic utility model problem is that regulated monopoly utilities have no incentive to reduce prices and so tend to increase their costs.
 * Historically: Edison (low-voltage DC) vs. Westinghouse (AC); Westinghouse won
 * Smart Grid, Distributed Generation, and Net Metering
 * system is already smart, can it be smarter? Risks for cyberattack?
 * can the fridge respond to dynamic price changes without spoiling food
 * real-time monitoring of failures to alert technicians
 * switches to redistribute dynamically and reduce the spinning reserve without reducing reliability
 * more distributed generators
 * Net metering is the idea that you get paid for putting energy into the grid as well as paying for energy that you use
 * The system is already subsidizing your use by putting the energy grid in place and maintaining it, so that you can both consume a total number of kilowatt hours and get them when you want them.
 * Capital Costs: See LBNL's "Utility Scale Solar 2014" and EIA's "Annual Energy Outlook 2018"
 * Fuel Costs: See BP's "Statistical Review of World Energy 2017" and EIA's "Electric Power Monthly"
 * Solar Capacity Factors: See NREL's Solar Radiation Resource Maps, SolarGIS's Global Horizontal Irradiation Maps and LBNL's Utility Scale Solar 2014
 * Load Curves: See ISO New England
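The net-metering idea above can be sketched as a toy monthly bill. The retail rate and the usage/export figures are hypothetical:

```python
# Minimal net-metering bill sketch (all rates and quantities hypothetical).
# Under simple net metering, exports offset imports at the retail rate.

retail_rate = 0.15        # $/kWh, assumed
grid_import_kwh = 600     # electricity drawn from the grid this month
solar_export_kwh = 250    # rooftop PV fed back into the grid this month

net_kwh = grid_import_kwh - solar_export_kwh
bill = net_kwh * retail_rate
print(f"net use = {net_kwh} kWh, bill = ${bill:.2f}")
```

This simple bill contains no fixed charge for building and maintaining the wires, which is the subsidy concern raised in the note above: the net-metered customer still relies on the grid for reliability even in months when net_kwh is near zero.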

Fossil fuels
At least four forces drive availability of fossil fuels.
 * technological change
 * geological diversity of the resource - there are more sources, just harder to get
 * price elasticity
 * interfuel substitution and conversion
 * political turbulence is a wild card

World reserves of oil and gas have been steadily increasing over the past 30 years. Coal reserves seem to have declined somewhat, but there is much less data on coal reserves than on oil and gas. As a result, fossil fuels are likely to remain available for some centuries; scarcity is unlikely to limit their use.

Reserve-to-production ratios for oil and gas have stayed surprisingly constant for over 30 years. This is understandable: companies have no incentive to find reserves that will not be used for many decades, and thus no incentive to fund the exploration or technology that would push reserve-to-production ratios beyond a few decades.
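The reserve-to-production (R/P) arithmetic is simple: proved reserves divided by annual production gives years of supply at the current rate. The figures below are approximate mid-2010s world oil values, in line with BP Statistical Review-style data:

```python
# Reserve-to-production ratio for oil (approximate mid-2010s world figures).
# ~1700 billion barrels proved reserves; ~95 million barrels/day production.

oil_reserves_bbl = 1700e9          # barrels, world proved reserves (approximate)
oil_production_bbl_yr = 35e9       # barrels per year (~95 Mb/day)

rp_ratio = oil_reserves_bbl / oil_production_bbl_yr
print(f"oil R/P ~ {rp_ratio:.0f} years")
```

This reproduces the "around 50" figure quoted for oil and gas. The key caveat from the note above: R/P is a business statistic, not a geological limit, since reserves are only booked a few decades ahead of need.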

The grand total of resources on the planet with at least some chance of being extracted is called the ultimately recoverable resource (URR).

Our atmosphere's ability to safely absorb carbon will be exceeded long before we run out of carbon. <!-- Ultimately Recoverable Resources

How much fossil fuel is left in the earth? You heard David say in the video lecture that we face no shortage, and that if anything we're threatened by the abundance of fossil fuels and thus the temptation to keep using them. But how much is left exactly? How long would it last if we kept using at today's rate?

There's no precise answer, but we can estimate the amounts of different fossil fuels available at different levels of economic viability. Let's cover the McKelvey Box in Figure 1 in more detail to establish the major categories.

Figure 1: The McKelvey Box classifies resources by their geological certainty and cost of production. The major categories of geological certainty are Demonstrated (have been directly measured), Inferred (strong indirect evidence), and Hypothetical & Speculative (estimated based on rough geological knowledge). The two financial categories are profitable, or "economic," to extract at today's price with today's technology, or not profitable. Deposits that are both demonstrated and profitable are called reserves. All others are called "resources."

There are several variants of the McKelvey Box with different terms and divisions; we use one of the simpler forms here. Moving across the horizontal axis, the first category is "demonstrated" resources, which have been directly tested with drillholes in addition to other kinds of indirect testing. They're usually quite well known, with small deviations between estimates and actual recovery. The next category is "inferred" which refers to resources that have been estimated indirectly, often by seismic data and detailed knowledge of the local geology. Lastly there are "hypothetical and speculative" resources, which are estimated to exist in regions despite a lack of testing or detailed knowledge. This category was very important in estimates from the early 20th century of total world resources since almost all drillholes that had ever been drilled were in North America and parts of Europe - to estimate how much might be available in South America and Africa, for example, geologists speculated based on rough knowledge of the geology there and comparisons to tested regions.

Next, on the vertical axis we can divide fossil fuels into two categories, profitable (or "economic") to recover, and not profitable, given current technologies and market prices. Deposits that are both demonstrated and economic to recover are called "reserves." Everything else is referred to as "resources." As David mentioned, the division between reserves and resources can shift very quickly; a spike in prices instantly converts many resources to reserves, and some resources may actually have low extraction costs and count as mere resources only because they haven't been tested yet.

So how much oil, gas, and coal do we have in reserves? Figure 2 shows world reserve estimates over the past several decades.

Figure 2: World reserves of oil and gas have been steadily increasing over the past 30 years. Coal reserves seem to have declined somewhat, but there is much less data on coal reserves than on oil and gas. All data is from BP 2015.

You can see that oil and gas reserves have increased by roughly a factor of two. Coal reserves appear to have gone down, but data on coal reserves is of much lower quality than for oil and gas (for example, the Chinese government hasn't updated its coal reserve estimate since 1990). The total reserves are moderate in size relative to yearly production; the global reserves-to-production ratios for oil and gas are around 50, and for coal around 100.

It's tempting to think of reserve-to-production ratios as physical facts about energy consumption and geology, divorced from business organization. Tempting but wrong. Oil and gas companies have no incentive to find reserves that will not be used for many decades, and thus no incentive to fund the exploration or technology that would push reserve-to-production ratios beyond a few decades. It's this business dynamic (why look for things you don't need?) that explains the amazing consistency of reserve-to-production ratios over most of the last century. Figure 3 emphasizes this point, showing how the reserve-to-production ratios for oil and gas have stayed surprisingly constant for over 30 years despite massive changes in the kinds of deposits exploited and kinds of extraction technology used.

Figure 3: World oil and gas reserve-to-production ratios have been fairly constant over the past 30 years, with oil trending upward somewhat and gas staying almost flat. The coal reserve-to-production ratio has dropped sharply from 225 to 110, but it's still much higher than oil and gas (around 50), and its uncertainties are much larger due to sparser data on reserves. All data is from BP 2015.

But how large is the total resource? This is much less certain than current reserves, but the basic answer is very large, especially for coal, and interestingly, estimates have generally been growing over time. For the purposes of this course, we'll refer to the grand total of resources on the planet with at least some chance of being extracted as the "Ultimately Recoverable Resource" (URR), as is common in the literature.

The earliest estimate of the URR for coal was made in 1913, and was 7300 billion metric tons of coal (Grenon 1980). This is about 6200 TWy of energy, enormously larger than humanity's current total primary energy supply of 18 TWy per year. But as mentioned earlier this estimate was largely speculative with only limited exploration data available and only for North America and Europe. A range of estimates from the 1960's, 70's, and 80's were largely between 7500 and 12,000 TWy, though some analysts conjectured that the true number would turn out to be over 25,000 TWy after more thorough exploration (McKelvey 1964, Parker 1974, Averitt 1975, Grenon 1980). Since the year 2000 some very large deposits have been estimated that were previously neglected. For example, most previous estimates of the US only included ~3500 TWy for the conterminous US ("lower 48 states"), and at most crude estimates of Alaska. A careful study by US Geological Survey in 2004 estimated Alaskan resources of over 4000 TWy, bringing the US total to nearly 8000 TWy, larger than the entire world estimate of 1913 (Flores 2004). Recent estimates of the global coal URR have consistently been over 15,000 TWy (BGR 2012, IEA 2013).

Some companies are even starting to estimate offshore coal resources - whether they can be economically recovered is unknown, but the possibility of gasifying the coal underground and bringing the gas to the surface is intriguing, especially near high-gas-price markets in Europe and Asia. A recent estimate in the North Sea was between 2500 and 20,000 TWy of offshore coal deposits, and there are likely many more similar deposits around the world. These estimates clearly have huge uncertainties, but it seems safe to say that the URR is in the 10's of thousands of TWy.

Similar increases in estimates of URR for oil and gas have played out over the past century (Grenon 1980). Recent estimates place the URR around 500-700 TWy for oil and 700-900 TWy for gas, with roughly half of those resources being in conventional deposits and half in "unconventional" deposits like shale oil, tight oil and gas, and tar sands (BGR 2012, IEA 2013). Combining these with the much larger URR of coal, the total fossil resource seems almost surely larger than 15,000 TWy, or more than 800x humanity's current yearly primary energy supply. -->

A McKelvey box or diagram shows the difference between resources and reserves. It was first proposed by Vincent McKelvey, chief geologist at the US Geological Survey, in 1973 for the formal classification of mineral reserves and resources. See this.

Renewable technologies

 * renewable technologies: Solar PV, Concentrated Solar, Wind, Hydro, Biomass and Geothermal
 * desirable central measures of environmental impact to compare renewables and other energy sources:
 * less air pollution
 * less climate changing emissions
 * less land use impact
 * scalability is a concern
 * a gigawatt of solar power generates only about 15-35% of its rated annual maximum.
 * the intermittency of solar power is a concern, but it can be addressed through better battery technology, by augmentation with other sources in a power grid, by technologies for time-shifting electrical load (e.g. between heating and cooling), and by using "excess" low-cost solar power to make low-cost hydrogen or hydrocarbon fuels and so decarbonize the fuel sector.
 * recent trends - the rate of adoption and cost of wind power have been roughly constant over the last five years.
 * solar energy use is increasing extremely quickly and costs are dropping
 * estimates of global energy demand depend on population size and projected use per person
 * "only solar power of the renewables is able to supply a significant fraction of global energy use late this century."
 * "My conclusion is that only solar power and nuclear power, fission and fusion, can plausibly supply a major fraction of global primary energy in a carbon-free world late this century."
 * Reading
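The capacity-factor point above (a solar plant delivers only about 15-35% of its rated annual maximum) can be made concrete with a hypothetical 1 GW farm:

```python
# Capacity factor: actual annual energy divided by what the plant would
# produce running at rated power all year. The 1 GW plant and 25% capacity
# factor are hypothetical (mid-range of the 15-35% cited in the notes).

rated_power_gw = 1.0
capacity_factor = 0.25
hours_per_year = 8760

annual_energy_gwh = rated_power_gw * capacity_factor * hours_per_year
print(f"annual output ~ {annual_energy_gwh:.0f} GWh")
```

So a "1 GW" solar farm at a 25% capacity factor delivers about a quarter of the energy of a 1 GW plant running flat out, which is why comparing nameplate capacities across technologies is misleading.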

Solar technologies

 * solar panels (PV)
 * concentrating power (industrial systems to concentrate sun to a high intensity)
 * building scale hot water heating
 * passive solar

Solar panels

 * The modules plus the "balance of plant" = a modern industrial solar photovoltaic power plant.
 * modules (the actual solar panels)
 * some panel modules are warrantied to produce 80% of their original power after 20 years
 * the "balance of plant"
 * racks to hold the panels
 * racks may use trackers that rotate about a north-south axis to follow the sun across the sky and increase output -- the main configurations are flat plate, plate tilted at latitude, N-S axis tracking system tilted at latitude, and a two-axis tracking system. Latitude tilt provides roughly a 20% increase over flat panels (though the exact amount varies by region), and single-axis tracking provides an additional 30-40% in most regions. It is important to consider whether these increases in capacity factor outweigh the increased capital cost for tilted or tracking systems in a given installation. U.S. data is available from the National Renewable Energy Laboratory (NREL) and their Solar Radiation Resource Maps tool. See also SolarGIS free high-resolution GHI world maps.
 * converter from direct to alternating current
 * Physics: You have to be able to make electron charge and hole pairs, separate the charge and hole pairs from where they were made, deliver them to an outside circuit or junction, and avoid their recombination
 * In practice, one wants efficiencies over 10%, and improvements up to 20% have substantially reduced the costs of solar PV in recent years
 * However, the important measure for grid-connected power is not efficiency but rather the cost per installed watt of productive capacity. That is proportional to the cost per unit area divided by the efficiency.
 * Primary impact of solar is land use: area covered by solar panels. Relevant measure is watts per square meter, the amount of land needed to produce a given amount of power. The best solar plants are now over 15 watts per square meter.
 * Other impacts include
 * toxicity in production or mining of material ingredients,
 * the use of certain rare material which may affect its scalability,
 * the energy and carbon footprint of producing solar PV panel arrays.
 * Energy payback is the number of years of operating a panel to generate the energy used to make the panel and its associated infrastructure. See U.S. Department of Energy
 * For crystalline silicon (c-Si) PV fabrication, this is about 4 years.
 * For thin-film PV production it can be approximately 3 years. Due to the rarity of the metals involved, First Solar and other thin-film PV manufacturers have adopted a "cradle-to-grave" policy towards panels to recover elements and avoid end-of-use toxicity issues.
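The energy payback figures above follow from payback = embodied energy / annual generation. The per-square-meter inputs below are assumptions chosen to reproduce the roughly 4-year c-Si figure, not measured values:

```python
# Energy payback time for a PV panel: embodied fabrication energy divided
# by annual generation. All per-m^2 numbers are illustrative assumptions.

rated_w_per_m2 = 200          # module rating per square meter (assumed)
capacity_factor = 0.15        # modest site, fixed tilt (assumed)
embodied_kwh_per_m2 = 1050    # energy to make panel + infrastructure (assumed)

annual_kwh_per_m2 = rated_w_per_m2 * capacity_factor * 8760 / 1000
payback_years = embodied_kwh_per_m2 / annual_kwh_per_m2
print(f"annual output ~ {annual_kwh_per_m2:.0f} kWh/m^2, payback ~ {payback_years:.1f} yr")
```

Since panels are warrantied for 20+ years, a 3-4 year payback means a panel returns its embodied energy several times over its life; sunnier sites or tracking shorten the payback further.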

Solar Price Trends

 * the cost for solar PV panels has fallen steeply, exceeding expectations. By 2016, solar prices were competitive with gas-fired power plants, and have continued to drop.
 * The cost for solar panels can be modeled with a power-law "learning curve" or "experience curve". Even if module prices fall at the rate from Figure 3, and assuming 35% yearly growth of cumulative PV production, they would still be around $0.20/W a decade from now. Other hardware costs can likely decrease as well, but probably not by a large amount.
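A minimal sketch of that learning-curve projection. The starting price and the 20% learning rate (price drop per doubling of cumulative production) are assumptions, roughly typical for PV modules; the 35%/yr cumulative growth is the figure from the note above:

```python
import math

# Power-law learning curve: each doubling of cumulative production cuts
# price by a fixed learning rate. p0 and learning_rate are assumptions.

p0 = 0.50             # today's module price, $/W (assumed)
learning_rate = 0.20  # 20% price drop per doubling (assumed, typical for PV)
growth = 1.35         # cumulative production growth per year (from the note)
years = 10

doublings = math.log2(growth ** years)
price = p0 * (1 - learning_rate) ** doublings
print(f"{doublings:.1f} doublings -> ${price:.2f}/W after {years} years")
```

Ten years of 35% growth is about 4.3 doublings of cumulative production, which under these assumptions lands near the ~$0.20/W figure quoted above.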

Rooftop vs Industrial Scale Solar
 * Industrial installations will be cheaper per unit than isolated rooftop installations, due to the cost of installing and connecting to the grid and to poorer panel alignment on rooftops
 * reasons
 * autonomy - con: ineffective if everyone has their own system instead of sharing
 * consumer benefits due to tax and deployment incentives - how to best use govt money
 * efficiency - it isn't that much cheaper to put energy into the grid near where it is created, and there are costs to connecting many small inputs
 * a tangible contribution to the environment.

Energy budget

 * we can estimate earth's energy budget, in watts per square meter, averaged over a year.
 * Global Horizontal Irradiation (GHI) includes all of the radiation falling on a horizontal surface (both the direct radiation and any "diffuse" radiation scattered by the atmosphere), and is the most common metric for estimating solar PV performance
 * Direct Normal Irradiation (DNI) includes only the direct radiation received by a surface tracking the sun; it's measured with tracking instruments with very narrow fields of view that miss most diffuse radiation, and it's most useful for estimating the performance of concentrating solar power systems.
 * Insolation
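Given GHI, a rough annual yield per square meter of panel can be estimated; the GHI value, efficiency, and performance ratio below are assumed values for illustration:

```python
# Rough annual PV yield from GHI (the metric recommended above for PV).
# Site GHI, module efficiency, and performance ratio are assumptions.

ghi_kwh_m2_yr = 1800       # sunny mid-latitude site (assumed)
module_efficiency = 0.20
performance_ratio = 0.8    # inverter, wiring, soiling, temperature losses

yield_kwh_m2_yr = ghi_kwh_m2_yr * module_efficiency * performance_ratio
avg_w_m2 = yield_kwh_m2_yr * 1000 / 8760
print(f"~{yield_kwh_m2_yr:.0f} kWh/m^2/yr, average {avg_w_m2:.0f} W per m^2 of panel")
```

Note this is per square meter of panel; plant-level land-use figures (like the 15 W/m^2 quoted earlier for the best plants) are lower because they include spacing between rows and supporting infrastructure.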

Integrated design
The general lesson on efficiency: big savings can be cheaper than small savings if you optimize the building, equipment, factory, or vehicle as a whole system.

Consumption and Energy efficiency
 * energy efficiency and efficacy, and how we can estimate an upper bound to efficiency improvements but not to efficacy improvements
 * Energy conversion efficiency (units in = units out, <1)
 * efficacy (units differ in and out, no upper bound, multiple possible concerns to be measured)
 * Jevons paradox - the apparent paradox that energy consumption can continue to rise even as efficiency or efficacy improves. Read 2 views - contentious topic
 * Direct, indirect, and system-wide rebound effects (which erode conservation gains) are unlikely to lead to backfire
 * improving the efficiency of a technology may create new categories of demand (e.g. lighting for hydroponic gardening) -- reductions in energy demand cannot be assumed and taken for granted
 * policy implications: may want to focus more on the upstream end of the energy system, because there we're more certain to get the deep reductions in pollution we need.
 * three types of lighting technology reflect deep differences in physics understanding
 * 1) incandescent bulbs
 * 2) fluorescent bulbs
 * 3) LEDs, Light Emitting Diodes, invented by Shuji Nakamura
 * The luminous efficacy (not actually an efficiency) of lighting has hugely improved
 * the efficiency of a light bulb could be calculated by measuring the power (energy per unit time) of the light output and dividing it by the electrical power supplied to the bulb
 * Human eyes have a particular sensitivity to wavelengths of light
 * To measure the performance of light bulbs, you need a measurement standard related to human vision, e.g. the lumen, a unit of luminous flux weighted by the eye's sensitivity; luminous efficacy is then lumens per watt
 * the "efficiency" of building air conditioning or the fuel "efficiency" of cars are actually efficacies