How do we know who is lying?

A friend of mine told me a story today that really makes me wonder how we (humanity) can find a good path forward.

His friend, a 50ish woman from Russia, told him about a phone call she had with her mother, who still lives in Russia. This discussion happened on March 6, just two days ago. As you might imagine, the conversation quickly turned to the current situation in Ukraine.

Apparently the conversation didn't go well. Her mother lambasted this woman for blaming the Russians for the war in Ukraine. She said she didn't understand why everyone was getting upset with Russia because all of the fighting and bombing are being done by Ukrainians against Ukrainians – the Russians are there with the sole purpose of stopping the bloodshed. They are there for humanitarian reasons to help the Ukrainian government and people – they are being heroes, facing great danger at great expense to protect the people of Ukraine.

My friend's friend could find no way to enter into a dialogue or have a civil discussion on the topic – from her mother's point of view it is obvious that we are being lied to, that the Russians are totally on the side of good and helpfulness, while the NATO world (and the USA) are only attempting to destroy Ukraine, and Russia along with it. She believes that we are attacking Russia for our own purposes, and that our media is engaged in a large, concerted propaganda campaign of disinformation.

Obviously, that isn’t what my friend’s friend believes. She believes the information we get is mostly accurate, or at least truthful. Her mother believes the same about the information she gets – and there is no obvious way to get around this impasse.

It seems like the new easy connectivity created by the Internet has created (or exacerbated) the problem of sorting out fake news from real news. Now that everyone can have a say we no longer have a trusted source of information – and anything goes. The problem with figuring out whether Russia is invading or assisting is a prime example – it is obvious to me, but if the story is a true one (which I believe it is), it is not obvious to everyone. This means that there is a different “truth” depending upon the source of information, and there is no clear external demarcation for “true” or “false”. In the case of the Ukrainian issue, I suppose one could travel there and see first hand what was happening, but even then all you can see is what is nearby – and that might be a tiny bubble in the midst of a totally different situation. We can’t be everywhere at all times, and even if we were we would still be stuck with not knowing how to know the entire situation. We HAVE to take the shortcut of believing (or trusting), but have no method for determining how to do that.

What a complicated, and frustrating, world we have come to live in. I suppose it has always been a little like that, but we didn’t know that we didn’t know. Now we do, and we can’t do anything about it.

“New” Ceramics Kiln

I had a successful weekend that I want to share.

When I met my wife at college she was an art student specializing in high-fire, non-functional (artistic) ceramics. I loved her work, and really enjoyed visiting with her as she worked – kind of mesmerized by the motion of the potter's wheel and enjoying our discussions. (I suppose that is a large measure of why I married her.) We both graduated in our chosen fields, got jobs, had a family and left things like art behind. When her 50th birthday came we were finally getting some of our "life" back, so I bought her a potter's wheel and converted our garage to a "studio" space in the hopes of her returning to her art. While it was a nice gesture, things kept coming up and the wheel sat unused. Without a kiln of our own, the greenware would have to be transported somewhere else for firing – and clay is extremely fragile in that state. It was just too much trouble to make things at one place and then fire them someplace else.

About ten years after getting the wheel she found a big, old, very used electric kiln for sale at a yard sale. The price was right even if it didn’t work and needed some repair. A new kiln like it costs about $6,000. She paid $300 for this one. It came home and sat for several more years – taking up space but without power or a place to be used.

When we became stuck at home because of covid I ran out of excuses for postponing the project of setting her up with a usable studio space. We have a small barn that I claimed as my shop space. Since we no longer had horses, the unused stalls were just full of the overflow storage from my shop. That meant that there was the possibility of creating a small studio in one of the unused stalls. Creating that studio space turned into quite a project, requiring insulation in the walls and ceiling, a new wall to separate it from my woodworking shop, windows, doors, sheetrock, a tiny HVAC system, lighting and more. I finally got all of that completed – but still had an old, untested kiln waiting.

“New” old ceramics kiln.

The first problem was that the device for lifting and holding the lid was absent. The lid opens like the top of a chest freezer, but it is too heavy to lift without assistance, and there was nothing to hold it up once it was open. The first task was to create some sort of lifting device. I started by trying to rig up cables and things to lift it, but that wasn't working very well. A good friend came by one day and mentioned that I needed ballast. That idea resulted in the very simple device shown in the photo. It is just a couple of sections of angle iron hinged on the same hinge pin as the lid. My son's old body-building weights counterbalance and offset the weight of the lid. It works perfectly! Lifting the lid is accomplished by pushing down on the bar holding the weights, which goes slightly "over center" when the lid is fully open – holding the lid up without any other devices being required.

Having a working method of accessing the kiln meant it was time to invest in bringing it electricity. Other than the high cost of copper wire these days, it was a small job for our local electrician. It was then time to do a test run to see if it actually worked – whether the controls were still functional and the heating coils still intact. A few small items in the controls are no longer functional, such as the knob on the heat controller and the device that turns off the power once the desired temperature is reached. I tried to order replacement parts for these items, but the kiln is so old (circa 1965) that the parts are no longer available! However, after studying the manuals I figured out how to make it work without those controls (more manual, less automatic).

My first day of trying to make it work wasn't very successful. No matter what I tried, the thing just wouldn't turn on. How frustrating! I read the manuals, searched YouTube for assistance and all of that – to no avail. I even contacted the controller company, who were very helpful, sending me the controller electrical schematics – but most of the components are no longer available. It was looking dire. I devised some ways to bypass the controller completely and run the kiln manually. As I was pondering how best to do that I did a lot of electrical testing to make sure power was acting as I thought it should – and it seemed to be correct, but still nothing was happening. I decided to just turn it on for a while and see what, if anything, might happen. A couple of hours later I went to check it and "IT WAS WORKING!" I had misunderstood how the control system worked. It has a clock that ramps up power by cycling the power on and off. It starts with each "on" period being a very short part of the cycle. As the firing progresses the portion of the cycle that is "on" increases until eventually it is on all of the time. When I was first trying to make it work the cycle was mostly "off" – so when I measured voltages and things there was nothing to measure. However, when I just left it alone the "on" portion of the cycle increased until it was obviously running. Eight hours later the kiln was up to temperature, glowing almost yellow through the view port. As night came, all of the little seams were glowing bright yellow with heat waves enveloping the kiln. It works!

I can finally hand over the studio and kiln to my artist friend (my wife). I hope I haven't taken so long that old age is going to interfere with its use. One of the neighbor ladies from down the street has expressed interest in sharing the kiln, which would be great fun. I am looking forward to many enjoyable days with friends sitting with the kiln as it slowly cooks the locally made art pieces. There is something really special about sitting with a kiln as it fires. You have to be present for safety reasons, and to make adjustments as the firing progresses – but it is also a great time to chat with friends, watch the day go by and just sort of relax with something to do, but not too much. I have started fantasizing about creating a pretty little sitting space for the pottery kibitzers and kiln sitters.

It was a great day to finally bring all of the threads of the project together into a usable whole.

Prevention Through Design

This discussion might be a bit on the "geek" side of things for some of you. Sorry about that – now and then I run into things in my profession as a System Safety Engineer that get me fired up enough to write about them. Perhaps you will find it interesting, even though it is a bit geeky.

For a little background, for the past few years I have been working with Arizona State University (ASU), the System Safety Society (SSS), NIOSH and a group of very large construction/engineering contractors to try to find ways to implement System Safety (SS) practices into general engineering practices. System safety is a process of identifying potential hazards (and their risks) associated with proposed new systems (cars, buildings, rockets, table saws – whatever) so that measures can be taken to eliminate them (or at least reduce the risks to acceptable levels). It is based upon engineering expertise, with the intention of making safety an inherent, integral aspect of the design.

OSHA has leveraged this concept by proposing a process called "Prevention Through Design (PtD)" (which sounds a lot like System Safety). Because it is being promoted by the construction division of OSHA, the scope of the activity tends to be limited to worker safety during construction – rather than user safety, which is a big part of the scope of SS. They propose using a process similar to the one SS uses for doing the evaluations, just with a much reduced scope.

There are quite a few people around the world trying to figure out how to actually do PtD. I have joined a small group of interested organizations in the hope of showing them that they don't have to make up new processes. The SS profession has been doing this work for decades, has tons of materials (books, standards, courses, etc.) to help learn the process, and is willing and able to assist. So far there is a strong sense of "not invented here" bias preventing the "worker safety" profession from accepting what we have learned over the years. It is a shame that they feel the need to redesign the process, because they will inevitably follow a path similar to the one the SS profession took – with its successes, failures, and expensive trials – accumulated over millions of hours of effort and trillions of dollars of development projects. We could help, but so far they continue to believe they are developing something brand new. I guess that feeling of "ownership" lends "energy" to the process, but it certainly frustrates me. I would rather they work with us to figure out how to implement our known processes for their specific needs instead of watching them start from scratch yet again.

Yesterday I attended a meeting with the group as they planned out a workshop to be presented on-line in May. Several of the papers that they selected for the workshop focus on new protective equipment such as active body armor and exoskeleton force multiplier devices (wearable powered frames that do the work instead of the person's muscles – another one of those sci-fi fantasies that have come true). The group was very excited by these opportunities to limit injuries to workers. Luckily one of the members voiced a concern that I had been worrying about – "Is this new equipment really PtD, or is it just fancy PPE?" (PPE means Personal Protective Equipment such as hearing protection, eye protection, respirators, etc.) (I didn't bring up the question myself because I have been attempting to avoid pushing my point of view too hard, hoping to gently guide their discussions rather than being too pushy.)

That opened a conversation whereby I pointed out that the idea of PtD (and SS) is to design the system to be safe, not to just add things to protect people from the dangers of the system. I told them that from my point of view the goal is to reduce the need for people to do the right thing to stay safe. Requiring the use of PPE is certainly a long way from that goal – and is only needed when the SS effort has failed.

I was heartened to notice that a couple of attendees seemed to get something of an “ah ha” moment, that the point isn’t just to protect the workers, but it is to make the system safe for workers, users, the environment – for everyone. I was surprised to see the changes in their expressions – somehow I assumed that when they were talking about PtD and SS they actually understood what they were talking about. Apparently not. The problem is a long standing one in the safety profession having to do with our respective paradigms concerning the nature of the business. We (SS folks) think in terms of minimizing all risks throughout the system lifecycle by designing out hazards. They (worker safety folks) think in terms of reducing lost time accidents on the job site (OSHA – Occupational Safety and Health Administration). An example of how this results in different answers is working at height. SS tries to design so that there is no need to work at height. OSHA focuses on providing fall protection devices when working at height.

The really interesting part of the "ah ha" moment that I noticed is that there appeared to be a shift from thinking of the design of PPE as an example of prevention through design, to the idea of enhancing safety by changing the design of the project itself. They had been convinced that the design they should be focusing on was the design of the protective devices and procedures, rather than the design of the system under consideration. This is a HUGE difference in point of view, one that I have been trying to point out to my Worker Safety colleagues for the past forty or so years – usually with little or no success. The trouble with paradigm mismatches like this is that we both use the same words, use the same descriptions, have similar ultimate concerns – but don't actually communicate when we speak.

I have read that when someone shifts their paradigm they can understand the differences – but those using the original paradigm can't. This results in a situation where I can understand their point of view because that is how I thought about safety when I was a general building contractor. However, when I shifted to viewing the problems from the SS perspective I entered into a paradigm that I understand, but they don't. I can see both my point of view and theirs (because I have experienced theirs), but they can't so easily see mine until (or unless) they experience the kind of "ah ha" that I hope I saw at the meeting. It will be interesting to see if I am correct, or if they really just saw a slightly different approach within their paradigm.

Thoughts on Ukraine

Well, the debacle in Ukraine has managed to stop my blogs in their tracks. I have been avoiding chiming in on Ukraine, and nothing else seems particularly relevant – so I have gone silent. However, that can’t go on forever – I once more feel compelled to throw some of my thoughts into the ring.

One of the questions leading to my hesitancy to speak out has been along the lines of "why isn't Ukraine part of NATO?" There are lots of discussions flying around about this question, not the least of which is that Putin doesn't cotton to the idea. But, so what if he doesn't want them to join? A rumor is passing around that we made a "deal" with the Russians at the time of the fall of the Berlin Wall and that Putin is holding us to that deal. Apparently this is only a rumor – it was not part of the negotiations concerning the Berlin Wall. (I may be wrong about this – but so far that is what I have turned up.) The only really solid reason that I have found for Ukraine not being accepted into NATO has to do with the Ukrainian military being under presidential control rather than civilian control. (The "code" words on this topic are needing a "civil government.") My understanding is that membership requires that the military be governed by the parliament instead of the President. Apparently Ukraine has been unwilling to implement that change in their constitution. I suppose their hesitancy harks back to their recent membership in the USSR, where a military under the exclusive control of the leader is the norm. Of course, that is exactly the reason that the war in Ukraine is so very, very dangerous.

So now there is a problem that doesn’t seem to have a good solution. Ukraine isn’t part of NATO, therefore there are no legally binding agreements to defend them. For this reason Russia (Putin – the sole leader of the military) has taken the position that NATO is only allowed to protect NATO countries, and any actions beyond that are an act of offensive war (rather than defensive war if they were protecting a NATO member country). Gads, we are now in a very dangerous game of “chicken” – and Putin has a long history of not being the one to swerve.

Clearly, NATO countries providing arms, aid and training are dancing on the edges of the problem of supporting Putin's enemies. Perhaps without direct boots-on-the-ground support, or aircraft, but very close in any case. It reminds me of the experiment known as "tickling the dragon's tail." This link provides a good description of the "experiment": https://www.theatlantic.com/technology/archive/2018/04/tickling-the-dragons-tail-plutonium-time-bomb/557006/ Basically, the point was to see just how close to total disaster you could get without setting off the nuclear bomb "gadget." We seem to be playing the same game today, but with thousands of nuclear bombs rather than just a bit of radiation that kills the experimenter. Mankind is amazingly willing to risk annihilation in the name of pride. Our big brains come with extreme risks.

Assuming we manage to avoid an all-out war, what should we (the global community) be doing? Clearly the safest approach for Ukrainians would be for Ukraine to give in, join their neighbor, and hope for an opportunity to increase their freedoms at a later time. However, NATO (and others) are fearful that if that happens then Putin will know that he can win any game of chicken that he might want to play – not a good thing in the future.

At the moment, my thought is that perhaps we can find a way to enforce a safe evacuation, giving those in Ukraine who want to leave a way out. Putin claims that most Ukrainians want to join back up with Russia – so maybe that is what happens. Those that don't want to join Russia leave. Those that do, stay. We help those that leave get resettled, and Russia helps those that stay rebuild their country again. It would be interesting to see how many leave and how many stay. However, I can't imagine such a scenario playing out.

Unfortunately, it appears that no good solution is to be found. Russia will continue bombing, Ukraine will continue to be demolished and its citizens killed, and eventually Russia will control the region. Hopefully we will manage to avoid an all-out war (with, or without, nuclear weapons) because the costs of escalation to a full-blown war would far outweigh the losses in Ukraine. Maybe Ukraine can hold out long enough that the Russians realize that they can't afford to continue. That happened in Afghanistan. I believe that one of the main reasons Russia wanted to take over Afghanistan was to obtain a secure route to the oceans for their oil pipelines. They didn't manage to accomplish that because of local push-back, and they finally gave it up. Too bad we decided to jump in and join the fun. Now it appears that Russia is looking for a secure route to the Black Sea through Ukraine (it is an economic decision). If the Ukrainians can hold out long enough perhaps Putin will realize that the price is higher than the value of the access to the sea.

One of Russia's big problems is that it has become essentially landlocked, with no good sea ports. This is a huge problem for the country, and for its ability to maintain global power and a flourishing economy. It is also a huge problem for us, because Putin is attempting to find a solution at any cost – as we are seeing played out in the current war and throughout the history of Russia's attempts to break through to the oceans.

Decisions in uncertainty

The February 2022 issue of Scientific American has an interesting article called, “Schooled in Lies” concerning the general failure of the education system to teach students how to identify fake news. The article discusses a few ways that have been attempted, but apparently with little success and even less evidence that any approach works. The problem starts with the idea that young children tend to take what they hear at face value, assuming that “adults” know the truth. Then, as they grow up they find that there are many “possible” versions of the “truth” – without an obvious method for determining which is true and which is not true. They often end up thinking that there isn’t ANY truth therefore all answers are of equal validity and value. There appears to be good evidence that students are beginning to think that everything is a lie, and that because of this there is no point in engaging in difficult topics. They just take what “feels” good (and is easy to grasp) and tune out.

The rather weird world of information on the internet has definitely opened up some surprising issues, not the least of which is how we actually determine what is real and what isn't. This isn't helped by a general misunderstanding about why much (maybe most) of "science" (which is promoted as the gold standard of "true") changes over time. The idea that science is based upon theories (guesses) that are then investigated in an attempt to validate or refute them makes little or no sense to much of the general public. When a theory is found to be wanting, and therefore replaced, much of the public takes that as a sign that science really doesn't "know" anything – they decide that because it changes, scientists are just making wild guesses intended to support some sort of evil government-led conspiracy.

The article promotes finding ways to get students to a better level of understanding; “to that place where you can start to see and appreciate the fact that the world is messy, and that’s okay.” This would include having a fundamental way of gathering “accepted” knowledge, but still allowing for uncertainty based upon future evidence about how the world works. The idea is to accept that uncertainty with the goal of achieving greater awareness and engagement in discovering “truth”.

While everything is messy and uncertain, that does not imply that truth doesn't exist. The search for the "truth" is a process, not an end point. The science of physics is a great example where theories have changed, but in ways that continue to build upon previous knowledge while accounting for actual observations in the "real world." Newton invented some powerful theories explaining the observations available to him at the time. He explained motion, gravity and much more. His theories were accurate in describing the world as observed by him. However, over time new methods of observation were invented, resulting in the realization that while his theories "work" at one level of observation, they don't work at the extremes or in other situations – so new theories were created, most notably Einstein's theories of relativity. That opened new areas of observation, but didn't negate Newton's theories; it just added some new domains to the problem. Quantum mechanics followed hot on the heels of relativity, finding new theories for parts of "reality" that are not covered by Newton's or Einstein's theories. All of these theories still apply, but with the caveat that they don't apply to everything. At this moment in time it is clear that none of these theories are actually "correct" in an absolute sense; they are only correct in the realms that they apply to. All three approaches yield extremely accurate, and amazing, predictions of how the world works, but none of them are "complete" – and in fact none can actually be totally "true" because they contradict each other. That doesn't exactly make physics "wrong," just incomplete and evolving. All of science does this.

So how do we help kids (and adults) learn to trust what is “known”, while being skeptical with the understanding that it is the best that we have at this time, but is subject to change?

I think that perhaps the "key" to this isn't so much in learning definitively what is "true" and "false," but what to do with that information. Once you realize that it is never possible to "KNOW" the truth, the meaning of true and false changes. False is not so difficult, because measurements and observations can show something to be false. The real problem has to do with determining what is true, because while something might be correct for all known situations, one observation in the future might find it to be false. We are forced into a position of considering our knowledge to be tentative and provisional. Knowledge is more or less likely to be correct – with the understanding that perhaps it will change if new information is found. This is known as "uncertainty" – the universal situation for all knowledge and every theory of how things work.

The issue has to do with what we do with the information and our understanding of the world. We make assumptions about what is most likely to be “true” and then act upon those assumptions. For example, during this time of the covid pandemic we listen to our sources of information (the news, our friends, the CDC, the government, our doctors, etc) and decide what we think is the truth of the situation. Perhaps it is exactly as described by the CDC, or perhaps that is entirely a hoax – perhaps it is something in between. Perhaps vaccines offer protection, perhaps they harbor hidden dangers – perhaps they do both. However, at some point we have to make a decision (even not making a decision is making a decision in this case). How do we make a “good” decision in the face of so much uncertainty?

The process is similar for the decision about whether or not to vaccinate, based upon sketchy, incomplete, uncertain information. I am fairly confident that the pandemic is "real" based upon a very wide range of sources, including personally knowing people who have gotten extremely ill or died from the disease. Initially I had to accept some "trusted" (but not personal) sources that a problem existed. After talking to friends and relatives who have contracted the disease I find it impossible to consider it a "fake" problem. The magnitude and severity are still uncertain to me, but the presence of a "real" problem seems obvious. Before vaccinations were available, the best guess that I heard was that if I were to catch it, there would be about a 5% chance of dying, a 30% chance of extreme illness requiring hospitalization, and about a 50% chance of little or no impact. At that time, the best information that I could find indicated that without taking precautions, the chances of getting infected were about 100%. If that is true, then 5% of the 320,000,000 people in the USA are at risk of dying (16,000,000 people), with around 100,000,000 people requiring hospitalization. These are certainly big numbers, much larger than our society can withstand without experiencing massive problems. Even assuming that the initial estimates were ten times (or 100 times) too large, this was (and still is) an extremely serious problem for our country and economy.
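Just to make that arithmetic concrete, here is a tiny Python sketch using the rough rates quoted above. The 5% and 30% figures are the early, informal estimates I mentioned, not authoritative data, so treat the output as back-of-the-envelope only.

```python
# Rough check of the population-level numbers quoted above.
# The rates are early, informal estimates from the text, not authoritative data.

us_population = 320_000_000   # approximate USA population
infection_rate = 1.00         # assumed chance of infection with no precautions
death_rate = 0.05             # early estimate: ~5% of the infected die
hospitalization_rate = 0.30   # early estimate: ~30% need hospitalization

infected = us_population * infection_rate
deaths = infected * death_rate
hospitalized = infected * hospitalization_rate

print(f"Potential deaths:           {deaths:,.0f}")         # ~16,000,000
print(f"Potential hospitalizations: {hospitalized:,.0f}")   # ~96,000,000

# Even if the early estimates were 10x or 100x too high, the totals remain enormous.
for factor in (10, 100):
    print(f"If estimates are {factor}x too high: "
          f"{deaths / factor:,.0f} deaths, {hospitalized / factor:,.0f} hospitalizations")
```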

So what do (or should) we do with this information? If it is correct but we do nothing, then the outcome will be massive death, economic collapse, and widespread misery. If it is correct and we do something, then there will be big disruptions and large costs, but the country and the economy will weather the situation. What if it is incorrect and we do nothing? Then we just hum along like we had been doing, because nothing changed. What if it is incorrect and we do something? Then there is a big expense for nothing. That is the "nut" of the problem. How much money (and disruption to our lives) do we spend on the off chance that the problem doesn't exist? We are betting the lives of perhaps 16 million people, the health of perhaps 100 million more, and the entire economy of the country against the cost of doing things to keep the numbers low.

Riding in an automobile presents a similar problem of making decisions under uncertainty. We all know we can be killed in an automobile accident. There are approximately 40,000 deaths in the USA per year due to automobile accidents. This is roughly 0.01% (about 1 in 10,000) of the population per year (around 1/500 of the initial estimate for covid deaths). We clearly find this to be an "acceptable" death rate since we continue to ride in automobiles. The benefit to us is worth the risk. We know we might die anytime we get into the car, but we judge the likelihood to be small enough – after all, none of us have yet died from riding in cars, and we do so up to a thousand times per year. Assuming 1,000 trips per year per person, that means there is something like a 1 in 10 million chance of dying on any given trip. The interesting thing is that even with these very low death rates we are still willing to pay tens of thousands of dollars per vehicle in safety measures (crumple zones, protective passenger cages, air bags, anti-lock brakes, etc.), and usually wear seat belts. We are willing to pay a lot of money to reduce a roughly 1-in-10,000 annual chance of dying in an automobile accident. The initial estimates for covid deaths were as high as a possible 1 in 20 (5% of those infected, with a possible 100% infection rate).
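The same back-of-the-envelope arithmetic for driving, using the round numbers above (40,000 deaths per year, roughly 320 million people, and an assumed 1,000 trips per person per year). The exact figures come out to roughly 1 in 8,000 per year and 1 in 8 million per trip – the same ballpark as the approximations in the text.

```python
# Back-of-the-envelope automobile risk, using the round numbers from the text.
annual_deaths = 40_000
us_population = 320_000_000
trips_per_person_per_year = 1_000   # assumption from the text

annual_risk = annual_deaths / us_population              # ~1.25e-4, roughly 1 in 8,000 per year
per_trip_risk = annual_risk / trips_per_person_per_year  # ~1.25e-7, roughly 1 in 8 million per trip

print(f"Annual risk of dying in a car:   {annual_risk:.2e}  (about 1 in {1/annual_risk:,.0f})")
print(f"Per-trip risk of dying in a car: {per_trip_risk:.2e}  (about 1 in {1/per_trip_risk:,.0f})")

# Compare with the early covid estimate of roughly 1 in 20 for those infected.
covid_estimate = 0.05
print(f"Early covid death estimate vs annual driving risk: "
      f"{covid_estimate / annual_risk:,.0f}x higher")
```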

Global warming is another type of issue that involves an even greater potential outcome. What are the costs if the rather dire predictions of the “do nothing” scenario turn out to be correct? It appears to me that it is essentially infinite in terms of loss of lives, livelihoods, health, the environment, etc. There is NO amount of current value worth balancing that negative outcome. If we had no uncertainty, the only logical solution to prevent complete destruction is to spend whatever it takes to prevent it from happening. Of course it feels like there is some uncertainty, so we are not willing to give up everything to keep everything.

Another way to look at this is to consider what happens if we are not the cause of global warming and we just waste our money and effort reducing carbon dioxide emissions. What if we reduce them by burning less fossil fuel and it has no climate impact? First off, it WILL decrease the problem of the oceans becoming more acidic because of CO2 levels. This is a BIG deal – we are currently at risk of causing huge negative changes to the fish populations of the oceans, globally. It will also reduce the amount of all kinds of air pollutants, radically improving the health of everyone who breathes. It is becoming increasingly apparent that we can shift to a vastly reduced use of fossil fuels in ways that decrease costs, increase jobs, and result in better working and more pleasurable living conditions. All indications are that life will get cheaper and better, not more expensive and worse, by no longer using so much fossil fuel.

The major negative impact will be that those who sell fossil fuels will not have a market for them. That means they will not only lose much of their income, but will more importantly lose much of their global political power. The income isn’t a concern because they will just pivot to making money off of the new energy sources, but the change in global political power might be significant.

So we get down to dealing with uncertainty. Are there potentially massive problems heading our way? Most likely, but perhaps not. Are there ways to minimize (or avoid) those problems? Yes, if we act quickly and decisively before the climate crosses a "tipping point" after which it continues to change no matter what we do. Are there large costs associated with acting right now? Yes, but these are more about shifting where resources are spent than about an overall increase in costs. For example, we have the grid capability to shift to all-electric cars right now, but that would require a significant short-term investment in infrastructure and a shift in automobile manufacturing. However, doing so immediately would have huge repercussions for the oil industry. It can be done, it is affordable, and it would make life better in many ways – but there are large market and political forces preventing it from happening. Should we do it? Yes. Will we do it? Unlikely.

It seems to me that the real issues that we should be addressing in our schools concerning how to sort out lies from truth should focus on considerations of how to judge the relative value and importance of the information – what kinds of details and considerations are important when using uncertain information to make decisions? Perhaps, the issue is broader – perhaps it has to do with learning how to make decisions having the best “expected value.” We all need help in determining how to place the right value on uncertain information, and how to create sufficient “plan B” considerations about what happens if our expected “plan A” doesn’t work out as hoped. Rather than just assuming that we “know” with certainty, maybe we need to learn how to be a little more prepared for the eventuality that we were wrong.

Does the Risk Matrix Add Value?

As a long time System Safety engineer, working on major programs that implement system safety programs in accordance with Mil-Std-882, I understand that the topic of this post is rather controversial since it questions one of the main tenets of the profession – that a formal risk assessment based upon a pre-established Risk Assessment Matrix is a necessary part of the process.

For those that might not be “in the know”, in the world of system safety risk is considered to be the probability and severity of the outcome of an “accident” or undesired event.  The idea is that if something goes wrong (perhaps the rung of a ladder breaks while someone is using it) it will result in an injury or damage of some kind. Thus there is a severity (damage or injury) aspect, such as a broken bone, and a probability aspect – the probability of the hypothesized outcome.

The system safety process is most effective if it is begun while the system being investigated is still just a concept, before the concept has been turned into detailed designs or hardware. Thus, at the beginning it involves the investigation of ideas. The "system" (whatever is being considered) is evaluated or studied in an attempt to find as many as possible of the hazards, and thus potential accidents, lurking in the design. Each of these potential accidents is evaluated to determine the severity of an injury and the probability of that injury occurring, in order to determine the potential risk.

The risk is assigned a code typically taken from a table such as this:

Sample Risk Assessment Matrix

This all makes perfectly good sense and gives the appearance of being objective and therefore somehow “scientific.” Certainly the idea that risk is related to the combination of severity and probability makes sense. It appears to be a straightforward cost-benefit evaluation. However, there are many problems with actually using a table such as this for making decisions.
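For readers who have never seen one of these tables in use, here is a minimal sketch of how the lookup typically works. The category names and the risk levels in each cell are generic illustrations in the general spirit of MIL-STD-882 – they are not copied from any particular standard or program.

```python
# A generic, illustrative risk assessment matrix lookup (not taken from any specific standard).
# Severity and probability are judged qualitatively, and the matrix cell gives a risk level.

SEVERITIES = ["Catastrophic", "Critical", "Marginal", "Negligible"]
PROBABILITIES = ["Frequent", "Probable", "Occasional", "Remote", "Improbable"]

# Rows: severity; columns: probability (in the order above). Values are illustrative only.
MATRIX = {
    "Catastrophic": ["High", "High", "High", "Serious", "Medium"],
    "Critical":     ["High", "High", "Serious", "Medium", "Low"],
    "Marginal":     ["Serious", "Medium", "Medium", "Low", "Low"],
    "Negligible":   ["Medium", "Low", "Low", "Low", "Low"],
}

def risk_code(severity: str, probability: str) -> str:
    """Look up the qualitative risk level for a severity/probability pair."""
    return MATRIX[severity][PROBABILITIES.index(probability)]

# Example: a broken ladder rung judged "Critical" in severity and "Remote" in probability.
print(risk_code("Critical", "Remote"))  # -> "Medium"
```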

The definition of risk as a multiplication of probability and cost comes from financial risk management, where all of the severities (costs) are described in terms of economic value (dollars), while probability is taken from a statistical evaluation. Modern economists treat this as a calculus problem of adding (in a calculus fashion) all of the possible outcomes and their associated costs to find an "expected" value for the investment. As long as the expected value of the return is greater than the expected value of the costs of the associated risks, it is judged to be a "good" investment. Many millions of dollars are invested in the process of estimating the expected values of costs and returns in an attempt to find the "optimal" investment choices. The concept behind this process is pretty apparent and "scientific." If you want to understand which option has the least risk, all you need to do is figure out the projected dollar losses and the probability of each. Simple, except that even with economic decisions it is not so easy to predict either of these values or understand the statistics behind them.
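Here is a minimal sketch of that financial version of the calculation, where every outcome has a dollar cost and a probability, so an expected loss can legitimately be computed and compared with the expected return. All of the numbers are invented for illustration.

```python
# Expected-value calculation as used in financial risk management.
# Each possible outcome has a probability and a dollar cost; all figures are invented.

outcomes = [
    # (description,            probability, cost in dollars)
    ("minor schedule slip",        0.20,        50_000),
    ("major rework required",      0.05,       500_000),
    ("project cancelled",          0.01,     5_000_000),
]

expected_loss = sum(p * cost for _, p, cost in outcomes)
expected_return = 250_000   # hypothetical expected value of the investment's return

print(f"Expected loss:   ${expected_loss:,.0f}")   # $85,000
print(f"Expected return: ${expected_return:,.0f}")
print("Good investment" if expected_return > expected_loss else "Bad investment")
```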

However, safety risk assessments are much more difficult when illnesses and injuries are being considered. Assuming that the probability of postulated outcomes can be determined (no small feat in itself), attempting to put a rational value on the severity of the postulated outcomes is fraught with difficulties and uncertainty. For example, I am not sure how many broken fingers equal a broken foot, or how many broken feet are the same as a death. I can't multiply the severity of a broken foot by the probability of that broken foot and get a meaningful answer – in order to perform this operation the severity needs to be a numerical value, usually dollars. Insurance companies place a value on body parts, but I don't find this particularly satisfying. I am not comfortable performing cost/benefit analyses based upon my opinion of the value of someone else's foot. I am not convinced that I can properly determine how much each of these types of outcomes is "worth." When I ask the question of how much MY life is worth, there is nothing with a higher value. There is no inherent correspondence between an injury or illness and its dollar value. There are pronouncements, regulations and actuarial tables, but these are just made up by people; there is no inherent measuring stick.

In addition to the problem that you can't actually multiply probabilities by an outcome (even if you find a way to quantify the outcome), the hazards being investigated almost always have a range of possible outcomes. Using the previous example of the broken ladder rung, this might lead to injuries ranging from none to death. This would result in a separate risk assessment for each hypothesized outcome – the total risk associated with falling off the ladder is the sum of these risks, but we don't know how to add risk categories because we don't know how to properly quantify severity. A common approach to solving this problem is to use a value that is considered to be the highest "probable" or "credible" outcome. I really don't know what "most credible" means beyond the probability of the event; it sounds like circular logic to me.

It seems to me that rather than going down the path of trying to find more rational, scientific, or supportable values for the risk assessments, perhaps we should examine the purpose of the exercise to see if we can find a better solution.

A common assumption is that risk assessments are performed in order to prioritize actions to reduce the risks of the overall project. The concept is that resources are always limited, therefore it is important to take care of the high risk concerns first. This "seems" logical, but is it? It implies that we can ignore low risk hazards until all of the higher ones have been resolved. However, in an actual design/development project that doesn't, and shouldn't, happen. Complex design/development programs don't follow a linear process. Instead, many parts and pieces are developed in parallel by many individuals. Hazard controls are identified and integrated as the program develops – controls for all levels of risk are not "prioritized" – they are either found and integrated into the design, or not. Therefore, the risk table is NOT an effective prioritization tool. Potential risks need to be identified and controlled to a level that is deemed to be "acceptable" – regardless of the level of risk involved. They are not "prioritized."

If risks are not prioritized using the risk matrix, perhaps the matrix can somehow be used to determine when the risk has been reduced enough to be considered "acceptable." Maybe it can help with determining how much risk is "acceptable." A lot of engineers, managers and regulators like the idea of defining levels of risk that are "acceptable" and therefore don't require further efforts to reduce them. This might be an appropriate solution if we had confidence in the determination of the risk parameters (probability and severity of an unwanted outcome). However, as discussed earlier, that is fraught with difficulties and quickly becomes unaffordable. It is seldom a viable solution because of the unknowable aspects of the process.

Even if it were somehow feasible to accurately determine the risk in terms of probability and severity, there is still an open question about how to determine “acceptable” risk levels.  Safety risks pose dangers to many different stakeholders in a decision.  The company developing the project has financial (and moral) risks, the program manager another set of concerns, the development team another, the user another, society in general yet another.  Those that might be directly injured may have different acceptance criteria than those that intend to make a profit from the program/product.  Not only that, but there are many different things that come into play when making the determination of “acceptability” including things such as utility, perceived value, dread of the type of injury, social norms, and many others.  There is no single, universally agreed upon method to determine “acceptable.” It always involves opinion, ethics, morality, cost, and perceptions – in other words, personal judgment. 

Instead of using the risk matrix to make decisions, perhaps it might be useful as a communication tool, assisting the safety engineer in expressing an opinion about the resulting risks. The risk code and/or position on the risk matrix table can't be used to determine "acceptability," and it can't be used to determine a "priority" for action – it really can't be used for much, except that it might help inform the decision makers about the "importance" of an identified hazard. That, in combination with a lot of other information, can help make the ultimate decision about whether or not to spend time and money to fix a potential problem.

I wonder if there is sufficient value in doing "false" risk quantification to offset the many abuses of the process that occur throughout industry and among regulators. The reason that I call them "false" isn't that I think anyone is attempting to hide or obscure anything. My contention is that they are seldom more than an expression of "engineering judgment." It might be better to express that judgment in a format that clearly identifies it as a judgment, rather than in a form that has the appearance of a "quantified truth."

Most managers and regulators are looking for a quick, simple and responsibility-free (and hence liability-free) means of deciding the question of acceptability. Abuses abound, showing up as a regular feature of in-depth accident investigations that find the "acceptable" decision was determined by whether or not the anticipated risk code fell within an essentially arbitrary criterion. While the criterion may have been met, the risks were not acceptable, as evidenced by the outcome. There are far too many examples of these categories being converted into elements of a cost/benefit analysis showing that solving the problem is more expensive than the cumulative personal costs to the unknown future injured parties. Unfortunately this use of the risk codes can lead to rationale along the lines of, "I can't afford to reduce the risk because it would cost me more than the cost of your injuries." This is a rather odd risk acceptance criterion, but a common one.

I wonder if it might not be better to drop the risk matrix entirely and instead use an interactive process where "experts" (stakeholders) with a range of points of view come together to achieve a unified decision concerning the acceptability of the risks. All of the stakeholders need to agree that the risks are acceptable, not just a subset – and definitely not just because they met an existing criterion. This idea is close to the "old" approach of "concurrent engineering" in that all of the stakeholders are included in the decision making process at the same time, rather than each group working separately and then "throwing" a finished project "over the wall" to be accepted or rejected by the using community. The idea of "consilience" comes close to what I have in mind. One definition of consilience is "the perception of a seamless web of cause and effect." This is opposed to the often used idea of a single cause and effect leading to accidents. A single cause is seldom "the" cause of an accident; it is much closer to a seamless web of cause and effect.

Perhaps the risk matrix might be used as a communication tool, but the real risk acceptance process brings into consideration many, many important factors that were not included in that part of the safety assessment. To minimize confusion and misuse, perhaps it would be best to drop the use of the matrix entirely, using well-thought-out rationale statements and studies instead of attempting to over-simplify the process.

Are Jews a Separate Race?

Now I am confused. I see that Whoopi Goldberg got chastised and temporarily banned from "The View" for saying that the holocaust wasn't about race. The reactions to that statement really surprised me because I agreed with her. As far as I know, the topic with regard to the Jews and the holocaust is ethnicity, not race. Yes, the Nazis talked about it in terms of "race" – but I always just assumed that they were wrong. I am not aware of any visible (or perhaps invisible but detectable) genetic feature that causes someone to be Jewish, and I think race implies such a feature. Just because the Nazis talked about it as if they were trying to eliminate a race doesn't necessarily make it so. However, I suppose if the Germans thought they were dealing with a race, and acted upon that belief – perhaps the actions were racist even though the reality was different.

This is close to the point where I get really confused by the entire topic of "race" and "racism" – I don't quite know what it even means. We are all people, and all people carry features that come along with their genes, coming from their families and ancestors. As far as I can determine, every single person could be considered a separate "race." That might be a little extreme, but for certain every "family" (consisting of parents and children) could have that designation. Beyond that, everyone is a mix. It appears to me that the Jewish historical identification includes quite a diverse mix of races.

Anyway, it seems to me to be rather extreme to call out Whoopi for her thinking on the topic. It doesn't equate to "ignorance" or lack of education, nor does it equate to somehow being insensitive to the long-term persecution of the Jews, or the horrible atrocities of the holocaust. It just means that she is a thinking person who understands that not all persecution, and not all prejudice, is racially driven – it can just as well be driven by ethnicity of all kinds. At least it might have brought up the issue for thought and discussion – which is probably a good thing.

Fair Electricity Rates

There has recently been a lot of "chatter" on the news and in advertisements concerning how "unfair" electricity rates (tariffs) are for residential users that don't have "rooftop" solar. The contention is that somehow the low-income users of electricity are subsidizing the rates for those "wealthy" users that have invested in PV solar on their homes. Perhaps there is some truth in that claim, and perhaps there is none – it all depends upon what costs are included in the determination of relative values. It is perhaps worth noting that there has never been an attempt to make energy rates "fair" for all users. There are more than one hundred different electricity rate schedules in California, each with widely different rate details. These rates are intended to favor special interest groups and to meet political agendas. The idea of achieving "fairness" among users isn't, and never has been, an important issue for setting tariff schedules. Since the chatter seems to only be addressing the "fairness" for residential users, I am going to limit my considerations to that topic. However, there are similar questions of "fairness" versus societal "importance" across all of the tariff schedules.

The proposed new rate schedule is known as NEM3 (the existing schedule is called NEM2). The proposal for the new schedule is based on the idea that since solar users get credit for over-production of electricity when the sun shines, which can be used to offset the use of electricity at other times, there is a potential for using the services of the grid without having to pay for them. This occurs because the costs of the infrastructure (wires, poles, substations, and associated maintenance) are charged as part of the cost of a kilowatt-hour (kWhr) of energy. If a solar user achieves "net zero" there is no overall use of energy and hence no charge for the infrastructure. (Net zero means that as much power is sent to the grid during times of high solar production as is taken from the grid at times of lower (or no) production.) Because the energy provided or used is measured in kilowatt-hours, rather than dollars, kWhr of energy are "lent" to the utility and returned at the current retail rate. Charges for the annual difference are paid as an annual "true up" bill.
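A minimal sketch of that bookkeeping, with the retail rate and the monthly figures invented purely for illustration – energy is tracked in kWhr, exports offset imports at the retail rate, and only the annual net shows up on the "true up" bill.

```python
# Simplified net-energy-metering (NEM) true-up bookkeeping.
# The retail rate and monthly figures are invented for illustration only.

retail_rate = 0.25  # dollars per kWhr, hypothetical retail price

# (kWhr imported from the grid, kWhr exported to the grid) for each month of the year
monthly_flows = [
    (600, 100), (550, 150), (450, 300), (350, 450), (300, 550), (250, 650),
    (250, 700), (300, 600), (350, 500), (450, 350), (550, 200), (600, 100),
]

imported = sum(m[0] for m in monthly_flows)
exported = sum(m[1] for m in monthly_flows)
net_kwhr = imported - exported   # positive: net consumer; zero or less: "net zero" or better

true_up_bill = max(net_kwhr, 0) * retail_rate
print(f"Imported {imported} kWhr, exported {exported} kWhr, net {net_kwhr} kWhr")
print(f"Annual true-up bill: ${true_up_bill:,.2f}")
# Note: at net zero the energy charge is zero, so nothing is paid toward the wires and poles.
```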

The reason that rates for those that install solar power seem “unfair” is that the entire residential rate structure is inherently unfair.  The residential rate system is an out-of-date billing model that doesn’t work well with the new reality of getting a significant amount of power from small, distributed, energy sources such as rooftop solar.  The current rate structure does not reflect the physical structure of the grid and therefore cannot be made “fair” to everyone. The solution to achieving “fair” rates among residential users is to change the approach to setting rates.

Currently all of the costs for electric power are combined into the cost of a unit of energy. However, the costs reflected in the tariffs are actually made up of the combination, or bundling, of three very different "services," only one of which is related to electrical energy. The cost of operating a utility and providing electricity to homes is made up of three broad categories: (1) the cost of the delivered energy (kWhr), (2) the cost of the delivery system (transmission and distribution infrastructure), and (3) various mandated "social" costs including forest fire recovery, decommissioning of nuclear power plants, community education, and more. The rate structure bundles these costs together in a single charge as if each were related to the amount of energy purchased. That is physically not the case, so the bundled structure does not match reality and can never result in a "fair" or equitable cost structure. The three categories of costs should be reflected separately in the rate schedule for electricity to ensure that each is treated fairly for all customers. The "cost of energy" (in terms of dollars per kWhr) should be a separate charge from the cost of providing the infrastructure required to deliver that energy.

The cost of delivering electricity (the transmission and distribution infrastructure) depends upon the cost of that infrastructure and is not related to the amount of energy actually delivered. The costs of power lines, substations and maintenance are the same whether or not any electricity is actually delivered; they only depend upon how much might be required at one time. Transmission and distribution costs are based upon the maximum amount of power demanded (kW) rather than the volume of energy (kWhr) delivered over time. To achieve "fair" rate schedules, customers should be charged for the size of their electrical service separately from the amount of energy being delivered. (This is not a "new" idea; almost all of the rate schedules except the residential ones have "demand charges" separate from "energy" charges.)

The third cost in an electricity bill covers the socially mandated "social" costs. These are similarly not related to the amount of energy used. There should be a flat rate covering these "social" costs for each user of the system. Since these costs are not related to, or associated with, the cost of electricity, they should be shared equally among users of the grid system.

Electricity rates should therefore have three separate parts: energy (kWhr), power demand (kW) and social purposes. The energy portion is the only variable for a given installation (service); the other two would be fixed, based upon the size of the service and political decisions about how much we should cover as a "social" common good. This approach would be "fair" in that everyone would pay for the amount of energy they use and for the cost of providing the service that they require. Everyone would pay for their energy use, plus a flat rate for their service based upon the size of their main service breaker (50 amps, 100 amps, 200 amps, etc.), plus a mandated fixed amount for the common good.
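Here is a minimal sketch of what a monthly bill might look like under such a three-part structure. Every rate and charge below is an invented placeholder, not an actual tariff value.

```python
# Illustrative three-part residential bill: energy + demand (by service size) + social flat fee.
# Every rate below is an invented placeholder, not an actual tariff value.

ENERGY_RATE = 0.12          # dollars per kWhr of energy actually used
DEMAND_RATE_BY_SERVICE = {  # flat monthly charge based on main service breaker size (amps)
    50: 20.00,
    100: 35.00,
    200: 60.00,
}
SOCIAL_FLAT_FEE = 15.00     # mandated "social" costs, shared equally by every customer

def monthly_bill(kwhr_used: float, service_amps: int) -> float:
    """Compute a monthly bill under the hypothetical three-part rate structure."""
    energy_charge = kwhr_used * ENERGY_RATE
    demand_charge = DEMAND_RATE_BY_SERVICE[service_amps]
    return energy_charge + demand_charge + SOCIAL_FLAT_FEE

# A net-zero solar home on a 100 amp service still pays its share of the grid and social costs:
print(f"Net-zero solar home (0 kWhr, 100 amps):  ${monthly_bill(0, 100):.2f}")    # $50.00
print(f"Typical home (500 kWhr, 200 amps):       ${monthly_bill(500, 200):.2f}")  # $135.00
```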

This approach also encourages efficiency improvements as well as on-site energy storage systems (using batteries, hydrogen fuel cells, elevated weights, pumped water, etc.). Energy efficiency is encouraged because the energy charge is directly tied to the amount of energy used, and therefore rewards investments with lower energy costs. Since the "demand" aspect of the charge is based upon peak usage, there is a potential savings from investing in storage capability to store energy during periods of low usage and thereby lower the peaks. These approaches could reduce both the demand for electricity (kW) as well as the actual usage (kWhr), thus reducing energy use (and costs) while achieving State energy and sustainability goals. It is interesting to note that both of these approaches are available and cost effective with or without solar. Installing solar can augment both of them, providing a sustainable, eco-friendly source of electrical energy while contributing to the global goal of reducing energy use in an environmentally appropriate way.

Some solar users might opt for a very small main service breaker (20 amps or so) because they are "self-sufficient" enough to require very little interaction with the utility – resulting in reduced "demand" charges along with reduced energy costs. Perhaps they would accomplish some of this by installing energy storage devices to store their excess energy for later use rather than depending upon the utility.

There remains an additional question to be answered: "At what location in the system should the cost of energy be determined?" Power plants located a long distance from the users require long, inefficient transmission lines to move the energy from the power plant to the local (community) distribution system. Typically, more than half of the power produced at a power plant located at a distance from the load is lost during transmission. Therefore, the amount of energy produced by the power plant is not the same as the amount of energy that is available for use. For example, a 90% efficient gas-fired power plant becomes a 45% efficient power source at the user end of the transmission line. A 100% efficient solar array in the desert becomes a 50% efficient power source as compared with one that is located on a rooftop (with essentially zero losses in transmission).

Because there are no transmission losses, a kilowatt-hour of solar produced on a rooftop in a community is worth roughly two kilowatt-hours of solar electricity made in the desert or by a distant power plant. The “cost of electrical energy” should therefore be determined at the end of the transmission lines nearest the user, where the transmission lines connect to the distribution lines, rather than at the power plant. This is an important consideration when determining the relative value of locally produced power versus power generated at a distance.

The average wholesale price of electrical energy in California is about $0.10 per kWhr. Because of transmission losses, the cost of power delivered to the distribution system is often twice that, or about $0.20 per kWhr. Power produced within the distribution system (such as rooftop solar) should be valued at its cost at the distribution system, not at a distant source: a kilowatt-hour of electrical energy at the distribution system is worth $0.20, not the $0.10 wholesale price paid at the power plant or “hub”.
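The arithmetic behind these figures is simply a division by the fraction of energy that survives transmission. The sketch below uses the roughly 50% loss figure asserted above; that figure is this document's assumption, not an independent measurement.

```python
# Delivered efficiency and delivered price under the loss figure asserted above.
# The ~50% transmission loss is the document's assumption, not a measured value.

transmission_loss = 0.50                     # fraction of plant output lost in transit (assumed above)
delivered_fraction = 1.0 - transmission_loss

plant_efficiency = 0.90                      # e.g., an efficient gas plant
wholesale_price = 0.10                       # $ per kWhr at the plant or "hub"

delivered_efficiency = plant_efficiency * delivered_fraction   # 0.90 * 0.50 = 0.45
delivered_price = wholesale_price / delivered_fraction         # 0.10 / 0.50 = 0.20 $/kWhr

print(delivered_efficiency, delivered_price)
```

The same division is what makes a rooftop kilowatt-hour, which skips the transmission lines entirely, worth roughly twice a distantly generated one under this assumption.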

In addition to compensating the utility for the direct cost of providing power, there are very large “indirect” costs that are not included in the price of power but are instead paid for by society at large. During its deliberations over the proposed NEM 3.0, the PUC (Public Utilities Commission) decided against considering indirect “societal” costs when evaluating the value of rooftop solar. The reason given was that they don't have information on the topic because they haven't funded the necessary studies; it seems the real reason may be that such studies would produce answers they didn't like. These indirect (“societal”) costs should be included because they are directly related to the production of power yet are paid by all citizens of the State (and world) through means other than the price of power. They include the impacts of global warming due to greenhouse gases, pollution, habitat destruction, loss of “open space”, problems created by mining and fracking, destruction of free-flowing rivers, damage to fish habitat in impounded rivers, etc. While these are all real current and future costs, they are not captured in the price of electricity; they are paid by everyone, but not included in its true cost.

The reason this is important is that while these “indirect” costs apply to energy produced from traditional sources, there are clear benefits provided by those willing to invest in making their homes more efficient and/or adding rooftop solar, both of which have small environmental footprints and thus reduce the negative footprint of energy production and use. The State of California, the United States, and many countries around the world have expressed their intent to reduce the negative impacts of power production, and they agree that reducing residential impacts is important to achieving those goals. Incentives and subsidies at the individual-resident level are necessary to ensure that the costs homeowners incur in investing in a “clean” future are repaid. Rooftop solar and efficiency improvements should therefore receive a subsidy for the clean power they produce (or the “dirty” power they avoid), and there should be substantial subsidies/incentives for installing energy storage systems and for achieving lower energy use through design and efficiency measures. All of this is required in order to achieve a “fair” and “equitable” sharing of energy costs among all residential users.

NEM (Net Energy Metering) 3.0

INTRODUCTION

The California Public Utilities Commission (CPUC) is voting on a decision about changing the way that distributed (“rooftop”) solar power is billed and credited. Distributed solar power includes energy produced in the proximity of the end user, including residential solar as well as small-scale community solar projects where a group gets together and shares a community-owned array. This contrasts with centralized generation, where solar electricity is produced by a large plant, transmitted over long distances, and then distributed to consumers through a power distribution network (grid). The proposed new rates, referred to as NEM 3.0 (Net Energy Metering), would replace the current NEM 1.0 and NEM 2.0.

BACKGROUND

NEM 1.0 and NEM 2.0 provide a means for a user (solar producer) to send unused power back to the grid and be credited for use at a future time. There are slight differences between NEM 1.0 and NEM 2.0 in how the “extra” energy is valued, but the general idea is that excess power produced at one time can be “used” at any time during the following year; in effect, the grid serves as a kind of long-term “battery.” This allows a system to be sized so that the credit for excess power produced in the summer months, when solar is plentiful, offsets the use (and cost) in the winter months, when solar is scarce. At the end of the year there is a “true up” in which the user either pays for any power used in excess of what they made, or is paid a small amount (usually as a credit) for excess power they didn't use. In addition, there is a small (around $10/month) bill for “non-power” mandated items, including energy efficiency programs, public purpose programs, the Wildfire Fund, nuclear decommissioning, and grid service costs.
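A minimal sketch of this annual accounting, using invented monthly numbers: surplus summer production builds a kWhr credit that offsets winter use, and whatever remains is settled at the annual true-up. The retail and surplus rates shown are placeholders, not actual NEM tariff values.

```python
# Simplified NEM-style annual accounting with invented monthly numbers.
# Positive net = more used than produced; negative net = surplus sent to the grid.

monthly_net_kwhr = [300, 250, 100, -50, -200, -300,    # Jan-Jun (illustrative)
                    -350, -300, -150, 50, 200, 300]     # Jul-Dec (illustrative)

retail_rate = 0.30        # $ per kWhr charged for net annual use (assumed)
surplus_rate = 0.05       # $ per kWhr credited for leftover surplus (assumed)

annual_net = sum(monthly_net_kwhr)
if annual_net > 0:
    true_up = annual_net * retail_rate    # customer pays for net annual use
else:
    true_up = annual_net * surplus_rate   # customer receives a small credit

print(annual_net, true_up)   # -150 kWhr surplus -> -$7.50 credit in this example
```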

Residential electric bills are divided into three major categories covering (1) the cost of energy, (2) the cost of transmission and distribution, and (3) the mandated non-power costs. These are all combined and billed as a single “cost per kWhr” rate. Thus if a solar user is “net zero,” they have no net energy use and therefore pay for neither the energy nor the distribution infrastructure, except for the small flat charge previously noted. That means they not only don't pay for the power they use (which seems correct, since on net they didn't use any), but they also don't pay for the costs of installing and maintaining the grid or providing the other services necessary to make the system function. Depending upon the value of their excess production, they might be getting something for nothing, meaning those without solar are paying more than they would if nobody had solar.
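To illustrate the cost-shift argument with made-up numbers: under a single bundled per-kWhr rate, a net-zero solar customer pays only the small flat charge even though the grid still serves them, while a non-solar customer pays the full bundled rate. The rates below are placeholders, not actual tariff values.

```python
# Bundled-rate illustration of the "cost shift" argument; all numbers are invented.

bundled_rate = 0.30     # $ per kWhr, covering energy + transmission/distribution + mandates
flat_fee = 10.00        # $ per month minimum charge (roughly the figure cited above)

def bundled_bill(net_kwhr: float) -> float:
    """Bill under a single per-kWhr rate; net-zero or net-negative use pays only the flat fee."""
    return max(net_kwhr, 0.0) * bundled_rate + flat_fee

print(bundled_bill(600))   # non-solar household: 600*0.30 + 10 = 190.00
print(bundled_bill(0))     # net-zero solar household: pays only 10.00
```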

Based upon this consideration, the PUC determined that the NEM 2.0 billing practices are “unfair” to those who are not solar users and that something needs to be done to “solve” this terrible “problem” by finding a way to charge solar users more for the use of the grid.

PUC ANALYSIS

The PUC commissioned its energy consultants E3 (Energy and Environmental Economics), Verdant Associates, and Itron, Inc. to perform Lookback Studies and Avoided Cost Calculations in an effort to determine the costs and benefits associated with the presence of distributed solar power on the grid. The consultants were asked to determine whether solar was a net cost or benefit based upon six studies: (1) a lookback study evaluating historical costs and benefits; (2) Avoided Cost Calculations to determine the value of added solar resulting from avoided grid costs; (3) the Participant Cost Test (PCT); (4) the Program Administrator Cost test (PAC); (5) the Total Resource Cost test (TRC); and (6) the Ratepayer Impact Measure (RIM). In addition, the PUC was charged with performing a seventh test, the Societal Cost Test (SCT), but did not do so because of a lack of interest and therefore a lack of funding. The PCT, PAC, TRC, and RIM each produced a value on a scale centered on zero impact, estimating whether solar had a negative or positive effect on costs in each area of concern. While the name of the Avoided Cost Analysis sounds all-encompassing, it actually provided a narrow view of the total avoided costs; it was specifically described in the report as incomplete and unsuitable for rate-structuring purposes, yet it was the MAIN calculation used in the decisions concerning changes to the rate structure.

The exclusion of the Societal Cost Test (SCT) is perhaps the most egregious failure of the PUC's evaluation, because that is where costs beyond the narrow costs of providing power would have been addressed: the loss (or preservation) of open land, environmental and ecosystem impacts, methane leakage, reduction in the risk of global warming, and many other “extra” costs. However, the PUC ruled that not only would it not fund those studies, but that it believes these “other” costs are insignificant and needn't be included in the modeling.

The PUC then formed a group of interested parties charged with developing independent proposals, based upon the results of these studies, to help identify a solution that would achieve mandated energy goals while maintaining a “fair” billing structure for all parties. Proposals for Net Energy Metering tariff changes were submitted by CALSSA (California Solar & Storage Association); CCSA (Coalition for Community Solar Access); the Coalition of California Utility Employees; Californians for Renewable Energy; CESA (California Energy Storage Alliance); CalWEA (California Wind Energy Association); Clean Coalition; Foundation Windpower; GRID Alternatives with Vote Solar and Sierra Club; Ivy Energy Multifamily VNEM; the Joint Utilities; NRDC (Natural Resources Defense Council); PCF; the Public Advocates Office; Sierra Club; SBUA (Small Business Utility Advocates); SEIA/Vote Solar (the Solar Energy Industries Association with Vote Solar); and TURN (The Utility Reform Network).

PUC DECISION

The PUC held extensive hearings on the “fairness” of the current rate structure, with almost exclusive emphasis on how much of the transmission/distribution costs were being shifted from distributed solar users to those who do not use solar. There was no consideration of the possibility that private citizens building their own power plants might decrease the amount of power the utilities must purchase, or of any other system-level benefits that distributed power might provide. The PUC's finding was that solar users are not paying “their fair share” and that the rates therefore need to be changed to rebalance the costs. While this is perhaps a reasonable conclusion, the approach to selecting a solution was arbitrary and capricious: it reacted to a claim of “unfairness” but had little or no basis in facts or data. The PUC heard all of the proposals and turned them down as unconvincing. Their final decision was actually pretty simple. They decided that distributed solar was receiving too much incentive and therefore should pay higher rates for the use of the grid. Their statement of the problem is that distributed solar users make too much on their investment, so to decrease that return they designed a rate schedule that limits the simple payback time for a solar installation to 14 years without batteries and 10 years with them (as an incentive to install batteries).

The rate schedule developed by the PUC was therefore designed to limit the value of the investment in solar rather than to cover the costs of maintaining the grid. A 14-year payback period is equivalent to about a 7% return on investment. While this is a “reasonable” return, it is less than can be expected from many other long-term investments, such as stocks or bonds; it is significantly less than the historical 10% return on stocks and is hardly an “incentive” rate, especially since it requires tying up a large amount of cash for many years, offers limited flexibility, and carries significant uncertainty about future energy prices and future PUC rate-setting actions. In addition, distributed solar systems typically carry warranties of around 10 years for inverters and 25 years for the panels, with no warranty on the other parts of the system; the warranties are for parts only and do not cover labor costs beyond ten years, so there is the potential for a large future cost to repair or replace failed components.

By way of comparison, large solar installations receive various types of government incentives and subsidies designed to provide their owners with a minimum rate of return of over 15% (usually over 20% after tax incentives are included). A 15% ROI translates into a 6.7-year simple payback time. Large solar installations are subsidized at that level because there is a strong desire to switch to non-polluting energy sources for “social” reasons, such as avoiding a catastrophic collapse of the world's environment due to greenhouse gas emissions, and because it takes this much (or more) to make rather risky investments such as solar economically worthwhile, whether for a business or an individual.
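The payback and return figures quoted here are related by a simple reciprocal (ignoring discounting, panel degradation, and maintenance). The sketch below just performs that arithmetic.

```python
# Simple (undiscounted) conversion between payback period and annual return.
# These mirror the figures quoted above: ~7% for 14 years, ~15% for 6.7 years.

def simple_annual_return(payback_years: float) -> float:
    """Annual return implied by a simple payback period."""
    return 1.0 / payback_years

def simple_payback_years(annual_return: float) -> float:
    """Simple payback period implied by an annual return."""
    return 1.0 / annual_return

print(round(simple_annual_return(14) * 100, 1))   # ~7.1 % for a 14-year payback
print(round(simple_annual_return(10) * 100, 1))   # 10.0 % for a 10-year payback
print(round(simple_payback_years(0.15), 1))       # ~6.7 years for a 15% return
```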

CONCLUSIONS

The PUC's NEM 3.0 proposal does not result in a “fair” rate schedule, nor does it achieve the goals of reducing carbon emissions while protecting the rest of the environment. Funding electricity infrastructure by attaching the cost to energy use is outdated and unworkable, in precisely the same way that funding roads through gasoline taxes is becoming unworkable as more and more cars become electric. In the case of automobiles, paying for transportation infrastructure through a gas tax means that those who use gasoline pay all of the costs while those who drive electric cars pay none; the infrastructure costs should instead be attached to the use of the infrastructure, in that case to mileage and vehicle weight. Perhaps there should be no gas tax, but there certainly should be a mileage tax.

In the case of electricity, the infrastructure costs should be attached to the use of the infrastructure, in this case the amount of power available (the demand, or service size). The maximum delivered demand load drives the size of all of the infrastructure components, and hence the cost of providing the infrastructure supporting each service. There should be “hook up” charges based on the size of the service, plus some non-demand-related fixed costs such as the “non-energy” mandated charges. The energy charges should be based upon the amount of energy used or purchased, since solar allows the service to act as a power source to the grid.

Burying one type of expense inside a different type of service or product results in “unfair” practices and makes the system much harder to understand, and therefore harder to provide appropriate incentives and subsidies where they are desired for socially desirable reasons.

The move to NEM 3.0 should be postponed until the rest of the costs are included in the cost modeling, and until a decision is made about how to better separate energy costs from grid infrastructure costs. Making such important decisions based upon a casual opinion that a 14-year payback should be sufficient for homeowners is bad policy and inexcusable in this case. A logical extension of the PUC proposal would be to limit, or charge customers for, efficiency improvements, since those who invest in efficiency use less power and therefore pay less for the use of the grid than those who do not. The goal is to reduce the need for power, reduce the use of non-renewable energy, and minimize greenhouse impacts on the environment. Focusing on keeping prices high and providing disincentives for efficiency improvements (including self-generated solar) is counterproductive.

Emily

I had an interesting, and rather unsettling, encounter with a young lady named Emily while on vacation in Mexico.  My wife and I had taken a trip to Cabo San Lucas with some of our daughter's in-laws and family friends. Altogether, we made up a party of 32 people ranging in age from about 25 to about 65 years old.  A mid-winter trip to Cabo had become a bit of a family tradition for my daughter's in-laws.  They stay in an “all inclusive” hotel for a week.  That pretty much means that there is plenty of food and drink all day long, except when napping.  Even the pool had a swim-up bar so you didn't have to do more than drift over to that end of the pool to get another drink.  Luckily the drinks were very weak.

I didn’t expect to enjoy the time there because this sort of vacation doesn’t seem very interesting, but it was actually quite pleasant.  I mainly sat in the shade and read.  We didn’t do much talking with the rest of the folks in the party because they sat in the full sun while (being blonds with very light skin) my wife and I found shady places – thus we spent most of the day in separate locations from the main party.

The hotel had various activities during the evening and into the night.  Generally, we all went our own directions at night – the “kids” went off partying and we “adults” went to our rooms.  On several evenings I would sit by an outdoor fire circle for a couple of hours and talk to other hotel guests.

One of the evenings was karaoke night.  I have never sung karaoke, but I decided to go watch for awhile.  Most of the rest of our party was there, hooting, hollering, and singing their hearts out.  Our group was made up of a bunch of cowboys (actual, honest to goodness cowboys with the boots, hats and big belt buckles as proof) and farmers.  The cowboys got right into the spirit of the event.  They rather dominated the evening, singing solo and in groups – not particularly well, but with a lot of enthusiasm.

Things were rapidly going downhill as the evening wore on until a lovely stranger stepped up to the microphone.  I was pleasantly surprised at her appearance – young, tall, and beautiful.  Then she started to sing and I was blown away!  Her voice was even prettier than she was.  It was an amazing change from the cowboys we had been listening to; she sang loud, beautiful and crystal clear.

That was an interesting interlude, but soon the guys were back at it again.  I decided that since I was with the group I should play along with them rather than just being a wallflower observer.  I started looking through the list of songs hoping to find something that I might know.  I like music, but seldom pay attention to who is playing or the names of the songs – so it was a bit of a challenge for me to find something that I might be able to sing all the way through.  I decided that I might know some old Beatles songs, so was looking through them when one of the ladies in our group suggested “All You Need Is Love.” The idea was that it is simple and others would join in.  So, I chose that for my introduction to the karaoke game. 

It started off easy enough.  I at least knew the introductory chorus, but it quickly degenerated once I got past the few words that I knew.  All of a sudden I found myself madly trying to read the words on the screen, but getting further and further behind.  Then it dawned on me that there was an abrupt change in vocal range coming up. I remembered that there is an octave or so jump that has to be made.  Well, that was a real problem because I don’t have much of a vocal range.  I have a really deep voice that just kind of hangs around there at the bottom of whatever other people are singing.  I guess you might call it a bass, and a low one at that.  I was already singing an octave above my normal range, so the idea of going even higher was a bit stressful to say the least.  As I was singing along worrying about this shift, it felt a bit like being pushed into a box canyon where I had no way out but would have to jump to the top of the cliff.  I began to feel panic rising in me.

When the time finally came for that shift, a rather odd thing happened – I just jumped for it and went into a falsetto that I had never experienced.  I just gave it all and bang, there I was singing another octave (or maybe two!) above where I had been.  The really weird thing is that I think I might have even been on the correct notes and in key.  Who would have expected such a thing?  Then I noticed that pretty girl, who was sitting next to the stage – watching me.  When I made that jump, her face totally lit up, she broke into the most amazing, and approving, smile that I can recall ever seeing.  She liked it!?  Of course, once I saw her smile I completely forgot where I was in the song and ended up just kind of dribbling out the end.  I don’t even know if I finished the song – probably not.

I just kind of shuffled back to my seat, relieved that I had been a “good sport” and wouldn't have to repeat the attempt.  Soon the karaoke portion of the night came to an end, and it turned into a dance party.  I was there by myself and really didn't much feel like dancing, but decided to stick around and watch for awhile.  However, my plans were interrupted when that girl came up and asked me to dance.  I felt pretty self-conscious, because she was so pretty and I felt out of place and married, but I finally agreed.  When we stopped dancing she stuck around with me, and we chatted about nothing special – that was when I found out that her name was Emily.

I then started noticing an interesting activity in the night club.  Several men had also noticed Emily and were making their moves.  They came and got her to dance, but she always came right back to me.  They tried all sorts of approaches – approaches I had never seen before, but that she was probably totally familiar with.  It soon became clear to me that they could do whatever they wanted, but she was with me for the evening.  At one point one of the guys even tried to get me into a fight with him to prove his prowess.  One guy came and talked religious talk, another talked about how much money he had.  Others talked about athletic things.  All were useless; Emily was with me for the night – and I knew it.

I tried to make sure I mixed in with the rest of our group, not wanting to make it too obvious that I had a beautiful girl being nice to me.  Whenever she went to dance with someone, I would go talk to a friend or mingle – but she always came back and pulled me back out of the crowd.

It was kind of an amazing thing.  I have heard about meeting a soul mate from a past life, and never much paid any attention to it.  However, Emily was like that.  It just felt like she was a good friend from somewhere in the distant past.  We picked up our conversation at a point that felt like it had been going on forever.  There was no feeling of lust (well, not a lot of that at any rate), more of a feeling of great friendship.  When she went to dance with another, or she went off and talked to some other guys, it was obvious that she would be back and that there was no reason to worry about it, or fret. 

Finally it was closing time, and we were standing in the hall with a group of my party.  I realized at that moment that it was over, there was no way that I could continue that into the night – and really didn’t have a need or desire to do so.  She offered to go get some beer she had in her room and continue with the party (which she did with some of the guys) – but I knew that was the end of it for me. I just walked off through the dark and quiet halls to the elevator to return to my room and my wife.    

That was the end of the story; she was gone – but I still have the very strong and rather odd feeling that she is gone “once again.”  I heard that she was up until 5:00 am partying with the boys, still singing as the sun was coming up.  The next day was the day to travel home, so there was no more chance to see her or talk to her.  Then, as I was leaving, she was standing next to the pool in her bikini – as pretty as she was the night before.  I went up to her to say goodbye and thank her for the evening.  She just kind of looked blankly at me for a moment, then broke into the same smile as the night before and threw her arms around me in a hug to say goodbye.  She had just put on suntan lotion, so my shirt stuck to her belly – we were momentarily “glued” together.  Then I turned and left for the trip back home.

It has been a couple of years since that night, but there is still a lingering feeling.  There is still a feeling of loss, of having made an important connection of the soul, but it just pulled apart once again.  Maybe we will meet again in yet another life, maybe not.  In any case, she brought an energy and connection that I am not likely to forget. 

I think the most important part of this event to me was a reminder that there is more to being with people than just being with them.  I had been noticing that my connections with people – friends, loved ones, strangers, those I don’t much like – had become somehow “flat” emotionally.  I enjoyed them, and liked to be around them, but there was not much “energy” involved.  For a few years I had been wondering if this was because I had moved further into my Buddhist and Toltec practices so that strong emotions had become dampened, or if it is just a natural thing that comes from growing older.  I wondered if maybe I had somehow become too self-centered to respond to people with strong emotions – love, compassion, interest, hate, disgust and all the rest.  It has been pleasant enough to be a little disconnected from others, but a little lonely too. 

Emily somehow shook me awake again.  I feel like she kind of slapped me around, reminding me to pay attention to energy and emotions, to fully engage in life rather than sit on the sidelines and watch.  Not that I really did that – I usually get engaged in life – but it had become muted.  That night my connection with Emily was certainly not muted!  I was all of a sudden wide awake again.  I am a little melancholy that she was just a vision passing in the night, but am grateful for the experience and reminder. A couple of weeks after this encounter I was driving to town early in the pre-dawn morning when I realized that Emily wasn't really a stranger to me; it seemed like she was the angel that I had encountered years earlier in an automobile accident (see “Angel Lady”).  As soon as that thought came to mind, the hairs all over my body stood on end – and tears welled up in my eyes.  It seemed that I had recognized the connection, that whoever this lady is – she has been there to help me before.