
NNadir

NNadir's Journal
December 11, 2019

It's Too Bad Journalists Don't Read Editorials in Scientific Journals: Units and Energy Literacy.

The paper I'll discuss in this post is an editorial in a scientific journal: Energy Literacy Begins with Units That Make Sense: The Daily Energy Unit D (Bruce Logan, Environ. Sci. Technol. Lett. 2019, 6, 12, 686-687)

Dr. Logan is the editor of Environ. Sci. Technol. Lett. It is the rapid communications sister journal of Environ. Sci. Technol. I read both regularly.

Recently on this website, someone posted an excerpt of this bit of journalistic nonsense - marketing bullshit, really - and unsurprisingly it immediately generated 25 recommends: Tesla's Virtual Power Plant rescues grid after coal peaker fails, and it's only 2% finished.

The author of this piece of benighted marketing is named Simon Alvarez, who proudly - if you click on the link for his name - has this to say about his qualifications to write this bit of "news:"

Simon is a reporter with a passion for electric cars and clean energy. Fascinated by the world envisioned by Elon Musk, he hopes to make it to Mars (at least as a tourist) someday.


As a scientist, all I can say is, "Don't worry about it, Simon. You are already on Mars."

It's pretty funny that I came across this editorial on the same day I came across Simon's Elon Musk worship piece.

Here is how Simon, who is no worse than nearly all of the journalists writing about the grand solar/battery "miracle," describes the project:

Once complete, Tesla’s Virtual Power Plant in South Australia will deliver 250MW of solar energy and store 650 MWh of backup energy for the region. That’s notably larger than the Hornsdale Power Reserve, which is already changing South Australia’s energy landscape with its 100MW/129MWh capacity. In a way, Tesla’s Virtual Power Plant may prove to be a dark horse for the company’s Energy Business, which is unfortunately underestimated most of the time. Couple this with the 50% expansion of the Hornsdale Power Reserve, and Tesla Energy might very well be poised to surprise in the coming quarters.


According to Simon, we're "only" 2% of the way to the huckster Musk's "Virtual Powerplant." There's that magic word so popular in this kind of narcoleptic rhetoric that is destroying the world with complacency, "percent."

While the unit MW is illiterately used to describe the solar plant's peak power capacity, Simon is slightly better than most journalists inasmuch as he also includes, in the same sentence, a unit of energy, the MWh, which is equal to 3.6 billion joules.

The big lie we tell ourselves with huge enthusiasm, even as the atmosphere collapses in a festival of ignorance, is that a 250 MW solar plant is the equivalent of a 250 MW gas or coal or nuclear plant. However, it is rare for a solar plant ever to reach its peak capacity, and overall, even in deserts, the capacity utilization of a solar plant is typically 15% or less. If a gas or coal plant shuts down because a solar plant is producing a significant portion of its rated peak capacity for an hour, it has to burn extra gas or coal to restart because, as anyone with a tea kettle knows, once the water has cooled it does not boil instantaneously when you turn the gas or electric burner back on. A "250 MW" solar plant is thus the equivalent of a 37.5 MW plant that can operate continuously. Moreover, since the solar plant's output is in no way connected with demand, it is not clear that the energy it provides will be useful.
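
Since this arithmetic trips up so many journalists, here it is as a minimal sketch; the 15% capacity factor is the illustrative figure from the paragraph above, not a measured value for any particular plant:

```python
# Average continuous output of a solar plant, given its nameplate peak
# capacity and an assumed capacity factor (the fraction of nameplate
# power actually delivered, averaged over time).
nameplate_mw = 250.0      # the "250 MW" solar plant
capacity_factor = 0.15    # assumed: ~15% or less, even in deserts

average_mw = nameplate_mw * capacity_factor
print(f"Equivalent continuously operating plant: {average_mw:.1f} MW")  # 37.5 MW
```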

Simon doesn't tell us how big the tripped coal plant was, but let's say it was a small coal plant, rated at 500 MW. Two percent of 650 MWh is 13 MWh, which means that the Tesla future electronic waste could cover the output of the coal plant (if it's 500 MW) for 13/500 = 0.026 hours, about 1.56 minutes, or roughly 94 seconds.
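
The same arithmetic in code, for anyone who wants to check it; the 500 MW coal plant rating is, again, the hypothetical assumed above:

```python
# How long 2% of the planned battery storage could replace a tripped
# 500 MW coal plant running at full output.
stored_mwh = 650.0 * 0.02    # 2% of the planned 650 MWh = 13 MWh
coal_plant_mw = 500.0        # hypothetical small coal plant

hours = stored_mwh / coal_plant_mw    # 0.026 h
seconds = hours * 3600.0              # ~94 s
joules = stored_mwh * 3.6e9           # 1 MWh = 3.6 billion joules
print(f"{hours:.3f} h = {seconds:.0f} s; energy = {joules:.2e} J")
```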

Really? It "saved" Queensland, Simon?

Because we want to believe this sort of wishful thinking by Simon, who wants to go to Mars someday on Elon Musk's
"vision," we are well past the 400 ppm milestone for the accumulations of the dangerous fossil fuel waste carbon dioxide in the planetary atmosphere. We passed it (as measured at the Mauna Loa Observatory) permanently in the week ending November 8th, 2015. No one alive now will ever see a reading below 400 ppm again.

Smoke another joint, Elon, and tell us all about your solar powered car and your rockets to Mars.

History will not forgive us, nor should it.

The serious paper referenced at the outset of this post, written not by a little kid with science fiction dreams but by a real scientist (Bruce Logan), is open access, and anyone can read it. I will excerpt it briefly and post a table from it in any case; an interested party who actually is invested in reality can read it in full.

It is amazing how much we learn to perceive things through units that become common in our lives. On a cool autumn morning, you look at the thermostat in the United States and from experience you know how to choose the perfect coat for 52 °F. However, if you hear the temperature in Gallargues-le-Montueux, France, reached 46 °C (this past July), you probably have to Google a temperature conversion to change it to Fahrenheit (115 °F) to understand it. When you go to work and drive on a road posted at 35 mph, you know what that speed feels like, but what if you were in Europe and it was posted in kph? Or what if a European tells you the mileage for her car in liters per 100 km, and you struggle to relate that to numbers you know based on miles per gallon. We develop a sense of things based on experience with certain units, and when those are different, you lose your perception of the quantity.

Most of us do not have a basic sense of the amount of energy we consume for different activities in our lives. One reason is that we find it difficult to compare things that have different units, even if they describe the same property (such as temperature), and units of energy are particularly challenging! We often make comparisons based on something we can relate to, such as saying how many football fields we could cover or how many Olympic size pools we could fill. It is more difficult to relate energy units within one context, such as energy for our apartment or house, to other things in our life, such as fuel for our car.


Dr. Logan does not, unfortunately, suggest that the general public use the SI unit for energy, which is the joule. It is easily scaled with the prefixes kilo-, mega-, giga-, tera-, peta-, exa-, zetta-...

He suggests a unit D, for day, which is 2,000 dietary Calories (kilocalories), the daily food energy requirement of a "normal" human being, equal to about 2.3 kWh or 8.28 million joules. I think this is unnecessary. The joule is the best energy unit there is.
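
The conversion is easy to check, assuming the dietary Calorie is the kilocalorie (4184 J); the small difference from the figure quoted above is rounding (2.32 kWh rounded to 2.3):

```python
# Logan's daily energy unit D: 2,000 dietary Calories (kcal).
KCAL_TO_J = 4184.0    # 1 kcal = 4184 J
KWH_TO_J = 3.6e6      # 1 kWh = 3.6 million joules

d_in_joules = 2000 * KCAL_TO_J       # 8.368e6 J
d_in_kwh = d_in_joules / KWH_TO_J    # ~2.32 kWh
print(f"1 D = {d_in_joules:.3e} J = {d_in_kwh:.2f} kWh")
```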

The world in 2018 passed an energy consumption of 600 exajoules, an all-time record. The prefix exa- denotes 10^18, so 600 exajoules is 600 followed by 18 zeros, in joules. Solar and wind energy combined, after the expenditure of trillions of dollars, do not produce 13 exajoules. In the percent language so popular in the lies the public tells itself, led by scientifically illiterate journalists, all this money and all this hype - half a century of it - produces roughly 2% of world energy demand, and, in percentage terms, the fraction provided by dangerous fossil fuels is increasing, not decreasing.
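
The percentage arithmetic, spelled out; 13 EJ is taken as a generous upper bound for combined solar and wind, per the paragraph above:

```python
world_demand_ej = 600.0    # world energy consumption, 2018
solar_wind_ej = 13.0       # assumed upper bound for solar + wind combined

share_percent = solar_wind_ej / world_demand_ej * 100.0
print(f"Solar + wind: under {share_percent:.1f}% of {world_demand_ej:g} EJ")  # ~2.2%
```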

In units of kWh, Dr. Logan provides the following table, later translating it, in a subsequent table, into his tortured unit "D." This is the "typical" amount of energy required or produced by each device for a typical day:



He writes below the table:

Note that these units are in energy use per day (kWh/d), which has units of power, and a gallon of gasoline is included as a reference point. Some of these units makes sense to compare, but for others, such comparisons are awkward. For example, the 120 hp engine from your car translates to an engine rated at 2160 kWh, but you would not (I hope) operate your car all day at its maximum power. These units of kWh also span different time frames (you do not eat continuously all day), and some units lack a more personal connection, such as food units in kWh.


The unit here, kWh/day, is a unit of power, easily converted to a unit of energy by multiplying it by 1 day; thus it is easily understood as energy. Note that it would take 33 solar cells to match the energy in a single gallon of gasoline, and 90 to produce as much electricity as a person in this country consumes in a day (for all purposes, including labor). The second law of thermodynamics, which is almost never discussed in the garbage people like Simon produce, limits how much of the stored energy in a battery - a piece of future electronic waste that will never be sustainable on a scale of hundreds of exajoules - can be recovered.
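
Dr. Logan's 120 hp example converts as follows; he evidently rounded 120 hp to 90 kW, which is why his table quotes 2160 kWh rather than the 2148 kWh computed here (1 hp = 745.7 W):

```python
# A 120 hp engine run flat out for 24 hours, expressed in kWh/day.
HP_TO_KW = 0.7457
engine_kw = 120 * HP_TO_KW     # ~89.5 kW
kwh_per_day = engine_kw * 24   # ~2148 kWh/d; Logan's rounded figure is 2160
print(f"{engine_kw:.1f} kW -> {kwh_per_day:.0f} kWh/day")
```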

As long as we cheer for crap like this, we will be doing nothing useful to address the great crime we are perpetrating on all future generations, the permanent destruction of the planetary atmosphere.

Have a nice day tomorrow.


December 8, 2019

Continuous On Line Analysis of Constituents of the Radioactive Hanford Tanks.

The paper I'll discuss in this post is this one: Online, Real-Time Analysis of Highly Complex Processing Streams: Quantification of Analytes in Hanford Tank Sample (Bryan et al, Ind. Eng. Chem. Res. 2019, 58, 47, 21194-21200).

Nobel Laureate Glenn Seaborg described the chemical processing in the Manhattan Project to produce plutonium, an element of which he was co-discoverer, as the fastest and greatest chemical scale-up in history. The first sample of plutonium he created, which is now displayed at the Smithsonian Institution - I've seen it - contained a tiny quantity of plutonium that was invisible; its existence was recognized by detection of its radioactive decay signal. The nuclear reaction that created it was 238U(d,2n)238Np, carried out using the 60-inch cyclotron at UC Berkeley. The neptunium (which was not initially detected) decayed within days to the plutonium isotope 238Pu, which was characterized by trace chemical procedures in Gilman Hall, room 307, in late February 1941.

The first human-built device to leave our solar system necessarily contains kilogram quantities of the 238Pu isotope, the fuel of its radioisotope thermoelectric generators.

As everybody knows, the discovery of plutonium played a huge role in the Manhattan Project, and the scale-up in which Seaborg played a key role involved taking the isolation of plutonium from essentially the atomic scale to multiple-kilogram quantities. This was an industrial process, designed and executed in a completely ad hoc fashion, using materials and substances that had never been seen by anyone previously, possessing properties, notably intense radioactivity, that had never been addressed on an industrial scale.

As someone with considerable experience, albeit largely (but not entirely) indirect, with the scale-up of chemical processes, I can say that this is not the way chemical processes are scaled today.

In this process it was absolutely necessary, given the physics of plutonium formation and the rate at which it formed, to utilize sources of it that were extremely dilute solid solutions in uranium. This procedure therefore necessarily produced significant quantities of byproducts, many of them highly radioactive. At the time, very few people thought about the long-term consequences of handling these byproducts, now generally described in the public lexicon as "nuclear waste." A far greater concern at that time was that scientists working for Adolf Hitler would develop a nuclear weapon first. In some cases the byproducts were simply dumped in trenches. Ultimately storage tanks were built. Almost all of this process work was conducted at the Hanford plant in Washington State, the site having been selected because the nuclear reactors that were ultimately built to produce plutonium required significant quantities of cooling water to run.

As everybody knows, the "hot" war, World War II - which started, at least as far as the United States and the former Soviet Union were concerned, as an oil war - became the world's only observed nuclear war, which was followed by a cold war, by the two participants in the war who possessed and produced significant amounts of oil. (There have been many oil wars since 1945, but happily, no more nuclear wars.)

During the cold war, the production of weapons-grade plutonium, in Washington State and elsewhere, accelerated to an even larger scale, from kilograms to metric tons. The production of weapons-grade plutonium - tons of which was vaporized in the open atmosphere and distributed across the planet by the United States, the former Soviet Union, Great Britain, France and China - has always involved the use of dilute solid solutions of the element, and has thus always generated huge quantities of byproducts. At the Hanford site, 149 single-shell tanks were constructed to contain these byproducts between 1945 and 1964. After 1964, when it was understood that some of these tanks were leaking byproducts, many of them highly radioactive, into the ground, a new class of tanks was built: 28 additional double-shell tanks.

During the history of the filling of these tanks, the types of materials in them varied widely, often with marginal record keeping because they were subject to multiple and changeable processes. The initial process for plutonium recovery was called the "Bismuth Phosphate" process, which was followed by the Purex process (still in use in various places around the world), the Urex process, and the Truex process, the "ex" referring to the basic chemical approach in the processes, which is solvent extraction using solvents and extractants produced from the dangerous fossil fuel petroleum, for example, kerosene, and tributyl phosphate. The fuel rods were dissolved in highly corrosive (necessarily corrosive) acids, primarily nitric acid. The nitric acid solutions were neutralized, after extensive processing to isolate plutonium (and in some cases other elements of interest), with sodium hydroxide, enough to keep aluminum from the processes in solution, although in some cases, this aluminum precipitated in a form of the mineral gibbsite.

The early tanks were designed to accommodate solutions that were subject to continuous boiling, since the side products were not only radiologically hot, but also thermally hot. Once it was recognized that the tanks were leaking, it was decided to reduce the heat load in them by removing the cesium from the tanks, using another set of processes that were also somewhat ad hoc. I wrote about the processing involved elsewhere in this space: 16 Years of (Radioactive) Cesium Recovery Processing at Hanford's B Plant. As I noted in that post, the process utilized to remove the cesium was recognized, after the fact, as having created a theoretical risk of a massive chemical explosion owing to a potential for a chemical reaction between ferricyanide and nitrate. It was happily discovered, however, that the radiation in the tanks had destroyed the cyanide and rendered any risks nil.

This outcome, by the way, suggests why so-called "nuclear waste" has largely unappreciated value: it has a demonstrated capacity for destroying high-risk chemicals, some of which are far more intractable than cyanide and are features of far larger quantities of waste than are present at Hanford, specifically electronic waste and the very frightening (at least to people paying attention) agricultural waste nitrous oxide.

Unlike nitrous oxide, the "nuclear waste" tanks, and the Hanford site in general, have garnered a huge amount of interest and concern, particularly from a set of people, anti-nukes, whom I personally regard as intellectual and moral cripples. As anyone who has ever read the tripe I write here - which is not so much designed to be informative as to drive my autodidactic exercises - knows, I am a rather rabid advocate of the rapid scale-up of nuclear energy, which I regard as the only practically available tool to save humanity from its most intractable wastes, the most dangerous being dangerous fossil fuel waste. Combustion wastes, including those associated with "renewable" biofuels, kill, as I often point out, about 19,000 people per day. These wastes are most commonly called "air pollution." Another 1,200 people die per day from diarrhea associated with untreated fecal waste.

As an advocate of the rapid expansion of nuclear energy, I find that people who oppose my admittedly less than uniformly admired stance are always directing my attention to the Hanford reservation, about which they know less than I do, since they are a uniformly uneducated bunch when it comes to nuclear issues, and simply hate stuff about which they know nothing. The Hanford tanks are not risk free. It is very possible that materials leaching from them will someday result in death or injury for some people, but the number of "at risk" people is vanishingly small when compared to the observed and ongoing death toll from other wastes, in particular the combustion wastes associated with dangerous fossil fuel and "renewable" biomass combustion. I therefore morally and intellectually reject the notion that we should spend hundreds of billions of dollars to save a few lives that may be lost to Hanford leaching when we are unwilling to spend a comparable amount of money to clean up the planetary atmosphere, which we are in the process of destroying.

The moral idiots making this case, that Hanford is a dire emergency requiring the abandonment of nuclear power, while the death toll of air pollution, climate change from dangerous fossil fuels and, for that matter, fecal waste, is not, simply make me angry and upset.

Thank God DU has an ignore function. I have a very low tolerance for deliberate ignorance.

Despite this objection of mine, huge amounts of money are being spent to "clean up" Hanford utilizing an arbitrary risk to cost ratio that would never be applied to dangerous fossil fuels, since the application of such a ratio to dangerous fossil fuels would make them immediately unaffordable, and we believe we can't live without our consumer stuff that dangerous fossil fuels power. The silver lining on this cloud of selective attention is that the money being spent is producing some very good science, science that will have value in many fields, including the field of the recovery and utilization (ideally) of radioactive materials.

That brings me to the paper referenced at the outset.

Because of the ad hoc nature of the processes to which the contents of the Hanford tanks were subject, the nature of their contents is highly variable and in some cases unknown. The paper is about the contents of Hanford Tank AP-105. However, to see how variable the contents of the tanks can be, here is a graphic from a government report, PNNL-18054 WTP-RPT-167, Rev 0, describing variability in a set of Hanford tanks not including AP-105:



In order to reduce costs and improve safety and quality in any industrial process, real-time analysis of the process is to be preferred to what the authors call "grab sample collection and offline analysis." To wit, from the introduction of the paper:

Online monitoring of chemical processes is a growing field with the potential to impact manufacturing, field detection, and fundamental research studies.(1-5) This approach allows for unprecedented, in situ characterizations of chemical systems. A variety of analytical techniques have been employed, ranging from ultrasonics to mass spectrometry.(6,7) However, optical spectroscopy offers a pathway with the greatest potential for providing chemical information including concentration, oxidation state, and speciation.(8-11) The primary strength of optical spectroscopy is the ability to provide significant amounts of characterization data for many chemical species, which leads to the primary challenge associated with this technique. In complex systems with multiple chemical species, the measured optical signals will be proportionally complex. The resulting spectral overlap, matrix effects, ionic strength effects, or signal interferences can inhibit accurate or timely response.(12,13)

This is strongly evident when monitoring the complex streams of the Hanford waste site, the largest superfund cleanup site in the United States.(14,15) With millions of gallons of radioactive waste needing to be remediated and moved to environmentally secured locations, current processing schemes rely heavily on sample collection and off-line analysis to ensure the correct management of materials. Grab sample collection and off-line analysis, however, are time consuming, costly, and have the potential to expose personnel to hazardous conditions.(13,16-19) Most importantly for processing timelines, waiting on grab sample analysis can force a batch-processing approach with extended periods of wait time between processing steps.(20) The Hanford site would benefit from the application of online monitoring by realizing faster (real-time) characterization of process streams while substantially reducing the need to expose personnel to hazardous conditions in the collection of grab samples.

Optical spectroscopy, and particularly Raman spectroscopy, is useful in the analysis of Hanford tank wastes. A majority of tank components are Raman-active with unique fingerprints that can be used to identify and quantify target analytes.(20)
The primary analytical challenge lies in accurately quantifying target analytes within the Hanford tank matrix. Hanford tanks contain a wide range of chemical species, with limited precharacterization to inform and aid in signal analysis.


Raman spectroscopy was discovered in the late 1920's by C.V. Raman, the first Asian to win a Nobel Prize in the sciences. At the time of his discovery, during the British Raj, when the British regarded themselves as superior to Indians with absolutely no justification, Raman spectroscopy involved extremely hard work, with a single experiment taking many days to perform. The technique involves exciting a molecule with intense monochromatic light and observing weak emissions radiating at wavelengths differing from that of the monochromatic light. The development of lasers and CCD detection devices has made it possible to build commercial instruments that can run experiments in seconds rather than days. Since the emissions involve vibrational and rotational changes in molecules, Raman signatures can only be obtained for multi-atomic molecules, and not for atoms or ions that are not bonded to another atom or ion.
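
For the curious, the arithmetic connecting a Raman shift (reported as a wavenumber offset from the laser line) to an absolute scattered wavelength is simple. A minimal sketch, using the paper's 671 nm laser and, as an illustrative value, the well-known ~1047 cm^-1 symmetric stretch of the nitrate ion:

```python
# Convert a Stokes Raman shift (cm^-1) into the absolute wavelength of
# the scattered light, for a given excitation wavelength (nm).
def stokes_wavelength_nm(excitation_nm: float, shift_cm1: float) -> float:
    excitation_cm1 = 1e7 / excitation_nm        # laser line in wavenumbers
    scattered_cm1 = excitation_cm1 - shift_cm1  # Stokes line is red-shifted
    return 1e7 / scattered_cm1

print(f"{stokes_wavelength_nm(671.0, 1047.0):.1f} nm")  # ~721.7 nm
```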

Here, from the paper, is a description of the components of Tank AP-105.




The equipment:

Spectra were collected using a Raman spectrometer from Spectra Solutions Inc. and associated Spectra Soft software (version 1.3). Instrumentation consisted of a thermoelectric-cooled charge-coupled device detector and 671 nm diode laser. Collection times of 1 s were utilized, where every five spectra were collected and averaged into one spectrum for modeling and online monitoring applications. No spectral data processing other than data collection was performed using the Raman instrumental software.
A specialized flow cell, consisting of a machined holder to maintain the Raman probe alignment into a quartz flow cell, was used to interrogate both stationary and flowing samples. Flow loops were maintained with a QVG50 variable speed piston pump (Fluid Metering, Inc.) capable of pumping fluids at rates from 0 to 35.6 mL/min as set by a controller module. Flow rate calibration curves can be seen in the Supporting Information.


Each spectrum takes 1 second to collect.
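
The averaging step described above is trivial but worth seeing; a sketch with synthetic data (this is not the vendor's Spectra Soft software, just the idea):

```python
import numpy as np

# Five 1-second spectra (rows) across, say, 1024 wavenumber channels;
# averaging them suppresses random noise by roughly sqrt(5).
rng = np.random.default_rng(0)
spectra = rng.normal(loc=100.0, scale=5.0, size=(5, 1024))

averaged = spectra.mean(axis=0)  # the single spectrum passed to the model
print(averaged.shape)            # (1024,)
```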


The following graphics demonstrate the result of the Raman real time spectroscopy experiments performed on simulated and real Hanford tank contents:



The caption:

Figure 1. Spectra of pure components anticipated in tanks focused on the fingerprint range (top), overlapping NO3– and CO32– bands (middle), and the water band (bottom).





The caption:

Figure 2. Parity plots for NO3– (top) and CrO42– (bottom) showing results for both the training set (gray circles) and validation set (other markers).




The caption:

Figure 3. Spectral response of the multicomponent sample (top) and the concentrations over the course of the run (bottom).




The caption:

Figure 4. Raman spectral response (top) over the course of the flow test and resulting chemometric measurements (open circles) of NO3– (middle) and CrO42– (bottom) to known values (black dashed lines).




The caption:

Figure 5. Spectra of real AP-105 at multiple flow rates and resulting chemometric results from flow test.


An important feature of the instrument must be the radiation resistance of the components.

Irradiation Experiments

A Raman probe and two different samples of a quartz window material (sample cuvettes) were exposed to γ dose from a cobalt-60 source. These materials were irradiated stepwise, increasing by a decade each irradiation, from 1 × 10^4 rad to a cumulative dose of 1.7 × 10^8 rad. Between each irradiation step, the spectra of the AP-105 tank simulant were acquired using irradiated and nonirradiated micro-Raman and 1 cm cuvettes.



The results:



The caption:

Figure 6. Picture of the window material before and after complete irradiation (top), spectra of AP-105 simulant as a function of dose (middle), and resulting NO3– measurements across the dose steps (bottom).


The table of analytical results.



The R^2 values are, in some cases, a little lower than what we would accept in the pharmaceutical industry, but almost certainly sufficient for this type of analysis.
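
The chemometric workflow behind parity plots like Figure 2 is, in outline, a multivariate regression from spectra to known concentrations, scored with R^2. A minimal sketch using partial least squares on synthetic data (the paper's actual model, component count, and training data are not reproduced here):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

# Synthetic stand-in: 40 "spectra" (200 channels), each a Gaussian band
# scaled by a hidden analyte concentration, plus noise.
rng = np.random.default_rng(1)
conc = rng.uniform(0.1, 2.0, size=40)  # synthetic concentrations, mol/L
band = np.exp(-0.5 * ((np.arange(200) - 80) / 6.0) ** 2)
spectra = np.outer(conc, band) + rng.normal(0.0, 0.02, (40, 200))

model = PLSRegression(n_components=2).fit(spectra, conc)
predicted = model.predict(spectra).ravel()
print(f"R^2 = {r2_score(conc, predicted):.4f}")  # near 1 for clean data
```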

The paper's conclusion:

Raman spectroscopy is a robust and highly applicable tool that can be applied to the online monitoring of complex and hazardous processing streams. Subsequent analysis of spectra utilizing chemometric analysis allows for highly accurate, real-time quantification of target analytes. Raman spectroscopy and chemometric analysis were successfully utilized to accurately identify and quantify nine critical components of real tank waste from Hanford tank AP-105: a radioactive sample that has more than 10 components in a high ionic strength environment. Furthermore, the Raman probes and subsequent analysis demonstrated highly robust capabilities to perform accurately after receiving over 1 × 10^8 rad of γ dose. Overall, Raman-spectroscopy-based online monitoring is a powerful route to characterize processing streams that present challenges such as chemical complexity and hazardous or damaging environments.


Interesting, I think.

I trust you're having a wonderful Sunday and that if you will be celebrating the upcoming holidays, that your preparations are going well.






December 6, 2019

Trump and Judy.

For some reason, Baby Trump's adventures with Trudeau and Macron made me think of this Kliban cartoon.


December 2, 2019

Jackson Station

December 1, 2019

Experimental Determination of the Bare Sphere Critical Mass of Neptunium-237.

The paper I'll discuss in this post is this one: Criticality of a 237Np Sphere (Rene Sanchez et al., Nuclear Science and Engineering, 158:1, 1-14 (2008)).

Neptunium is the only actinide element that is easy to obtain in an isotopically pure form simply by chemically isolating it. This is because, of the neptunium isotopes that form readily in thermal spectrum nuclear reactors - which represent almost all of the world's commercial nuclear reactors - all except Np-237, which has a half-life of 2,144,000 years, are short lived. The half-life of Np-238, the parent of plutonium-238, is 2.117 days, and the half-life of Np-239, the parent of plutonium-239, is 2.356 days. Thus even in a continuous online isolation system fed from a critical nuclear fluid of the types now under discussion, chiefly molten salt type reactors, any isolated neptunium would decay, within a few weeks' time, to essentially isotopically pure Np-237.
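
The "few weeks" claim is easy to verify from the half-lives just quoted; a minimal sketch:

```python
import math

def fraction_remaining(half_life_days: float, elapsed_days: float) -> float:
    """Fraction of a radionuclide surviving after elapsed_days."""
    return math.exp(-math.log(2.0) * elapsed_days / half_life_days)

# After three weeks the short-lived neptunium isotopes have decayed
# (to plutonium) almost entirely, leaving the neptunium as Np-237:
for isotope, t_half in [("Np-238", 2.117), ("Np-239", 2.356)]:
    print(isotope, f"{fraction_remaining(t_half, 21.0):.2e}")
# Np-237 (half-life 2,144,000 years) is unchanged on this timescale.
```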

Neptunium is routinely formed in the operation of commercial nuclear reactors. In thermal reactors, neptunium has a high neutron capture cross section, and its fission is rare. Chiefly it is transmuted into plutonium-238, the accumulation of which, in high enough concentrations (albeit not necessarily routinely formed concentrations), has the happy result of making reactor grade plutonium that is essentially unusable in nuclear weapons. (As a practical matter, it is much easier to make nuclear weapons from natural uranium by separating the U-235 than it is to make them from reactor grade plutonium, and since it is impossible for humanity to consume all of the natural uranium on the planet, it will never be possible to make nuclear war impossible.)

In a fast neutron nuclear spectrum, neptunium can form a critical mass, and thus can be utilized as a nuclear fuel (or in theory, a nuclear weapon).

I personally favor fast spectrum nuclear reactors, since they represent the potential to ban all energy related mining, dangerous natural gas wells, fracked and "normal," dangerous petroleum wells, fracked and "normal," all the world's coal mines, and in fact, all of the world's uranium mines for many centuries to come, utilizing the uranium already mined and the thorium already dumped by the lanthanide industry.

The so called "minor actinides," generally including neptunium, americium, curium and sometimes berkelium and californium, all have useful properties; there has been a lot of discussion in the scientific literature of using neptunium and americium as constituents of nuclear fuels, to eliminate the often discussed, but entirely unnecessary waste dumps for the components of used nuclear fuel.

From the introduction of the paper:

For the past 5 yr, scientists at Los Alamos National Laboratory (LANL) have mounted an unprecedented effort to obtain a better estimate of the critical mass of 237Np. To accomplish this task, a 6-kg neptunium sphere was recently cast (1) at the Chemical and Metallurgy Research Facility, which is part of LANL. The neptunium sphere was clad with tungsten and nickel to reduce the dose rates from the 310-keV gamma rays originating from the first daughter of the α-decay of neptunium, namely, 233Pa.

Neptunium-237 is a byproduct of power production in nuclear reactors. It is primarily produced by successive neutron captures in 235U or through the n, 2n reaction in 238U. These nuclear reactions lead to the production of 237U, which decays by beta emission into 237Np (Equation 1):

235U(n,γ)236U(n,γ)237U or 238U(n,2n)237U; 237U → 237Np + β- (half-life 6.75 days)

It is estimated that a typical 1000-MW electric reactor produces on the order of 12 to 13 kg/yr of neptunium.(2) Some of this neptunium in irradiated fuel elements has been separated and is presently stored in containers in a liquid form. This method of storage is quite adequate because the fission cross section for 237Np at thermal energies is quite low, and any moderation of the neutron population by diluting the configurations with water would increase the critical mass to infinity. However, for long-term storage, the neptunium liquid solutions must be converted into oxides and metals because these forms are less movable and less likely to leak out of containers.

As noted in Ref. 3, metals and oxides made out of neptunium have finite critical masses, but there is a great uncertainty about these values because of the lack of experimental criticality data. Knowing precisely the critical mass of neptunium not only will help to validate mass storage limits and optimize storage configurations for safe disposition of these materials but will also save thousands of dollars in transportation and disposition costs.

The experimental results presented in this paper establish the critical masses of neptunium surrounded with highly enriched uranium (HEU) and reflected by various reflectors. The primary purpose of these experiments is to provide criticality data that will be used to validate models in support of decommissioning activities at the Savannah River plant and establish well-defined subcritical-mass limits that can be used in the transportation of these materials to other U.S. Department of Energy facilities. Finally, a critical experiment using an α-phase plutonium sphere surrounded with similar HEU shells and using the same setup used for the neptunium experiments was performed to validate plutonium and uranium cross-section data.


A brief excerpt of the materials utilized in these experiments:

The fissionable and fissile materials available consisted of a neptunium sphere, HEU shells, and an α-phase plutonium sphere. The neptunium sphere was ~8.29 cm in diameter and weighed 6070.4 g. Based on its weight and volume, the calculated density for the neptunium sphere was 20.29 g/cm3. A chemical analysis was performed on the neptunium sphere sprue…

…The analysis showed that the sphere was 98.8 wt% neptunium, 0.035 wt% uranium, and 0.0355 wt% plutonium. There were also traces of americium in the sphere. Table I shows the elements found in the chemical analysis of the sprue. Approximately 1% of the mass of the sphere was missing because the sprue sample did not dissolve completely.

To reduce the gamma-radiation exposure to workers, which comes mostly from the 310-keV gamma ray from the first daughter of 237Np, 233Pa, the neptunium sphere was clad with a 0.261-cm-thick layer of tungsten and two 0.191-cm-thick layers of nickel. The gamma radiation at contact was reduced from 2 R/h for the bare sphere to 300 mR/h for the shielded sphere. Table II shows the dimensions, weights, and calculated densities of the neptunium sphere and different cladding materials. The total weight of the sphere, including cladding materials, was 8026.9 g. Figure 2 illustrates how the neptunium sphere was encapsulated. Except for the tungsten layer, both of the nickel-clad materials were electron-beam welded. In addition, a leak test was conducted for the nickel-clad layers to ensure that the neptunium metal and possibly some neptunium oxide produced in the event of a leak were contained within these materials and not released into the room or the environment.
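
The quoted density can be checked from the sphere's reported diameter and mass, assuming a perfect sphere; the small discrepancy from the reported 20.29 g/cm3 presumably reflects rounding of the diameter:

```python
import math

diameter_cm = 8.29   # reported diameter of the neptunium sphere
mass_g = 6070.4      # reported mass

volume_cm3 = (4.0 / 3.0) * math.pi * (diameter_cm / 2.0) ** 3
print(f"Volume: {volume_cm3:.1f} cm^3")              # ~298.3 cm^3
print(f"Density: {mass_g / volume_cm3:.2f} g/cm^3")  # ~20.35 g/cm^3
```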


Table 1:



This is a highly technical paper, and it is probably not of any value here to excerpt all that much of it. Nevertheless, there is a great deal of public mysticism about nuclear technology, mysticism that is killing the world, since nuclear energy is the only technology that might work to ameliorate, stop, or even reverse climate change. There is so much mysticism and misinformation that completely scientifically illiterate morons like, say, Harvey Wasserman can find people ignorant enough to believe he is, in fact, an "expert" on nuclear issues. (He's not. He is an abysmally ignorant fool, whose ignorance is killing people right now.)

With this in mind, I thought it might be useful to show some diagrams and photographs of the work that was performed here and that is found in the original paper:

[Diagrams and photographs from the original paper]
A student of nuclear history will recognize that these experiments are very much like the experiments with the "demon core" that killed the nuclear weapons scientists Harry Daghlian and Louis Slotin in separate accidents in 1945 and 1946. The remote equipment here is obviously designed to prevent that sort of accident from recurring.

The authors explored a number of different systems and reflectors, including both polyethylene and steel. In the process of conducting these studies, they refined some nuclear data on uranium isotopes, a valuable outcome.

From their conclusion:

Several experiments were performed at the Los Alamos Critical Experiments Facility to measure the critical mass of neptunium surrounded with HEU shells and reflected with various reflectors. For some experiments, Rossi-α measurements were performed to determine an eigenvalue that could be calculated by transport computer codes. These experiments were modeled with MCNP. For neptunium/HEU experiments, ENDF/B-VI data underestimated the keff of the experiment by ~1%. ENDF/B-V data and an evaluation provided by the T-16 group at LANL were in better agreement, although these cross sections continue to underestimate the keff by only 0.3% on average. After adjusting the neutron cross section for 237Np and 235U so that the MCNP simulations reproduce the experiments, we have estimated that the bare critical mass of 237Np is 57 ± 4 kg.


Currently the main use for Np-237 is as a precursor for Pu-238 for use in deep space missions. Production of this important isotope has resumed at Oak Ridge National Laboratory, albeit on a small scale.

If we are interested in saving the world - there isn't much evidence that we are - neptunium can play a larger role in doing so, and thus this historical work is of considerable value.

A related minor actinide, americium-241, is also a potential source of Pu-238, although this plutonium will always be contaminated with Pu-242, owing to the branching decay of the intermediate americium-242, only part of which decays to curium-242, the parent (by alpha decay) of Pu-238.

It was estimated in 2007 that the world inventory of these valuable elements was, as of 2005, about 70 tons of Np-237 and 110 tons of americium. It is desirable, critical actually (excuse the pun), that these materials be put to use.

I wish you a pleasant Sunday.
