DU Home » Latest Threads » NNadir » Journal



In the battle against senility, I've managed to smoothly and correctly spell "pyrolysis."

I don't know why, but for the last several years, whenever I go to spell the word "pyrolysis," I find myself typing pyrrolysis, pyrollysis, pyrrollysis, etc.

I've noticed that in the last several months, I'm spelling the word correctly every time.

When I was a kid, I thought I was a candidate for spelling bee competitions. The invention of the computer disabused me of that notion, but now, near the end of my life, I've managed to correctly spell, every time, this word, which I run across several times a week, sometimes, like today, ten or twelve times in a single day.

Ion-capture electrodialysis using multifunctional adsorptive membranes

The paper I'll discuss in this post is this one: Ion-capture electrodialysis using multifunctional adsorptive membranes (Adam A. Uliana, Ngoc T. Bui, Jovan Kamcev, Mercedes K. Taylor, Jeffrey J. Urban, Jeffrey R. Long, Science 372, 296-299, 16 Apr 2021)

Continuous processes are generally economically and environmentally superior to batch or discontinuous processes. Highly efficient processes are economically and environmentally superior to inefficient processes.

In the case of energy generation, however, demand is highly variable, albeit in a fairly predictable way, particularly in the case of electricity, where a profound material and thermodynamic penalty is associated with storage, including the much hyped use of batteries, which are often presented as a cure-all for the profound environmental flaws of so called "renewable energy." From my perspective - and I'm not shy about stating it at all - batteries will make the already onerous environmental cost of so called "renewable energy" worse, not better, and will cause the already rapidly deteriorating situation with respect to climate change to deteriorate ever faster. A battery is a device that wastes energy, and in particular, the energy it wastes is already a thermodynamically degraded form of energy, electrical energy. The production of energy in any form will never, under any circumstances involving any device, be risk free, nor will it be completely and totally environmentally neutral. Thus wasting energy is a very bad, and frankly, dangerous idea.


One of the serious environmental and economic drawbacks of so called "renewable energy" is that it is discontinuous by nature, and the discontinuities are entirely disconnected from demand and not really predictable, even in the solar case, where one can easily construct tables of sunrises and sunsets for any place on Earth as far into the future as one wishes. This often renders electricity worthless, in such a way that no producer of energy can recover the costs of their infrastructure. It accounts for the fact that countries heavily dependent on so called renewable energy - which is advertised disingenuously as being "cheap" - have the highest consumer electricity prices in the OECD.

Famously or infamously for anyone who has suffered exposure to my position on energy and the environment, I am an advocate of nuclear energy, which I regard as the only sustainable and environmentally acceptable form of energy there is.

There is, however, a problem with existing nuclear infrastructure that needs to be discussed, and which is obvious to anyone taking a sober and honest look at it.

One hears from time to time claims about nuclear plants that could be designed to follow demand loads for electricity - a friend of mine often points out that this is already done on nuclear submarines - as if electricity should continue to be the product, as it has been for more than half a century, of a nuclear plant; we call them nuclear power plants, after all. In terms of capacity utilization - the fraction of energy produced compared to the amount of energy that theoretically could be produced over a period of time with a plant running at 100% of capacity - nuclear energy is the best there is, with most plants around the world operating at 90% or better capacity utilization. Of course, doing so depends on the baseload demand being high enough to accept the output of the plant. If electricity is being dumped on to the grid because the sun is shining brightly and the wind is blowing at just the right speed, all of the electricity being produced at a baseload plant like a nuclear plant, as well as the electricity being produced by the solar and wind facilities, is worthless. This uniformly and negatively impacts nuclear economics, coal economics, petroleum economics, gas economics, hydroelectricity economics, and for that matter, solar and wind economics. It also raises - this cannot be stressed enough - the external costs, the costs to the environment, and the more subtle but nonetheless real health costs of energy.

Even if it can be done, feathering nuclear plants to match low demand raises some technical issues that need consideration. The most famous and well known of these in the nuclear engineering community concerns the fission products having a mass number of 135, in particular the iodine-135 and xenon-135 species in the decay chain, both of which are highly radioactive, having half-lives respectively of 6.58 hours and 9.18 hours. This "problem" was recognized during the Manhattan Project - where it was managed by engineers and scientists who lived and died by the use of those ancient and marvelous devices known as slide rules - and it is relevant today, when accumulations of these isotopes can be modeled to any degree of precision and accuracy required by ever improving computer power. It played a role in the very stupid decisions made by the night crew operating the Chernobyl nuclear reactor that was famously destroyed by a steam explosion.

Xenon-135, with the exception of a very rare and difficult to produce isotope of zirconium (Zr-88), has the highest neutron capture cross section of any known nuclide. When a nuclear reactor is operating at full power, a situation known as "Bateman equilibrium" exists, during which the xenon-135 is being destroyed - either by absorbing a neutron to give non-radioactive and economically valuable Xe-136, or by decaying to radioactive Cs-135 - as fast as it is being formed. At a given power level, the production rate of a fission product is more or less constant, whereas its decay rate rises with the total mass that has accumulated; equilibrium is reached when the two rates are equal.

The Bateman equations are a set of coupled differential equations that can be solved numerically and, for simple decay chains, exactly.

Here is a somewhat more sophisticated analysis of the Bateman equation as it applies to xenon-135: Solving Bateman Equation for Xenon Transient Analysis Using Numerical Methods (Ding, MATEC Web of Conferences 186, 01004 (2018))

The Bateman equation for xenon is this:
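In standard notation - with γ_Xe the fission yield, Σ_f the macroscopic fission cross section, φ the neutron flux, λ the decay constants, σ_Xe the microscopic capture cross section, and N the concentrations - it reads:

```latex
\frac{dN_{Xe}}{dt} \;=\; \gamma_{Xe}\,\Sigma_f\,\varphi \;+\; \lambda_I N_I \;-\; \lambda_{Xe} N_{Xe} \;-\; \sigma_{Xe}\,\varphi\, N_{Xe}
```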

The first term in this equation refers to the fission yield of a nuclide, in this case, Xenon-135. The second term represents the amount that is being produced by the decay of iodine-135 (which has its own Bateman equation). The third term represents the radioactive decay of xenon-135. The fourth term represents the loss of radioactive xenon-135 when it absorbs a neutron and is converted into non-radioactive (stable) xenon-136, something that no other nuclide (except Zr-88) has as high a probability of doing.

The symbol φ in this equation's first and fourth terms is the neutron flux, the number of neutrons passing through a unit of area per unit of time when the system is critical.

When a reactor shuts down, φ goes essentially to zero (ignoring trivial spontaneous fission), so the neutron capture term of the equation vanishes, as does the fission yield term. The Bateman equilibrium thus shifts in such a way that xenon-135 accumulates according to the second and third terms, and the amount of xenon-135 present begins to rise, since it has a longer half-life than its iodine-135 parent. At some point the iodine-135 depletes to a level at which the accumulated xenon-135 is decaying as fast as it is formed; thereafter its concentration begins to fall. It is problematic to restart a nuclear reactor until enough of the xenon-135 decays to approach the level found in the Bateman equilibrium that exists during full power operation. Thus for "normal" electricity generating nuclear reactors, shutting them down leads to a lag between shutdown and restart, meaning that the power is not available to meet increased demand for several hours.
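A minimal sketch of this transient, using the half-lives given above and arbitrary illustrative inventories at shutdown (not real reactor values), shows the xenon-135 peak arriving several hours after the flux goes to zero:

```python
import math

# Half-lives (hours) of I-135 and Xe-135, as given above
T_I, T_XE = 6.58, 9.18
LAM_I = math.log(2) / T_I
LAM_XE = math.log(2) / T_XE

# Arbitrary illustrative inventories at shutdown, NOT real reactor values
N_I0, N_XE0 = 1.0, 0.3

def xe135(t):
    """Xe-135 inventory t hours after shutdown (flux = 0): decay of the
    initial xenon plus in-growth from the decaying iodine-135."""
    ingrowth = N_I0 * LAM_I / (LAM_XE - LAM_I) * (
        math.exp(-LAM_I * t) - math.exp(-LAM_XE * t))
    return N_XE0 * math.exp(-LAM_XE * t) + ingrowth

# Xenon rises for several hours after shutdown, peaks, then decays away
peak_t = max(range(0, 48), key=xe135)
```

With these made-up starting inventories the peak lands roughly eight hours after shutdown; the qualitative shape - rise, peak, decay - is the point, not the numbers.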

Consideration of this fact is what led the fools operating the failed Chernobyl reactor to violate protocol and pull all of the control rods out of the reactor to conduct their "test" with all safety systems disabled. It was, um, a bad idea.

It is worth noting, by the way, that Bateman equilibrium can be reached by any radioactive nuclide, including all of the radioactive fission products and all induced radioactive materials - a fact that escapes most critics of nuclear energy who make insipid comments about so called "nuclear waste" while ignoring the far more serious issue of dangerous fossil fuel waste. There is an upper limit, depending on power levels, to the mass of radioactive materials that it is possible to collect, beyond which the amount of fission products cannot rise. (It can be shown that this limit is actually approached asymptotically.) This is a situation that does not apply to dangerous fossil fuel waste, which is far more massive and, in fact, unlike used nuclear fuel, impossible to contain.


Another factor seldom mentioned when one considers shutting down a nuclear reactor because of low demand is far more subtle, and rarely, if ever discussed: Doing so wastes neutrons. Neutron economy is a key to saving the world. Nevertheless, this point is so subtle I won't discuss it further.

The point of all these mouthfuls is that the best way to operate a nuclear reactor is flat out, at maximum power, continuously. In my view, however, it is wasteful to design, build and operate nuclear reactors solely to produce electricity - again, a thermodynamically degraded form of energy. Rather - again, my opinion - it would be useful to generate electricity as a side product of a form of energy I have just got done criticizing, that is, stored energy, specifically chemical energy. Forms of stored chemical energy include the energy in dangerous fossil fuels, which is solar energy stored over many thousands of millennia (but burned in just a few centuries), as well as...wait for it...the energy in batteries.

Batteries...batteries...didn't I just get done saying batteries suck?

Indulge me...

Modern nuclear reactors, as Rankine cycle devices (for the most part), operate at a thermal efficiency of around 33%, which - this is hardly the first time I've pointed this out - is lower than the thermal efficiency of a combined cycle natural gas plant, many of which operate at thermal efficiencies of around 50%, sometimes even more.

As one should learn in a good high school science class, theoretical thermal efficiency - Carnot efficiency - is highest for devices that operate with the greatest difference between the temperature of the heat source and that of the heat sink. If one follows the link just shown for "Carnot Efficiency," one can calculate the theoretical efficiency of a heat engine running between the boiling point of strontium metal, 1650 Kelvin, and "room temperature," generally taken as 298 Kelvin, for the heat sink - the Kelvin scale, where absolute zero is zero, is required for thermodynamic calculations - to learn that in theory a heat engine operating over this range would have a maximum efficiency of 81.94%. In practice, the actual efficiency of a device operating at these temperatures would be lower; it is impossible for it to be higher. The deviation of real heat engines from ideal Carnot efficiency arises from the fact that no material is a perfect insulator - all substances conduct some heat (otherwise well insulated houses would not require heating systems) - and no heat exchange process is reversible, that is, entropy free.
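The arithmetic is a one-liner; a quick sketch in Python reproducing the 81.94% figure:

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum possible (Carnot) efficiency of a heat engine operating
    between a hot and a cold reservoir; temperatures in kelvin."""
    return 1.0 - t_cold_k / t_hot_k

# Boiling strontium (1650 K) rejecting heat at room temperature (298 K)
eta = carnot_efficiency(1650.0, 298.0)  # ~0.8194, i.e., 81.94%
```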

This said, the rejection of heat can be minimized, particularly at high temperatures, particularly in situations where entropy is minimized, not eliminated (which is impossible) but minimized. This is the idea behind heat networks.

The following commentary is very, very crude, and is not intended to be accurate but rather illustrative.

Consider a device with boiling strontium metal at 1650K at 101,000 Pascals of pressure. In theory, assuming that all materials science issues are addressed, the strontium gas (thermal energy) could expand against a turbine to produce mechanical work, which could then be utilized to turn a generator to produce electricity, and the electricity could be converted in a battery to stored chemical energy that would then be available to be reconverted to electrical energy to drive a motor to produce mechanical work. Five conversions of one form of energy into another are involved in this process, which could power, say, one of those Tesla cars for millionaires and billionaires. The theoretical maximum Carnot efficiency, using the calculator linked above, and assuming the low temperature reservoir is 1100K, 50K above the melting point of strontium metal (1050K), is 33%. If the turbine process is 33% efficient, the generation and transport of the electricity is 90% efficient, the charging of the battery is 90% efficient, the conversion of the chemical energy in the battery to electricity is 90% efficient, and the electric motor is 90% efficient, then the overall efficiency is 0.9 * 0.9 * 0.9 * 0.9 * 0.33 = approximately 22%. Nearly 80% of the energy is lost as heat rejected to the atmosphere.
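The chain of losses just described, in the same back-of-the-envelope spirit (the step efficiencies are the illustrative guesses from the text, not measured values):

```python
# Carnot limit for the turbine between 1650 K and a 1100 K cold reservoir
carnot_turbine = 1.0 - 1100.0 / 1650.0   # ~33%

# Guessed efficiencies for the remaining steps in the chain
generation_and_transport = 0.90
battery_charge = 0.90
battery_discharge = 0.90
motor = 0.90

overall = (carnot_turbine * generation_and_transport *
           battery_charge * battery_discharge * motor)
# ~0.22: nearly four fifths of the original heat is rejected
```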

Suppose though that instead of expanding against a turbine, the hot strontium gas is used to heat a wet solution of concentrated sulfuric acid, again with all materials science issues addressed. As the sulfuric acid reaches about 1300K, it will decompose into oxygen, sulfur dioxide and steam, all gaseous and all hot. Suppose that these gases are cooled in a tubular heat exchanger by compressed carbon dioxide, which is heated during this process to around 900 K and expands against a turbine, and is in turn cooled by heating pressurized water (steam) at 500 K, which then expands against a turbine and is brought to 300K, "room temperature." Using the handy Carnot calculator above, the carbon dioxide (Brayton) heat engine has an efficiency limit of 44%; the steam heat engine has an efficiency limit of 40%. Let us return to the gases generated by the decomposition of sulfuric acid, though. There are means to separate the oxygen and the sulfur dioxide, and let's consider that the sulfur dioxide is pumped into a mixture of steam and iodine. Under these conditions an exothermic reaction, the Bunsen reaction, takes place, whereupon the sulfur dioxide is reoxidized, consuming the steam to give hydrogen iodide gas while regenerating sulfuric acid. If this mixture is cooled to condense the sulfuric acid, the hydrogen iodide gas can be heated (to about 600 K) to generate hydrogen gas while regenerating iodine. The series of chemical reactions described is known as the sulfur iodine cycle, which sums to the decomposition of water into hydrogen and oxygen and which has a practical thermodynamic efficiency of between 55 and 65% and a theoretical efficiency of 71%. Let's assume the worst case, 55% efficiency, meaning 45% is lost. But the carbon dioxide heat engine recovers, let's say, in a real engine, something like 40% of this loss, or 18% of the original energy, reducing the overall loss to 45% - 18% = 27%.
If the real steam cycle then recovers 33% of that remaining 27%, it contributes another 33% * 27% = 9%. Thus the overall efficiency of the entire process is 9% + 18% + 55% = 82%, with some of the 55% represented as directly stored chemical energy produced without the use of mechanical and electrical intermediates.
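The bookkeeping for this cascade, with the same crude illustrative percentages (all expressed as fractions of the original heat input):

```python
chemical = 0.55                  # worst-case sulfur-iodine yield as hydrogen
loss = 1.0 - chemical            # 45% initially rejected as heat

co2_recovered = 0.40 * loss      # CO2 Brayton engine recovers 40% of it -> 18%
loss -= co2_recovered            # 27% still rejected

steam_recovered = 0.33 * loss    # steam cycle recovers 33% of the rest -> ~9%
loss -= steam_recovered

overall = chemical + co2_recovered + steam_recovered  # ~0.82
```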

But 27% of the original heat energy is now in the form of mechanical energy, spinning turbines. The simplest thing to do with this mechanical energy, perhaps not the most efficient thing, is to generate electricity. It is important to contrast the hydrogen produced in this case with the lithium in a chemical battery. The hydrogen can be shipped anywhere and be used in any way. Indeed, if the hydrogen is used to reduce carbon dioxide to produce the wonder fuel dimethyl ether, the reaction will be exothermic, meaning that additional energy can be extracted using a turbine whenever demand justifies doing so. The dimethyl ether can be shipped anywhere and used at will to replace all the uses of dangerous natural gas, all the uses of dangerous LPG, all the uses of dangerous diesel fuel, and also to function as a refrigerant/heat exchange medium.

Let's leave aside the sloppiness of this crude, but illustrative, rhetoric and cut to the environmental and economic chase.

Of course, this process, were it to be both economically and environmentally sustainable, would be required to be continuous, meaning that the output of the two heat engine based energy recovery cycles would also be continuous. We should assume that the bad, but popular, idea of pursuing so called "renewable energy" may continue for a while, until both the futility and the unsustainability of this pursuit become obvious, as they must - we are seeing the first shots across the bow already. Therefore we must also assume that there will be portions of the day over the next several decades when electricity will be of extremely low value or even worthless, at least until all this so called "renewable energy" junk has begun to rot, requiring future generations to clean up the mess.

If electricity is worthless during a putative sulfur iodine process which produces electricity as a side product, what might be done with it in such a way as to not lose money?

This long, boring, convoluted argument brings me to the paper discussed at the outset. Why not use this electricity to recover valuable materials from seawater while generating fresh water - drinking water, irrigation water - as a side product during the times electricity is next to worthless or totally worthless, while supplying it to the grid, carbon free, when demand and supply make it valuable? In considering this, one should recognize that capacitive desalination/metal recovery is but one approach to valorizing worthless electricity produced at times when there is no need for it on the grid, for example on windy sunny days in the proposed but idiotic "renewable energy" nirvana that has not come, is not here, and, as a practical consideration, will not come.

From the introduction of the paper cited at the outset:

Escalating demand for water in agriculture, energy, industry, and municipal sectors, coupled with limited natural freshwater, necessitates the rapid development of technologies that will enable access to clean water from alternative sources (1). Nontraditional water sources, such as wastewater, brackish water, or seawater, could provide abundant water globally, but these complex solutions contain high salt concentrations and trace toxic ions (such as heavy metals and oxyanions), which vary by location and type of water source (2–4). At the same time, nontraditional water sources often contain high-value ions (for example, uranyl in seawater and precious metals and nutrients in wastewater), but current technologies lack the efficiency and selectivity needed for their cost-effective extraction (5, 6). Electrodialysis, membrane capacitive deionization, and reverse osmosis are among the most common membrane-based technologies used for removing ions from water (2, 4). However, these approaches are incapable of selectively isolating individual solutes, and toxic ions are instead returned to the environment with the concentrated brine solutions (2). Accordingly, developing membrane technologies with substantially improved selectivity for either water desalination or the recovery of individual ions or molecules from water is considered one of the most important objectives in the separations industry (3–5, 7, 8).

Adsorptive membranes are an emerging class of materials that have been shown to exhibit improved performance in numerous separations when compared with conventional membranes, including for water purification (4, 9–13). However, improvements are needed in the capacities, selectivities, and regenerabilities achievable with these materials to enable their wide-scale use, which is also currently hindered by the limited structural and chemical tunability of most adsorptive membranes (4). We therefore sought to develop a highly modular adsorptive membrane platform for use in multifunctional water-purification applications, which is based on the incorporation of porous aromatic frameworks (PAFs) into ion exchange membranes. Built of organic nodes and aromatic linkers, PAFs have high-porosity diamondoid structures with pore morphologies and chemical affinities that can be tuned through the choice of node and linker (Fig. 1, A and B) (14, 15).

The authors then list the elements that might be recovered from seawater but in current membrane desalination practice are returned to it, two of which, mercury and lead, are toxic elements added to the ocean by anthropogenic activities, chiefly the combustion of the dangerous fossil fuel coal, but also from other sources, for example mercury used as ballast in sunken submarines and lead weights. Others include uranium, which is a natural constituent of seawater and has a long history of consideration as a source of this inexhaustible element, as well as copper, gold, boron, neodymium and iron.

The graphics in this paper, which I will reproduce shortly, focus on mercury.

In 2014, it was estimated that the concentration of mercury in ocean water had tripled since preindustrial times, with most of the mercury concentrated in surface waters: Lamborg, C., Hammerschmidt, C., Bowman, K. et al. A global ocean inventory of anthropogenic mercury based on water column measurements. Nature 512, 65-68 (2014).

Here is a table from that paper, suggesting concentration gradients in various places in the oceans:

The "thermocline" here can be taken as surface waters.

Since the atomic weight of mercury is about 200.6, 1 pmol/kg works out to about 0.2 ng/kg, or roughly 0.2 parts per trillion. The USA EPA drinking water standard is 2 ppb - some ten thousand times higher - and there are certainly regions in this country where that standard is exceeded. As mercury is a neurotoxin which can cause, in excess, insanity, I sometimes muse to myself, more than half seriously, whether mercury and lead play a role in the rise of the Magats in this country. (How else can one explain the worship of a cheap white trash carny barker with poor taste, no ethics, and - there's something wrong with this - a history of handling - and losing - oodles of money?)
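The unit conversion is quick to check (the 2 ppb figure is the EPA maximum contaminant level for inorganic mercury in drinking water):

```python
ATOMIC_WEIGHT_HG = 200.59              # g/mol

pmol_per_kg = 1.0                      # a typical surface-ocean value
ng_per_kg = pmol_per_kg * 1e-12 * ATOMIC_WEIGHT_HG * 1e9
# ~0.2 ng/kg, i.e., about 0.2 parts per trillion

EPA_LIMIT_PPT = 2000.0                 # 2 ppb expressed in parts per trillion
ratio = EPA_LIMIT_PPT / ng_per_kg      # the standard is ~10,000x higher
```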

Certainly a tripling of mercury concentrations in the surface ocean is not a cause for comfort.

The Nature paper, in its abstract, gives a wide ranging estimate of the amount of mercury humanity has added to the oceans, the upper estimate being around 1,300 million moles, which works out to around 260,000 tons.
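That conversion, for the record:

```python
ATOMIC_WEIGHT_HG = 200.59          # g/mol

moles_added = 1.3e9                # upper estimate: 1,300 million moles
metric_tons = moles_added * ATOMIC_WEIGHT_HG / 1e6   # grams -> tonnes
# ~260,000 metric tons of anthropogenic mercury in the oceans
```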

The Science paper cited at the outset, in the supplementary materials, makes the following statements about a putative mercury capture membrane desalination plant:

To estimate the amount of water that can be treated in an ion-capture electrodialysis process before regeneration is required, we considered the use of 20 wt% PAF-1-SH in sPSF membranes as representative adsorptive membranes in treating water samples containing Hg2+ as the target contaminant at concentrations of 5, 1, and 0.1 ppm. Volumes of water treated were calculated assuming that PAF-1-SH embedded in the membranes reaches full Hg2+ saturation as shown in fig. S23, and that complete removal of Hg2+ from the feed water is achieved. Calculated volumes of water treated are provided in table S8, with values normalized by the amount of membrane used. The relative volumes of water treated compared to the desorption volumes required are additionally provided in table S8.

We also estimate the amount of water that can potentially be treated in an ion-capture electrodialysis plant per regeneration cycle, based on a typical industrial electrodialysis design. Here, we make the same performance assumptions as described above and assume 20 wt% PAF-1-SH in sPSF membranes are implemented as the cation exchange membranes. While electrodialysis designs and sizes vary by plant (51-55), we assumed the following design parameters based on typical setups reported:

• 300 membrane stack pairs (i.e., 300 cation exchange membranes consisting of 20 wt% PAF-1-SH in sPSF) (52-54)
• 1-m2 active area per membrane (51, 52)
• 300-μm thickness for each membrane (52, 54, 55)

Based on this design, a total 20 wt% PAF-1-SH membrane volume of 90 L, and thus a total PAF-1-SH mass of 16.8 kg, is expected for such a plant. The PAF-1-SH mass was determined by assuming that the 20 wt% PAF-1-SH membranes have a density of 0.931 kg L−1. This density was determined as the volume-averaged density between bulk PAF-1-SH and sPSF (0.420 kg L−1 and 1.337 kg L−1, respectively; see Section 1.4.6), using the 44.3 vol% PAF-1-SH value determined for a 20 wt% PAF-1-SH membrane (table S2). With the PAF-1-SH performance assumptions previously discussed, we estimate that the following volumes of water can be treated in an ion-capture electrodialysis plant before regeneration is required:

• ~3,000,000 L of water treated for a feed source containing 5 ppm Hg2+
• ~15,000,000 L of water treated for a feed source containing 1 ppm Hg2+
• ~150,000,000 L of water treated for a feed source containing 0.1 ppm Hg2+

The membranes are subject to regeneration by washing with HCl, which is readily available from seawater via other capacitive processes, for example, Heather Willauer's process for recovering carbon dioxide from seawater to make jet fuel: Feasibility of CO2 Extraction from Seawater and Simultaneous Hydrogen Gas Generation Using a Novel and Robust Electrolytic Cation Exchange Module Based on Continuous Electrodeionization Technology (Heather D. Willauer, Felice DiMascio, Dennis R. Hardy, and Frederick W. Williams, Industrial & Engineering Chemistry Research 2014 53 (31), 12192-12200). This acid can be neutralized by sodium carbonate, perhaps generated in the same process by treatment with air.

In the United States, large scale water demand is often stated using the somewhat unfortunate unit of volume, the "acre-foot." An "acre-foot" is equal to about 1,233,482 liters of water.
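One acre-foot is 43,560 cubic feet; a quick sketch of the conversion:

```python
SQ_FT_PER_ACRE = 43_560.0              # 1 acre = 43,560 square feet
LITERS_PER_CUBIC_FOOT = 28.316846592

liters_per_acre_foot = SQ_FT_PER_ACRE * LITERS_PER_CUBIC_FOOT
# ~1,233,482 L per acre-foot

california_liters = 35e6 * liters_per_acre_foot
# 35 million acre-feet is ~4.3e13 L, roughly 43 trillion liters
```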

The capacity of the resins is stated to be around 0.4 g Hg/g, meaning that in one cycle, before requiring regeneration of the membranes, 16.8 kg of PAF-1-SH would capture about 6.7 kg of mercury.

The State of California consumes something around 35 million acre-feet of water per year, or roughly 43 trillion liters.

Suppose some future generation, smarter than the one into which I was born, more appreciative of the planet than my generation of bourgeois consumers, decided to restore the environment of California, to do things like refill the depleting groundwater of the central valley, refill Owens Lake, open the dams on the Sacramento River to let the Salmon run, blow up the Hetch Hetchy dam and recreate that magnificent Hetch Hetchy Canyon, restore the Colorado River Delta... This is perhaps, a stupid dream, a big dream, a wild dream, but at the end of one's life, a dream worth having, speaking only for myself.

Suppose that California also decided to remove the greasy bird grinding wind turbines from desert and chaparral wilderness, to remove the mirrors, the heat transfer fluids, etc.; suppose California decided to ban the use of dangerous natural gas and all other fossil fuels.

...a dream worth having, it's called "hope," Pandora's hope, perhaps, but hope...

If by regulation all of California's water was obtained from the sea, tens of trillions of liters of it, perhaps even more, California could in theory recover, at surface concentrations near 1 pmol/kg, something on the order of ten kilograms of mercury per year from the sea.

Just saying...

Some graphics from the body of the paper:

The caption:

Fig. 1 Design of composite membranes and application in ion-capture electrodialysis (IC-ED).

(A and B) Tunable composite membranes were prepared by embedding PAFs with selective ion binding sites into cation exchange polymer matrices. (C) We demonstrate the use of these adsorptive membranes in an electrodialysis-based process for the selective capture of target cations (right-hand side) from water and simultaneous desalination. Water splitting occurs at both electrodes to maintain electroneutrality. (D and E) PAF-embedded membranes are defect-free and exhibit optical transparency and high flexibility. (F) Cross-sectional scanning electron micrographs (expanded view in inset) revealed high PAF dispersibility and strong, favorable interactions between the PAF and polymer matrix

The caption:

Fig. 2 Properties of PAF-embedded ion exchange membranes.

(A and B) Composite membranes exhibit increasing water uptake, swelling resistance, and glass transition temperature (Tg) with increasing PAF-1-SH loading. (C) Comparison of equilibrium Hg2+ uptake in neat sPSF and sPSF with 20 wt % PAF-1-SH. Solid lines represent fits with a Langmuir model. Mercury ion uptake in the composite membrane closely approaches the predicted saturation uptake (329 mg/g), assuming all binding sites in the PAF particles are accessible. (D) Equilibrium uptake of Hg2+ in neat sPSF and sPSF with 20 wt % PAF-1-SH exposed to deionized (DI) water and various synthetic water samples with 100 ppm added Hg2+. (E) Mercury ion uptake in 20 wt % PAF-1-SH membranes as a function of cycle number. Minimal decrease in Hg2+ uptake occurs over 10 cycles. The initial Hg2+ concentration was 100 ppm for each cycle, and all Hg2+ captured in each cycle was recovered by using HCl and NaNO3. Error bars denote ±1 SD around the mean from at least three separate measurements.

The caption:

Fig. 3 IC-ED of diverse water sources.

(A to C) Results from IC-ED of synthetic (A) groundwater, (B) brackish water, and (C) industrial wastewater containing 5 ppm Hg2+ by using 20 wt % PAF-1-SH in sPSF (applied voltage, −4 V versus Ag/AgCl). All Hg2+ was selectively captured from the feeds (open circles) without detectable permeation into the receiving solutions (solid circles). (Insets) All other cations were transported across the membranes to desalinate the feeds. The long duration of the IC-ED tests is an artifact of the experimental setup rather than the materials or IC-ED method (section 2.2 of the supplementary materials). (D) Breakthrough data for IC-ED using sPSF embedded with 10 or 20 wt % PAF-1-SH. Receiving Hg2+ concentrations are plotted against the amount of Hg2+ captured at different time intervals (in milligrams per gram of PAF-1-SH in each composite membrane). The predicted capacity (gray dotted line) corresponds to the Hg2+ uptake achieved by using PAF-1-SH powder under analogous testing conditions (section 1.10 of the supplementary materials). (Inset) Concentration of Hg2+ in the receiving solutions for IC-ED processes using neat sPSF (diamonds) and sPSF with 10 wt % PAF-1-SH (squares) and 20 wt % PAF-1-SH (circles), plotted versus time t normalized by the breakthrough time for the 20 wt % PAF-1-SH composite membrane, t0. Mean values determined from two replicate experiments are shown. Initial feed, 100 ppm Hg2+ in 0.1 M NaNO3; applied voltage, −2 V versus Ag/AgCl.

The caption:

Fig. 4 Tuning membranes to selectively recover various target solutes.

(A and B) Cu2+- (A) and Fe3+-capture (B) electrodialysis (applied voltages, −2 and −1.5 V versus Ag/AgCl, respectively) using composite membranes with 20 wt % PAF-1-SMe and PAF-1-ET in sPSF, respectively. HEPES buffer (0.1 M) was used as the source water in each solution to supply competing ions and maintain constant pH. The insets show the successful transport of all competing cations across the membrane to desalinate the feed. (C) B(OH)3-capture diffusion dialysis of groundwater containing 4.5 ppm boron using composite membranes with 20 wt % PAF-1-NMDG in sPSF (no applied voltage). The inset shows results by using neat sPSF membranes for comparison. Open and solid symbols denote feed and receiving concentrations, respectively. Each plot point represents the mean value determined from two replicate experiments. Gray dotted lines indicate recommended maximum contaminant limits imposed by the US Environmental Protection Agency (EPA) for Cu2+ (29), the EPA and World Health Organization for Fe3+ (29, 30), and agricultural restrictions for sensitive crops for B(OH)3 (31).

The impressive selectivity from the Supplementary Material:

The caption:

Fig. S28.

(Top) Single-component equilibrium uptake of Hg2+ and various common waterborne ions by PAF-1-SH powder (initial concentrations: 0.5 mM). (Bottom) Equilibrium adsorption of Hg2+ by PAF-1-SH powder in different realistic water solutions with 100 ppm added Hg2+. Uptake of Hg2+ by PAF-1-SH from a solution of Hg2+ only (100 ppm) in DI water is also shown for comparison. No loss in Hg2+ capacity occurs in the presence of various abundant competing ions in each solution, indicating exceptional multicomponent selectivity of PAF-1-SH for Hg2+. Reported values and error bars in each figure represent the mean and standard deviation, respectively, obtained from measurements on at least three different samples.

Be safe. Be well. Be vaccinated. Have a pleasant Sunday.

My wife avoided discussing the attire.

When we met, there was a woman on our campus who dressed that way, thus generating quite a bit of interest among science dorks, including frankly, me.

I married that woman.

My wife and I are having a disagreement.

On the day of my oldest son's second Pfizer...

...I say the lyrics are "My Corona."

She says it's about some woman with a name that sounds like Corona, She-Rona, Ricearona, something like that. Who would make a song up about such a weird name, anyway?

To me, the woman pictured in the tank top is wearing it to prep for her shot.

What say ye?

SARS-CoV-2 one year on: evidence for ongoing viral adaptation

The review paper I'll briefly discuss in this post is this one:

SARS-CoV-2 one year on: evidence for ongoing viral adaptation (Thomas P. Peacock, Rebekah Penrice-Randal, Julian A. Hiscox, Wendy S. Barclay, Journal of General Virology 2021;102:001584)

It's really not necessary for me to go too deeply into it, especially for people with a modicum of scientific training. Like all Covid-19 papers, it's open access. It has a very nice description of the mutations that are ongoing. I will, though, say a few things by way of explanation, particularly about those portions that might be misinterpreted so as to cause panic, for example the remarks made about E484K, the substitution of a glutamic acid residue by lysine.

I don't think that most people are truly aware of how marvelous the RNA vaccines really are - specifically the Moderna vaccine, for which I've had both doses, and the Pfizer vaccine, for which my sons have each had a first dose. This technology has been scaled rapidly to industrial production and should be available indefinitely to save lives, with new vaccine variants to address viral variants. That should build confidence, not fear. In this case Science has come through, big time.

If one does not have a routine familiarity with the one-letter amino acid code abbreviations, they can be found on the Wikipedia page, along with the structures of the amino acids: Wikipedia Proteinogenic Amino Acids. A mutant is usually described by the original residue, the position number, counted from the amino terminus toward the final carboxylic acid terminus, and then the new amino acid, as in E484K above.
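The notation convention described above is simple enough to capture in a few lines of code. Here is a hypothetical parser (the function name and structure are my own, purely for illustration):

```python
import re

# The 20 standard proteinogenic amino acids, one-letter codes.
AMINO_ACIDS = set("ACDEFGHIKLMNPQRSTVWY")

def parse_mutation(notation):
    """Parse a substitution like 'E484K' into (original, position, new).

    The position counts residues from the amino (N-) terminus toward
    the carboxylic acid (C-) terminus, per the convention in the text.
    """
    m = re.fullmatch(r"([A-Z])(\d+)([A-Z])", notation)
    if not m or m.group(1) not in AMINO_ACIDS or m.group(3) not in AMINO_ACIDS:
        raise ValueError(f"not a valid substitution: {notation!r}")
    return m.group(1), int(m.group(2)), m.group(3)

# E484K: glutamic acid (E) at residue 484 replaced by lysine (K).
print(parse_mutation("E484K"))
```

So E484K unpacks to glutamic acid at position 484 becoming lysine, and N501Y to asparagine at 501 becoming tyrosine.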

This review has a wonderful and detailed description of all the major variants. These are nicely summarized graphically near the paper's outset.

Fig. 1.
Genome organisation of SARS-CoV-2 with regions of interest annotated. Mutations of interest (for example those found in B.1.1.7) shown as both nucleotide and amino acid changes. Figure made using Biorender (https://biorender.com/).

The mutants are shown in both their nucleotide and amino acid forms. For example, in the extreme lower left hand corner of the diagram, for a substitution in the spike protein, the codon for A (alanine), GCT - guanine-cytosine-thymine (written in the reverse-transcribed DNA form, rather than the RNA form that occurs in the virus) - is replaced by GTT, which codes for V (valine).
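As a quick sanity check of the GCT → GTT example, here is a toy codon lookup. Only the alanine and valine codons are included; a full standard genetic code table has 64 entries:

```python
# Partial DNA codon table (standard genetic code), just the codons
# needed to illustrate the alanine -> valine substitution in the text.
CODON_TABLE = {
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # alanine
    "GTT": "V", "GTC": "V", "GTA": "V", "GTG": "V",  # valine
}

def translate_codon(dna_codon):
    """Translate a reverse-transcribed DNA codon to its one-letter amino acid."""
    return CODON_TABLE[dna_codon.upper()]

# A single C -> T change in the middle position turns GCT (A) into GTT (V),
# which would be written as an A -> V substitution in mutation notation.
print(translate_codon("GCT"), "->", translate_codon("GTT"))
```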

Some brief excerpts from the text:

The introduction begins with the usual recap of what SARS-CoV-2 is, and then has this brief description of its genome:

SARS-CoV-2 is a betacoronavirus, containing a ~30 kb positive-sense RNA genome, among the largest of any RNA virus (Fig. 1). Coronaviruses, such as SARS-CoV-2, avoid error catastrophe by encoding an exoribonuclease (nsp14) that confers a unique proofreading mechanism during viral RNA synthesis [4, 5]. Genome sequencing of SARS-CoV-2 throughout the course of the outbreak has revealed a nucleotide substitution rate of ~1×10−3 substitutions per year [6]. This is comparable to the substitution rate observed for Ebola virus (1.42×10−3) during the 2013–2016 West African outbreak [7]. However, SNPs are not the only genetic variation seen commonly in coronaviruses.

The exoribonuclease, which is a kind of error-checking transcription feature, was something of which I was unaware. (I'm hardly a virologist.) This is actually a good feature for the virus, since it slows the mutation rate somewhat while keeping the viral particles functional. HIV, for example, lacks this feature, and although many of the viral particles it produces are non-functional, many are viable, pathogenic and resistant to developed treatments, such as protease inhibitors. "NSP" refers to the "non-structural proteins," that is, proteins on the business end of viral replication.
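For a sense of scale, the quoted rate and genome size imply a rough per-genome figure. This back-of-the-envelope calculation assumes the ~1×10−3 rate is per nucleotide site per year (the usual convention for reporting substitution rates, though the excerpt does not say so explicitly):

```python
# Back-of-the-envelope: substitutions accumulated per genome per year,
# assuming the quoted rate is per nucleotide site per year.
genome_length = 30_000           # ~30 kb positive-sense RNA genome
rate_per_site_per_year = 1e-3    # ~1e-3 substitutions/site/year, from the excerpt

subs_per_genome_per_year = genome_length * rate_per_site_per_year
print(subs_per_genome_per_year)  # 30.0 (roughly a couple of changes per month)
```

Thirty-odd changes per year across a 30 kb genome is modest for an RNA virus, consistent with the proofreading role attributed to nsp14.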

There is a nice description of the process of viral replication:

Replication of the coronavirus genome and transcription of viral subgenomic mRNAs (sgmRNAs) are complex processes. The genome is roughly organised into two regions. The first two thirds of the genome is immediately translated and proteolytically processed in the host cell cytoplasm to generate the viral polymerase/transcriptase complex and other viral proteins. The remaining one third of the genome is expressed and translated through a nested set of sgmRNAs; this includes the spike glycoprotein and other structural and accessory proteins. These sgmRNAs are 5′ and 3′ co-terminal with the genome; the 5′ end contains a leader sequence that is present on the 5′ end of the genome. Along the genome, preceding each ORF is a transcription regulatory sequence (TRS). The prevailing thought is that an integral part of the transcription mechanism in coronaviruses for the synthesis of viral sgmRNAs involves a discontinuous step. The easiest way to visualise this is that the polymerase/transcriptase complex binds to the 3′ end of the positive strand and proceeds along the genome in a 3′ to 5′ direction synthesizing a negative strand. When the polymerase/transcriptase complex reaches a TRS, the newly synthesized negative strand can translocate to the 5′ leader sequence of the genome where it is then copied. This forms a negative sense sgmRNA that is then copied into the positive sense sgmRNA [8].

"SgmRNA" here refers to "subgenomic messenger RNA." This arrangement allows a single genomic template to yield messages for several different proteins by skipping over some regions of the code, making the coding efficient.
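The discontinuous-transcription scheme described above can be caricatured in a few lines of code. All of the sequences below are stand-ins invented for illustration, not real SARS-CoV-2 motifs; the point is only the geometry: each sgmRNA fuses the 5′ leader onto the genome from a TRS to the 3′ end, so all sgmRNAs share both termini with the genome:

```python
# Toy sketch of discontinuous transcription. LEADER and TRS are
# placeholder sequences, not real viral motifs.
LEADER = "AUUAAAGG"   # stand-in 5' leader sequence
TRS = "ACGAAC"        # stand-in transcription regulatory sequence motif

def subgenomic_mrnas(genome):
    """Return the sgmRNAs implied by each TRS: the 5' leader joined to
    the genome from that TRS through the 3' end, so every sgmRNA is
    5' and 3' co-terminal with the genome."""
    mrnas = []
    start = 0
    while (i := genome.find(TRS, start)) != -1:
        mrnas.append(LEADER + genome[i:])
        start = i + 1
    return mrnas

# A cartoon genome: leader, a large ORF, then two TRS-led "genes".
genome = LEADER + "...orf1ab..." + TRS + "spike" + TRS + "nucleocapsid"
for m in subgenomic_mrnas(genome):
    print(m)
```

Running this prints two nested messages, one per TRS, each beginning with the leader and ending at the genome's 3′ end, which is exactly the nested-set structure the excerpt describes.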

Here's a picture of the spike protein, showing several of the important major mutants as well as the target ACE2 protein to which the virus binds:

Fig. 2.
Spike mutations of interest mapped to the spike trimer. Mutations shown in red, ACE2 shown in yellow, spike monomer in RBD ‘up’ conformation shown in green, spike monomers in RBD ‘down’ conformation shown in pink and blue. Structure made using PyMOL using PDBID 7A94 [24].

"RBD" refers to "receptor binding domain." Shown in red is the location of the E484K mutation in this region, which confers increased virulence on the strain. The "NTD" region, the "N-terminal domain," is the region to which antibodies, including those generated by the vaccines, bind. The distance between these regions is a good thing; it explains why increasingly contagious and virulent strains, whose infectivity is a function of the receptor binding domain, do not necessarily acquire resistance to antibodies.

By the way, for people who run around questioning whether the virus was a "lab accident," I suggest reading the section on ferrets and mink, Y453F, the tyrosine to phenylalanine mutation, to show that these viruses can and do jump species.

Even though the antibodies bind to the NTD region and not the RBD region, there is some concern about the effect of the E484K mutant on the total effectiveness of the vaccine. This may be a function of kinetics more than anything else - mere speculation on my part - but if that were the case, the vaccine would still greatly reduce virulence, and much data suggests as much.

Each of the major known mutants is given an overview in the full paper, which, as discussed, is open access. The "scary" part about E484K - which should not be too scary - is briefly excerpted here:

In recent months, several independent lineages of viruses containing the spike glycoprotein mutation E484K have been detected worldwide – once in South Africa (B.1.351 or 20B/501Y.V2) and at least twice independently in Brazil (P.1 or 20B/501Y.V3, and P.2) [121–123]. This mutation is of particular concern as independent studies have suggested E484K is a bona fide escape mutant to many convalescent antisera [108, 119, 124]. Both Brazil (particularly the Amazonas region) and South Africa experienced high disease burdens in 2020 and likely have high seroprevalence which may have driven emergence of these antigenic variants [125, 126]. This is further reinforced by several case studies showing E484K containing variants reinfecting healthcare workers in Brazil and a high rate of reinfection of seropositive individuals in the placebo arm of a vaccine trial in South Africa [122, 126–129]. Concerningly, recent evidence suggests that these E484K variants likely partially or fully escape vaccine- or naturally immunity-derived antisera [66, 108, 111, 130].

Whether these concerns are valid is not entirely clear. Here for reference is a paper suggesting that these concerns may be inflated:

Neutralization of SARS-CoV-2 spike 69/70 deletion, E484K and N501Y variants by BNT162b2 vaccine-elicited sera (Xie, X., Liu, Y., Liu, J. et al. Nat Med 27, 620–621 (2021))

Be safe, be well, be vaccinated.

Have a nice Sunday.

Ternary Phase Diagram of the Neptunium/Plutonium/Americium System at 1300K

Alloying behaviour among U, Np, Pu and Am predicted with the Brewer valence bond model (Ogawa, Journal of Alloys and Compounds, Volume 194, Issue 1, 13 April 1993, Pages 1-7)

Somehow, I'm anxious tonight and can't sleep, so I thought I'd contemplate this phase diagram, which represents to me a perfect way of getting rid of large numbers of nuclear weapons while simultaneously saving humanity from itself.

A strange little thing to ponder, I know, but that's where I am tonight, hoping to dream of a better world. We could do so much better than we are doing.

The Unknown Dissident.

The race to curb the spread of COVID anti-vax propaganda using 2020 election techniques...

This is from Nature News, probably open access: The race to curb the spread of COVID vaccine disinformation


Researchers are applying strategies honed during the 2020 US presidential election to track anti-vax propaganda.

Some excerpts:

In March, Twitter put its foot down: users who repeatedly spread false information about COVID-19 vaccines will have their accounts suspended or shut down. It was a new front in a high-stakes battle over misinformation that could help to determine how many people get vaccinated, and how swiftly the pandemic ends.

The battle is also being fought in computer-science and sociology labs across the United States, where scientists who track the spread of false information on social media honed their skills during the US presidential election last year. They are now shifting focus, from false claims that the election was ‘stolen’ to untruths about COVID-19 vaccines. Some surveys suggest that more than one-fifth of people in the United States are opposed to receiving a vaccine.

The epic battle against coronavirus misinformation and conspiracy theories

Researchers are launching projects to track and tag vaccine misinformation and disinformation on social media, as well as collecting massive amounts of data to understand the ways in which misinformation, political rhetoric and public policies all interact to influence vaccine uptake across the United States.

Scientists have identified a wide variety of disinformation surrounding COVID-19 and vaccines, ranging from conspiracy theories that the pandemic was engineered to control society or boost hospital profits, through to claims that the vaccines are risky and unnecessary.

One research consortium, dubbed the Virality Project, is expanding on strategies pioneered during the election to help inform how platforms such as Twitter and Facebook tackle vaccine disinformation. Created by researchers at multiple US institutions — including Stanford University in California, the University of Washington in Seattle and New York University — the team is working with public-health agencies and social-media companies to identify, track and report disinformation that violates their rules...

It does seem like a part of our population has gone insane. It's an interesting, albeit scary question, how and why this happened.

Pic from the article:

So the two week virtual American Chemical Society meeting is over. I'm just filled with joy.

It's not joy that it's over, but joy in that I lived to see it, without even leaving my office, or in the evening, home.

I learned so much. It was so rich, so many subjects.

It featured, by zoom, a diverse group of scientists from all over the world, from Asia, to Africa, to Europe, South America, and of course, North America, working together, at home, far from home, brilliant people, women, men, young, old, middle aged, ...especially the young people, fabulous science, lasting, on East Coast time, late into the night.

...especially the young people...

...especially them...

...they are going to be a great generation, and they fill me with hope; they are so sharp, so wonderful, so liberating, so free...

I feel really bad for the young 5th year grad student whose presentation just crashed.

You could see how nervous she was and how hard she had worked to put it all together, but she struggled with PowerPoint and got only so far - English not being her first language - and then the whole thing crashed.

The talk, if you got through all that, was quite interesting, involving novel phosphoramidites and novel protecting groups for DNA synthesis, but then it all just crashed...

She probably spent a long time preparing, and then this...

I hope she has a great career though, she's a smart kid doing good work.