On the skepticism surrounding the "Goodenough battery"

tl;dr: It is my belief that the claimed mechanism by which the "Goodenough battery" is said to work is not supported by the evidence given, and appears to violate the first law of thermodynamics. There is considerable reason (and precedent) to suspect that the energy stored and released by the cell is significantly influenced by decomposition reactions of the electrolyte and/or reactions of impurities in the electrolyte. Most of the claims related to the properties of this battery are also not supported by the available evidence.

Last month's announcement of a safe, cheap, fast-charging, all-solid-state and long-lived battery technology from John Goodenough's lab was received with much fanfare by the science and technology media. The huge interest in this technology is in no small part due to the reputation of Goodenough, perhaps the highest-profile scientist in the Li-ion battery field thanks to his role in the discoveries of both lithium cobalt oxide and lithium iron phosphate, two of the most important electrode materials in Li-ion batteries. Goodenough was widely tipped for the Nobel Prize in Chemistry last year for these findings, and is still active in the field at 94 years of age. It is naturally intriguing that one of the giants of the field — and at his age — might have "done it again" and at last invented the super-battery at a time when the spotlight is on energy storage like never before.

However, the reaction within the battery field itself has been one of quiet but nonetheless strong skepticism. Some of this skepticism has surfaced, but I'm not convinced that any of the coverage fully explains how significant — and problematic — the claims surrounding this newly christened "glass battery" or "Goodenough battery" truly are. The observation that this battery appears to break the laws of thermodynamics is the most significant and eye-catching aspect to this announcement, but this is merely the tip of the iceberg. My feeling is that someone not intimately familiar with the field may readily get the impression that "the battery works, but scientists don't know why" from the existing coverage — rather than "this may not be a battery at all, even though it may appear to work", which I would argue is more accurate.

I am also concerned that several major claims have been made in the press release and promoted in the media which are not actually supported by what Goodenough and his co-authors have shown in their published articles. This is by no means unique to this story, but is a common and serious problem with popular scientific journalism, at least when it comes to reporting on developments in battery science.

For these reasons, I feel obliged to make some effort to try and explain why I too am skeptical about the "Goodenough battery". My own thoughts on this have come after a considerable amount of reading of the original articles related to this work[1-4], as well as the previous literature in this area. I wish to stress I have no stake in this particular area — I am not working on any similar or competing topic, and I have never met John Goodenough or any of his co-authors. But this story has captured my curiosity.

I will try and keep my explanations as simple as possible, but I want to be scientifically unambiguous and objective. Hopefully, someone with the time and resources to repeat these experiments, confirm or disprove the "Goodenough battery", and publish it in a peer-reviewed journal will eventually do so. That person probably won't be me, because this is not my area. Until then, I'm happy to debate my analysis in the comments and update this post should any new information come to light.

Thermodynamics

Let's start with the most significant claim: the operating mechanism of the battery. Dan Steingart at Princeton has already explained this well and in detail in his own excellent post, but I will briefly describe this here anyway.

The first experiment in [1] describes a cell with a Li metal negative electrode, the solid electrolyte, and a positive electrode which was a mixture of sulfur and carbon. When the cell was discharged, the amount of charge passed was more than ten times what could theoretically be obtained by discharging the sulfur, as would be expected of a conventional lithium-sulfur cell, and instead corresponded to 90% of the theoretical capacity of the Li metal negative electrode. Immediately, it is concluded that:

[...] the sulfur acts as a redox center determining the voltage of the cell at which electrons from the anode reduce the Li+ at the electrolyte/cathode interface to plate lithium rather than reducing the sulfur, so long as the voltage remains above 2.34 V; below 2.34 V, the S8 molecules are reduced to Li2Sx (1 < x < 8 ) [...]

A battery operating in this way is without precedent. In terms of the half reactions of the battery, we have Li metal being oxidised (stripped) off the negative electrode:

$$ \text{Li} \longrightarrow \text{Li}^+ + \text{e}^-$$

and being redeposited back on the positive side:

$$ \text{Li}^+ + \text{e}^- \longrightarrow \text{Li}$$

Combining these two equations gives us no overall chemical reaction, but a battery voltage of ~2.5 V and a theoretical capacity determined only by the lithium electrode, for a total of 8,500 Wh/kg(!!). This would translate into perhaps ten times higher energy density than Li-ion batteries (a good rule of thumb is to take the energy density of the materials and divide by four — so in this case a cell level energy density of 2,000 Wh/kg would be a reasonable estimate), and more than any other battery system known to science.

But with no overall chemical reaction, where does the energy come from? In response to the public skepticism, John Goodenough himself provided an explanation to Computer World (my emphasis added):

The answer is that if the lithium plated on the cathode current collector is thin enough for its reaction with the current collector to have its Fermi energy lowered to that of the current collector, the Fermi energy of the lithium anode is higher than that of the thin lithium plated on the cathode current collector.

It is worth noting at the outset that this explanation is not given or even implied in the original paper, nor in Goodenough's more recent single-author paper in ACS Catalysis[5]. What is being described here, as far as I can tell, is underpotential deposition, which is well known. It is true that it can be more favourable for a metal ion (let's say, copper, as an example) to be deposited onto a different metal (such as platinum) than to deposit on itself. Copper will deposit onto platinum at a higher voltage than it will onto copper itself. However — once one monolayer of copper atoms is deposited on the platinum, any further deposition of copper is onto copper, which occurs at the equilibrium voltage.

Much the same would be true for the lithium battery: a single layer of lithium atoms a fraction of a nanometer thick may indeed deposit onto a different substrate at a higher voltage (but 2.5 V higher is huge — lithium doesn't even alloy with any metals at such high voltages). At most, this effect might occur for a few monolayers, but still corresponding to a very small amount of charge. Once lithium is depositing onto itself, it will do so at the equilibrium voltage, and there will be no difference in the chemical potential of lithium on either side of the cell, and hence no voltage. There are countless examples of this behaviour, and why it should be any different in this situation is unexplained.
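
To get a sense of how large a 2.5 V difference really is, it can be converted into an energy per mole of deposited lithium (a rough back-of-envelope figure of my own, not something calculated in the paper):

$$ \Delta G = -nF\Delta E = -(1)(96{,}485 \text{ C mol}^{-1})(2.5 \text{ V}) \approx -240 \text{ kJ mol}^{-1} $$

A stabilisation of that size for lithium depositing onto bulk lithium would be extraordinary; typical underpotential shifts are on the order of tens to a few hundred millivolts, and only for the first monolayer or so.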

Is there even lithium metal deposition on the positive side?

The problem with getting stuck into a discussion about how this battery seems to violate the first law of thermodynamics is that it is easy to overlook a more fundamental issue: by any conventional standard, there is no evidence that lithium has been deposited onto the positive side of the cell in the first place. Again from [1]:

In order to verify this conclusion, we disassembled the cell of Fig. 1 and examined the electrodes with the naked eye and with SEM EDS analysis, as shown in Fig. 2, which indeed shows lithium plated on the cathode current collector and no evidence of metallic lithium remaining on the stainless steel at the anode or the anode side of the electrolyte after full discharge of the lithium anode.

Firstly, no EDS spectra are given in the paper (and there is no Supporting Information). In any case, EDS is a particularly poor technique for studying light elements such as lithium. Most electron microscopes are incapable of even detecting the element (although it seems it can be done with the latest instruments), and even if they could, they would not be able to distinguish metallic lithium from any other material containing lithium. Neither the photographs nor the electron micrographs provided look like lithium metal (I say this as someone with experience of looking at lithium metal with an electron microscope), and there is no point of reference (a micrograph of a pristine or "re-charged" electrode, for example). I accept that it is probably not that easy to unambiguously confirm the presence of metallic lithium, but considering the significance of the claims being made, that confirmation is essential.

So what is happening?

Before I go on I want to calculate some numbers (capacities and currents) related to the battery tests made in the paper, because this is useful when assessing the cell "performance". There are three different "batteries" tested:

  • A lithium negative electrode and a sulfur/carbon positive electrode
  • A sodium negative electrode and a ferrocene/carbon positive electrode
  • A lithium negative electrode and a MnO2 positive electrode

Of these, I will only consider the first one; there is simply not enough information provided about the others to do any calculations with. However, they all appear to be constructed and to behave similarly, and comparable behaviour between the different cells seems to be implied in the paper: the complete description of the sodium system, for example, is only two sentences long.

The "lithium-sulfur" cell is said to contain 1.99 mg of sulfur, and in the first experiment is discharged until the total charge is 10.8 times the theoretical capacity of the sulfur (which is 1,672 mAh g-1). I can estimate this as being 1.99 mg × 1.672 mAh mg-1 × 10.8 = 35.9 mAh. This is quite a lot of charge for a laboratory coin cell - I make decent (if I say so myself) lithium-sulfur batteries in coin cells of the same size and they typically have a capacity of 3 — 5 mAh.

The current though is quite low - this 35.9 mAh is discharged over 28 days, which implies a constant current of 35.9 mAh / 672 h = 53 µA through the whole cell. The current is also stated to be 30 mA/g — I assume based on the mass of sulfur — which comes out to be 30 µA/mg × 1.99 mg = 59.7 µA. These numbers are a bit different, but fairly close, so I seem to be in the right ballpark. The authors state that:

At voltages V > 2.34 V, the cell is rechargeable and the sulfur is not reduced.

A voltage of 2.34 V is reached after the cell has discharged about 8.5 times the theoretical capacity of the sulfur, so repeating the previous calculation, I understand that the reversible capacity of the cell should be 28.3 mAh. Still quite a lot.

The "rechargeability" of the cell is demonstrated in the next experiment. This time, the current is increased slightly to "40 mA/g", so based on the previous numbers I estimate the current is now at most 79.6 µA. The cell is discharged and charged for 10 hours each, so now the total charge passed through the cell on each charge/discharge cycle is 7.9 µA × 10 h = 0.796 mAh. This is just 2.8% of the reversible capacity of the cell, and how the cell behaves when discharged and charged more deeply (or more quickly) is not shown. It is perhaps worth noting at this point that with this narrow window, the 8,500 Wh/kg mentioned earlier becomes closer to 240 Wh/kg. Using my "divide by four" rule of thumb again, this might correspond to about 60 Wh/kg on the cell level, so quite a lot lower than today's Li-ion batteries. The maximum reversible capacity of the other batteries is impossible to estimate.

An inadvertent Li-air battery?

Regardless, something is charging and discharging, but if it is not lithium metal plating, then what? The suggestion that maybe oxygen leaked into the cell, inadvertently forming a lithium-air battery, was discussed in the Quartz article linked to previously. In the lithium-air battery, oxygen is reduced below approximately 2.7 V to form lithium peroxide. The question of whether a leak could be involved is a fair one: the authors themselves, in describing this experiment, say:

The charge and discharge voltages show a good coulombic efficiency over 1000 h; the cycling was continued beyond 46 cycles despite an imperfect seal of the cell.

However, the paper's first author, Maria Helena Braga, has rejected the explanation, being quoted in the same Quartz article as saying:

Well if we have a Lithium-air battery then we have a very good Lithium-air battery

Since the topic of the Li-air system has come up, it's worth remembering what has defined the research in that field over the last decade. K. M. Abraham reported the first rechargeable, non-aqueous lithium-air battery in 1996, and showed that the discharge product was lithium peroxide, Li2O2. These batteries used a polymer electrolyte and the reversible capacities were relatively low. About a decade later, other researchers did similar experiments using liquid electrolytes, showing much higher capacities, and demonstrating rechargeability over tens of cycles. But... they didn't check that Li2O2 was formed on discharge, or that it gave back the oxygen on recharge.

It was later found that only a very small amount of the charge passed on discharge went into the formation of Li2O2 — and no oxygen was reformed on charge. Even though the cell looked like a battery, almost all the charge passed went into burning off the electrolyte. The "cycle life" depended only on how much electrolyte was in the cell — nothing was reversible. For this reason, a lot of work has gone into developing better methods to determine what truly happens inside the Li-air battery during usage — both to check that the "correct" substances are formed and that the unwanted decomposition products are avoided — and to use these techniques to search for more stable systems. The recent history of the Li-air field should be a warning to all battery scientists to be careful about what unwanted reactions might be occurring.

But let's consider this a bit further. The positive electrode in the first experiment is 10% carbon by weight, and 47% sulfur. If there was 1.99 mg of sulfur in the positive electrode, then presumably there was 0.42 mg of carbon in the electrode. Dividing the 35.9 mAh discharge by this mass of carbon gives us a whopping 85,500 mAh per gram of carbon (mAh gC⁻¹). The charge relative to the mass of carbon in the electrode has long been a common performance metric for Li-air batteries (because it's not easy to measure the mass of the oxygen itself), so we can make a comparison here. 85,500 mAh gC⁻¹ is a lot — a few thousand is more reasonable for most Li-air battery electrodes. For this reason, I don't think an oxygen leak alone can account for this observation — but it is only one observation.

What about the glass electrolyte? Could it decompose?

I'll describe a little bit about the electrolyte first. It's based on a crystalline material, Li3OCl, which shows high ("superionic") conductivity at room temperature. Li3OCl belongs to a relatively new class of ion-conducting solids which have an "antiperovskite" crystal structure. With some amount of doping (exchanging two Li+ for a single M2+, leaving one Li site vacant; replacing 0.01 Li+ with 0.005 M2+ per formula unit, for example, gives the A2.99B0.005 compositions listed later), the authors claim to have created an amorphous material with an even higher conductivity.

In the four papers[1-4] published by these authors on the reported glass electrolyte, there is only a limited discussion of the stability of the electrolyte in[3]. The electrochemical stability is determined to be at least 8 V. I could question this number, because there seem to be some inconsistencies in the experiments and the results — but for the sake of this discussion let's go with it. An earlier paper, however, calculated the properties of crystalline Li3OCl from first principles and predicted that it should decompose into Li2O2 and LiCl above 2.55 V. This might be an oxidation of the form:

$$ 2 \text{ Li}_3\text{OCl} \longrightarrow \text{Li}_2\text{O}_2 + 2 \text{ LiCl} + 2 \text{ Li}^+ + 2 \text{ e}^- $$

occurring at a voltage similar to the charging voltage of the cell in [1]. Further oxidation into lithium perchlorate (LiClO4) was also predicted at higher voltages. It is also worth noting that another previous paper describes the synthesis of Li3OCl films by pulsed laser deposition (PLD) — and even the construction of batteries, with lithium cobalt oxide and graphite electrodes. In that paper, the batteries show the behaviour expected from the electrode materials, and have been charged and discharged for 20 cycles. But — there is certainly evidence of continuous over-charging, which might be related to oxidation of the electrolyte. Nothing is obvious here, but this is something that merits closer consideration.

Is the electrolyte impure?

At this point I have to make clear my lack of experience in solid electrolytes and this type of chemistry. I know, however, that some scientists have privately suggested that the conditions used to prepare the electrolyte are not sufficient to give a material with the claimed composition. Having read the four papers [1-4] in more detail, I think there are numerous reasons to suspect so, with little evidence available to determine the true composition of the electrolyte. Firstly, I think it is important to note that the electrolyte composition is given slightly differently in each paper:

  • In [1], it is A2.99B0.005O1+xCl1-2x
  • In [2], it is A2.99B0.005OCl1-x(OH)x
  • In [3], it is A2.99B0.005OCl
  • In [4], it is A2.99B0.005OCl.xH2O, (x < 1).

where A is Li or Na, and B is the dopant (e.g., Mg, Ca or Ba). Since [1] and [4] both reference paper [2] for the synthesis, it is strange that the composition should be given differently each time.

The original paper reporting the crystalline Li3OCl describes the synthesis as follows:

In a typical synthesis, 2.40 g of LiOH [...] and 2.12 g of LiCl [...] are ground together for several minutes with a mortar and pestle. The resulting paste is placed in a quartz tube and heated to 330−360 °C (past the melting point Tm = 282 °C of the product) under vacuum for several days. During heating, water is effectively removed with a condensation (liquid nitrogen) trap and a high-vacuum pump [...] At the end of the synthesis, the apparatus is flushed with a dry inert gas (e.g., Ar) and the very hygroscopic sample is never subsequently exposed to atmospheric moisture. Continuous heating (330−360 °C at melt) and removal of water under high vacuum drive the chemical equilibrium toward the formation of the Li3OCl product: $$ 2 \text{ LiOH} + \text{LiCl} \longrightarrow \text{Li}_3\text{OCl} + \text{H}_2\text{O}$$

Quick summary: heat above melting point, high-vacuum pump, liquid nitrogen trap, and never expose it to the atmosphere. In [3], the first of the papers by Braga et al., the reaction conditions are somewhat simpler: after mixing the same precursors and adding "a few drops" of water:

a paste was formed and introduced in a Teflon reactor, which was firmly closed. The reactor was heated at 220–240 °C for at least 4 days before it was opened to let the water evaporate for approximately 1 h. Then it was closed in glassware and allowed to cool to room temperature. A vacuum pump was used to dry the water out.

Again, to summarise: excess water is evaporated off at ambient pressure, and the product is then vacuum dried at room temperature. This is considerably less extreme than in the original paper. Perhaps Li3OCl can still be obtained, but its purity is unknown - there is at least one impurity phase the authors detect, Li5(OH)2Cl3, and there may conceivably be other amorphous impurities. I am not sure that even lithium hydroxide monohydrate (LiOH.H2O) can be fully dehydrated at 240 °C and atmospheric pressure, let alone condensed into the oxide. The vacuum pressure and duration are surely important, but no further details are given. It is also worth noting that the authors say that the added water is essential. In [2], the later paper, the conditions have been simplified even further.

The precursors were weighed and mixed in stoichiometric amounts for 25 g batches; 10 to 30 mL of deionized water was added to the powder mixture before the solution was enclosed in a teflon reactor that was heated to 230 to 250 °C for 2–3 h in a heated sand bath. The hot reactor was then opened to evaporate water and HCl from the glass/amorphous products at the heating temperature. A slurry was prepared by grinding the glass/amorphous product to a powder in liquid ethanol (99.9%, Merck) to prevent attack of the particle surfaces by humid air. Similar procedures have been described for glass-electrolyte experiments with gold blocking electrodes and alkali-metal electrodes.[2]

That reference at the end there is in fact the authors referring to their own previous work which we have just seen — [3] in my reference list below. Now, instead of a few drops of water, 10–30 mL is added, and the product is ground to a powder in a solvent which is also hygroscopic (that is, it absorbs water from the air). No further vacuum drying is mentioned — only a 130 °C heating step to remove the ethanol. For those who are unfamiliar, lithium salts in general are very, very hygroscopic, and many can keep absorbing water until they dissolve themselves into the water they've absorbed (I've seen LiCl do this in a closed container, for example).

The evidence for the formation of the desired product is provided by differential scanning calorimetry (DSC) and dielectric spectroscopy. Neither of these techniques provides information about chemical composition. I have little experience with the former and none with the latter, so I hesitate to make my own interpretations from the data — but the data does not speak for itself and I am unconvinced by the arguments. A simple technique such as thermogravimetric analysis (TGA) would easily and clearly support claims of water loss at specific temperatures, but this is not included. On the available evidence, I think there are good reasons to expect a significant amount of hydroxides and water in the electrolyte. The presence of water, especially, could be a major factor in the unprecedentedly high conductivity.

The presence of water also complicates the possible side-reactions in the cell enormously. Water can be electrolysed directly: either oxidised to oxygen gas and H+, or reduced to hydrogen gas and hydroxide, both with the transfer of 2 electrons per water molecule. Because water has such a low molecular mass, a lot of charge can be passed in its electrolysis: approximately 3 mAh is enough to electrolyse a single microlitre of water. Whether or not water is electrolysed directly in the "Goodenough battery" I couldn't say — in fact I would be surprised if it were the case — but water can cause all sorts of other reactions. I have already mentioned the possibility for formation of Li2O2 from electrolyte decomposition; this can react with water to form hydrogen peroxide and lithium hydroxide:

$$ \text{Li}_2\text{O}_2 + 2 \text{ H}_2\text{O} \longrightarrow \text{ H}_2\text{O}_2 + 2 \text{ LiOH}$$

Hydrogen peroxide might be oxidised or reduced by the applied current itself, or it would slowly decompose into water and oxygen. The consequences of oxygen being present in the cell I have already discussed. And if there is a leak in the cell, as has also been argued, then water as well as oxygen could be leaking in, fuelling these side-reactions further.
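
As a quick check of that "3 mAh per microlitre" figure mentioned above: one microlitre of water is about 1 mg, or roughly 1/18 of a millimole, so with two electrons transferred per molecule the associated charge is

$$ Q = 2F \times \frac{0.001 \text{ g}}{18.02 \text{ g mol}^{-1}} \approx 10.7 \text{ C} \approx 3 \text{ mAh} $$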

Unfortunately, at this point, it is all speculation. We will not know what is really happening until the original authors or another group repeat these experiments but with a much more detailed analysis of the chemistry.

1,200 cycles? Fast recharge?

To finalise this post, which has become more of a tome than I had originally anticipated, I want to address some of the claims that have been made and repeated in the media which I feel are either unsubstantiated or misleading. I will go through them (mostly) in the order they appear in the original press release.

A team of engineers led by 94-year-old John Goodenough [...] has developed the first all-solid-state battery cells that could lead to safer [...]

This battery is not the first all-solid-state battery, and it is hard to see on what basis this claim is made. Solid polymer electrolytes have been known for decades, and for years now Bolloré have had thousands of cars on French roads powered by an all-solid-state battery with a solid polymer electrolyte and a lithium metal anode. Solid electrolytes based on ceramics are also well-known, and Sakti3 — despite their controversies — were bought by Dyson a couple of years ago for $90m. I'm sure there are a number of other companies developing similar technologies. This isn't even the first report of a battery based on the Li3OCl electrolyte - at least one paper I mentioned previously demonstrates a working battery, even if it is a small, low-power one.

The researchers demonstrated that their new battery cells have at least three times as much energy density as today’s lithium-ion batteries.

This I discussed before. Based on the claims in the paper, I would say about ten times — but maybe three was chosen to sound more reasonable. The claimed energy density of 8,500 Wh/kg is more than three times the theoretical energy density of lithium-sulfur batteries, which are themselves already close to twice the energy of Li-ion batteries, and thought to be capable of much more. But if we're talking about how much reversible capacity is actually demonstrated, then this number is much lower indeed.

The UT Austin battery formulation also allows for [...] a faster rate of recharge (minutes rather than hours).

This is unsubstantiated. The fastest charge or discharge shown for any of the "batteries" in [1] takes four hours, and only a small percentage of the cell's claimed capacity is "recharged" at all.

Instead of liquid electrolytes, the researchers rely on glass electrolytes that enable the use of an alkali-metal anode without the formation of dendrites.

Protection against dendrites is one obvious advantage of a solid electrolyte over a liquid, but it is no guarantee. The ability of dendrites to grow through soft polymer electrolytes has long been known, and it did not take long for me to find a paper reporting the growth of dendrites through a ceramic electrolyte. A completely non-porous, grain boundary-free electrolyte is needed to completely protect against dendrite growth. In [1] and [2], Braga et al. claim that the 130 °C heating step after preparing the electrolyte film "reform[s] the solid glass electrolyte without grain boundaries". It is hard to see how this step, below the melting point, without any external pressure, and with the "glass" held inside a fibreglass or paper sheet, can accomplish this — no evidence is provided.

In experiments, the researchers’ cells have demonstrated more than 1,200 cycles with low cell resistance.

The largest number of cycles shown in the papers is 250. If the authors have observed cells cycling for 1,200 cycles, they have not shown it. Nor have they shown evidence of low cell resistance — only in symmetrical cells, which are not the same. The "batteries" show relatively large differences between charge and discharge voltage, which may in fact turn out to be a result of high cell resistance.

This is the first all-solid-state battery cell that can operate under 60 degree Celsius.

Again, it's hard to see on what basis this claim is made. Here's a solid polymer electrolyte battery at room temperature from 1990. Here's a more recent all-solid-state lithium-air battery, at room temperature. Even in my own research group, I have colleagues making polymer electrolyte-based batteries which operate at room temperature. I am sure I could find more examples if I looked.

And finally, let's consider the assertion that the battery would be safe, because of the non-combustible electrolyte. I am being silly here, but let's just take these claims to their logical conclusion. We have a battery which is said to deliver 8,500 Wh/kg on the materials level (let's call it 2,000 Wh/kg on the cell level), and can charge in minutes. It's safe to say if it can do that, it can also discharge in minutes, or less. So what happens if, God forbid, something goes wrong and the battery short-circuits, maybe because of damage? The cell will turn that 2,000 Wh/kg into heat as fast as it can, and start to get hot — really hot. I believe a good rule of thumb for the maximum temperature is about a 1 °C rise per Wh/kg of the battery energy density. In which case, the battery will be glowing red hot, melting, and even if it's not on fire itself, anything flammable in the vicinity will be. What really makes any high energy density battery safe?

Conclusions

I am deeply skeptical about the claims made about the "Goodenough battery". To accept the claimed mechanism of operation means accepting something at odds with two centuries of accumulated knowledge on electrochemistry and battery science, on the basis of evidence significantly below the generally accepted standard, and with a large number of reasons to suspect an alternative, conventional explanation. There is no discussion in the paper [1] in which an alternative explanation is even considered, and there is almost no discussion of the related literature besides the authors' own work. It is remarkable that this, and many of the other points I and others have already raised publicly, do not seem to have been addressed at the peer review stage. Both the most recent paper [1] and especially the previous paper [2] (which I have not discussed here in any detail) invite far more questions than they answer. The recent public responses by Braga and Goodenough have not — so far — clarified any of these concerns, but have rather created more questions.

Given the situation it's hard to avoid caveating these conclusions with "there's a small chance it may be right". Maybe it is — anyone can be wrong. However, given the available evidence, the scientist in me is insisting there is essentially no reason to believe any of the central claims, and every reason to suspect that the observed behaviour can be explained by other processes, including redox activity of these supposed "catalytic relays" (sulfur, MnO2, etc.). The electrolyte is not well characterised, and there is simply not enough evidence available in the authors' papers, or their patent application, to draw any firm conclusions about what is really happening. As a result, there is almost no foundation to any speculation about the potential applications of this new "battery". I want to stress something I mentioned at the top of this page, that the question surrounding this battery is not "the battery works, but we don't know why", but instead "this may not be a battery at all, even if it may appear to work". The difference is huge.

It is discomforting to write this post knowing that John Goodenough has written authoritatively about these developments in his own article[5] and is privately expressing confidence that these results will be confirmed. Goodenough is, after all, a decorated scientist who made significant contributions to human knowledge before even my parents were born. But, I know the skepticism in the field surrounding this paper is intense, and I think in this case it is very much in the public interest for it to be debated openly. I hope that this post will give some food for thought.

References

[1] M.H. Braga, N.S. Grundish, A.J. Murchison, J.B. Goodenough, Energy Environ. Sci. 2017, 10, 331
[2] M.H. Braga, A.J. Murchison, J.A. Ferreira, P. Singh, J.B. Goodenough, Energy Environ. Sci. 2016, 9, 948
[3] M.H. Braga, J.A. Ferreira, V. Stockhausen, J.E. Oliveira, A. El-Azab, J. Mater. Chem. A 2014, 2, 5470
[4] M.H. Braga, J.A. Ferreira, A.J. Murchison, J.B. Goodenough, J. Electrochem. Soc. 2017, 164, A207
[5] J.B. Goodenough, ACS Catal. 2017, 7, 1132


Update - 1/5/2017

As has been noted in the comments below, John Goodenough recently did an interview with Slashdot in which he answered a range of questions from the commenters there. One aspect of the operation of the cell which has been described in more detail relates to the thickness of the lithium supposedly deposited at the positive electrode:

The key to the concept of a battery voltage that takes metallic lithium from the anode and plates it on the cathode is that a thin lithium (order of a micron thick) current collector is plated on a copper (or other) cathode lithium having a chemical potential over 3.5 V below that of metallic.

And again in another reply (my emphasis added):

...plating on the cathode from the anode can only give a voltage for a finite thickness of the plated material on the cathode side. We have not yet obtained a good measure of the thickness of the cathode plating that is viable, but it appears to be micro not nanometers thick. Optimizing the capacity will involve the ability to optimize the surface area of the cathode material. This optimization has yet to be performed, but we can plate sodium as well as lithium.

"Microns" would be much too thick for underpotential deposition, as I described earlier on, so whatever limits or determines this thickness remains a mystery. Most concerning, however, is that this directly contradicts one of the central claims of [1]:

...the ability to plate/strip an alkali-metal anode in contact with a Li-glass or Na-glass electrolyte allows a totally unconventional strategy for the design of a rechargeable battery in which reversible plating of an alkali metal from the anode onto the cathode current collector gives a battery cell having a capacity determined by the amount of alkali metal used as the anode rather than the solid-solution range of the working ion in a host cathode lattice.

The system cannot have a capacity limited both by the amount of metal at the negative electrode and by a maximum thickness that can be plated on the positive electrode. The latter is not described in the published literature, and it is remarkable that such crucial information should be left out. Unfortunately, this interview does little to resolve any of my concerns about this work.



A half-solution for two (or more) y-axes with ggplot

I've been teaching R, and especially ggplot, to beginners in the language this week, and predictably the topic of how to put two separate y-axes (with a common x-axis) on the same plot came up.

Unfortunately, the answer is "not easily", since the inability to do this is on purpose (Hadley Wickham gives the reasons here, for example). Actually putting one y-axis on the left side of the graph and a different y-axis on the right can be done, but it requires some delving into the heart of ggplot which is beyond my understanding at the moment.

What is easier - and in my opinion, preferable in most cases - is to use facetting or a package like gridExtra to have separate stacked panels. But gridExtra (specifically the grid.arrange() function) misaligns plots which have expressions (subscripts and superscripts) in the axis titles - and facetting by default doesn't make it easy to label axes the way I want (again, because I often need to add super/subscripts in labels), or rescale the y-axes of individual facets to values I want.

I had a think about it after the discussion we had in class, and managed to reach a reasonable compromise with the facetting approach, which is fairly straightforward and doesn't require any extra packages.

I'll demonstrate this with some arbitrary functions with very different ranges of y values:

x <- seq(from = -5, to = 5, by = 0.05)

df <- data.frame(
  x = x,
  fun_a = sin(x^2),
  fun_b = 50 * sin(x)
)

To make use of facet_grid() this data needs to be converted to "long" format, which is easily accomplished with tidyr::gather():

library(tidyverse)

df2 <- df %>%
  gather(key = fun, value = y, -x)

Now we can ggplot() this data with the two functions in separate facets, making use of the scales = "free_y" argument:

ggplot(df2) +
  geom_path(aes(x = x, y = y, color = fun)) +
  facet_grid(fun ~ ., scales = "free_y")

[Plot: sin(x²) and 50·sin(x) in separate stacked facets with free y scales]

This is fine, but what if I want to plot data series which have different units? I'd prefer to have an axis title on the left for each facet. There's only one y-axis title here, and I can't easily change that - but what I can do instead is change the facet labels, move them and make them look like axis titles.

The easiest thing to do seems to be to change the column names with dplyr::rename() before gather(). To show the superscripts, etc, the column names have to have the form of expressions, like they would if you were to do the same thing with ylab().

df3 <- df %>%
  rename(`sin~(x^2)` = fun_a, `'50'~sin~(x)` = fun_b) %>%
  gather(key = fun, value = y, -x)
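
As an aside, and purely for comparison, this is roughly how the same label would be written directly with ylab() on a single, non-facetted plot; the facet approach just moves the same plotmath expression into the column name instead:

ggplot(df, aes(x = x, y = fun_a)) +
  geom_path() +
  ylab(expression(sin~(x^2)))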

Now I can remake the plot with a couple of extra arguments to facet_grid(), and some theme() modifications to make the strip.text (facet label) look the same as the x-axis label.

ggplot(df3, aes(x = x, y = y, color = fun)) +
  geom_path() +
  facet_grid(fun ~ ., scales = "free_y", 
             labeller = label_parsed,
             switch = "y") +
  theme(strip.background = element_blank(),
        axis.title.y = element_blank(),
        strip.text = element_text(size = rel(1))) +
  guides(color = FALSE)

[Plot: the same facetted plot, now with parsed facet labels moved to the left to act as y-axis titles]

Great! This is more or less what I'm after. I still have some grumbles though. One of the main ones is that I can't easily rescale the y-axis on an individual facet - I'm stuck with scales = "free_y".

The most I can do - as far as I know - is force it to rescale a bit outside the range of the data by making some dummy data which I include in the plot as an invisible geom_blank. Like this:

dummy <- data.frame(
  x = 0,
  y = c(-1.5, 2),
  fun = "sin~(x^2)"
)

ggplot(df3, aes(x = x, y = y, color = fun)) +
  geom_path() +
  geom_blank(data = dummy) +
  facet_grid(fun ~ ., scales = "free_y", 
             labeller = label_parsed,
             switch = "y") +
  theme(strip.background = element_blank(),
        axis.title.y = element_blank(),
        strip.text = element_text(size = rel(1))) +
  guides(color = FALSE)

[Plot: as above, with the y-axis of the sin(x²) facet expanded by the invisible geom_blank layer]

But I don't know if rescaling the y-axis of a single facet to a range within that of the data is easily achievable. I would also like to be able to change the facet heights manually - maybe that's possible with gtable, for example, but that's out of my expertise. Here's hoping for an easier implementation in a future version of ggplot!



Could wind + batteries really replace a nuclear power plant?

tl;dr: Probably not. The intermittency and seasonal variation of wind is so severe that even optimistically, it would be considerably more expensive and, if it was to use Li-ion batteries for storage, would require as many as have been produced in the entire world over the last four years.

The UK government's recent decision to delay the final decision on the planned Hinkley Point C nuclear power plant has somewhat rekindled the debate on whether the UK should have the plant at all. Some have gone as far as to suggest that given the projected cost, the guaranteed price of the energy produced and the timescale of the project, it should be scrapped and allowed to be replaced with a combination of renewable energy sources and with energy storage, both of which are dropping in cost.

[Jeremy Leggett, the founder of solar panel maker Solarcentury] is delighted that others are picking up on arguments he has been making for years. "Finally the message is getting through that Hinkley, and indeed nuclear, make no sense today simply because wind and solar are cheaper. If we accelerate renewables in the UK, we can get to 100% renewable power well before 2050," he says.

I'm sure we can all agree that cheap, low-carbon renewable electricity would be a great thing. But if you are proposing to eventually remove nuclear and fossil fuels entirely, are renewables still as "cheap" when we need to rely on them to meet demand? From that same Guardian article:

The Economist believes improved electricity storage is a key answer to the frequently repeated criticism of wind and solar that it is intermittent, and points out that battery technology is fast improving.

First, let's be clear. It is not a mere "criticism" of wind and solar that they are intermittent - it is a cold, hard fact. It's physically impossible to generate solar power at night and to generate significant wind power when the wind's not blowing. In order for electricity generation to meet or follow demand, excess generation must be curtailed somewhere, or the energy stored. Similarly, insufficient generation must be supported by some other form of energy generation to avoid blackouts. At the moment, this role is largely provided by gas in the UK. But in the absence of other conventional means of generation, this role needs to be filled by some form of storage, perhaps the much vaunted batteries. This is especially important if neighbouring countries make similar moves towards renewables, since available wind speed and sunlight do not tend to vary much between neighbouring countries - those neighbours may not be able to export energy when generation is barely sufficient across an entire continent.

But how much storage would be needed, say, to effectively convert intermittent renewable power into baseload power equivalent to what would be provided by Hinkley Point C - a constant 3.2 GW? Well, this can be estimated with some crude analysis of publicly available data. I was interested to see how it turned out, and I figured it was worth reproducing here. I did the analysis with R, and have included the code (except for the code generating the plots) and the data here so that it can be reproduced.

Analysis

I'm going to use existing wind generation data for this analysis, since the UK already has a significant amount of wind power, and on the assumption that large-scale deployment of solar power would not be all that sensible for one of the darkest countries in the world. The data I've used is the energy production data for the UK for the entire year 2015 - from gridwatch.co.uk - which I've reuploaded to this website.

The data is in the standard csv format, and I use a few add-on packages for the analysis.

gridwatch <- read.csv("http://lacey.se/dl/gridwatch-2015.csv")

library(dplyr)
library(lubridate)
library(ggplot2)

Let's check what it looks like:

head(gridwatch)
##       id            timestamp demand frequency coal nuclear ccgt wind
## 1 377525  2015-01-01 00:00:04  28809    50.090 9079    8049 3360 5251
## 2 377526  2015-01-01 00:05:02  28645    50.092 8947    8053 3369 5254
## 3 377527  2015-01-01 00:10:02  28768    50.116 8843    8052 3372 5272
## 4 377528  2015-01-01 00:15:02  28917    50.045 8763    8047 3339 5303
## 5 377529  2015-01-01 00:20:02  28964    50.030 8818    8051 3386 5223
## 6 377530  2015-01-01 00:25:02  29055    50.006 8906    8055 3392 5189
##   french_ict dutch_ict irish_ict ew_ict pumped hydro oil ocgt other
## 1        582       900       -72   -136     15   443   0    0  1157
## 2        586       898      -100   -134      0   441   0    0  1157
## 3        586       898      -100   -134      0   440   0    0  1157
## 4        586       898      -100   -134      0   439   0    0  1155
## 5        586       898      -100   -134      0   440   0    0  1155
## 6        586       898      -100   -134      0   441   0    0  1155

The data is quite thorough, but all I really want for now is the data for wind. First I'm going to convert the timestamp to POSIXct date/time format with the appropriate function from the lubridate package, then I can select out the data I need.

gridwatch$timestamp <- ymd_hms(gridwatch$timestamp)
df1 <- select(gridwatch, timestamp, wind)

Check again:

head(df1)
##             timestamp wind
## 1 2015-01-01 00:00:04 5251
## 2 2015-01-01 00:05:02 5254
## 3 2015-01-01 00:10:02 5272
## 4 2015-01-01 00:15:02 5303
## 5 2015-01-01 00:20:02 5223
## 6 2015-01-01 00:25:02 5189

The wind column shows the power generated in units of MW.

[Plot: UK wind power output (MW) over the whole of 2015]

Straight away you can see the issue with intermittency. Wind production averages about 2.6 GW over the whole year, but this can be in excess of 6 GW during windy times, and almost nothing during some lulls in the summer. At the end of 2015, the UK had a total of 13.6 GW of capacity installed, indicating a capacity factor of 19%, which seems reasonable.
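
As a quick sanity check of those numbers (the 13.6 GW installed capacity is the figure quoted above, not something contained in the gridwatch data):

mean(df1$wind)           # ~2600 MW average output over 2015
mean(df1$wind) / 13600   # ~0.19, i.e. a capacity factor of about 19%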

I'll make two new columns in this data frame - one for the time increment (in seconds), and a second that uses it to integrate the power column to get the total energy generated in MWh.

df1$difftime <- c(0, diff(df1$timestamp))
df1$totwind <- cumsum(df1$wind * df1$difftime / 3600)

So, I want to see how a constant 3.2 GW baseload can be generated by wind, with excess energy stored and then released when the wind isn't sufficient. We can reasonably assume that with more turbines the power generated will scale linearly. We can make a new table from the same data, but adjust the wind so that the total energy generated by wind power throughout the year will be equal to 3.2 GW x 24 hours x 365 days.

df2 <- select(gridwatch, timestamp, wind)
df2$difftime <- c(0, diff(df2$timestamp))

df2$wind <- df2$wind * (3200 * 24 * 365 / (max(df1$totwind)))
df2$totwind <- cumsum(df2$wind * df2$difftime / 3600)

The average should come out to be about 3200 MW now, so let's check that's the case:

summary(df2$wind)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   77.92 1506.00 2889.00 3201.00 4805.00 8022.00

For plotting purposes I'll include equivalent columns for the constant 3.2 GW.

df2$base <- 3200
df2$baseenergy <- cumsum(3200 * df2$difftime / 3600)

Now the table looks like this:

head(df2)
##             timestamp     wind difftime   totwind base baseenergy
## 1 2015-01-01 00:00:04 6294.875        0    0.0000 3200     0.0000
## 2 2015-01-01 00:05:02 6298.472      298  521.3735 3200   264.8889
## 3 2015-01-01 00:10:02 6320.050      300 1048.0443 3200   531.5556
## 4 2015-01-01 00:15:02 6357.213      300 1577.8120 3200   798.2222
## 5 2015-01-01 00:20:02 6261.309      300 2099.5878 3200  1064.8889
## 6 2015-01-01 00:25:02 6220.550      300 2617.9669 3200  1331.5556

Let's see what the total amount of energy generated from wind over the year would look like compared to a constant 3.2 GW.

[Plot: cumulative energy generated by the scaled-up wind (red) compared with a constant 3.2 GW, over 2015]

The energy generated by wind is shown in red. The difference between these two lines will show us roughly the difference between the energy generated by wind power and the energy consumed by the constant 3.2 GW we're looking for. So let's do that:

df2$diffpower <- df2$wind - df2$base
df2$diffenergy <- cumsum(df2$diffpower * df2$difftime / 3600)

The power requirements will look like this (positive values indicate wind in excess, so therefore the batteries would be charging. Negative values indicate that wind is insufficient, so the batteries need to take over to ensure 3.2 GW):

[Plot: the difference in power between the scaled-up wind and the constant 3.2 GW over the year]

The total energy stored or released looks like this:

[Plot: cumulative energy stored or released by the hypothetical storage over the year]

It's clear from this plot there is a huge seasonal variation in wind power, with greater generation in the winter, and the storage needed as backup in the summer. The difference between those minimum and maximum peaks (in March and October respectively) is the total amount of energy we would need to back up the wind power - therefore it's the capacity of the storage we would need to guarantee a constant 3.2 GW baseload without relying on other methods of generation. In this case, we can see it's of the order of about 4 TWh (that's TERAwatt-hours).

4 TWh is a colossal amount of energy - roughly the yield of 35 Trident nuclear missiles, roughly the total battery capacity of about 45 million Tesla Model S cars, and about 100 years worth of Li-ion batteries at the current rate of production.

More than that, at a target price of $100/kWh for batteries (= $100bn/TWh) this would cost of the order of $400 bn for the batteries alone. And that price for usable battery storage is considerably cheaper than anything currently available, for any chemistry, even if you're Tesla.

Meanwhile, based on a capacity factor of about 20%, we would need wind power with a capacity of 16 GW, which based on estimates of $1.3m - $2.2m per MW would cost between $20.8 and $35.2bn. The cost of Hinkley Point C by comparison is estimated to be of the order of $24 bn. So yes, the wind turbines could potentially be cheaper to install than the nuclear power plant, but even then, they can't supply power on the same basis without additional - and possibly insane - storage capability.
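
To make the arithmetic behind these comparisons and cost estimates explicit, here is a rough sketch in R. The unit figures (roughly 100 kt per Trident warhead, roughly 90 kWh per Model S, roughly 40 GWh of global Li-ion production per year, $100/kWh for batteries and $1.3m to $2.2m per installed MW of wind) are my own round numbers chosen to reproduce the estimates above, not data from the gridwatch set:

storage_twh <- 4
storage_twh * 1e9 * 3.6e6 / 4.184e14     # ~34 warheads of ~100 kt each
storage_twh * 1e9 / 90                   # ~44 million Model S packs of ~90 kWh
storage_twh * 1000 / 40                  # ~100 years at ~40 GWh/year of production

storage_twh * 1000 * 100e6               # ~$400bn of batteries at $100/kWh
wind_gw <- 3.2 / 0.2                     # 16 GW of turbines at a 20% capacity factor
wind_gw * 1000 * c(1.3e6, 2.2e6)         # ~$20.8bn to $35.2bn of wind turbines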

More wind turbines?

Ok, maybe it's not viable to try and store every last bit of energy produced by the wind turbines. Perhaps we can just disconnect them during very windy periods and only store as much energy as is needed for the periods when the wind isn't enough. How many batteries might we need then?

To estimate this I went back again and scaled up the wind further, so that the average power is 20% over the 3.2 GW we actually want. The rest of the code is the same.

df4 <- select(gridwatch, timestamp, wind)
df4$difftime <- c(0, diff(df4$timestamp))

df4$wind <- df4$wind * (1.2 * 3200 * 24 * 365 / (max(df1$totwind)))
df4$totwind <- cumsum(df4$wind * df4$difftime / 3600)

df4$base <- 3200
df4$baseenergy <- cumsum(3200 * df4$difftime / 3600)

df4$diffpower <- df4$wind - df4$base
df4$diffenergy <- cumsum(df4$diffpower * df4$difftime / 3600)

If I remake the same plot and rescale it a bit:

[Plot: cumulative stored/released energy with wind scaled to 120% of the required annual average]

Then the difference between the minimum and maximum values here is the amount of energy that needs to be stored, which is about 2 TWh - halving the cost of batteries needed for just 20% more wind turbines. This seems more reasonable, so what if you keep adding more wind power?

Well, I won't reproduce the plot here - you can do it yourself - but the cost does keep shrinking. If you double the amount of wind power - to 32 GW, giving an average of 6.4 GW over the year - the periods of insufficient wind still require about 200 GWh worth of storage to maintain a constant 3.2 GW. This is still a huge amount, equivalent to the total storage of a few million electric cars and about four years' worth of global production of Li-ion batteries. For comparison, the world's largest grid storage battery opened earlier this year in Japan - a sodium-sulfur battery with a capacity of 300 MWh (that's 0.3 GWh).

The cost of installing this would probably then be in the range:

  • Wind turbines: 32 GW capacity, $41.6 - $70.4bn
  • Battery storage: 200 GWh, ~$20bn

This would imply an installation cost of maybe three times as much as Hinkley Point C, for a system with a considerably shorter lifetime, and most likely with much more expensive electricity for the consumer. I doubt it would get considerably cheaper than this if I kept going, since at this point the cost of the wind turbines is already rather in excess of the cost of the batteries.

So what can we conclude? I think the main thing this thought experiment shows is just how important it is to have a diverse mix of technologies in the energy generation mix. I think this fact is usually lost on people with stubbornly anti-nuclear and anti-fossil fuel views. I know as well that I have not considered other things like tidal or hydroelectric power here, but these are very much geography-dependent and not always a viable option.

I don't intend to say that there is no place for renewables at all in our energy mix, but I cannot see a target of 100% solar or wind grid production as being anything except ruinously expensive, and it seems wildly improbable that battery storage could make it work.

Bear in mind too, that this is just one power plant. Average demand in the UK was 32.8 GW in 2015!

The last word

At this point it is reasonable to point out that the seasonal trend in solar generation runs (very roughly) opposite to that of wind. That is, it is less windy in the summer, but it is sunnier in the summer, so a mix of solar and wind might reduce the requirement for storage. This is true, but some level of storage would still be required, and my intuition tells me that it would not make a significant difference (and other analyses done elsewhere suggest as much). At the moment I don't have data for UK solar generation to play around with, but this would be interesting for a future post.



Can't underestimate luck

I took this photo of a soap bubble on Sunday:

[Photo: Rural Sweden in a bubble]

It received a surprising amount of attention on Flickr, with almost 8,000 views in one day, way more than any photo of mine has ever had. At the time I didn't even really think about what the photo would look like, and the perfect reflection of the house and the colours in the bubble itself were an unexpected bonus!



Don't believe in miracles

The above was one of the closing points in an excellent talk I saw earlier today by Dr Jef Ongena, the chairman of the Energy group at the European Physical Society. Based on the email announcement for the talk I was expecting to hear mostly about current progress in nuclear fusion research, but what we got was partly a thought-provoking and fiercely critical overview of the European approach towards renewable energy, and partly a call for better public awareness and consideration of wider context. It really deserved a larger audience than it had so I thought I’d write a bit about it!

“Europe alone cannot save the world” was the first key message, giving examples such as Germany: around 1 trillion Euros has been committed to the Energiewende, and has so far brought about very little meaningful reduction in Germany’s CO2 emissions, which themselves contribute only ~2.5% of the world’s emissions (interestingly enough, the decade following the collapse of the GDR saw German CO2 emissions drop by a quarter, simply because of the loss or modernisation of East German industry). Globally, this reduction is insignificant, because emissions from countries such as India and China have increased to a greater extent. More than this, as a result of this policy, German energy prices are among the highest in the world.

Dr Ongena was also keen to point out that at least a portion of the EU’s reduced emissions are effectively an accounting trick – we import more goods from China rather than produce them here, meaning the associated emissions appear as Chinese emissions and not European. It’s worth reading the EPS Energy Group’s position paper on this topic.

What was most interesting for me was the stark look at what the consequences of an energy system based 100% on renewable energy would be. At this point I am reminded of this excellent post on the situation in Scotland, since this is exactly what the Scottish government is aiming for in the short-term (the blog at which that post resides, Energy Matters, is excellent in general by the way). The biggest issue with renewable energy such as solar and wind is the intermittency; wind is unpredictable, and solar production is typically out of phase with consumption, so some energy storage is essential. But how much?

It’s also one thing to cope with diurnal (day/night) variation in production and consumption, but if you plan for a future in which solar is a large or the major part of the energy production mix, then the huge seasonal variation in energy production becomes a big problem.

Dr Ongena gave an example of a study looking at German energy production as it would look in 2050 based on current plans. Unfortunately, I didn’t note down the reference, but the short version is that coping with the seasonal variation in energy production and consumption would require of the order of 33 TWh of storage capacity for Germany alone! To put this into context, if this energy was to be stored using batteries, 27 cubic kilometers(!!) of space would be needed to store the batteries themselves. How much space is that? Well, I calculated that myself, and it would take an aircraft hangar tall enough to fit an Airbus A380, this big:

[Image: What 900 km² looks like]

Or, about 2,000 buildings the size of the Boeing Everett Factory, the largest building in the world by volume. I won’t even try to estimate the impossible cost of such a solution.
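
For what it's worth, here is my own back-of-envelope version of that calculation in R. The hangar height (roughly 30 m, enough to clear an A380 tail) and the volume of the Boeing Everett Factory (roughly 13.3 million cubic metres) are approximate figures I have assumed, not numbers from the talk:

battery_volume <- 27e9           # 27 cubic km, in cubic metres
battery_volume / 30 / 1e6        # ~900 km^2 of floor area under a ~30 m roof
battery_volume / 13.3e6          # ~2000 buildings the size of the Everett factory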

This idea of “knowing your numbers” was I think the main scientific point in the talk, which pleased me greatly: this is something I think is really important, and is something I try to prioritise in any teaching that I do. Dr Ongena gave a few interesting factoids – for example, that the energy consumed by satellite TV boxes in Belgium is something like 17 times more than the energy consumed lighting all the country’s roads – but the main point was about the power density of energy production.

Power density, as in power per unit area, tells you about the land area needed to produce energy, and in this respect all renewable energies are considerably more “dilute” than conventional (fossil fuel, nuclear) technologies. I’ve since found some good references for this, notably this one, so I won’t write anything else on this except to say that it is often and easily forgotten that wind power, for example, requires 400-500 times the land area to provide the same power as nuclear (and optimistically about 50 times for solar). It’s long been a mystery to me as to why some types of environmental destruction (e.g., large scale pumped hydro or vast fields of massive on-shore wind turbines) are apparently preferable to others (e.g., storing relatively small amounts of nuclear waste, or fracking).
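
To put some rough numbers on those ratios (these power densities are my own ballpark figures from the literature, not values given in the talk, but they are consistent with the factors quoted above):

p_density <- c(nuclear = 1000, wind = 2, solar = 20)   # W per m^2 of land, very roughly
p_density["nuclear"] / p_density["wind"]               # ~500x more land for wind
p_density["nuclear"] / p_density["solar"]              # ~50x more land for solar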

Although it was a talk with relatively few crumbs of comfort for the future energy landscape (fusion power was not even discussed), it was still rather refreshing to get such a brutal reality check and plenty of food for thought. Unfortunately, my suspicion is that politics and emotions will always trump science in any decision-making process, but I would be happy to be proved wrong on that.
