Some examples of my workflow for analysing battery test data with R and tidyverse

Intro

I received an email from a former colleague the other day, who asked me a question about plotting some data from a battery test via some makeshift R functions I made several years ago (arbintools, as it happens), which got me thinking.

At that time, I was really starting to get into the swing of scripting all my data analysis work. I had written code to quickly import, analyse and plot data from instruments that a lot of people were using, so I polished it up and shared it, trying to flatten out the learning curve for others who wanted to try it for themselves. But since then I haven't kept arbintools updated, while a lot has changed in the R community, especially around the superb tidyverse ecosystem.

So, I thought it might be a good time to write a post here about what my normal workflow for importing, analysing and plotting battery data in R typically looks like ("tidy arbintools", as it will turn out). I have a few motivations here. Foremost, I can't recommend strongly enough, especially to PhD students in this or any remotely similar field, learning how to program, in whatever language you like best or consider most relevant. While I know a number of people are using code I've written to do data analysis like this now, it's always worth the time investment to learn the basics, so you can write code for whatever purpose you need. Also importantly, I'm a big proponent of using scripted data analysis as a record of how data is treated between acquisition and presentation, and of including this in publications; I've written before about some reasons why I think this is important. And lastly, my hope is that this might serve as some inspiration as to how useful this can be, or as a source of tips for anyone interested in learning how to do this.

In the interests of keeping explanations brief, I'm sharing code here assuming some familiarity with R on the part of the reader. For learning the basics, and especially the tidyverse system, I can strongly recommend the freely-available book "R for Data Science".

Where to start

As I've said, I am a big fan of the tidyverse series of add-on packages in R, so the first line in pretty much every R script I'll ever write is:

library(tidyverse)

Now, if I'm going to show some data analysis examples, I need some data. For this I'll use the data from one of my recent papers on lithium-sulfur batteries, for which we shared the dataset openly on Zenodo (the paper itself is here). If you're interested, therefore, you can follow the code here and play with the dataset yourself (note, though, that the full dataset is quite big, at around 415 MB).

Naturally, you can download the dataset directly with R:

# Download the file from Zenodo, and save as ccc.zip into the current working directory
download.file(url = "https://zenodo.org/record/3274377/files/ccc-supportingdata.zip?download=1", destfile = "ccc.zip")

# Now unzip the contents of ccc.zip into a new folder ccc/
unzip(zipfile = "ccc.zip", exdir = "ccc")

For the purposes of this post I'll just work with the data in the "3_separators" folder, which contains long-duration charge/discharge cycling data for six Li-S cells with three different separators. So, the first thing I'll do is set my working directory to that folder.

setwd("ccc/data/3_separators/")

Write your own importing function

Importing plain text files in R is no problem, and tidyverse has its own version via the readr package. But our data here was collected using, as it happens, an Arbin battery tester, which only spits out its data in Excel format. Thankfully, tidyverse also includes the splendid readxl package for this purpose. This needs to be loaded separately, so:

library(readxl)

Now there are several Excel files in this folder:

grep(".xlsx", list.files(), value = TRUE)
[1] "YC25A.xlsx" "YC26A.xlsx" "YC64A.xlsx" "YC66D.xlsx" "YC66E.xlsx" "YC71B.xlsx"

I'll just choose one to work with for now, which I'll give to an object called filename for convenience later:

filename = "YC64A.xlsx"

Ordinarily, you can just import an Excel sheet using something of the form:

read_excel(path = "path/to/file.xlsx", sheet = "Sheet1")

In this case, because of the particular experiment, we have a lot of data points. And it so happens that if there are more than 65,000 rows in the raw data, the export macro creates a new sheet and carries on there (this is related to the limited number of rows in old versions of Excel). So to import all the raw data, we have to take this into account. In R, this is quite straightforward.

excel_sheets() returns a vector of the names of all the sheets in an Excel file. But I'm only interested in the ones that start with "Channel", as these are where the raw data is, so I can pass the result of excel_sheets() to the str_subset() function from the stringr package to keep only the matching sheet names. (Incidentally, another tip: learn how to use the pipe, or %>% operator: here's a tutorial.)

excel_sheets(path = filename) %>%
  str_subset("Channel")
[1] "Channel_1-031_1" "Channel_1-031_2" "Channel_1-031_3" "Channel_1-031_4" "Channel_1-031_5"
[6] "Channel_1-031_6"
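Incidentally, if the %>% syntax above is new to you, here is a trivial sketch (with made-up values) of what the pipe does: it passes the result on the left as the first argument of the function on the right.

```r
library(tidyverse)

# x %>% f(y) is just another way of writing f(x, y)
sort(c(3, 1, 2))                  # the nested way: 1 2 3
c(3, 1, 2) %>% sort()             # the piped way: 1 2 3
c(3, 1, 2) %>% sort() %>% rev()   # pipes chain left to right: 3 2 1
```

Reading a long pipeline top to bottom is, to my eyes, much easier than unpicking deeply nested function calls.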

There are six sheets then, and I would like to read in each of them and stitch them together into a single table of data. For this sort of job I use R's lapply() function, which I reach for in pretty much any sort of batch task in preference to for loops. (Another tip: if you're learning R, it is well worth learning how to work with lists, which lapply() is all about. Here's a tutorial.)

So what I can do is take the above list of sheet names, and use lapply() to import them one at a time:

imported.data <- excel_sheets(path = filename) %>%
  str_subset("Channel") %>%
  lapply(function(sheet) {
    read_excel(path = filename, sheet = sheet)
  })

The above code goes through the Excel sheet names one by one, reads each sheet in from the Excel file, and at the end returns a list of all six tables (or, rather, "tibbles", since this is tidyverse), saved as the object imported.data. But this is not quite what I want. I would like:

  • One single table, not a list of six separate tables
  • Only the columns I want - say time, cycle number, current, voltage, charge... with shorter names, since the column names are a bit long, like "Discharge_Capacity(Ah)"
  • To have this as a function, so I can batch import multiple Excel files with lapply()
  • Maybe I would like to be able to normalise charge values by active mass?
  • Maybe I would like to add a unique identifier column, which might be useful later?

Skipping ahead a few steps, I can write a simple import function which does all of these things:

import_arbin_raw <- function(filename, mass = NULL, ident = NULL) {

  excel_sheets(path = filename) %>%
    str_subset("Channel") %>%
    lapply(function(sheet) {

      # for each Excel sheet containing the word "Channel",
      # read_excel reads it in and assigns it to the object tbl
      tbl <- read_excel(path = filename, sheet = sheet)

      # to.select is a list of variables I want from each sheet
      # and how they should be renamed during import
      to.select <- list(t = sym("Test_Time(s)"),
                        cyc.n = sym("Cycle_Index"),
                        I = sym("Current(A)"),
                        E = sym("Voltage(V)"),
                        Q.c = sym("Charge_Capacity(Ah)"),
                        Q.d = sym("Discharge_Capacity(Ah)"))

      # Create a new object tbl2 with the selections we
      # want from tbl
      tbl2 <- tbl %>%
        select(!!!to.select)

      # If the mass argument is not null, correct capacity values
      # (in Ah) to mAh/g (assuming mass is in units of mg)
      if(!is.null(mass)) {
        tbl2 <- tbl2 %>%
          mutate(Q.c = Q.c * 1E6 / mass, Q.d = Q.d * 1E6 / mass)
      }

      # If the ident argument is not null, add it as a column
      if(!is.null(ident)) {
        tbl2$ident <- ident
      }

      # Return the tbl2 object
      return(tbl2)
    }) %>%
    # now use do.call(rbind, .) to take the list of "tibbles" (or data frames)
    # returned, and stitch them all together in sequence.
    do.call(rbind, .)
  }
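As a quick sanity check on the unit conversion inside that function (using a made-up capacity value): the instrument records capacity in Ah and the mass argument is in mg, so multiplying by 1E6 and dividing by the mass gives mAh/g.

```r
Q_Ah <- 0.003     # hypothetical raw capacity, in Ah
mass_mg <- 2.9705 # active material mass, in mg

# Ah -> mAh is a factor of 1e3, and mg -> g is another factor of 1e3,
# so Q[mAh/g] = Q[Ah] * 1e6 / mass[mg]
Q_Ah * 1e6 / mass_mg  # ~1010 mAh/g
```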

Now, if I ever want to import a file like this again, I can do it in one line, with the option to provide the mass and ident information if I wish.

imported.data <- import_arbin_raw(filename, mass = 2.9705, ident = "Celgard")
imported.data
# A tibble: 385,755 x 7
           t cyc.n     I     E   Q.c   Q.d ident  
       <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr>  
 1    0.0522     1     0  3.06     0     0 Celgard
 2  600.         1     0  3.06     0     0 Celgard
 3 1200.         1     0  3.06     0     0 Celgard
 4 1800.         1     0  3.06     0     0 Celgard
 5 2400.         1     0  3.06     0     0 Celgard
 6 3000.         1     0  3.05     0     0 Celgard
 7 3600.         1     0  3.05     0     0 Celgard
 8 4200.         1     0  3.05     0     0 Celgard
 9 4800.         1     0  3.05     0     0 Celgard
10 5400.         1     0  3.04     0     0 Celgard
# … with 385,745 more rows

Summarising and plotting data

Battery test instruments usually export aggregated statistics themselves, but why not do it myself? dplyr's functions for subsetting, sorting and modifying data make this easy. Suppose I want an aggregated table of capacity and coulombic efficiency vs cycle number, which I can assign to an object stats:

stats <- imported.data %>%
  group_by(cyc.n) %>%
  summarise(Q.c = last(Q.c), Q.d = last(Q.d)) %>%
  mutate(CE = Q.d / Q.c)

stats
# A tibble: 191 x 4
   cyc.n   Q.c   Q.d    CE
   <dbl> <dbl> <dbl> <dbl>
 1     1 1266. 1214. 0.959
 2     2 1075. 1054. 0.981
 3     3 1046. 1022. 0.977
 4     4 1025.  995. 0.971
 5     5 1012.  981. 0.970
 6     6 1000.  968. 0.967
 7     7  990.  958. 0.968
 8     8  987.  954. 0.967
 9     9  981.  949. 0.967
10    10  975.  942. 0.967
# … with 181 more rows

I very much like ggplot2 for my plotting work, and though I have no idea whether this is considered good practice, I like to pipe (%>%) data objects through various functions to wrangle them into the format I want, straight into ggplot(). In this way, I can go from raw data to a publication-quality plot in only a few lines. As an example, I can filter out just the first 100 cycles and create a very simple plot:

stats %>%
  filter(cyc.n <= 100) %>%
  ggplot(aes(x = cyc.n, y = Q.d)) +
  geom_point()

It's an OK start but needs some work: the axes need labelling, for one thing, and I'd rescale the y-axis to start at zero (a personal preference for this sort of plot), but mostly I'd quite like to modify the theme and maybe add a touch of colour. ggplot2 has a few built-in themes, but of course we'd quite like to make our own, in the form of a function which can be added onto a ggplot() call like the one above. Here's the theme I usually use:

theme_Lacey2 <- function(base_size=16, base_family="Helvetica Neue", alt_family = base_family, legend.position = "top",
                         panel.background.fill = "#fafafa") {
  library(grid)
  library(ggthemes)
  (theme_foundation(base_size=base_size, base_family=base_family)
    + theme(plot.title = element_text(size = rel(1.3), hjust = 0, face = "bold"),
            plot.subtitle = element_text(face = "italic"),
            text = element_text(),
            panel.background = element_rect(fill = panel.background.fill),
            plot.background = element_rect(fill = "transparent", colour=NA),
            panel.border = element_rect(colour = "#333333", size = 0.3),
            axis.title = element_text(size = rel(1), colour="#333333"),
            axis.title.y = element_text(angle=90, colour="#333333"),
            axis.text = element_text(size = rel(0.8), family = alt_family),
            axis.ticks.length=unit(0.15, "cm"),
            axis.text.x = element_text(margin = margin(0.2, 0, 0.2, 0, "cm"), colour="#333333"),
            axis.text.y = element_text(margin = margin(0, 0.2, 0, 0.2, "cm"), colour="#333333"),
            panel.grid.major = element_line(colour="#eaeaea", size = 0.5),
            panel.grid.minor = element_line(colour="#eaeaea", size = 0.2),
            legend.key = element_rect(colour = NA),
            legend.key.size = unit(0.6, "cm"),
            legend.background = element_blank(),
            strip.background=element_rect(colour="#eaeaea",fill="#eaeaea"),
            strip.text = element_text(colour = "#333333", lineheight=0.7),
            legend.title = element_text(size = rel(0.8), family = alt_family),
            legend.position = legend.position
    ))
}

So adding in this, along with some properly-formatted axis labels, scaling, and a splash of colour, we can have:

stats %>%
  filter(cyc.n <= 100) %>%
  ggplot(aes(x = cyc.n, y = Q.d)) +
  geom_point(size = 2, color = "#0789ce") +
  scale_y_continuous(limits = c(0, NA)) +
  labs(x = "cycle number", y = "Q"[discharge]~"/ mAh g"^"-1") +
  theme_Lacey2()

More complicated plots

Let's take a voltage profile (potential vs charge) as an example, since this used to take me a good 20 minutes per plot before I learned to script these. In this data, we have one column for the charge on the discharge half-cycle, and one column for the charge on the charge half-cycle. So, the first thought might be to plot one line for each column, filtering out only the cycle numbers we need, and making each cycle a different colour:

imported.data %>%
  filter(cyc.n %in% c(1, 10, 20, 50, 100)) %>%
  ggplot(aes(y = E)) +
  geom_path(aes(x = Q.c, color = cyc.n)) +
  geom_path(aes(x = Q.d, color = cyc.n))

This plot is pretty horrible. A big problem here is that, for example, at the end of the discharge cycle, the Q.d value remains constant during the charge, while the voltage increases again - so at the end of each discharge the plot is drawing a straight vertical line, while it's also drawing the charge profile. Another problem - by default, ggplot interprets cycle number as a continuous variable (i.e., it thinks you could have, say, cycle 1.63 between cycles 1 and 2), so it plots a continuous scale for the colour. And, for this specific case, we have a lot of current interruptions (part of the technique used in this work), which show up as distracting little spikes. So, I would like to do a few things:

  • Remove those distracting spikes by filtering out parts of the data where current is zero
  • Fix the 'two charge values' problem, by setting charge capacity to a blank value for discharges, and vice versa
  • Plot only one geom (or one set of lines) for E vs Q
  • Scale properly, make colours nice, properly format labels, add nice theme etc...

Again, all of these things we can do in one piped command with functions from several tidyverse packages (introducing here pivot_longer() from tidyr, which converts multiple columns into key-value pairs):

imported.data %>%
  # filter out specific cycles, and any data where current is zero
  filter(cyc.n %in% c(1, 10, 20, 50, 100), I != 0) %>%
  # if I > 0, then discharge capacity is blank, and vice versa.
  mutate(Q.c = ifelse(I > 0, Q.c, NA), Q.d = ifelse(I < 0, Q.d, NA)) %>%
  # select only what's needed for the plot
  select(cyc.n, E, Q.c, Q.d) %>%
  # convert to "long" format, i.e., columns of cyc.n, E, Q, mode, where
  # mode has values of Q.d or Q.c - so there is only one column of Q values
  pivot_longer(cols = starts_with("Q"), names_to = "mode", values_to = "Q") %>%
  # now plot
  ggplot(aes(x = Q, y = E)) +
  # use factor(cyc.n) so that cyc.n is not interpreted as a continuous variable
  geom_path(aes(color = factor(cyc.n), group = mode), size = 0.8) +
  scale_y_continuous(limits = c(1.75, 2.65)) +
  scale_x_continuous(breaks = seq(0, 1400, 200)) +
  # change the colour scale
  scale_color_brewer("cycle number", palette = "Set1") +
  theme_Lacey2() +
  labs(x = "Q / mAh g"^-1~"", y = "E / V")

12 lines (plus some comments), and a plot that would look decent in any article.
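If pivot_longer() is unfamiliar, here it is in isolation on a toy one-row table (made-up numbers), showing how the two capacity columns are collapsed into key-value pairs:

```r
library(tidyverse)

toy <- tibble(cyc.n = 1, E = 2.1, Q.c = 100, Q.d = 95)

toy %>%
  pivot_longer(cols = starts_with("Q"), names_to = "mode", values_to = "Q")
# Returns two rows: one with mode = "Q.c", Q = 100,
# and one with mode = "Q.d", Q = 95
```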

Batch importing and plotting

Batch processing of repetitive data analysis tasks like this is where scripting really saves you time. And when it comes to battery materials science, in almost every publication you'll see plots of, say, capacity vs cycle number for several different materials.

What I'll usually do here is have a table of all the information I need. With the import function I wrote earlier, I want a filename, I want an active material mass, and I want an identifying label. Now, the tibble package has a neat function called tribble() (transposed tibble) which is handy for directly writing in small tables in the code, like this:

files <- tribble(
  # file | mass | ident
  # -----|------|------
  ~filename, ~mass, ~ident,
  "YC64A.xlsx", 2.9705, "Celgard",
  "YC26A.xlsx", 2.977, "CCC",
  "YC25A.xlsx", 2.9445, "Cellulose"
)

files
# A tibble: 3 x 3
  filename    mass ident    
  <chr>      <dbl> <chr>    
1 YC64A.xlsx  2.97 Celgard  
2 YC26A.xlsx  2.98 CCC      
3 YC25A.xlsx  2.94 Cellulose

With this, I can do a batch import with lapply() much like I used for writing the import function before — and this is usually what I'd do since I'm most familiar with it:

separators <- lapply(1:nrow(files), function(i) {
  import_arbin_raw(files$filename[i], mass = files$mass[i], ident = files$ident[i])
}) %>%
  do.call(rbind, .)

However, there is also the purrr package in tidyverse, which aims to do a lot of what the apply family of functions does, but in a more consistent and readable way. So I could instead write something that does the same thing using the pmap_dfr() function, which goes through a tibble row by row, runs a function on each row, and binds the results together into a single tibble (the same result as do.call(rbind, .)). So this would look like:

separators <- files %>%
  pmap_dfr(function(filename, mass, ident) {
    import_arbin_raw(filename, mass, ident)
  })

which is pretty neat really. So now, to make those plots we made earlier, I can summarise the data using very nearly the same code I already wrote:

separators_stats <- separators %>%
  # Add ident in the group_by line
  group_by(ident, cyc.n) %>%
  summarise(Q.c = last(Q.c), Q.d = last(Q.d)) %>%
  mutate(CE = Q.d / Q.c)

And using some of the same tools I showed earlier, plus some other tricks from my blog archive, I can make some useful plots without too much effort:

separators_stats %>%
  filter(cyc.n <= 100) %>%
  select(cyc.n, ident, Q.d, CE) %>%
  rename(`Q[discharge]~"/ mAh g"^-1~""` = Q.d) %>%
  pivot_longer(cols = -c("cyc.n", "ident"), names_to = "key", values_to = "value") %>%
  ggplot(aes(x = cyc.n, y = value)) +
  geom_point(aes(color = ident)) +
  scale_color_brewer("", palette = "Set1") +
  facet_grid(key ~ ., scales = "free_y",
             labeller = label_parsed,
             switch = "y") +
  theme_Lacey2() +
  theme(strip.background = element_blank(),
        axis.title.y = element_blank(),
        strip.text = element_text(size = rel(1)),
        strip.placement = "outside") +
  labs(x = "cycle number")

Or, with only minor modifications to the voltage profile code from before, I can do this:

separators %>%
  # Filtering and sorting data
  filter(cyc.n %in% c(1, 10, 20, 50, 100), I != 0) %>%
  mutate(Q.c = ifelse(I > 0, Q.c, NA), Q.d = ifelse(I < 0, Q.d, NA)) %>%
  # Include ident in the selection here
  select(cyc.n, ident, E, Q.c, Q.d) %>%
  pivot_longer(cols = starts_with("Q"), names_to = "mode", values_to = "Q") %>%
  # Plot
  ggplot(aes(x = Q, y = E)) +
  geom_path(aes(color = factor(cyc.n), group = mode), size = 0.8) +
  scale_y_continuous(limits = c(1.75, 2.65)) +
  scale_x_continuous(breaks = seq(0, 1800, 200)) +
  scale_color_brewer("cycle number", palette = "Set1") +
  # Facet by ident
  facet_grid(ident ~ .) +
  theme_Lacey2() +
  labs(x = "Q / mAh g"^-1~"", y = "E / V")

Now, this sort of work becomes largely a copy/paste exercise. Let's say I want to plot all six data files, which are in fact two sets of three - one with an electrolyte additive, and one without:

# Table of filenames, masses and identifiers
files2 <- tribble(
  # file | mass | ident
  ~filename, ~mass, ~ident,
  "YC64A.xlsx", 2.9705, "Celgard-LiNO3",
  "YC26A.xlsx", 2.977, "CCC-LiNO3",
  "YC25A.xlsx", 2.9445, "Cellulose-LiNO3",
  "YC71B.xlsx", 3.2305, "Celgard-No LiNO3",
  "YC66E.xlsx", 3.146, "CCC-No LiNO3",
  "YC66D.xlsx", 3.2955, "Cellulose-No LiNO3"
)

# Import the data
separators2 <- files2 %>%
  pmap_dfr(function(filename, mass, ident) {
    import_arbin_raw(filename, mass, ident) %>%
      # Split ident into two new columns
      mutate(separator = str_split(ident, "-")[[1]][1],
             additive = str_split(ident, "-")[[1]][2])
  })

# Sort the data and plot it
separators2 %>%
  # Filtering and sorting data
  filter(cyc.n %in% c(1, 10, 20, 50, 100), I != 0) %>%
  mutate(Q.c = ifelse(I > 0, Q.c, NA), Q.d = ifelse(I < 0, Q.d, NA)) %>%
  # Include separator and additive in the selection here
  select(cyc.n, separator, additive, E, Q.c, Q.d) %>%
  pivot_longer(cols = starts_with("Q"), names_to = "mode", values_to = "Q") %>%
  # Plot
  ggplot(aes(x = Q, y = E)) +
  geom_path(aes(color = factor(cyc.n), group = mode), size = 0.8) +
  scale_y_continuous(limits = c(1.75, 2.65)) +
  scale_x_continuous(breaks = seq(0, 1800, 200)) +
  scale_color_brewer("cycle number", palette = "Set1") +
  # Facet by separator and additive
  facet_grid(separator ~ additive) +
  theme_Lacey2() +
  labs(x = "Q / mAh g"^-1~"", y = "E / V")

In summary

  • If you're in science or engineering, doing any sort of routine data analysis and you aren't already familiar with a programming language like R or Python, I really recommend learning one — it's really worth the effort.
  • Perhaps the biggest advantage of all of this is that the code is a record of your data analysis, and what you do to the raw data before making the plot. This is something that's often lost when using Origin or other such programs. If you make a mistake, it's easy to correct, and there are plenty of ways you can share raw data and code, if you want others to be able to reproduce what you've done.
  • The tidyverse set of packages, which is pretty much a 'dialect' within R, is pretty great. I think I used every one of the "core" packages (except forcats) in this post in some way or another. I've found it more intuitive to get started with and learn than, for example, Python, if you're on the fence about what to learn...
  • Even though ggplot2 has its (often intentional) limits in places, it's pretty powerful overall. I don't think I've used anything else to make a plot for any serious purpose in over 5 years.



Visualising statistics for the COVID-19 outbreak in Sweden

I've found that the current situation we find ourselves in presents a good opportunity for practicing skills... like programming... at home. To that end, I've set up a new page to scrape publicly available statistics on the COVID-19 outbreak in Sweden and visualise them in a number of different ways. The page updates itself daily, and the code used to do the data scraping, analysis and visualisation is provided if anyone would like to follow it.

Go to the page itself to see more.

Confirmed cases per 100k inhabitants, by county



The Braga/Goodenough glass battery, part III: please, don't just take their word for it

tl;dr: The ninth research paper in Braga and Goodenough's "glass battery" work regrettably shows many of the hallmarks of pathological science. Here I dig into some of the problems which have been a theme of some of the previous papers too: ad hoc theory, violations of the laws of thermodynamics, basic mistakes, disregard for established knowledge, absent or invalid chemical characterisation and, when all is said and done, devices that don't work the way they're said to. Hopefully, this will provoke some thoughts on why science needs to be a skeptical enterprise.

Introduction

If you have found your way here, you are probably already aware of the research into so-called “glass batteries” led by Maria Helena Braga and recent Nobel laureate John Goodenough.

What this group have presented over the last few years has been widely touted in technology magazines and by the team themselves as the long awaited game-changer in battery technology: safe, high energy density, high power, wide operating temperature window, long cycle life, and constructed using only cheap, environmentally friendly materials. However, this work has been met with deep criticism in the battery research community, for wildly exaggerated claims, weak analysis, and highly questionable research practices.

When this work first reached public attention, I laid out some of my technical criticisms of two of their research papers in two previous posts, here and here. It has now been close to two years since my last post, but Braga, Goodenough and other co-authors have published a number of new papers since then — many of them now open access, where previously they were largely paywalled — and now and again this work finds fresh attention in the press.

In this post, I am most interested in their most recent paper, “Performance of a ferroelectric glass electrolyte in a self-charging electrochemical cell with negative capacitance and resistance”. There’s much to say about this paper, as we will see, but I will also try and connect the dots from my earlier posts. As with my previous two, I hope to dig into some of the technical details to try and make sense of what is being presented, and where some of the deeper problems lie.

I know that many readers of my earlier posts are not trained in the dark art of batteries and that this can be difficult to follow. For that, I am sorry - this work is difficult to unpack even for those familiar with the field. My hope is to cut down on the field-specific jargon and write as accessibly as possible (and for where I don’t, you are welcome to ask questions in the comments). But that said, I consider it important to try and engage with this work on a deeper level, and try and explain both what it is trying to argue and where the thinking goes astray. For those with the patience to make it to the end, thank you in advance for reading.

An evolving body of work

Let’s recap. This thread of research on “glass batteries”, headed by Braga, Goodenough and other researchers (hereafter referred to as Braga et al. or “the authors”), now spans no less than nine research papers (plus several related patents). For the reader's interest, here are those papers:

  1. Novel Li3ClO based glasses with superionic properties for lithium batteries, J. Mater. Chem. A (2014)
  2. Glass-amorphous alkali-ion solid electrolytes and their performance in symmetrical cells, Energy Environ. Sci. (2016)
  3. Electric Dipoles and Ionic Conductivity in a Na+ Glass Electrolyte, J. Electrochem. Soc. (2016)
  4. Alternative strategy for a safe rechargeable battery, Energy Environ. Sci. (2017)
  5. Nontraditional, Safe, High Voltage Rechargeable Cells of Long Cycle Life, J. Am. Chem. Soc. (2018)
  6. Extraordinary Dielectric Properties at Heterojunctions of Amorphous Ferroelectrics, J. Am. Chem. Soc. (2018)
  7. Low-Temperature Performance of a Ferroelectric Glass Electrolyte, ACS Appl. Energy Mater. (2019)
  8. Thermodynamic considerations of same-metal electrodes in an asymmetric cell, Materials Theory (2019)
  9. Performance of a ferroelectric glass electrolyte in a self-charging electrochemical cell with negative capacitance and resistance, Appl. Phys. Reviews (2020)

Since my last post on this almost two years ago, the authors have published a further four (papers 6–9), including the one we will discuss here.

By now, therefore, this is a rather developed body of work, with some common themes running through it. But more than that, we have also had a little related work from other scientists which sheds some light on the situation.

What is this latest paper about?

This is, as best I can work out, how it goes:

Braga et al. state that they have developed a new battery which charges itself, something that would be revolutionary in energy storage. They have done this before, of course, but this time the electrodes are not made with typical battery materials, only base metals (copper, aluminium, zinc, and so on). Not only do the cells “self-charge”, but they do so in semi-regular bursts (they oscillate). The authors explain that this behaviour is due to effects of “negative resistance and capacitance”, not previously observed simultaneously, supposedly arising from the ferroelectric (more on this word later) properties of their electrolyte, the same glass electrolyte they have been working with for the last few years. To explain this, they borrow from current theory on so-called negative capacitance in ferroelectric materials relevant to semiconductor physics (i.e., ferroelectric field effect transistors), thereby, it would seem, bringing the understanding of that field to energy storage devices.

I am not especially familiar with field-effect transistors, but I will touch on this briefly to try and put it into some sort of context. However, as we will see in the remainder of this post, I think the whole exercise is like trying to analyse a Jackson Pollock painting from the perspective of the realist movement (sorry for the art metaphor, I couldn’t think of a better one).

The electrolyte is almost certainly not a ferroelectric glass of extraordinary properties; it is a wet mush of different salts

One of the big problems I highlighted in my first “glass battery” post is that the composition and structure of the electrolyte, from which many of its supposedly remarkable properties are derived, is never shown in any convincing sense — and that it was very unlikely to be what the authors claimed the composition to be, namely a Li3OCl glass with a small amount of Li exchanged for Ba. I reasoned that such a material, if it could be prepared at all, should be extremely susceptible to reaction with water, and would have to be kept exceptionally dry in order to have the properties they were claiming. However, they use water as a solvent for the synthesis, and do most of the material handling in air, where traces of water are unavoidable. So the dryness and purity of the material was doubtful from the start.

Since then, researchers at the Technical University of Graz in Austria have repeated the preparation of this material and carried out impressively rigorous and careful characterisation work to identify its true composition and structure — repeating Braga et al’s exact method. As expected, they found, convincingly, that the material they synthesise contains a substantial amount of hydrogen — from the water — forming instead something closer to Li2(OH)Cl. Even then, they found this compound begins decomposing immediately in contact with air, forming compounds such as lithium carbonate (Li2CO3) and, crucially, hydrated lithium chloride (LiCl.xH2O) - a compound which can form a glass with a very high conductivity. On this basis, then, it would seem that the conductive component in the “Braga glass” is LiCl.xH2O, mixed in with other impurities, and not the Ba-doped Li3OCl they assumed it to be.

This is crucial: high conductivity aside, the presence of free chloride ions and water means a lot of unwanted side reactions of the electrolyte in any lithium battery system — the sort of side reactions I previously argued were almost certainly taking place and were responsible for much of the supposedly unprecedented device performance. Removing traces of water is crucial in today’s Li-ion batteries.

Not only this — it also makes the interpretation of results in terms of the “ferroelectric” character of the electrolyte essentially unsupportable, since none of the compounds that are actually found in it are ferroelectric. And the authors have not, as far as I know, presented any results that actually show that the material behaves as a ferroelectric. But there’s more...

Some record-breaking properties are thanks to basic errors of calculation

This paper states, as do the previous papers, that the “glass” has an enormous dielectric constant of 10^6–10^7, which would be at least an order of magnitude larger than that of any other known material. How can it be so large? Well, because it has been calculated wrongly. Thankfully, in this paper, as in some of the others, the calculation is presented clearly enough that the error is easy to spot. But to explain the error, I need to give a bit of background information.

The problem stems from the fact that the material, whatever it’s actually made of, is not a pure dielectric - it is an ionic conductor. The distinction is crucial. If you take a thin layer of a dielectric or an ionic conductor, and sandwich it between two electrodes, you have a capacitor. You can pass current through it and it will store (or release) some amount of energy through the accumulation of charges on the surfaces of the electrodes. Dielectrics allow this to happen through the alignment of dipoles within the material. The more the dipoles can be polarised, the higher the dielectric constant (a measure of how far a material can be polarised), and the more charge can be stored on an electrode at a given voltage. Ionic conductors are also dielectrics (they have dipoles), but they also have mobile charges — ions like Li+ which move independently of the dipoles in the host material.

Now, this gives rise to a phenomenon called electric double layer capacitance (EDLC). When an electrode and an electrolyte with different surface energy levels are brought into contact, there is an energy difference that needs to be equalised — and this energy difference is stored in the EDLC.

The distinction is crucial, because this EDLC adds to the dielectric capacitance, and it is very large by comparison. Supercapacitors make use of EDLC to reach very high capacitances, but at the cost of a limited voltage range — because at too high voltages, the electrolyte breaks down.

This all means that it is very difficult to correctly measure the dielectric constant of an ionic conductor. If you don’t take it into account, and just try and calculate the dielectric constant from the total capacitance, you end up with unrealistically huge numbers. And that’s what Braga et al. have done.

Let’s take for example the size and thickness of the electrolyte layer Braga et al. report: a 6.25 cm2 electrode and a 1.5 mm thickness. If we imagine an EDLC of 20 µF/cm2 (a very typical number), such a cell with two identical electrodes has a total capacitance of 62.5 µF (it’s two EDLCs in series). Plugging these numbers into the parallel plate capacitor equation gives a dielectric constant of 1.7 x 10^7 — one order of magnitude lower than the 2 x 10^8 Braga et al. find. Adding in any effects of surface roughness (increasing the effective area), plus additional side reactions providing extra current (which can be seen in the data), can easily explain the difference.
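To make the arithmetic concrete, here is a minimal sketch (in Python, using the numbers quoted above) of the apparent dielectric constant you obtain if a double-layer capacitance is mistakenly plugged into the parallel-plate equation:

```python
# What "dielectric constant" comes out if a double-layer capacitance is
# (wrongly) plugged into the parallel-plate equation C = eps0 * eps_r * A / d?

EPS0 = 8.854e-12          # vacuum permittivity, F/m

area = 6.25e-4            # electrode area: 6.25 cm^2, in m^2
thickness = 1.5e-3        # electrolyte thickness: 1.5 mm, in m

c_edl_per_cm2 = 20e-6     # typical double-layer capacitance, F/cm^2
c_per_electrode = c_edl_per_cm2 * 6.25   # 125 µF per electrode
c_total = c_per_electrode / 2            # two EDLCs in series -> 62.5 µF

# Solve the parallel-plate equation for eps_r
eps_r = c_total * thickness / (EPS0 * area)
print(f"apparent dielectric constant: {eps_r:.2e}")   # ~1.7e7
```

The point of the exercise: an entirely ordinary interfacial capacitance, treated as a bulk property, produces a "record-breaking" number without anything remarkable in the material at all.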

The mistake here, if it wasn’t already clear, is that to get such huge numbers requires treating something that is exclusively a surface phenomenon as a bulk property of a material - and treating a process that isn’t dielectric capacitance as if it is. The only thing more remarkable than such a basic error being made is said error slipping through peer review not just once, but several times in journals of high “prestige”. If they get this aspect of capacitance wrong, then what else?

Negative capacitance?

Ferroelectric materials are a little out of my expertise, but the word has come up so often already I can’t really avoid it — so here goes. A ferroelectric is a material which exhibits a spontaneous polarisation (alignment of dipoles), which can be switched by application of an appropriate electric field (i.e., a voltage). After the applied voltage/electric field is removed, the material stays in the state it’s in. For this reason, they’re useful for devices such as transistors, because they can keep one or the other state (on or off) without needing to consume any energy to stay in a particular state.

Now, as far as I understand it — bear with me — because of the spontaneous nature of the polarisation, as they are switched from one polarisation direction to the other, they are in an unstable state during the transition. This gives rise to a phenomenon where the local electric field in the material is negative with respect to the external electric field, and a plot of polarisation vs voltage for the system becomes S-shaped (i.e., a region appears where the slope becomes negative, when ordinarily this is positive), indicating the so-called “negative capacitance” (NC) phenomenon. However, the overall capacitance in the system is still positive, otherwise it would violate the laws of thermodynamics. The material is unstable while it is in this negative capacitance region and will prefer to settle in one configuration or the other after a very short time. Nonetheless, this characteristic is of current interest in physics and ultimately electrical engineering since it gives rise to some useful electrical characteristics (not self-charge or self-cycling though…).

How this is connected to Braga et al.’s cell construction is difficult to follow. First: they present a diagram of how charges and dipoles supposedly align at the surface of a metal in contact with their electrolyte. The argument goes that everything necessarily aligns so that a layer with a (permanent?) negative capacitance forms near the surface of the metal.

The authors then connect this, somehow, to two stable states for the electrode itself (either the state of being Al metal, or being Li metal). How this connection between stable states of the electrolyte and stable states of the electrode comes about is not clear — I would say they are not related at all. The clearest indication I have as to where this falls down is here, where they describe how “self-cycling” starts:

As the self-cycling process starts, triggered by a critical thickness of the new plated phase on the negative electrode, the electrode’s Fermi level switches from µAl to µLi, forcing the movement of additional cations to the interface Fig. 1(b). Two electric fields with opposite directions are then applied to the mobile cations and, as the electric field due to the trapped negative charges E0 surpasses the electric field due to the electrode electrons E, the current reverses and discharge starts (|E’| > |E| => Ceq < 0)

The authors seem to suggest here that a situation arises where the total capacitance in the system becomes negative (Ceq < 0), which in theory should charge itself when it is nominally discharged. Even if this was the case, this provides no explanation at all for any conversion of energy from outside the cell. Any ‘self-charge’ would have to be a very local effect (i.e., at the interface only, for very short times). To have the overall cell charging itself, with no external input of energy, clearly breaks the laws of thermodynamics. I find it very challenging to follow the precise thinking here, but I believe it is grounded in much the same issue of believing that energy levels are fixed in certain positions that I discussed in my Part II post. But at some point it becomes too hard work to try and dig into yet more contrived theory, so let’s look at some experimental issues instead.

How not to detect Li metal plating

Now then. As we just saw, Braga et al. argue that a self-cycling process arises because lithium metal (Li) is plated onto an aluminium (Al) electrode - and the cell finds itself in a situation where atoms or whatever suddenly snap back like a spring, charging the cell to revert it to an earlier state. The authors try to support this interpretation by attempting to prove the existence of Li metal deposition using x-ray photoelectron spectroscopy (XPS). XPS is a technique which allows identification of elements and, importantly, their local chemical environment (depending on the precise detected energy of emitted electrons) at the very extreme surfaces of materials (i.e., the top few nm of a sample - this is key, as we will see shortly). With this technique, they find a peak in the XPS spectrum which can be clearly identified as coming from Li, and, comparing the position (binding energy) of this peak with a previously reported value, identify it as Li deposited on Al:

Li deposited on Al, Li/Al, was observed to show a Li 1s peak equal to 55.40 eV (Ref. 52) which coincides with the lithium peak observed in this study and shown in Fig. 3.

There are some problems with this. The first — and somewhat important — problem is that Li does not deposit as the metal on Al, it alloys with it, forming a range of Li-Al alloy compounds. The reaction of the two metals is spontaneous, so any sort of Li film deposited on Al would not last long.

The second is that Li metal and most of its compounds give peaks in the XPS spectra which fall in a very narrow range of binding energies. Braga et al. actually demonstrate this by listing a range of common Li-containing compounds, all falling between 54 and 56 eV. Immediately below, though, is Figure 3, which shows the Li 1s XPS spectrum with seemingly only one broad peak between around 53 and 57 eV.

So what does this mean? Well, it means that with the peaks from all the possible compounds being so close to each other, it is impossible to know in this case if the experimentally-obtained peak is a single compound or a mix of different ones — they all overlap, so it is effectively impossible to identify any particular Li compound (including the metal itself). This is not just an issue with this specific example - it is well known to XPS spectroscopists that so-called deconvolution of Li XPS spectra is difficult and generally unreliable for this reason. In effect, it just says that Li is at the surface, in some form or another. This is not surprising, because any number of reactions may have taken place to deposit Li compounds on the electrode.

What strikes me though, is that the authors considered that they would be able to detect metallic Li at all, given the description of the sample preparation. For reasons unclear to me — baffling in fact — the authors decided to seal their cells in epoxy resin, which they then had to “crush open”, in air, to get the electrodes out — which they then took into an Ar-filled glove box, to protect it from the air (?!). I would expect that most who have studied chemistry even at high school or similar know that Li metal reacts in air, forming a number of different compounds on the surface. Braga et al. even acknowledge this themselves, stating shortly after the previous quote:

The oxygen immediately reacted with the plated lithium on the surfaces of the electrodes

I don’t understand, then, why they even bothered with the glove box. Given that XPS only gives information from the very outer surfaces of samples, how can they expect to measure pure metallic Li? There is no chance, if a Li metal sample is exposed to air for any length of time, that any of the signal observed in the XPS comes from metallic Li itself. Yet, the authors conclude that:

It was shown by chemical analysis that Li plates on the negative electrode of an Al/90 wt. % Li -glass + 10 wt. % Li2S/Cu while charging. The amount of plated Li is sufficiently large to be clearly detected in XPS measurements.

Once again, it is hard to believe how such a basic and significant error makes it through peer review unchallenged.

Why should a small, low-power electrochemical cell overpower an expensive, purpose-built scientific instrument?

An important point I want to pick up on is the question of how the authors provide evidence that their cells are “self-charging”, or “self-cycling”, and what we can read into this. I will start by focusing on the “self-charge”, since this is easiest to understand, and which we have already touched on.

The ability of an electrochemical cell to supposedly “self-charge” is central to this paper, and has been a theme in a few of the previous papers by these authors. Although it is rarely spelled out explicitly, the implication in “self-charge” is that it is exactly what it sounds like — that a cell has an ability to extract energy from its surroundings, convert it into electricity, and recharge itself. Even if a cell could only do this slowly, and only while in operation, a battery that could really do this would extend its discharge time and, in effect, its energy density.

Self-charge, unsurprisingly, is rather unprecedented in batteries. But I can start to explain what self-charge could conceivably look like by discussing its opposite, better-understood phenomenon of self-discharge. Self-discharge is seen in many battery systems, where an unwanted, so-called “parasitic” reaction takes place between an electrode and something in the electrolyte, using the stored charge and energy in the cell to drive itself. Prominent examples include shuttling reactions from oxygen or ions such as nitrate in lead-acid, nickel-cadmium and nickel-metal hydride batteries, as well as the polysulfide redox shuttle in lithium-sulfur batteries.

Regardless of the precise chemical cause, a self-discharge reaction effectively means that a small electrical current is passing inside the battery instead of through the external circuit — and so the cell gradually loses its stored energy, which is wasted largely as heat. How much energy and how fast depends on the precise chemical cause.

We can see the effect of self-discharge in a number of ways. Often, the self-discharge means that a battery requires passing more charge (charge = current x time) to fully charge it than to discharge it — the ratio of discharge to charge capacity is called the coulombic efficiency, and gives us an indication of how much of a self-discharge effect there might be. We can also charge the battery, wait a while, and then discharge it and see how much charge it has lost in the meantime.
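As a minimal illustration of the ratio just described (with made-up numbers, not the authors' data), the coulombic efficiency is simply:

```python
# Coulombic efficiency: the ratio of discharge capacity to charge capacity
# for one cycle. A value below 100% hints at parasitic (self-discharge)
# reactions consuming part of the charge. Numbers here are hypothetical.
charge_capacity = 1.05      # mAh passed to fully charge the cell
discharge_capacity = 1.00   # mAh recovered on the following discharge

coulombic_efficiency = discharge_capacity / charge_capacity
print(f"coulombic efficiency: {coulombic_efficiency:.1%}")  # ~95.2%
```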

We might then expect that a self-charge process should look similar, but opposite: some process converts energy from outside the cell — say, heat — into electricity, recharging one or both electrodes in the process, without extra current having to flow from an electrical power source. We could expect to see a cell that takes less time to fully recharge than it does to discharge at the same current.

Do Braga et al.’s results show this? Well, no. In both tests where cells are tested with a constant current discharge (by convention, a negative current), the authors point out that self-charge is evidenced by measuring a positive (charging) current. That is, a different current, flowing in the external circuit.

We highlighted that the overall measured current remains positive even if the cell is set to discharge by setting a negative control current, Icon.

This is, then, not at all what we should expect. The authors argue that the processes happening inside the cell in some way force their testing instrument (a potentiostat/galvanostat — an instrument designed to apply controlled potentials or currents to an electrochemical cell) to pass a different current than the one the instrument is trying to apply. The instrument is supposed to discharge the cell with a constant current of -1 µA, but instead measures a current of a few µA flowing in the other direction (i.e., charging), which also fluctuates up and down (that’s the "self-cycling" part).

…the potentiostat cannot reverse the control current to positive when set to a discharge. The measured current is positive (in the opposite direction to the control current) and is entirely due to phenomena occurring in the cell that overcomes the control current.

The authors argue that it is physically not possible for the instrument to apply a positive (charging) current when it is programmed to apply a negative (discharge) current, because the instrument needs to be fixed in a configuration where it can only do one or the other. Therefore, the argument goes, if you tell the instrument to discharge, but it measures a charging current, it is the cell that is somehow forcing this charging current through the circuit. This is extraordinarily unlikely. The instruments the authors are using here, which test one cell at a time and cost in the range of 10,000–20,000 EUR each, are purpose built to pass whatever current, or apply whatever potential, the operator requires with very good accuracy, and to react very quickly to any changes in the cell that might affect this — so long as the instrument is used correctly, and the operator understands its limits. They have a compliance voltage well in excess of the voltage of these cells, so they are capable of applying whatever voltage is needed to ensure that the current measured through the circuit is what was asked for, to within a fraction of a percent.

What they are not, however, is foolproof. In the hands of an inexperienced operator, with the wrong instrument settings, connected up incorrectly, bad contacts, or just if some component on the instrument is broken, they can behave unpredictably and give spurious results. There is one particularly strong indication that this is the case here, in the Supporting Information: the authors discharge a commercially available button cell battery with the same low current, and although it does not appear to oscillate significantly, the instrument measures a significantly different current than the applied current, outside of the specification of the instrument.

It would be easy to test whether this behaviour is a true effect of the cell or not. Try the same thing with different settings, perhaps, and different instruments. To be fair to the authors, they have shown results from different instruments and argue this is not an effect of a broken instrument. However, every experiment is very different, there is never any consistency or systematic experimental design - and so this proves nothing. For all we know, the reason the experiments are so different is because they are not repeatable, and these are just those which gave the “self-charging” result. It is certainly the case that the experiments do not look as if they were designed to test for “self-charging”.

Have oscillations in electrochemical systems really never been seen before?

I want to highlight one final, technical point, which I find truly astonishing. It has been a common theme in this wider “glass battery” development to make sweeping and lofty claims, but I was startled all the same to see the clear statement in this paper that oscillations in electrochemical systems have not previously been reported “or anticipated”.

The reality is that electrochemical systems which oscillate — for whatever specific reason — are well known and have been studied for decades. Anyone can learn this by making a basic Google Scholar search. Really, the only way Braga et al. could be under the impression that they have not been seen before is because they did not look.

Considering the authors seem to want to give the impression that they are breaking new ground by forging new links between different disciplines, it is unbelievable to me that they seem not to have tried to learn from the field of electrochemistry — ironically, the field that gave birth to batteries in the first place.

What if I’m wrong, and the theory’s right?

Ok — let’s say for the sake of argument that their interpretation of the results turns out to be right, and they really have this remarkable electrolyte which makes self-charging batteries possible. How significant would these results be, in terms of the technology we have now?

Compared with where battery technology is today, I’m afraid the answer is “not at all”. Modern Li-ion battery electrodes can store a charge equivalent to around 3 mAh per square centimetre of electrode area (mAh/cm2), can be easily charged in 1 hour (at a current of 3 mA/cm2), and cycled hundreds if not thousands of times. Even many academic researchers in the battery field, I believe, are unaware of how high this bar actually is.

In many of the experiments here, Braga et al. are discharging their cells, which have an area of 6.25 cm2, with a current of 1 µA - that’s 0.00016 mA/cm2. If you tried to charge a modern Li-ion battery, like the sort you find in almost any device now, with that level of current (scaled appropriately), it would take you more than two years to charge it from empty to full. These are minuscule currents, miles away from any current energy storage technology. The supposed self-charging effect, even if it were to exist, is only shown to be up to a few µA, and since we have no evidence that it scales, I have no reason to believe it could ever be put to practical use. None of the performance figures are remotely useful if you look at them objectively — none at all.
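The arithmetic behind that two-year estimate can be sketched as follows (assuming a typical 3 mAh/cm2 Li-ion electrode, as quoted above):

```python
# How long to charge a typical Li-ion electrode at Braga et al.'s
# current density? (3 mAh/cm^2 is a typical modern areal capacity.)
areal_capacity = 3.0              # mAh/cm^2, typical Li-ion electrode
current = 1e-3                    # 1 µA, expressed in mA
area = 6.25                       # cm^2, cell area reported in the paper

current_density = current / area  # -> 0.00016 mA/cm^2

hours = areal_capacity / current_density
print(f"current density: {current_density:.5f} mA/cm^2")
print(f"time to charge:  {hours:.0f} h (~{hours / 24 / 365:.1f} years)")
```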

And what about the energy which is converted in order to drive this self-charging reaction — where does that come from? In this paper, the authors never directly say so. We get a hint early in the introduction:

A high-priority technical target today is to harvest energy coming to earth daily from sunlight and wind energy, but there is no known way to harvest, clearly and efficiently, energy from waste heat.

This is not expanded on in the rest of the paper. Only in comments made to the media, and in this patent entitled “Heat energy-powered electrochemical cells”, do the authors reveal that they are claiming their cells self-charge by directly converting heat to electricity. The theory does not explain this. But more to the point, even if the effect existed, I don’t believe it could even be measurable in these experiments.

If the glass is converting heat into electricity — say, its own heat — I would expect that it should cool down. I estimate, based on the cell dimensions given, that the authors have about 1.4 g of their electrolyte in each cell. The maximum “self-charging” current we see in these experiments is around 5 µA. At a cell voltage of around 3 V, that’s a power of 15 µW. Now, we can assume that the heat capacity of the electrolyte is around 1 J per gram per °C — about what it is for most battery materials. This says it takes 1 J (1 W for 1 s) to raise the temperature of 1 g of material by 1 °C. In this case, completely converting heat into 5 µA of current at 3 V would take 1.4 J / 15 µJ/s ≈ 93,000 seconds, or about 26 hours, to cool the glass by 1 °C — assuming it isn’t warmed back up by anything in the surroundings in the meantime. If this were at all detectable, I would be amazed.
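A quick sketch of this energy balance (the 1.4 g mass and 1 J/(g·°C) heat capacity are my own estimates, as stated above):

```python
# If "self-charge" were powered by the electrolyte's own heat, how long
# would it take to cool the glass by 1 °C at the observed current?
mass = 1.4            # g of electrolyte (estimated from cell dimensions)
heat_capacity = 1.0   # J/(g.°C), assumed, typical for battery materials
current = 5e-6        # A, maximum observed "self-charge" current
voltage = 3.0         # V, approximate cell voltage

power = current * voltage                 # 15 µW drawn as electricity
energy_per_degree = mass * heat_capacity  # 1.4 J to cool the glass 1 °C

seconds = energy_per_degree / power
print(f"time to cool by 1 °C: {seconds:.0f} s (~{seconds / 3600:.0f} h)")
```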

Numbers are important.

Summary

I have followed this line of work for three years now. Initially, I found the whole thing curious — instinctively I knew that the eye-popping claims of an 8,000 Wh/kg battery that simply pulled Li metal off one side of an esoteric electrolyte and deposited essentially the same Li metal back on the other side had to be impossible.

But with so much vague and conflicting information, it was hard to put together a coherent explanation for what was really happening. Others preferred to focus on pointing out where the theory that the authors were proposing was violating the laws of thermodynamics. However, I got the feeling that many readers who are not experts in this scientific area felt they probably couldn’t judge this for themselves, took the main performance claims at face value and concluded that “it looks like it works, but we don’t know why”. My aim with these posts has been to convey the message that at first glance it might look like it works, but it wouldn’t in practice — because there’s no way you can trust these results.

I figured, perhaps naively, that the results would soon prove to be unreplicable, and that a strongly skeptical response from the battery community would put the brakes on this work a little bit. That didn’t happen, at least not straight away, and it remains to be seen whether it will. So more work got published; I was interested in where this was going, thinking that it had to collide with reality fairly quickly, and people asked me questions about it — so I read, and sometimes wrote…

Over three years some of my initial hunches have been borne out, though not necessarily in the way I had thought. Some of this is thanks to excellent work on the part of other scientists, such as the TU Graz team mentioned above — but some of it has come from the authors themselves. With nine papers to read, as well as numerous media interviews, the key arguments have popped up in enough places and in enough forms that one can start to follow the thinking a lot more clearly, and spot the mistakes a lot more easily.

And after three years I feel quite comfortable in drawing my conclusion: this body of work bears all the hallmarks of pathological science. From the beginning, the clear objective has been the development of supposedly groundbreaking battery materials that would change how we think about energy storage research. These are all very hard papers to read — they do not give the impression that the authors are putting much focus on explaining their theory and results so that readers understand them as well as possible.

Every result is viewed from the perspective that the glass electrolyte material is essentially perfect — defect-free, super-conductive, stable in all relevant conditions, with no unwanted side-reactions to speak of — even though there’s no real evidence they know what the electrolyte is. Materials are assumed to be free of well-known havoc-wreaking impurities like water and oxygen, even though they are handled extensively in air. For every unexpected result, a previously unanticipated aspect of materials physics theory has to be constructed to explain it; no unwanted degradation reaction — generally the bane of any other new battery material — or artefact of the experimental method can be considered as a sufficient explanation instead.

Experiments are presented without any sense that they constitute a logical experimental matrix to test for the effect of a particular variable — rather that they are expediently presented to “confirm” a particular aspect of contrived theory. Several key claims, such as the observation of the supposed “self charge” phenomenon, were first communicated to the world through media interviews rather than a scientific paper. In several cases, some of these papers made references to results in unpublished work, and in one case even in a review paper — which is certainly the first time I ever saw a review of work that hadn’t yet been published. Isolated performance characteristics, measured in tiny devices passing tiny currents, are linearly extrapolated into completely unrealistic figures for hypothetical real devices, and passed on entirely uncritically to an unsuspecting general public, which has an increasing interest in new battery breakthroughs. I could go on.

I don’t think I have met many people in the battery field in the last three years who didn’t immediately recognise that this was flawed work and, in most respects, a dead end, although most would not say so openly. So it is with some bemusement that I read about Hydro-Québec’s recent announcement that they intend to start work on commercialising the patents related to this work. I do not know how deep Hydro-Québec’s interest really is. But much of the press that accompanied the announcement (e.g., here, here) repeats some of the aforementioned performance claims — claims that are completely unsupportable. I don’t know what the result of Hydro-Québec’s project will be in the two years they will supposedly work on this, but I will happily take bets on it not being a super-high-energy, super-fast-charging Li3OCl-based glass battery with a lifetime of 23,000 cycles. I may renew my earlier guess that it will quietly disappear.

A final note

I of course realise that with this post I am strongly criticising work which has a recent Nobel Prize winner in John Goodenough as a major contributor, something which might be thought politically unwise. I, naturally, hold the achievements for which he won the Prize in the highest regard. But it should not be controversial to make the scientific observation that in this case, he and his co-authors are way off the mark.

Some readers may be of the opinion that in light of John Goodenough’s achievements and track record, that he must be onto something and we’re all missing something. First, I will point out that argumentum ab auctoritate is a logical fallacy; smarter men than me have been wrong before. Second, I have made every effort to explain the authors’ results as honestly as I can and my perspective as clearly as I can. Comments are open here, and if someone thinks I have made a mistake and can show it clearly — then by all means, please do.

And lastly — some readers may consider it inappropriate to place such criticism “in the open” like this. I don’t have an academic job anymore: I write this in a private capacity, purely out of personal interest. I have no incentive to bury this in some paywalled “peer-reviewed” journal. But more to the point — in my view, any scientific research which receives and seeks public exposure in the way this has — never mind public funding — does not have a valid claim to immunity from public criticism. I don't think it's fair to argue that we can have the praises of research papers sung in the press while criticism must be quietly and tediously submitted through the "proper" channels. I have written this post precisely because of the public exposure this work has had, and because I see it, in part, as an unusually prominent and extreme manifestation of many issues we have with scientific integrity in the battery research field (but that's another discussion, and maybe another post). We scientists need to be able to say openly: “we can and should do better than this”. Especially of our role models with Nobel Prizes.

Follow this link to comment...


Two new web apps

I'm happy to add two new Shiny web apps to these pages, which might be of use and/or interest to those working or otherwise interested in electrochemistry and battery science.

First is an app for simulating the impedance of equivalent circuits - this is part of my introduction to impedance spectroscopy, and provides a simple text-based interface for simulating a huge variety of different equivalent circuit models, to help build an understanding of how combinations of different elements contribute to the total spectrum.

The second is an app for estimating the energy density of Li-ion batteries. This is a development of an earlier app I'd made to do a similar job, now improved to consider the geometry of common cylindrical Li-ion cells.

Check them out if you're interested.



My week @realscientists

From 18 - 24 November 2018 I had the opportunity to curate the @realscientists Twitter account, and talk about batteries and our research at Uppsala University to that account's almost 70,000 followers.

It was quite a week - intensely busy, but fun to do, and the response from those who followed my tweets was great to see.

I've preserved my activity for posterity with links to all of my major talking points (as threads) below.

Sunday

Intro · Fundamentals of battery chemistry · A brief history of rechargeable batteries

Monday

A look at the ÅABC lab · How to make an electrode · Small-scale pilot line · How to build a pouch cell battery · Applications of electrolysis

Tuesday

Research in real-time: electrode preparation in the glove box · Operando vs post mortem analysis · Battery storage masterclass

Wednesday

SEC and academia-industry collaboration in Sweden · History of Li-ion batteries · Future battery chemistries?

Thursday

Our research on Li-S · ÅABC diffraction lab · Na-ion development in the lab · What's the situation with solid-state batteries?

Friday

Where does the lemon battery get its energy from? · The chemistry of thermal runaway

Many thanks as well to Rickard Eriksson, Nataliia Mozhzhukina, Robin Lundström, Erik Berg, Yu-Chuan Chien, Yutaro Sashiyama, Ashok Menon, Reza Younesi, Le Anh Ma, Ronnie Mogensen and Yonas Tesfamhret for helping me with providing content and setting up demos throughout the week!
