New software and advancements in lifecycle assessment establish limits on environmental testing

Published Nov. 18, 2019
Screenshot of FELCI software

The screenshot shows the output of the FELCI software. The x-axis is the cost of testing, with a vertical line drawn at an example available budget of $40K. The y-axis is the mean calculated error of the impact score from the USEtox lifecycle assessment method. The plot shows the Pareto front, with the optimal portfolio of tests presented in red, based on the environmental tests described in the Army’s Developmental, Environment, Safety and Health Evaluation document. The plot shows the degree of error reduction in the LCA impact score achieved with each increment of additional testing funds.

Vicksburg, Miss. ⸺ Customers will likely never know how much research went into the new software called Forecasting Environmental Lifecycle Impacts, or FELCI, that Dr. Michael Mayo, a research physical scientist with the U.S. Army Engineer Research and Development Center-Environmental Laboratory, helped develop.

That’s because FELCI is the result of a six-year research journey Mayo undertook to redesign the entire lifecycle assessment methodology for the Army.  

“When we started looking into it, we realized the LCA method that was being used as a basis for making huge financial decisions was glossing over important, complex aspects of fate and transport, and our suspicion was that the LCA framework at the time was not providing answers at a fidelity level necessary to inspire confidence,” Mayo said. “We thought if we could make the model better, we could improve prediction accuracy for the Army.”

“The project came out of the Environmental Quality and Installations business area,” said Dr. Mark Chappell, a research physical scientist with ERDC-EL and the project team lead. “When Assistant Technical Director Amy Borman called us to tell us about the project, I viewed it as something that could be useful to us.”

Chappell knew that the EL had all the capabilities needed for the LCA elements. He added that engineers frequently approach scientists wanting a tool that helps them decide between two courses of action, and that it can be challenging for scientists to provide exactly what the engineers are looking for.

Mayo described how, when the Army produces materiel such as munitions, some of the waste from the production process escapes into the air, water, soil and sediment, potentially creating future liability for the Army. The LCA methodology accounts for what these emissions will do to ecosystems in terms of certain environmental indicators, such as the mortality of a population of animals exposed to industrial waste products. The organization producing that technology or product could be held responsible for these and other adverse effects in the future.

Sensitivity analysis and the fate and transport dimension

“If you run an LCA calculation, you describe the interaction between the waste and the organisms, and that information is rolled up in this gigantic model, and the model gives you an impact score,” said Chappell. “And one of the first things Michael did was to program these models and run what we call a sensitivity analysis. And we were astonished at the result.

“It turned out that the LCA impact score was often incredibly inaccurate, and the score could come out inappropriately low, yet people were pinning all their decisions on it.”

Mayo agreed. “Nobody had ever looked at the LCA method at the level of detail that we were examining it. Even being very careful to put roughly similar values in, you would get wildly dissimilar values coming out ⸺ you could get an impact score of one out of the model, and then you might put in slightly different parameters, and you would get a result of 1,000,000.”

The first patent from this project involves a process to incorporate this uncertainty into LCA decisions. The team asked the question: What is the inherent error in these highly utilized, very valuable LCA models?
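
The article gives no equations, but the flavor of such a sensitivity analysis can be sketched in a few lines of Python. Everything below is a hypothetical stand-in: the toy multiplicative model and all parameter values are invented, not the USEtox characterization chain. The point is only that modest lognormal perturbations to a chain of multiplied factors compound into an output spread covering orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_impact_score(fate, exposure, effect, emission_kg):
    """Hypothetical multiplicative impact chain: score = fate * exposure * effect * mass.
    A stand-in for the far more complex USEtox calculation."""
    return fate * exposure * effect * emission_kg

nominal = dict(fate=2.0, exposure=0.5, effect=10.0, emission_kg=100.0)
print(f"nominal score: {toy_impact_score(**nominal):.0f}")

# Perturb each input by a lognormal factor and collect the resulting scores.
scores = np.array([
    toy_impact_score(**{k: v * rng.lognormal(sigma=1.0) for k, v in nominal.items()})
    for _ in range(10_000)
])
print(f"5th-95th percentile of perturbed scores: "
      f"{np.percentile(scores, 5):.0f} to {np.percentile(scores, 95):.0f}")
```

Because the factors multiply, their uncertainties compound: four modest perturbations are enough to spread the output across roughly three orders of magnitude, which is the kind of behavior Mayo describes.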

“Once we had a window into how bad the situation was, we increased the fidelity of the ecotoxicological modeling,” Mayo said. “It was in the fate and transport aspect of the model where you could get that crazy range of answers. Fate and transport pertains to, for example, the underlying physical processes for how a chemical emitted from a smokestack accumulates downwind in the environment.”

“That value, the concentration in the sediment, matters a lot, because an organism will behave differently if it’s exposed to a small value versus a larger one. Under the old LCA methodology, accounting for air and water was pretty accurate, but soil and sediment were way off, and that’s where a lot of the accumulation occurs,” Mayo continued. “If I don’t even know how much the organism is being exposed to, how can I accurately predict what the organism is going to do?”

The team’s algorithms aim to correct these and other deficiencies on the fate and transport side of the LCA methodology, with regard to how chemicals are calculated to be distributed throughout the environment.
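
A stripped-down illustration of the fate-and-transport idea is a steady-state box model, in which an emission enters one compartment and first-order rates move mass between air, water, soil and sediment. Every rate constant below is invented for illustration; a real multimedia model derives them from chemical and landscape properties. Even the toy version shows why sediment matters: slow degradation there lets mass accumulate.

```python
import numpy as np

compartments = ["air", "water", "soil", "sediment"]

# k_transfer[i, j]: first-order rate (1/day) of mass moving from compartment j to i.
# All values are illustrative, not measured.
k_transfer = np.array([
    [0.00, 0.00, 0.00, 0.00],  # into air
    [0.05, 0.00, 0.01, 0.00],  # air -> water (deposition), soil -> water (runoff)
    [0.10, 0.00, 0.00, 0.00],  # air -> soil (deposition)
    [0.00, 0.08, 0.00, 0.00],  # water -> sediment (settling)
])
k_degrade = np.array([0.20, 0.05, 0.01, 0.001])  # loss within each compartment

# Rate matrix K for dm/dt = K @ m + emission; each diagonal entry collects a
# compartment's total outflow plus its internal degradation.
K = k_transfer - np.diag(k_transfer.sum(axis=0) + k_degrade)

emission = np.array([1.0, 0.0, 0.0, 0.0])  # 1 kg/day emitted to air (the smokestack)

# Steady state: K @ m + emission = 0  =>  m = solve(K, -emission)
m = np.linalg.solve(K, -emission)
for name, mass in zip(compartments, m):
    print(f"{name:>8}: {mass:7.1f} kg at steady state")
```

With these made-up rates, sediment ends up holding far more mass than air or water even though nothing is emitted to it directly, which is why an error in the sediment term propagates into the exposure estimate.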

Toxic equivalency

“The second patent is focused on the toxic equivalency aspect of the LCA methodology ⸺ how you relate the toxicity of the exposure to other things that you know,” Chappell said. “When you get an impact score from the LCA, you have to interpret that score. The way you do that is to relate it back to the properties of a chemical you know.”

“In the Army, we’re usually talking about munitions,” Chappell continued. “We picked trinitrotoluene, or TNT, as the base chemical to which we compared everything else. Michael figured out a way to get really tight relationships using a more science-based methodology grounded in data; we developed a non-linear toxic equivalency method to be able to relate the toxicity of two chemicals based on how the individual chemicals affect an identical population of organisms.”
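
The non-linear equivalency Chappell describes can be illustrated with textbook log-logistic (Hill) dose-response curves: compute the effect a dose of some chemical produces in a population, then invert TNT’s curve to find the TNT dose producing the same effect. The fitted parameters below are hypothetical. The takeaway is that when the two curves have different slopes, the equivalency factor changes with dose, so no single linear factor can relate the chemicals.

```python
def hill_effect(dose, ec50, slope):
    """Log-logistic (Hill) dose-response: fraction of the population affected."""
    return dose**slope / (ec50**slope + dose**slope)

def tnt_equivalent_dose(effect, ec50_tnt, slope_tnt):
    """Invert TNT's dose-response curve: the TNT dose producing the same effect."""
    return ec50_tnt * (effect / (1.0 - effect)) ** (1.0 / slope_tnt)

# Hypothetical parameters (mg/kg) fitted to dose-response experiments on an
# identical population of organisms; "chemical X" has a steeper curve than TNT.
EC50_TNT, SLOPE_TNT = 50.0, 1.0
EC50_X, SLOPE_X = 20.0, 2.5

for dose_x in (5.0, 20.0, 80.0):
    effect = hill_effect(dose_x, EC50_X, SLOPE_X)
    d_tnt = tnt_equivalent_dose(effect, EC50_TNT, SLOPE_TNT)
    print(f"{dose_x:5.1f} mg/kg of X -> effect {effect:.2f} "
          f"-> {d_tnt:7.1f} mg/kg TNT-equivalent (factor {d_tnt / dose_x:.2f})")
```

In this toy run, the TNT-equivalency factor swings from about 0.3 at low doses to 20 at high doses; a single linear factor would badly misstate toxicity at one end or the other.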

The Developmental, Environment, Safety and Health Evaluation

Chappell described how he, Mayo, Dr. Jonathan Brame, an ERDC liaison officer, and another former ERDC-EL employee, Matthew Brondum, developed FELCI as a tool that could be used by LCA practitioners. “Now that we had the LCA that rolls all kinds of information together into an impact number, we took the LCA impact number and superimposed it on top of the environmental risk protocol that the Army uses for munitions compounds, called the DESHE, or Developmental, Environment, Safety and Health Evaluation.”

FELCI makes a prediction about the minimum amount of information needed to reach a certain confidence level in the impact score. Then, based on the DESHE, FELCI prescribes a list of experiments the Army needs to perform in order to assess the environmental risk.

“Every experiment has a cost, but the DESHE has hundreds of experiments, and you’d burn through all your budget just performing all the experiments,” Mayo said. “You need some strategy for picking and choosing which experiments will give you the most bang for your buck.”
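
In simplified form, choosing which DESHE experiments to fund within a budget is a 0/1 knapsack problem. The sketch below assumes, purely for illustration, that each test has a known cost and an independent, additive reduction in impact-score error; all test names and numbers are hypothetical. Sweeping the budget traces out a cost-versus-error curve like the Pareto front in the FELCI screenshot above.

```python
def best_portfolio(tests, budget):
    """0/1 knapsack: maximize total error reduction without exceeding the budget.
    dp[c] holds (best reduction at total cost <= c, the chosen set of tests)."""
    dp = [(0.0, frozenset())] * (budget + 1)
    for name, (cost, reduction) in tests.items():
        for c in range(budget, cost - 1, -1):  # descending so each test is used once
            candidate = dp[c - cost][0] + reduction
            if candidate > dp[c][0]:
                dp[c] = (candidate, dp[c - cost][1] | {name})
    return dp[budget]

# Hypothetical DESHE-style tests: name -> (cost in $K, error reduction in score units)
tests = {
    "acute_fish_tox":     (8,  0.9),
    "soil_sorption":      (5,  0.6),
    "sediment_bioassay": (15,  1.8),
    "biodegradation":    (12,  1.1),
    "earthworm_tox":      (7,  0.7),
}

reduction, chosen = best_portfolio(tests, budget=40)
print(f"$40K budget: run {sorted(chosen)} for an error reduction of {reduction:.1f}")

for b in range(10, 51, 10):  # sweeping budgets traces the Pareto front
    print(f"budget ${b}K -> best reduction {best_portfolio(tests, b)[0]:.1f}")
```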

Chappell mentioned a crucial aspect of the entire effort. “At some point, if you add more experiments and get more data, it doesn’t change the error in your LCA impact score. Without any environmental performance metrics to evaluate whether an impact score is good or bad, all we can say is that the number has a lot of error in it, and you need more data.

“We can calculate how many data points you need to try to minimize that error, so the DESHE portion of FELCI is predictive. Based on the information we have in an inventory ⸺ what this chemical is, where it comes from ⸺ we can make a calculation for the number of experiments to run and provide a certain level of confidence in that number.

“If you run those experiments and enter that data into the FELCI software, FELCI will then run the calculation and tell you whether the increase in confidence we projected actually occurred, and you can iterate through this process until you reach the desired level of confidence.”
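
The diminishing-returns behavior Chappell describes has a familiar statistical shape: the standard error of a mean falls only as the square root of the sample size, so each additional experiment buys less error reduction than the last. A minimal sketch, assuming FELCI’s confidence calculation behaves qualitatively like a standard error (the real calculation is more elaborate):

```python
import math

def replicates_needed(sigma, target_se):
    """Smallest n with sigma / sqrt(n) <= target_se; a stand-in for FELCI's
    prediction of how much data a given confidence level requires."""
    return math.ceil((sigma / target_se) ** 2)

sigma = 4.0  # hypothetical spread of a measured toxicity endpoint
for n in (1, 4, 16, 64, 256):
    print(f"n = {n:3d}: standard error = {sigma / math.sqrt(n):.2f}")

print("replicates for SE <= 0.5:", replicates_needed(sigma, 0.5))
```

Halving the error takes four times the data; past a point, more experiments barely move the number, which is the plateau Chappell describes.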

Chappell said the beauty of the software and the research behind it is that the Army can now communicate with regulators on the value added by additional experiments; that is, at what point is an impact score unaffected by additional data? “Regulators often say, ‘Well, I don’t see enough data; we need more data.’ But we know now that if you keep rolling in data perpetually, you’re not going to get any more improvement in the overall score, so there’s no point in getting more.

“The Army now has the ability to push back a little against regulators calling for ever-increasing amounts of data; we can also set a clear budget for a full-scale environmental evaluation,” he added.