
How AI is Shaping Weather Research and Forecasting: An Interview with Amy McGovern

Feature Story


By Sara Frueh

Last updated January 18, 2024

This is the first in a series of interviews exploring how artificial intelligence is affecting a range of fields in science, engineering, and medicine. 

Amy McGovern leads the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography at the University of Oklahoma, where she is also the Lloyd G. and Joyce Austin Presidential Professor, with a dual appointment in the School of Computer Science and the School of Meteorology. Her research focuses on developing and applying AI techniques to a variety of real-world applications, with a special emphasis on severe weather. McGovern is a member of the National Academies’ Board on Atmospheric Sciences and Climate and was among the speakers at last fall’s workshop on AI for Scientific Discovery.

How is AI changing and improving weather forecasting so far?

McGovern: Weather forecasting has generally relied on physics-based models, which are very computationally intensive. They take a long time to run, and they’re always behind if you’re trying to do real-time forecasting. And they become less accurate the further out you go in time, toward the 10-day or 14-day scale. So there are a variety of ways that AI can be used to try to improve that.

For example, AI is being used to improve ‘nowcasting,’ which is forecasting on a super-short time horizon — usually zero to 60 minutes. The current physics-based models are always behind, but AI is really good at ingesting data quickly and giving you an answer quickly. Training an AI model takes a long time, but once it’s trained, running it forward is really fast. We’ve done that with hail nowcasting, which tries to forecast the probability of severe hail over the next hour.

At the weather scale — which is beyond nowcasting and into the next couple of days — AI can also be used to improve forecasts. This started with bias correction; every weather model that exists has some bias in it — a model might tend to predict too hot, for example, or tend to push storms farther east than they actually go. Or European models may trend one way and American models another. AI can help correct those biases by combining all of the models to come up with something that is very reliable: when it says there’s an 80 percent probability of X happening, X happens 80 percent of the time.
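
To make that calibration idea concrete, one way to check it is to bin forecast probabilities and compare each bin’s average predicted probability with how often the event actually occurred. The sketch below is only an illustration of that reliability check; the function and the numbers are invented for this article and are not taken from McGovern’s work or any operational system.

```python
# Minimal, illustrative reliability check (hypothetical data, not an operational tool).
import numpy as np

def reliability_table(forecast_probs, observed, n_bins=10):
    """Compare mean forecast probability with observed event frequency, per probability bin."""
    forecast_probs = np.asarray(forecast_probs, dtype=float)
    observed = np.asarray(observed, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for low, high in zip(edges[:-1], edges[1:]):
        in_bin = (forecast_probs >= low) & (forecast_probs < high)
        if in_bin.any():
            rows.append((forecast_probs[in_bin].mean(),  # mean predicted probability
                         observed[in_bin].mean(),        # observed event frequency
                         int(in_bin.sum())))             # number of forecasts in this bin
    return rows

# Hypothetical forecast probabilities and outcomes (1 = event occurred, 0 = it did not).
probs = [0.10, 0.80, 0.85, 0.78, 0.20, 0.82, 0.15, 0.79]
events = [0, 1, 1, 1, 0, 1, 0, 0]
for pred, obs, n in reliability_table(probs, events, n_bins=5):
    print(f"predicted {pred:.2f}  observed {obs:.2f}  (n={n})")
```

For a well-calibrated forecast, the predicted and observed columns track each other closely across bins.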

Now scientists are using AI in lots of other ways to improve the models themselves — like putting AI components into those models so that they can be a lot faster and more accurate, and can take in larger amounts of data in the same amount of time. People are also working on pure AI forecasts, but those are not ready for prime time yet. They will be, but they’re not yet.

Do you see AI actually making forecasting more accurate? Is it working in the real world?

McGovern: AI is being used primarily by private industry right now. NOAA is getting there, but they are necessarily cautious in adopting new technologies, because they are the government entity charged with operational forecasting, and they don’t want to lose the public trust. They are currently testing a bunch of different AI technologies. I know that NOAA’s Storm Prediction Center is using some AI predictions to help them make their four- to eight-day forecasts. NOAA’s Weather Prediction Center in Silver Spring is using one of the products we developed, which identifies where cold fronts, warm fronts, and stationary fronts are; it’s not making forecasts, but it saves forecasters time, so they can spend that time working on other things. NOAA also has a real-time rip current forecasting system in place that uses AI, and there are other AI products already in use throughout NOAA.

Within private industry, there’s a lot of AI-based forecasting happening. For example, Google uses its AI precipitation forecasting when you search for the weather. And the weather app on your phone is using AI already; when you pull up your Weather Channel app, for example, it uses AI to take large-scale global forecasts and narrow them down, to give you a very precise forecast for where you are. Industry is also selling AI forecasts to larger-scale customers, like airlines or farming companies.

You study high-impact weather events like hurricanes and tornadoes. How is AI shaping research in that area? What is it revealing to you?

McGovern: We’re trying to build a lot of understanding of the foundational science related to these storms. So AI is enabling us to do things like look at very, very large datasets — collections of all the tornado reports, or simulations of tornadoes at high resolution — and look for patterns that humans can’t see, because humans just can’t deal with that amount of data without being overwhelmed.

To study the interior of tropical cyclones, we have used real-world satellite imagery to train AI to generate simulated radar images of tropical cyclones. We’re going to have a huge database of these images, which is going to enable new science that we couldn’t have done otherwise.  

People ask, “Don’t you have any worries about looking at simulated data?” Yes, I think there are some big caveats there. But the simulated data is based on the ground-truth data that we do have — so it’s not hallucinatory. And it’s giving us a dataset that we don’t have any other way.  

Are there ethical questions that you grapple with in using AI for weather forecasting or research?

McGovern: People originally didn’t think that weather had any bias, because it’s all objective sensors, right? It’s just a satellite, it’s just a temperature sensor, it’s just an air pollution sensor. But there’s a lot of bias in that sensor data, in ways that people didn’t necessarily think about.

For example, in a paper that my colleagues and I recently published, we include a graph depicting weather radar coverage in the Southeastern U.S., and it shows that the best low-level radar coverage — where you see the tornadoes, because they are close to the ground — misses a lot of rural counties with large Black populations. This wasn’t done on purpose — the National Weather Service was aiming for coverage of the major cities, where the most people are. But something like that could cause a significant bias. What if you trained an AI model without realizing that it relied only on this low-level data? You could end up not calling any tornado warnings for areas with heavily Black populations. So that’s an example of unintentional bias.

Another example is air pollution. Air pollution measurements are often crowdsourced. Where do you have more air pollution sensors? Where there are more affluent people, because it’s easier for them to afford the sensors used in crowdsourcing. So now you’re not measuring air pollution in the places that probably need it the most, like the inner cities. Then you’re training a model to make predictions about air pollution, and you might be giving wrong predictions, not because you meant to, but because you didn’t understand the inherent biases in the data.

Is the weather community dealing with these bias issues? Have they figured out how to address them?

McGovern: I think a lot of people are just like, yeah, we see that, but we’re going to keep doing our thing. And I find that a little frustrating. NIST put out a general categorization of AI biases, and we just put out a new categorization of AI biases for weather. Our goal in that paper is to categorize biases so people can understand that there are different kinds of bias in Earth science data. We’re also working on another paper that shows how to address some of those biases and offers some standard ways to do that.
