Trends in Technology

Weather Forecasting Gets Real, Thanks to High-Performance Computing

High-performance computing provides the processing power needed for researchers and scientists to deliver high-fidelity, accurate and timely forecasts.

Weather and climate matter, a lot. The number one use of smartphones – those ubiquitous, fashionable and seemingly infinitely powerful devices – is checking the weather.

Every year in the U.S., over $5 billion is spent on forecasting, people receive more than 300 billion weather forecasts, and the overall economic benefits top $31 billion. According to the World Bank, investments in early warning systems can return 35-fold benefits in disaster response and save an estimated 23,000 lives every year.

Forecasting the weather and climate, however, is not easy.

The practice of meteorology can be traced as far back as 3000 BC in India. Predicting the weather via computation, though comparatively recent, pre-dates electronic computers.

In 1922, British mathematician Lewis Fry Richardson posited employing a whopping 64,000 human computers (people performing computations by hand) to predict the weather in "real time" – that many were needed to perform enough calculations, quickly enough, given that humans calculate at about 0.01 FLOPS (floating-point operations per second).

It took another 30 years for electronic computers to produce the first real forecast: In 1950, ENIAC computed at a speed of about 400 FLOPS and was able to produce a forecast 24 hours in advance – in just under 24 hours – making it a match for Richardson’s 64,000 human computers in both compute speed and timeliness of result.
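
The arithmetic behind that comparison is easy to check. Here is a minimal back-of-the-envelope sketch using only the round figures quoted above (0.01 FLOPS per person, 64,000 people, roughly 400 FLOPS for ENIAC):

    # Back-of-the-envelope check of the Richardson-vs-ENIAC comparison,
    # using only the round figures quoted in the article.
    human_flops = 0.01        # floating-point operations per second, per person
    team_size = 64_000        # human computers Richardson proposed
    eniac_flops = 400         # approximate ENIAC speed

    team_flops = human_flops * team_size
    print(f"Richardson's team: ~{team_flops:.0f} FLOPS; ENIAC: ~{eniac_flops} FLOPS")
    # Both land in the same few-hundred-FLOPS range, hence "a match in compute speed".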

Forecasts are created using a model of the earth’s systems by computing changes based on fluid flow, physics and chemistry. The precision and accuracy of a forecast depend on the fidelity of the model and the algorithms, and especially on how many data points are represented. In 1950, the model represented only a single layer of atmosphere above North America with a total of 304 data points.

Since 1950, forecasting has improved to deliver about one additional day of useful forecast per decade (a four-day forecast today is more accurate than a one-day forecast in 1980). This increase in precision and accuracy has required huge increases in data model sizes (today’s forecasts use upwards of 100 million data points) and a commensurate, almost insatiable, need for more processing power.
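
To see why finer grids drive such an appetite for computing, consider a hypothetical global latitude-longitude grid; the spacings and layer counts below are illustrative assumptions, not any center's operational configuration:

    # Illustrative only: how resolution drives data-point counts for a
    # hypothetical global latitude-longitude model.
    def grid_points(deg_spacing, vertical_levels):
        lats = int(180 / deg_spacing)
        lons = int(360 / deg_spacing)
        return lats * lons * vertical_levels

    coarse = grid_points(deg_spacing=2.5, vertical_levels=10)     # ~100,000 points
    fine = grid_points(deg_spacing=0.25, vertical_levels=100)     # ~100 million points
    print(f"2.5-degree grid, 10 levels:   {coarse:,} points")
    print(f"0.25-degree grid, 100 levels: {fine:,} points")
    # A 10x finer horizontal spacing and 10x more layers multiply the point
    # count by roughly 1,000, and the model's time step must usually shrink too.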

Exploiting Peak Performance

Weather forecasts focus on short-term conditions while climate predictions focus on long-term trends. You dress for the weather in the morning; you plan your winter vacation in the Virgin Islands for the warm, sunny climate.

Delivering weather forecasts multiple times per day demands a robust computing infrastructure and a 24x7x365 focus on operational resilience. Computing a weather forecast requires scheduling a complex ensemble of pre-processing jobs, solver jobs and post-processing jobs. A forecast for yesterday is useless, so the prediction must be delivered on time, every time.
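
As a rough illustration of that dependency chain, the sketch below runs hypothetical pre-processing, solver and post-processing stages against a delivery deadline; the stage names, durations and cutoff are assumptions for illustration, not an operational schedule:

    # Toy sketch of a forecast production cycle: three dependent stages must
    # all finish before the delivery deadline. All times are hypothetical.
    from datetime import datetime, timedelta

    cycle_start = datetime(2017, 1, 1, 0, 0)          # assumed 00z cycle start
    deadline = cycle_start + timedelta(hours=4)        # assumed delivery cutoff

    stages = [("pre-processing", 30), ("solver", 150), ("post-processing", 40)]  # minutes

    elapsed = timedelta()
    for name, minutes in stages:
        elapsed += timedelta(minutes=minutes)
        print(f"{name} done at +{elapsed}")

    finished = cycle_start + elapsed
    print("delivered on time" if finished <= deadline else "missed the deadline")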

The best practice to deliver forecasts is to deploy two identical supercomputers, each capable of producing weather forecasts by itself. This ensures a backup is available if one system goes down.

Climate predictions, by contrast, are longer-term jobs requiring bigger computations. There is no immediate risk if a climate computation takes a bit longer to run.

Generally, climate prediction shares access to spare cycles on high-performance computing (HPC) systems whose first priority is weather. Fitting in and sharing access without impacting weather forecasting are the priorities.

This complex prioritization and scheduling of weather simulations and climate predictions on HPC systems is where Altair's PBS Professional® (PBS Pro) workload management technology plays a key role. It is a natural fit for HPC applications such as weather and climate forecasting, enabling users to address operational challenges such as resource conflicts among concurrent high-priority jobs, the complexity of mixed operational and research workloads, and the unpredictability of emergency and other high-priority jobs.

PBS Pro supports advance and recurring reservations for regular activities such as forecasts. It also provides automatic failover and a 100% health check framework to detect and mitigate faults before they cause problems.

Flexible scheduling policies mean top-priority jobs (forecasts) finish on time while lower-priority jobs (climate predictions) are fitted in to maximize HPC resource utilization. In addition, the PBS Plugin Framework offers an open architecture to support unique requirements: users can plug in third-party tools and even change the behavior of PBS Pro.
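
The toy sketch below illustrates that priority-plus-backfill idea in the abstract; it is not PBS Pro's actual interface or algorithm, and the job names and node-hour figures are made up for illustration:

    # Toy priority-plus-backfill sketch: forecast jobs are admitted first,
    # climate jobs fill whatever capacity remains. Not PBS Pro's real API.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        node_hours: int
        priority: int          # higher runs first

    def schedule(jobs, capacity_node_hours):
        admitted, remaining = [], capacity_node_hours
        for job in sorted(jobs, key=lambda j: j.priority, reverse=True):
            if job.node_hours <= remaining:
                admitted.append(job.name)
                remaining -= job.node_hours
        return admitted, remaining

    jobs = [
        Job("00z-global-forecast", 500, priority=100),    # must finish on time
        Job("06z-regional-forecast", 300, priority=100),
        Job("decadal-climate-run", 2000, priority=10),    # fills spare cycles
        Job("regional-climate-study", 400, priority=10),
    ]

    admitted, spare = schedule(jobs, capacity_node_hours=1500)
    print(admitted)   # forecasts first, then climate work backfilled
    print(spare)      # node-hours left over

In production, the advance reservations and scheduling policies described above express this intent declaratively inside the workload manager, without any such hand-written logic.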

Better Predictions

Today, weather centers across the globe are investing in petascale HPC to produce higher-resolution and more accurate global and regional weather predictions.

One such organization is the Met Office, the British government's national weather service. It is deploying multiple Cray XC supercomputers and Cray Sonexion storage systems in phases; when fully implemented, the systems will be able to perform more than 23,000 trillion floating-point operations per second (23 petaflops).

Headquartered in Exeter, England, the Met Office was formed in 1854 to protect lives at sea. It has since evolved into a cutting-edge science organization with the broader mandate of protecting public and commercial interests from potentially dangerous weather.

The Met Office uses more than 10 million weather observations and an advanced atmospheric model to create more than 4.5 million forecasts and briefings daily, delivered to a broad range of constituents including government entities, the private sector, the general public, branches of the armed forces and other organizations.

In addition to weather centers, research and development (R&D) organizations also draw on the power of HPC to advance atmospheric knowledge. The University Corporation for Atmospheric Research (UCAR), a nonprofit consortium of more than 100 North American member colleges and universities, is one such organization.

UCAR manages the National Center for Atmospheric Research (NCAR) on behalf of the National Science Foundation. NCAR conducts collaborative research in atmospheric and earth system science and provides a broad array of tools and technologies to the scientific community. Its Computational & Information Systems Laboratory (CISL), which manages all HPC for NCAR, recently took delivery of a Silicon Graphics Inc. (SGI) supercomputer. The new system, named Cheyenne, has been installed at the NCAR-Wyoming Supercomputing Center.

NCAR's Cheyenne Supercomputer

The National Center for Atmospheric Research recently installed a high-performance computing (HPC) system for advancing atmospheric and earth science applications. The new machine will help scientists lay the groundwork for improved predictions ranging from the hour-by-hour risks associated with thunderstorm outbreaks to the timing of the 11-year solar cycle and its potential impacts on GPS and other sensitive technologies.

Key features of the new Cheyenne supercomputer system include:

  • 5.34-petaflop SGI ICE XA cluster with a future Intel Xeon processor product family;
  • More than 4,000 compute nodes;
  • Twenty percent of the compute nodes have 128 GB of memory; approximately 80 percent have 64 GB of memory;
  • 313 terabytes of total memory;
  • Mellanox EDR InfiniBand high-speed interconnect;
  • Partial 9D Enhanced Hypercube interconnect topology;
  • SUSE Linux Enterprise Server operating system;
  • Altair PBS Professional® Workload Manager;
  • Intel Parallel Studio XE compiler suite;
  • SGI Management Center and SGI Development Suite;
  • Mellanox Unified Fabric Manager.

The new Cheyenne supercomputer and the existing file system are complemented by a new centralized parallel file system and data storage components.

A Virtual Laboratory

Irfan Elahi, NCAR's project manager for Cheyenne and CISL's manager of high-end supercomputing services, explains that supercomputers have become a critical tool for studying a range of weather and geoscience topics. The new Cheyenne system will be capable of more than three times the scientific computing performed by the current NCAR system, yet be three times more energy efficient in terms of floating-point operations per second per watt.

What is possible with HPC that would not have been achievable 10 or 20 years ago? From NCAR’s perspective, the models that scientists have studied have increased in both scale and detail. Elahi says, “The models are larger, and scientists are looking at more fine-grained visualization. What that means is that they need more computational power, and that requirement will not be going away in the foreseeable future.”

With the acquisition of Cheyenne, NCAR has also adopted a more data-centric architecture as its paradigm for supercomputing. Elahi explains that as the size of the data grows, the cost of moving it around can become prohibitive. With a data-centric approach, the data is not moved; instead, data analysis and visualization resources are connected to the same file system. "Throughout the workflow," remarks Elahi, "data is accessible at each stage because it's centralized, shared and available to internal and external stakeholders."

NCAR's Cheyenne supercomputer has 144,900 processor cores (a bit more than twice the headcount of Richardson's proposed human computing system). Of course, each of those cores is capable of many orders of magnitude more computation than a single person.

Cheyenne’s total performance of more than 5 petaflops makes it the 20th fastest supercomputer in the world (as per the Nov. 2016 TOP500 list), and the Richardson equivalent of 70 million earths’ worth of human computation. The HPC system is expected to accelerate research in areas including air quality, decadal prediction, regional climate change, severe weather and subsurface flows, among others.
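
The "earths' worth" figure follows from the same back-of-the-envelope assumptions used earlier; the world-population value below is an assumption of roughly 7.4 billion people:

    # Rough check of the "70 million earths" comparison. Assumes ~0.01 FLOPS
    # per human computer and a world population of roughly 7.4 billion.
    cheyenne_flops = 5.34e15       # 5.34 petaflops
    human_flops = 0.01
    world_population = 7.4e9

    humans_needed = cheyenne_flops / human_flops
    earths = humans_needed / world_population
    print(f"about {earths / 1e6:.0f} million earths' worth of human computation")
    # Prints about 72 million, in line with the roughly 70 million quoted above.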

Applications such as weather forecasting and research call for finer granularity of data, leading to increased number crunching and better predictions – as well as huge amounts of data. HPC systems are up for such challenges and are constantly improving in capabilities.

To learn more about Altair PBS Works and HPC solutions, visit www.pbsworks.com/weather.

Bill Nitzberg is CTO of PBS Works.

