Sponsor Content Top 5 Ways Supercomputing Is Impacting Scientific Research

Governments, businesses, and industries are coming together to discuss urgent investments in artificial intelligence (AI) that will help them stay competitive in the supercomputing race and maintain leadership in scientific research. A key component of modern supercomputing is NVIDIA’s graphics processing unit (GPU). Initially used as a graphics accelerator for video games and films, the GPU has become the cornerstone of AI and supercomputing infrastructure. GPUs now help simulate human intelligence, run deep learning algorithms, and even enabled SpaceX to deliver a supercomputer to the International Space Station. AI extends the impact of traditional HPC by enabling researchers to analyze, model, and simulate massive amounts of data far faster than previously possible. From monitoring the climate to curing cancer, supercomputing systems integrated with AI are making a major impact in these five areas:

1. Monitoring Changes in Earth’s Climate

Earth’s recent temperature increases correlate with a growing amount of carbon in the atmosphere. To monitor the changing climate at the highest possible resolution, NASA developed DeepSat, a deep learning framework for satellite image classification and segmentation. With the computing power of NVIDIA GPUs, DeepSat’s training and testing performance improved markedly across the board. Specifically, the DeepSat team increased the input data size, which allowed gradient descent to run with less noise. Because of this increase in compute, larger images with more context can be classified and segmentation accuracy improves. High-resolution imagery showing changes to our planet’s vital signs will better prepare societies to plan for natural disasters in areas prone to forest fires, flooding, avalanches, volcanoes, and more.
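The link between larger input sizes and less noisy gradient descent can be sketched numerically. The example below is purely illustrative (it is not DeepSat code; the data, model, and loss are synthetic): the spread of mini-batch gradient estimates shrinks as the batch grows, so each descent step points more reliably downhill.

```python
import numpy as np

# Illustrative sketch: estimate the gradient of a least-squares loss from
# mini-batches of different sizes. Larger batches average over more samples,
# so the gradient estimate has a smaller standard deviation.
rng = np.random.default_rng(0)
true_w = 2.0
x = rng.normal(size=100_000)
y = true_w * x + rng.normal(scale=0.5, size=x.size)
w = 0.0  # current parameter value

def minibatch_grad(batch_size):
    """One mini-batch estimate of dL/dw for L = mean(0.5*(w*x - y)**2)."""
    idx = rng.choice(x.size, size=batch_size, replace=False)
    xb, yb = x[idx], y[idx]
    return np.mean((w * xb - yb) * xb)

for bs in (16, 256, 4096):
    grads = [minibatch_grad(bs) for _ in range(500)]
    print(f"batch={bs:5d}  grad std={np.std(grads):.4f}")
```

Running this shows the gradient's standard deviation falling roughly as one over the square root of the batch size, which is the sense in which more data per step means "less noise."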

2. Mapping Global Populations

With Earth’s population pushing 7 billion people, understanding population distribution is essential to meeting societal needs for infrastructure and resources. Scientists at Oak Ridge National Laboratory (ORNL) created LandScan to build a more complete picture of Earth’s residents through analysis of large-scale geographic data. The data will be used to develop solutions for human settlement mapping, building sizing, energy usage, transportation, and more. However, to predict and identify rapid shifts in population (such as migration or displacement after a natural disaster), scientists will need more complex models and simulations, computing scenarios that call for state-of-the-art GPU computing. With NVIDIA GPUs and LandScan’s high-definition global population data, the ORNL team can now process high-resolution satellite imagery of a city such as Addis Ababa (the capital of Ethiopia) in less than 20 seconds.

3. Cancer Moonshot and Drug Discovery

In the pharmaceutical industry, bringing a new drug to market is an expensive and lengthy process: it takes an average of 12 years and $2.6 billion of investment to introduce a single drug. The key to speeding up the process is more accurate molecular dynamics (MD) and quantum mechanics (QM) simulation; together, these techniques screen millions of potential drug combinations. To address the time and cost of QM simulation, the University of Florida (UFL) and the University of North Carolina (UNC) are collaborating on a new simulation method called ANAKIN-ME (ANI). ANI uses deep learning on NVIDIA GPUs to predict molecular energy surfaces as accurately as methods that are six orders of magnitude more computationally expensive. GPU deep learning is also accelerating cancer research through the Cancer Moonshot initiative, where teams from ORNL, the DOE, the NIH, and NVIDIA are collaborating on an exascale AI supercomputer for cancer research to better understand how cancer grows, create effective therapies, and identify the key drivers of therapy effectiveness outside clinical trial settings.
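The core idea behind ANI, as described above, is training a neural network to reproduce expensive reference energies so that later evaluations are cheap. The toy sketch below illustrates that idea only; it is not the ANAKIN-ME code or architecture. A Morse potential stands in for the costly QM reference, and a one-hidden-layer network is fit to it with plain gradient descent.

```python
import numpy as np

# Hypothetical sketch: fit a tiny neural network to a reference potential
# energy curve. The Morse potential plays the role of expensive QM energies;
# once trained, evaluating the network is far cheaper than the reference.
rng = np.random.default_rng(2)

def morse(r, D=1.0, a=1.5, r0=1.2):
    """Reference 'QM' energies: Morse potential for a diatomic bond."""
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2

r = np.linspace(0.8, 3.0, 200)   # bond lengths (training inputs)
E = morse(r)                     # reference energies (training targets)

# One-hidden-layer network: E_hat(r) = w2 . tanh(W1*r + b1) + b2
n_hidden, lr = 16, 0.05
W1 = rng.normal(scale=1.0, size=n_hidden)
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 0.0

losses = []
for _ in range(3000):
    z = np.outer(r, W1) + b1          # (200, 16) pre-activations
    h = np.tanh(z)
    E_hat = h @ w2 + b2
    err = E_hat - E
    losses.append(np.mean(err ** 2))
    # Manual backpropagation of the mean-squared-error loss
    dy = 2.0 * err / r.size           # (200,)
    dw2 = h.T @ dy
    db2 = dy.sum()
    dz = np.outer(dy, w2) * (1.0 - h ** 2)
    dW1 = r @ dz
    db1 = dz.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    w2 -= lr * dw2; b2 -= lr * db2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The real ANI model maps full molecular geometries (not a single bond length) to energies, but the same trade is being made: pay the QM cost once to generate training data, then amortize it over millions of cheap network evaluations.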

4. Speeding the Path to Fusion Energy

Finding sustainable clean energy to power our future needs is one of today’s greatest challenges. Researchers at the International Thermonuclear Experimental Reactor (ITER) facility, the world’s largest experimental nuclear fusion reactor, are working to build the first fusion device to produce net energy. To succeed, they will need to predict plasma disruptions with very high accuracy and then respond to minimize them, keeping the fusion reaction stable. Toward that goal, researchers at Princeton University developed the Fusion Recurrent Neural Network (FRNN), a predictive code that uses deep learning and NVIDIA GPUs to predict the onset of highly deleterious disruption events. With modern GPU supercomputing, researchers can now exceed the predictive capability of conventional simulations, achieving 90 percent accuracy with less than 5 percent false positives.
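The shape of the FRNN approach can be sketched at a high level: a recurrent network consumes a time series of plasma diagnostics and emits a disruption risk score at every timestep, so an alarm can be raised before the disruption itself. The example below is a hypothetical illustration with random weights standing in for a trained model; it is not the actual FRNN code, and the feature and size choices are invented.

```python
import numpy as np

# Illustrative sketch of a recurrent disruption predictor (not FRNN itself):
# a vanilla RNN turns a sequence of diagnostic readings into a per-timestep
# disruption risk in (0, 1).
rng = np.random.default_rng(1)
n_features, n_hidden = 4, 8   # diagnostic channels, hidden state size

# Randomly initialized weights stand in for a trained model.
W_in = rng.normal(scale=0.3, size=(n_hidden, n_features))
W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
w_out = rng.normal(scale=0.3, size=n_hidden)

def disruption_risk(signals):
    """Return a risk score in (0, 1) for each timestep of a shot."""
    h = np.zeros(n_hidden)
    risks = []
    for x_t in signals:                    # one timestep of diagnostics
        h = np.tanh(W_in @ x_t + W_h @ h)  # recurrent state update
        risks.append(1.0 / (1.0 + np.exp(-w_out @ h)))  # sigmoid output
    return np.array(risks)

shot = rng.normal(size=(50, n_features))   # synthetic 50-step "shot"
risk = disruption_risk(shot)
print(risk.shape, float(risk.max()))
```

The recurrence is what makes this a time-series predictor: the hidden state carries context from earlier in the shot, so a rising risk score can precede the event it warns about.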

5. Oil and Gas Exploration

Producing hydrocarbons is a costly and difficult process, despite the many techniques established for finding new reserves. Eni developed a number of these techniques, including anisotropic reverse time migration (RTM). Although RTM is useful for imaging what lies beneath the Earth’s surface, it has limits in deep-water exploration and subsalt environments. It is also computationally expensive and slow, which limits image resolution and the accuracy of results. To tackle these issues, Eni built the HPC4 supercomputer, powered by 3,200 NVIDIA® Tesla® P100 GPU accelerators. Eni can now turn around advanced seismic imaging tasks in less time and with higher accuracy.

By using techniques such as machine learning and deep learning alongside more powerful supercomputers, scientists can achieve breakthroughs that boost our economy, improve our healthcare, deliver limitless energy, and much more. For more information on the evolution of exascale supercomputers and their integration with AI, watch NVIDIA’s on-demand webinar here.

To learn more about NVIDIA GPU supercomputing, visit https://nvda.ws/2JQXFas.