High Performance Computing (HPC) has been around in one form or another for decades. Remember Seymour Cray, who is credited with creating the supercomputer industry?[i] His company, Cray Research, formed nearly 50 years ago, was acquired by Hewlett Packard Enterprise last year to bolster its HPC and artificial intelligence (AI) offerings.[ii]
A quick primer on HPC
HPC systems vary by industry and purpose, but they typically consist of multiple purpose-built, interconnected computers working concurrently on the same task. HPC is often leveraged to solve complex business, science or engineering problems that involve processing and analyzing extremely large data sets to draw conclusions. Tying these systems together, writing software that takes advantage of HPC architectures and getting the components to interact can be a challenge. However, the scale afforded by this aggregated compute power means tasks that would have taken years or decades on a stand-alone system can now be accomplished in days, hours or minutes. This enables businesses to gain insight into correlations they’d otherwise never notice, or to iterate on hypotheses more quickly so research and development can be accelerated. Some example HPC workloads include:
- Large scale manufacturing process and design
- Genomic research
- Imaging and simulations
- Weather prediction models
- Processing data from vast numbers of internet of things (IoT) devices for product development, improvement, usage patterns, predictive maintenance, etc.
- Autonomous vehicle and smart device design
- Geophysical simulations
- Engineering model simulations
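The core idea behind all of these workloads is the same: decompose one large task into pieces that many workers compute concurrently, then combine the partial results. The following is a minimal, single-machine sketch of that pattern using Python's `multiprocessing` standard library as a stand-in for a real cluster scheduler; the function names and the sum-of-squares task are illustrative, not drawn from any particular HPC framework.

```python
# Illustrative sketch of the HPC pattern: split one large task into
# chunks, compute the chunks concurrently, and combine the results.
# A process Pool stands in for a cluster of interconnected nodes.
from multiprocessing import Pool


def partial_sum(bounds):
    """Each 'node' computes the sum of squares over its own slice."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))


def parallel_sum_of_squares(n, workers=4):
    # Split [0, n) into roughly equal chunks, one per worker; the
    # last chunk absorbs any remainder.
    step = n // workers
    chunks = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    # Scatter the chunks to the workers, then reduce the partial sums.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    print(parallel_sum_of_squares(1_000_000))
```

Real HPC systems apply the same scatter/compute/reduce shape across thousands of nodes over a high-speed interconnect (e.g., via MPI), which is where the programming challenge, and the speedup, comes from.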
As large, global enterprises digitally transform, they need to glean more insights from data about their customers and operations, ecosystems and “smart” products and services to inform their business strategy. More often than not, they are leveraging machine learning (ML) and artificial intelligence (AI) to improve and accelerate innovation, market insight and development of new products and services. But sifting through large volumes of data to derive the right intelligence depends on having substantial compute power. For the use cases outlined above and others, next generation on-premises and cloud HPC solutions are becoming easier to deploy, more common and driving business value.
Forecasts for HPC market growth vary, but all indicators point to increased investment in HPC technology as demand for high performance data analytics (HPDA), AI and ML continues to surge. HPC analyst firm Hyperion Research estimates the HPC market will grow to $44 billion by 2023. If the projections are borne out, the market will have doubled in size since 2013 revenues of $22 billion.[iii] Investments in AI, ML and data visualization – all key components of an HPC platform – are expected to grow as well.
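As a quick sanity check on what "doubling in a decade" means, the implied compound annual growth rate works out to roughly 7% per year. A back-of-the-envelope calculation, using the $22B (2013) and $44B (2023) figures quoted above:

```python
# Compound annual growth rate implied by two revenue points
# (figures from the article: $22B in 2013 -> $44B projected in 2023).
def cagr(start_value, end_value, years):
    """Annualized growth rate that turns start_value into end_value."""
    return (end_value / start_value) ** (1 / years) - 1


rate = cagr(22, 44, 10)   # doubling over ten years
print(f"{rate:.1%}")      # roughly 7% per year
```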
Interconnecting high performance
The growth in HPC points to the need for a robust interconnected ecosystem that brings together all HPC components including core “on-premises” compute, cloud offerings and the plethora of data sources for ingest, processing and analysis at the digital edge.
Platform Equinix® provides support for these burgeoning ecosystems as investment in HPC grows over time – not to mention extensive physical support for the significant compute, cooling and integration requirements of massively parallel HPC environments. An example HPC deployment on Platform Equinix, shown below, depicts private interconnection between on-premises private and cloud-adjacent HPC deployments, along with data integrated from edge sources. This deployment ecosystem bridges high-volume edge data with HPC processing and analytics in a private interconnection design, incorporating Equinix Cloud Exchange Fabric™ (ECX Fabric™) for seamless hybrid/multicloud connectivity on a global basis.
No longer just siloed supercomputers, HPC systems today depend on robust interconnected ecosystems that bring together on-premises compute, cloud and data at the digital edge.
With 200+ interconnected Equinix International Business Exchange™ (IBX®) data centers around the world, Platform Equinix enables you to put your HPC workloads close to your sources of data and your users. A wide range of high-performance, low-cost edge connectivity options (including 5G for IoT and mobile use cases) for data exchange makes it easy to iterate on and train ML models so they can deliver business value faster. And, since HPC systems need to run constantly to derive maximum value, Equinix’s IBX data centers provide an ideal home with a 99.9999% uptime record. Consistent power is critical for HPC deployments, since a power failure could force scheduled jobs to restart, causing days of lost productivity. And as an added bonus, Platform Equinix is also backed with a 100% renewable power pledge for sustainability.
One example of an HPC application is the NVIDIA® DGX™ deep learning system that provides AI compute as-a-service. This is an attractive option for many companies that do not have the infrastructure needed to power deep learning models. Equinix is a certified NVIDIA DGX-Ready Data Center partner for DGX cluster-based deployments supporting AI-based deep learning and visualization designs.[iv]
Watch this webinar featuring experts from IDC, NVIDIA, NetApp and Equinix to learn more about how to accelerate digital transformation through AI.
You may also be interested in learning more about Platform Equinix.