How the Cloud and Big Compute are Reshaping HPC

High-performance computing projects require enormous amounts of computing resources. Pairing simulation software and specialized hardware with the cloud is what enables the breakthroughs of the future. […]

About 25 years ago, a handful of open source technologies were combined to create a robust, commercial internet that was finally ready to do business and take your money. Known as the LAMP stack (Linux, Apache HTTP Server, MySQL, and PHP/Perl/Python), this open source combination became the standard development stack for an entire generation of developers.

Don’t look now, but we could well be on the cusp of another LAMP stack moment.

This time, however, it’s not about creating a new online way to sell dog food. Instead, a new technology renaissance is underway to tackle algorithmically complex, large-scale workloads that consume enormous amounts of computing resources. Think of developing COVID-19 vaccines, building new supersonic jets, or driving autonomous vehicles. The world of science and technology is delivering new innovations at a pace never seen before.

How is this happening? The cloud. But not only the cloud.

The beginnings of “Big Compute” or “Deep Tech”

Cloud is perhaps too simple a description for what is happening here. We lack a clever shorthand for this transformation, a “LAMP stack” for this era of the internet. Something has suddenly freed graduate students to build computational models of immense complexity and run algorithmically driven workloads that change our lives far more profoundly than an early Friendster or Pets.com ever promised.

“High-performance computing” (HPC) is the most common term for these workloads, but it dates from before public clouds became viable platforms for these new applications. Check out the Top500 list of the world’s fastest supercomputers and you’ll find that a growing number run on public clouds. This is no coincidence: on-premises supercomputers and massive Linux clusters have been around for decades, since before the commercial internet, but this new trend, sometimes referred to as “big compute” or “deep tech,” depends heavily on the cloud.

Consulting firm BCG puts it this way: “The rising performance and falling costs of computers, as well as the rise of technology platforms, are the most important factors contributing to this. Cloud computing is constantly improving performance and expanding the range of applications.”

But this new “stack” is not made up of the cloud alone. It depends on three megatrends in technology: the rapidly growing breadth and depth of simulation software, specialized hardware, and the cloud. These are the technological building blocks that every fast-paced research and science team uses today, and they are why hundreds of startups have emerged to shake up long-stagnant industries that consolidated a decade or more ago.

Helping engineers move faster

Just like the magic moment of the LAMP stack, today’s Big Compute/deep tech moment is all about increasing engineer productivity. The cloud is crucial, although it is not enough on its own.

Take aerospace, for example. To develop a new supersonic jet, an aerospace engineer would traditionally rely on an on-site HPC cluster to simulate all the necessary variables related to takeoff and landing. Aerospace startups, on the other hand, have gone straight to the cloud, with elastic infrastructure that lets them model and simulate without queuing behind peers for highly specialized HPC hardware. Less time building and maintaining hardware. More time experimenting and developing. That’s the beauty of the big compute cloud approach.
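As a purely illustrative sketch, here is what that elastic, no-queue workflow can look like in practice. The article names no specific provider or tooling, so the use of AWS Batch via boto3 is an assumption, and the queue name, job definition, and solver command are all hypothetical.

```python
# Illustrative sketch only: the article names no provider or tooling.
# Assumes an AWS Batch job queue ("sim-queue") and job definition
# ("cfd-takeoff-landing") already exist; both names are hypothetical.
import boto3

batch = boto3.client("batch")

# Sweep a design parameter across many simulation cases and let the elastic
# cloud queue fan them out, instead of waiting for slots on an on-prem cluster.
angles_of_attack = [2, 4, 6, 8, 10]

for angle in angles_of_attack:
    batch.submit_job(
        jobName=f"takeoff-sim-aoa-{angle}",
        jobQueue="sim-queue",                 # hypothetical queue name
        jobDefinition="cfd-takeoff-landing",  # hypothetical job definition
        containerOverrides={
            "command": ["run_solver", "--angle-of-attack", str(angle)],
        },
    )
```

Because each case runs as its own containerized job, many design variants can run in parallel, capacity permitting, rather than waiting in line for shared on-premises hardware.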

Add to that a wealth of simulation software that makes it possible to model new innovations before complex physical things are actually built and prototyped. Specialized hardware picks up where Moore’s Law runs out of gas, powering these algorithmically complicated simulations. And the cloud decouples all of this from on-premises supercomputers and clusters, making it orders of magnitude easier to build and run models, iterate, improve, and re-run them before moving on to physical prototypes. (To be clear: much of this big compute/deep tech work is about developing physical things, not software.)

The tricky part in this area is the custom hardware and software configuration needed to get these workloads up and running, and the sophisticated workflows needed to optimize their performance. Algorithm-intensive workloads of this kind increasingly require specialized GPUs and other newer chip architectures. Companies that pay expensive graduate students to develop the next big turbine or the secret sauce for jet engines do not want to slow them down by forcing them to learn how to set up machines with particular combinations of simulation software and hardware.
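To make that hardware/software pairing concrete, here is one hedged way a platform team might pre-package a simulation image together with its GPU requirements, so researchers never configure machines themselves. Again, AWS Batch is only an assumed example; the container image, resource values, and names are hypothetical.

```python
# Illustrative sketch only: pre-registering a job definition that bundles a
# simulation container image with its specialized hardware requirements.
# Image name and resource values are hypothetical.
import boto3

batch = boto3.client("batch")

batch.register_job_definition(
    jobDefinitionName="cfd-takeoff-landing",  # matches the hypothetical name above
    type="container",
    containerProperties={
        "image": "registry.example.com/cfd-solver:latest",  # hypothetical image
        "command": ["run_solver"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "16"},
            {"type": "MEMORY", "value": "65536"},  # MiB
            {"type": "GPU", "value": "1"},         # request a GPU-backed instance
        ],
    },
)
```

With a definition like this in place, researchers submitting jobs only have to name it; the GPU requirement, drivers, and solver install are already baked in.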

“Fifteen years ago, every company in this HPC space differentiated itself by how well it operated its hardware on the ground, and basically bet that Moore’s Law would continue to deliver consistently better performance on x86 architectures year after year,” Joris Poort, CEO of Rescale, said in an interview. “Today, it’s all about speed and flexibility: making sure your PhD students are using the best simulation software for their work, and freeing them from having to become specialists in big compute infrastructure so they can deliver new innovations faster.”

Specialized supercomputers

Will every company eventually use simulation and specialized hardware in the cloud? Probably not. Today, this is the domain of rockets, propulsion, computational biology, transportation systems, and the top 1% of the world’s hardest computing challenges. But while big compute is being used today to crack the geekiest problems, we’re sure to see a new wave of Netflixes taking down the world’s Blockbusters with this LAMP-stack-like combination of cloud, simulation software, and specialized hardware.
