High-performance computing

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

The Center for Nanoscale Materials at the Advanced Photon Source

Overview


HPC integrates systems administration (including network and security knowledge) and parallel programming into a multidisciplinary field that combines digital electronics, computer architecture, system software, programming languages, algorithms, and computational techniques.[1] HPC technologies are the tools and systems used to implement and create high-performance computing systems.[2] In recent years, HPC systems have shifted from supercomputing to computing clusters and grids.[1] Because clusters and grids depend on networking, they are often built around a collapsed network backbone: the collapsed-backbone architecture is simple to troubleshoot, and upgrades can be applied to a single router rather than to several.

The term is most commonly associated with computing used for scientific research or computational science. A related term, high-performance technical computing (HPTC), generally refers to the engineering applications of cluster-based computing (such as computational fluid dynamics and the building and testing of virtual prototypes). HPC has also been applied to business uses such as data warehouses, line-of-business (LOB) applications, and transaction processing.

High-performance computing (HPC) as a term arose after the term "supercomputing".[3] HPC is sometimes used as a synonym for supercomputing; in other contexts, "supercomputer" refers to a more powerful subset of "high-performance computers", making "supercomputing" a subset of "high-performance computing". The potential for confusion over the use of these terms is apparent.

Because most current applications are not designed for HPC technologies but are retrofitted, they are not designed or tested for scaling to more powerful processors or machines.[2] Since networking clusters and grids use multiple processors and computers, these scaling problems can cripple critical systems in future supercomputing systems. Therefore, either the existing tools do not address the needs of the high-performance computing community, or the HPC community is unaware of these tools.[2] A few examples of commercial HPC applications include:

  • the simulation of car crashes for structural design
  • molecular interaction for new drug design
  • the airflow over automobiles or airplanes
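
The scaling concerns described above can be made concrete with a small, embarrassingly parallel workload. The sketch below is illustrative only (the workload, function names, and worker counts are assumptions, not drawn from the text): it splits a midpoint-rule integration of x² over [0, 1] across worker processes using Python's standard-library `multiprocessing` module and times the serial and parallel runs, so the measured speedup can be compared against the ideal linear speedup.

```python
import time
from multiprocessing import Pool

def partial_sum(bounds):
    # Midpoint-rule sum of x^2 over [lo, hi] with the given step count.
    lo, hi, steps = bounds
    h = (hi - lo) / steps
    return sum((lo + (i + 0.5) * h) ** 2 * h for i in range(steps))

def integrate(workers, total_steps=400_000):
    # Split [0, 1] into one equal subinterval per worker.
    chunk = total_steps // workers
    jobs = [(w / workers, (w + 1) / workers, chunk) for w in range(workers)]
    start = time.perf_counter()
    if workers == 1:
        parts = [partial_sum(j) for j in jobs]
    else:
        with Pool(workers) as pool:
            parts = pool.map(partial_sum, jobs)
    return sum(parts), time.perf_counter() - start

if __name__ == "__main__":
    serial, t1 = integrate(1)
    parallel, t4 = integrate(4)
    print(f"integral ≈ {serial:.6f}; serial {t1:.3f}s, 4 workers {t4:.3f}s")
```

On real HPC workloads the gap between measured and ideal speedup is typically wider still, because communication and synchronization costs grow with the number of processors.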

In government and research institutions, scientists simulate galaxy creation, fusion energy, and global warming, as well as work to create more accurate short- and long-term weather forecasts.[4] The world's tenth most powerful supercomputer in 2008, IBM Roadrunner (located at the United States Department of Energy's Los Alamos National Laboratory),[5] simulated the performance, safety, and reliability of nuclear weapons and certified their functionality.[6]

TOP500


TOP500 ranks the world's 500 fastest high-performance computers, as measured by the High Performance LINPACK (HPL) benchmark. Not all existing computers are ranked, either because they are ineligible (e.g., they cannot run the HPL benchmark) or because their owners have not submitted an HPL score (e.g., because they do not wish the size of their system to become public information, for defense reasons). In addition, the use of the single LINPACK benchmark is controversial, in that no single measure can test all aspects of a high-performance computer. To help overcome the limitations of the LINPACK test, the U.S. government commissioned one of its originators, Jack Dongarra of the University of Tennessee, to create a suite of benchmark tests that includes LINPACK and others, called the HPC Challenge benchmark suite. This evolving suite has been used in some HPC procurements, but, because it is not reducible to a single number, it has been unable to overcome the publicity advantage of the less useful TOP500 LINPACK test. The TOP500 list is updated twice a year, once in June at the ISC European Supercomputing Conference and again at a US Supercomputing Conference in November.
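
For intuition, HPL's core task is solving a dense linear system Ax = b and reporting a floating-point rate based on the conventional operation count of (2/3)n³ + 2n² flops. The single-node sketch below mimics that measurement in plain NumPy; the function name and problem size are illustrative assumptions, and the real HPL is a distributed-memory MPI code, so this is a toy analogue rather than the benchmark itself.

```python
import time
import numpy as np

def linpack_style_gflops(n=1500, seed=0):
    # Hypothetical helper, not part of HPL: build a random dense system,
    # solve it, and report GFLOP/s using HPL's nominal flop count.
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    x = np.linalg.solve(a, b)  # LU factorization plus triangular solves
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    residual = float(np.linalg.norm(a @ x - b))  # sanity check on the solve
    return flops / elapsed / 1e9, residual

gflops, residual = linpack_style_gflops()
print(f"{gflops:.2f} GFLOP/s, residual norm {residual:.2e}")
```

HPL itself additionally scales the residual by the matrix norms and machine epsilon to decide whether a run is numerically valid, which this sketch omits.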

Many ideas for the new wave of grid computing were originally borrowed from HPC.

High performance computing in the cloud


Traditionally, HPC has involved an on-premises infrastructure, with investment in supercomputers or computer clusters. Over the last decade, cloud computing has grown in popularity for offering computer resources in the commercial sector regardless of a customer's investment capability.[7] Characteristics such as scalability and containerization have also raised interest in academia.[8] However, cloud-security concerns such as data confidentiality are still weighed when deciding between cloud and on-premises HPC resources.[7]

References

  1. ^ Brazell, Jim; Bettersworth, Michael (2005). High Performance Computing (Report). Texas State Technical College. Archived from the original on 2010-07-31.
  2. ^ Collette, Michael; Corey, Bob; Johnson, John (December 2004). High Performance Tools & Technologies (PDF) (Report). Lawrence Livermore National Laboratory, U.S. Department of Energy. Archived from the original (PDF) on 2017-08-30.
  3. ^ "supercomputing". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.) "Supercomputing" is attested from 1944.
  4. ^ Schulman, Michael. "High Performance Computing: RAM vs CPU". Dr. Dobb's High Performance Computing, April 30, 2007.
  5. ^ "Launching a New Class of U.S. Supercomputing". Department of Energy. 17 November 2022.
  6. ^ "High Performance Computing". US Department of Energy. Archived from the original on 30 July 2009.
  7. ^ Eldred, Morgan; Good, Alice; Adams, Carl (24 January 2018). "A case study on data protection and security decisions in cloud HPC" (PDF). School of Computing, University of Portsmouth, Portsmouth, U.K.
  8. ^ von Alfthan, Sebastian (2016). "High-performance computing in the cloud?" (PDF). CSC – IT Center for Science.