IBM Blue Gene

IBM Blue Gene
A Blue Gene/P supercomputer at Argonne National Laboratory
Developer: IBM
Type: Supercomputer platform
Release date: BG/L: Feb 1999; BG/P: June 2007; BG/Q: Nov 2011
Discontinued: 2015
CPU: BG/L: PowerPC 440; BG/P: PowerPC 450; BG/Q: PowerPC A2
Predecessor: IBM RS/6000 SP; QCDOC
Successor: Summit, Sierra
Hierarchy of Blue Gene processing units

Blue Gene was an IBM project aimed at designing supercomputers that could reach operating speeds in the petaFLOPS (PFLOPS) range with low power consumption.

The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. During their deployment, Blue Gene systems often led the TOP500[1] and Green500[2] rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2009 National Medal of Technology and Innovation.[4]

After Blue Gene/Q, IBM focused its supercomputer efforts on the OpenPower platform, using accelerators such as FPGAs and GPUs to address the diminishing returns of Moore's law.[5][6]

History

A video presentation of the history and technology of the Blue Gene project was given at the Supercomputing 2020 conference.[7]

In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.[8] The research and development was pursued by a large multi-disciplinary team at the IBM T. J. Watson Research Center, initially led by William R. Pulleyblank.[9] The project had two main goals: to advance understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost through novel machine architectures.

The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. In parallel, Alan Gara had started working on an extension of the QCDOC architecture into a more general-purpose supercomputer. The US Department of Energy started funding the development of this system and it became known as Blue Gene/L (L for Light). Development of the original Blue Gene architecture continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.

Architecture and chip logic design for the Blue Gene systems were done at the IBM T. J. Watson Research Center; chip design was completed and chips were manufactured by IBM Microelectronics; and the systems were built at IBM Rochester, MN.

In November 2004, a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list with a LINPACK benchmark performance of 70.72 TFLOPS.[1] It thereby overtook NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007, the Blue Gene/L installation at LLNL[10] gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL Blue Gene/L installation held the first position in the TOP500 list for 3.5 years, until June 2008, when it was overtaken by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, the first system to surpass the 1 petaFLOPS mark.

While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. The November 2006 TOP500 list showed 27 computers with the eServer Blue Gene Solution architecture. For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.

While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.

In June 2006, NNSA and IBM announced that Blue Gene/L achieved 207.3 TFLOPS on a quantum chemical application (Qbox).[11] At Supercomputing 2006,[12] Blue Gene/L was awarded the winning prize in all HPC Challenge classes of awards.[13] In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds).[14]

The name

The name Blue Gene comes from what it was originally designed to do: help biologists understand the processes of protein folding and gene development.[15] "Blue" is a traditional moniker that IBM uses for many of its products and the company itself. The original Blue Gene design was renamed "Blue Gene/C" and eventually Cyclops64. The "L" in Blue Gene/L comes from "Light", as that design's original name was "Blue Light". The "P" version was designed to be a petascale design. "Q" is just the letter after "P".[16]

Major features

The Blue Gene/L supercomputer was unique in the following aspects:[17]

  • Trading the speed of processors for lower power consumption. Blue Gene/L used low frequency and low power embedded PowerPC cores with floating-point accelerators. While the performance of each chip was relatively low, the system could achieve better power efficiency for applications that could use large numbers of nodes.
  • Dual processors per node with two working modes: co-processor mode where one processor handles computation and the other handles communication; and virtual-node mode, where both processors are available to run user code, but the processors share both the computation and the communication load.
  • System-on-a-chip design. Components were embedded on a single chip for each node, with the exception of 512 MB external DRAM.
  • A large number of nodes (scalable in increments of 1024 up to at least 65,536).
  • Three-dimensional torus interconnect with auxiliary networks for global communications (broadcast and reductions), I/O, and management (see the sketch after this list for how applications could address the torus through MPI).
  • Lightweight OS per node for minimum system overhead (system noise).
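
On the application side, a three-dimensional torus maps naturally onto MPI's Cartesian topology routines. The sketch below is a minimal illustration in C, using only standard MPI calls rather than any Blue Gene-specific API; the 8×8×8 grid size is a hypothetical example, not a fixed machine parameter:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        /* Hypothetical 8x8x8 grid (512 ranks); periods = 1 makes every
           dimension wrap around, turning the mesh into a torus. Launch
           with at least 512 ranks; surplus ranks get MPI_COMM_NULL. */
        int dims[3] = {8, 8, 8};
        int periods[3] = {1, 1, 1};
        MPI_Comm torus;
        MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, 1, &torus);

        if (torus != MPI_COMM_NULL) {
            int rank, coords[3];
            MPI_Comm_rank(torus, &rank);
            MPI_Cart_coords(torus, rank, 3, coords);

            /* Nearest neighbor on either side of each axis; wraparound
               is implicit because the dimensions are periodic. */
            for (int axis = 0; axis < 3; axis++) {
                int minus, plus;
                MPI_Cart_shift(torus, axis, 1, &minus, &plus);
                printf("rank %d at (%d,%d,%d): axis %d neighbors %d, %d\n",
                       rank, coords[0], coords[1], coords[2], axis, minus, plus);
            }
            MPI_Comm_free(&torus);
        }

        MPI_Finalize();
        return 0;
    }

Because each dimension is marked periodic, MPI_Cart_shift returns wrapped-around neighbor ranks at the torus edges, which is exactly the adjacency a torus network provides.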

Architecture

The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures. Each Blue Gene/L Compute or I/O node was a single ASIC with associated DRAM memory chips. The ASIC integrated two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline double-precision Floating-Point Unit (FPU), a cache sub-system with built-in DRAM controller, and the logic to support multiple communication sub-systems. The dual FPUs gave each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). The two CPUs were not cache coherent with one another.
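
That peak figure is consistent with the clock rate and FPU width, assuming each core's double FPU retires two fused multiply-adds (four floating-point operations) per cycle:

$$ 2\ \text{cores} \times 4\ \tfrac{\text{FLOPs}}{\text{cycle} \cdot \text{core}} \times 0.7\ \text{GHz} = 5.6\ \text{GFLOPS} $$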

Compute nodes were packaged two per compute card, with 16 compute cards (thus 32 nodes) plus up to 2 I/O nodes per node board. A cabinet/rack contained 32 node boards.[18] By the integration of all essential sub-systems on a single chip, and the use of low-power logic, each Compute or I/O node dissipated about 17 watts (including DRAMs). The low power per node allowed aggressive packaging of up to 1024 compute nodes, plus additional I/O nodes, in a standard 19-inch rack, within reasonable limits on electrical power supply and air cooling. The system performance metrics, in terms of FLOPS per watt, FLOPS per m² of floor space and FLOPS per unit cost, allowed scaling up to very high performance. With so many nodes, component failures were inevitable. The system was able to electrically isolate faulty components, down to a granularity of half a rack (512 compute nodes), to allow the machine to continue to run.
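
The rack capacity follows directly from the packaging hierarchy:

$$ 2\ \tfrac{\text{nodes}}{\text{card}} \times 16\ \tfrac{\text{cards}}{\text{board}} \times 32\ \tfrac{\text{boards}}{\text{rack}} = 1024\ \tfrac{\text{nodes}}{\text{rack}} $$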

Each Blue Gene/L node was attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication (broadcasts and reduce operations), and a global interrupt network for fast barriers. The I/O nodes, which ran the Linux operating system, provided communication to storage and external hosts via an Ethernet network. The I/O nodes handled filesystem operations on behalf of the compute nodes. A separate and private Ethernet management network provided access to any node for configuration, booting and diagnostics.

To allow multiple programs to run concurrently, a Blue Gene/L system could be partitioned into electronically isolated sets of nodes. The number of nodes in a partition had to be a positive integer power of 2, with at least 2^5 = 32 nodes. To run a program on Blue Gene/L, a partition of the computer first had to be reserved. The program was then loaded and run on all the nodes within the partition, and no other program could access nodes within the partition while it was in use. Upon completion, the partition nodes were released for future programs to use.

Blue Gene/L compute nodes used a minimal operating system supporting a single user program. Only a subset of POSIX calls was supported, and only one process could run at a time on a node in co-processor mode, or one process per CPU in virtual mode. Programmers needed to implement green threads in order to simulate local concurrency. Application development was usually performed in C, C++, or Fortran, using MPI for communication. However, some scripting languages such as Ruby[19] and Python[20] have been ported to the compute nodes.
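
As a concrete illustration of that programming model, the following minimal C/MPI sketch (generic MPI, nothing Blue Gene-specific; any MPI implementation will run it) has every process contribute to a global reduction, the class of collective operation that Blue Gene/L could route over its dedicated collective network rather than the torus:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each process contributes a local partial result... */
        double local = (double)rank, global = 0.0;

        /* ...which is combined across all nodes. On Blue Gene/L, a
           reduction like this mapped onto the collective network. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d processes = %f\n", nprocs, global);

        MPI_Finalize();
        return 0;
    }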

IBM published BlueMatter, the application developed to exercise Blue Gene/L, as open source.[21] This serves to document how the torus and collective interfaces were used by applications, and may serve as a base for others to exercise the current generation of supercomputers.

Blue Gene/P

A Blue Gene/P node card
A schematic overview of a Blue Gene/P supercomputer

In June 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene series of supercomputers, designed through a collaboration that included IBM, LLNL, and Argonne National Laboratory's Leadership Computing Facility.[22]

Design

The design of Blue Gene/P is a technology evolution from Blue Gene/L. Each Blue Gene/P Compute chip contains four PowerPC 450 processor cores, running at 850 MHz. The cores are cache coherent and the chip can operate as a 4-way symmetric multiprocessor (SMP). The memory subsystem on the chip consists of small private L2 caches, a central shared 8 MB L3 cache, and dual DDR2 memory controllers. The chip also integrates the logic for node-to-node communication, using the same network topologies as Blue Gene/L, but at more than twice the bandwidth. A compute card contains a Blue Gene/P chip with 2 or 4 GB DRAM, comprising a "compute node". A single compute node has a peak performance of 13.6 GFLOPS. 32 compute cards are plugged into an air-cooled node board. A rack contains 32 node boards (thus 1024 nodes, 4096 processor cores).[23] By using many small, low-power, densely packaged chips, Blue Gene/P exceeded the power efficiency of other supercomputers of its generation, and at 371 MFLOPS/W Blue Gene/P installations ranked at or near the top of the Green500 lists in 2007–2008.[2]
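
The per-node peak is again consistent with four floating-point operations per core per cycle, the same assumption as for Blue Gene/L:

$$ 4\ \text{cores} \times 4\ \tfrac{\text{FLOPs}}{\text{cycle} \cdot \text{core}} \times 0.85\ \text{GHz} = 13.6\ \text{GFLOPS} $$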

Installations

The following is an incomplete list of Blue Gene/P installations. As of November 2009, the TOP500 list contained 15 Blue Gene/P installations of 2 racks (2048 nodes, 8192 processor cores, 23.86 TFLOPS Linpack) and larger.[1]

  • On November 12, 2007, the first Blue Gene/P installation, JUGENE, with 16 racks (16,384 nodes, 65,536 processors), was running at Forschungszentrum Jülich in Germany with a performance of 167 TFLOPS.[24] When inaugurated it was the fastest supercomputer in Europe and the sixth fastest in the world. In 2009, JUGENE was upgraded to 72 racks (73,728 nodes, 294,912 processor cores) with 144 terabytes of memory and 6 petabytes of storage, and achieved a peak performance of 1 PetaFLOPS. This configuration incorporated new air-to-water heat exchangers between the racks, reducing the cooling cost substantially.[25] JUGENE was shut down in July 2012 and replaced by the Blue Gene/Q system JUQUEEN.
  • The 40-rack (40,960 nodes, 163,840 processor cores) "Intrepid" system at Argonne National Laboratory was ranked #3 on the June 2008 Top 500 list.[26] The Intrepid system is one of the major resources of the INCITE program, in which processor hours are awarded to "grand challenge" science and engineering projects in a peer-reviewed competition.
  • Lawrence Livermore National Laboratory installed a 36-rack Blue Gene/P installation, "Dawn", in 2009.
  • The King Abdullah University of Science and Technology (KAUST) installed a 16-rack Blue Gene/P installation, "Shaheen", in 2009.
  • In 2012, a 6-rack Blue Gene/P was installed at Rice University, to be jointly administered with the University of São Paulo.[27]
  • A 2.5-rack Blue Gene/P system is the central processor for the Low Frequency Array for Radio astronomy (LOFAR) project in the Netherlands and surrounding European countries. This application uses the streaming data capabilities of the machine.
  • A 2-rack Blue Gene/P was installed in September 2008 in Sofia, Bulgaria, and is operated by the Bulgarian Academy of Sciences and Sofia University.[28]
  • In 2010, a 2-rack (8192-core) Blue Gene/P was installed at the University of Melbourne for the Victorian Life Sciences Computation Initiative.[29]
  • In 2011, a 2-rack Blue Gene/P was installed at the University of Canterbury in Christchurch, New Zealand.
  • In 2012, a 2-rack Blue Gene/P was installed at Rutgers University in Piscataway, New Jersey. It was dubbed "Excalibur" as an homage to the Rutgers mascot, the Scarlet Knight.[30]
  • In 2008, a 1-rack (1024 nodes) Blue Gene/P with 180 TB of storage was installed at the University of Rochester in Rochester, New York.[31]
  • The first Blue Gene/P in the ASEAN region was installed in 2010 at the Universiti Brunei Darussalam's research centre, the UBD-IBM Centre. The installation has prompted research collaboration between the university and IBM on climate modeling that will investigate the impact of climate change on flood forecasting, crop yields, renewable energy and the health of rainforests in the region, among others.[32]
  • In 2013, a 1-rack Blue Gene/P was donated to the Department of Science and Technology for weather forecasts, disaster management, precision agriculture, and health. It is housed in the National Computer Center, Diliman, Quezon City, under the auspices of the Philippine Genome Center (PGC) Core Facility for Bioinformatics (CFB) at UP Diliman.[33]

Applications

  • Veselin Topalov, the challenger to the World Chess Champion title in 2010, confirmed in an interview that he had used a Blue Gene/P supercomputer during his preparation for the match.[34]
  • The Blue Gene/P computer has been used to simulate approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections.[35]
  • The IBM Kittyhawk project team ported Linux to the compute nodes and demonstrated generic Web 2.0 workloads running at scale on a Blue Gene/P. Their paper, published in the ACM Operating Systems Review, describes a kernel driver that tunnels Ethernet over the tree network, which results in all-to-all TCP/IP connectivity.[36][37] Running standard Linux software like MySQL, their performance results on SpecJBB rank among the highest on record.[citation needed]
  • In 2011, a Rutgers University / IBM / University of Texas team linked the KAUST Shaheen installation together with a Blue Gene/P installation at the IBM Watson Research Center into a "federated high performance computing cloud", winning the IEEE SCALE 2011 challenge with an oil reservoir optimization application.[38]

Blue Gene/Q

The IBM Blue Gene/Q installation Mira at the Argonne National Laboratory, near Chicago, Illinois

The third design in the Blue Gene series, Blue Gene/Q, significantly expanded and enhanced the Blue Gene/L and /P architectures.

Design

The Blue Gene/Q "compute chip" is based on the64-bitIBM A2processor core. The A2 processor core is 4-waysimultaneously multithreadedand was augmented with aSIMDquad-vectordouble-precisionfloating-pointunit (IBM QPX). Each Blue Gene/Q compute chip contains 18 such A2 processor cores, running at 1.6 GHz. 16 Cores are used for application computing and a 17th core is used for handling operating system assist functions such asinterrupts,asynchronous I/O,MPIpacing, andRAS.The 18th core is aredundantmanufacturing spare, used to increase yield. The spared-out core is disabled prior to system operation. The chip's processor cores are linked by a crossbar switch to a 32 MBeDRAML2 cache, operating at half core speed. The L2 cache is multi-versioned—supportingtransactional memoryandspeculative execution—and has hardware support foratomic operations.[39]L2 cache misses are handled by two built-inDDR3memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5Dtorusconfiguration, with 2 GB/s chip-to-chip links. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS while drawing approximately 55 watts. The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. Completing the compute node, the chip is mounted on a compute card along with 16 GBDDR3DRAM(i.e., 1 GB for each user processor core).[40]

A Q32[41]"compute drawer" contains 32 compute nodes, each water cooled.[42] A "midplane" (crate) contains 16 Q32 compute drawers for a total of 512 compute nodes, electrically interconnected in a 5D torus configuration (4x4x4x4x2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores, and 16 TB RAM.[42]

Separate I/O drawers, placed at the top of a rack or in a separate rack, are air cooled and contain 8 compute cards and 8 PCIe expansion slots for InfiniBand or 10 Gigabit Ethernet networking.[42]

Performance

At the time of the Blue Gene/Q system announcement in November 2011,[43] an initial 4-rack Blue Gene/Q system (4096 nodes, 65,536 user processor cores) achieved #17 in the TOP500 list[1] with 677.1 TeraFLOPS Linpack, outperforming the original 2007 104-rack Blue Gene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list[3] with over 250 GTEPS (giga-traversed edges per second). Blue Gene/Q systems also topped the Green500 list of most energy-efficient supercomputers with up to 2.1 GFLOPS/W.[2]

In June 2012, Blue Gene/Q installations took the top positions in all three lists: TOP500,[1] Graph500,[3] and Green500.[2]

Installations

The following is an incomplete list of Blue Gene/Q installations. As of June 2012, the TOP500 list contained 20 Blue Gene/Q installations of half a rack (512 nodes, 8192 processor cores, 86.35 TFLOPS Linpack) and larger.[1] At a (size-independent) power efficiency of about 2.1 GFLOPS/W, all these systems also populated the top of the June 2012 Green500 list.[2]

  • A Blue Gene/Q system called Sequoia was delivered to the Lawrence Livermore National Laboratory (LLNL) beginning in 2011 and was fully deployed in June 2012. It is part of the Advanced Simulation and Computing Program running nuclear simulations and advanced scientific research. It consists of 96 racks (comprising 98,304 compute nodes with 1.6 million processor cores and 1.6 PB of memory) covering an area of about 3,000 square feet (280 m²).[44] In June 2012, the system was ranked as the world's fastest supercomputer,[45][46] at 20.1 PFLOPS peak and 16.32 PFLOPS sustained (Linpack), drawing up to 7.9 megawatts of power.[1] In June 2013, its performance was listed at 17.17 PFLOPS sustained (Linpack).[1]
  • A 10 PFLOPS (peak) Blue Gene/Q system called Mira was installed at Argonne National Laboratory in the Argonne Leadership Computing Facility in 2012. It consists of 48 racks (49,152 compute nodes), with 70 PB of disk storage (470 GB/s I/O bandwidth).[47][48]
  • JUQUEEN at the Forschungszentrum Jülich is a 28-rack Blue Gene/Q system, and was from June 2013 to November 2015 the highest-ranked machine in Europe in the Top500.[1]
  • Vulcan at Lawrence Livermore National Laboratory (LLNL) is a 24-rack, 5 PFLOPS (peak) Blue Gene/Q system that was commissioned in 2012 and decommissioned in 2019.[49] Vulcan served Lab-industry projects through Livermore's High Performance Computing (HPC) Innovation Center[50] as well as academic collaborations in support of DOE/National Nuclear Security Administration (NNSA) missions.[51]
  • Fermi at the CINECA Supercomputing facility, Bologna, Italy,[52] is a 10-rack, 2 PFLOPS (peak) Blue Gene/Q system.
  • As part of DiRAC, the EPCC hosts a 6-rack (6144-node) Blue Gene/Q system at the University of Edinburgh.[53]
  • A five-rack Blue Gene/Q system with additional compute hardware, called AMOS, was installed at Rensselaer Polytechnic Institute in 2013.[54] The system was rated at 1048.6 teraflops, making it the most powerful supercomputer at any private university and the third most powerful supercomputer among all universities in 2014.[55]
  • An 838 TFLOPS (peak) Blue Gene/Q system called Avoca was installed at the Victorian Life Sciences Computation Initiative in June 2012.[56] This system is part of a collaboration between IBM and VLSCI, with the aims of improving diagnostics, finding new drug targets, refining treatments and furthering our understanding of diseases.[57] The system consists of 4 racks, with 350 TB of storage, 65,536 cores, and 64 TB of RAM.[58]
  • A 209 TFLOPS (peak) Blue Gene/Q system was installed at the University of Rochester in July 2012.[59] This system is part of the Health Sciences Center for Computational Innovation, which is dedicated to the application of high-performance computing to research programs in the health sciences. The system consists of a single rack (1,024 compute nodes) with 400 TB of high-performance storage.[60]
  • A 209 TFLOPS peak (172 TFLOPS LINPACK) Blue Gene/Q system called Lemanicus was installed at the EPFL in March 2013.[61] This system belongs to the Center for Advanced Modeling Science (CADMOS),[62] a collaboration between the three main research institutions on the shores of Lake Geneva in the French-speaking part of Switzerland: the University of Lausanne, the University of Geneva, and EPFL. The system consists of a single rack (1,024 compute nodes) with 2.1 PB of IBM GPFS-GSS storage.
  • A half-rack Blue Gene/Q system, with about 100 TFLOPS (peak), called Cumulus, was installed at the A*STAR Computational Resource Centre, Singapore, in early 2011.[63]

Applications

Record-breaking science applications have been run on the BG/Q, the first to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run,[64] while the Cardioid code,[65][66] which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation, both on Sequoia. A fully compressible flow solver has also achieved 14.4 PFLOP/s (originally 11 PFLOP/s) on Sequoia, 72% of the machine's nominal peak performance.[67]

See also

References

  1. ^abcdefghi"November 2004 - TOP500 Supercomputer Sites".Top500.org.Retrieved13 December2019.
  2. ^abcde"Green500 - TOP500 Supercomputer Sites".Green500.org.Archived fromthe originalon 26 August 2016.Retrieved13 October2017.
  3. ^abc"The Graph500 List".Archived fromthe originalon 2011-12-27.
  4. ^Harris, Mark (September 18, 2009)."Obama honours IBM supercomputer".Techradar.Retrieved2009-09-18.
  5. ^"Supercomputing Strategy Shifts in a World Without BlueGene".Nextplatform.14 April 2015.Retrieved13 October2017.
  6. ^"IBM to Build DoE's Next-Gen Coral Supercomputers - EE Times".EETimes.Archived fromthe originalon 30 April 2017.Retrieved13 October2017.
  7. ^Supercomputing 2020 conference, Test of Time award video presentation
  8. ^"Blue Gene: A Vision for Protein Science using a Petaflop Supercomputer"(PDF).IBM Systems Journal.40(2). 2017-10-23.
  9. ^"A Talk with the Brain behind Blue Gene",BusinessWeek,November 6, 2001, archived fromthe originalon December 11, 2014
  10. ^"BlueGene/L".Archived fromthe originalon 2011-07-18.Retrieved2007-10-05.
  11. ^"hpcwire".Archived fromthe originalon September 28, 2007.
  12. ^"SC06".sc06.supercomputing.org.Retrieved13 October2017.
  13. ^"HPC Challenge Award Competition".Archived fromthe originalon 2006-12-11.Retrieved2006-12-03.
  14. ^"Mouse brain simulated on computer".BBC News. April 27, 2007. Archived fromthe originalon 2007-05-25.
  15. ^"IBM100 - Blue Gene".03.ibm.7 March 2012.Retrieved13 October2017.
  16. ^Kunkel, Julian M.; Ludwig, Thomas; Meuer, Hans (12 June 2013).Supercomputing: 28th International Supercomputing Conference, ISC 2013, Leipzig, Germany, June 16-20, 2013. Proceedings.Springer.ISBN9783642387500.Retrieved13 October2017– via Google Books.
  17. ^"Blue Gene".IBM Journal of Research and Development.49(2/3). 2005.
  18. ^Kissel, Lynn."BlueGene/L Configuration".asc.llnl.gov.Archived fromthe originalon 17 February 2013.Retrieved13 October2017.
  19. ^"Compute Node Ruby for Bluegene/L".ece.iastate.edu.Archived fromthe originalon February 11, 2009.
  20. ^William Scullin (March 12, 2011).Python for High Performance Computing.Atlanta, GA.
  21. ^Blue Matter source code, retrieved February 28, 2020
  22. ^"IBM Triples Performance of World's Fastest, Most Energy-Efficient Supercomputer".2007-06-27.Retrieved2011-12-24.
  23. ^"Overview of the IBM Blue Gene/P project".IBM Journal of Research and Development.52:199–220. Jan 2008.doi:10.1147/rd.521.0199.
  24. ^"Supercomputing: Jülich Amongst World Leaders Again".IDG News Service. 2007-11-12.
  25. ^"IBM Press room - 2009-02-10 New IBM Petaflop Supercomputer at German Forschungszentrum Juelich to Be Europe's Most Powerful".03.ibm. 2009-02-10.Retrieved2011-03-11.
  26. ^"Argonne's Supercomputer Named World's Fastest for Open Science, Third Overall".Mcs.anl.gov.Archived fromthe originalon 8 February 2009.Retrieved13 October2017.
  27. ^"Rice University, IBM partner to bring first Blue Gene supercomputer to Texas".news.rice.edu.Archived fromthe originalon 2012-04-05.Retrieved2012-04-01.
  28. ^Вече си имаме и суперкомпютърArchived2009-12-23 at theWayback Machine,Dir.bg, 9 September 2008
  29. ^"IBM Press room - 2010-02-11 IBM to Collaborate with Leading Australian Institutions to Push the Boundaries of Medical Research - Australia".03.ibm. 2010-02-11.Retrieved2011-03-11.
  30. ^"Rutgers Gets Big Data Weapon in IBM Supercomputer - Hardware -".Archived fromthe originalon 2013-03-06.Retrieved2013-09-07.
  31. ^"University of Rochester and IBM Expand Partnership in Pursuit of New Frontiers in Health".University of Rochester Medical Center. May 11, 2012. Archived fromthe originalon 2012-05-11.
  32. ^"IBM and Universiti Brunei Darussalam to Collaborate on Climate Modeling Research".IBM News Room. 2010-10-13.Retrieved18 October2012.
  33. ^Ronda, Rainier Allan."DOST's supercomputer for scientists now operational".Philstar.Retrieved13 October2017.
  34. ^"Topalov training with super computer Blue Gene P".Players.chessdo.Archived fromthe originalon 19 May 2013.Retrieved13 October2017.
  35. ^Kaku, Michio.Physics of the Future(New York: Doubleday, 2011), 91.
  36. ^"Project Kittyhawk: A Global-Scale Computer".Research.ibm.Retrieved13 October2017.
  37. ^Appavoo, Jonathan; Uhlig, Volkmar; Waterland, Amos."Project Kittyhawk: Building a Global-Scale Computer"(PDF).Yorktown Heights, NY: IBM T.J. Watson Research Center. Archived from the original on 2008-10-31.Retrieved2018-03-13.{{cite web}}:CS1 maint: bot: original URL status unknown (link)
  38. ^"Rutgers-led Experts Assemble Globe-Spanning Supercomputer Cloud".News.rutgers.edu.2011-07-06. Archived fromthe originalon 2011-11-10.Retrieved2011-12-24.
  39. ^"Memory Speculation of the Blue Gene/Q Compute Chip".Retrieved2011-12-23.
  40. ^"The Blue Gene/Q Compute chip"(PDF).Archived fromthe original(PDF)on 2015-04-29.Retrieved2011-12-23.
  41. ^"IBM Blue Gene/Q supercomputer delivers petascale computing for high-performance computing applications"(PDF).01.ibm.Retrieved13 October2017.
  42. ^abc"IBM uncloaks 20 petaflops BlueGene/Q super".The Register.2010-11-22.Retrieved2010-11-25.
  43. ^"IBM announces 20-petaflops supercomputer".Kurzweil. 18 November 2011.Retrieved13 November2012.IBM has announced the Blue Gene/Q supercomputer, with peak performance of 20 petaflops
  44. ^Feldman, Michael (2009-02-03)."Lawrence Livermore Prepares for 20 Petaflop Blue Gene/Q".HPCwire. Archived fromthe originalon 2009-02-12.Retrieved2011-03-11.
  45. ^B Johnston, Donald (2012-06-18)."NNSA's Sequoia supercomputer ranked as world's fastest".Archived fromthe originalon 2014-09-02.Retrieved2012-06-23.
  46. ^"TOP500 Press Release".Archived fromthe originalon June 24, 2012.
  47. ^"MIRA: World's fastest supercomputer - Argonne Leadership Computing Facility".Alcf.anl.gov.Retrieved13 October2017.
  48. ^"Mira - Argonne Leadership Computing Facility".Alcf.anl.gov.Retrieved13 October2017.
  49. ^"Vulcan—decommissioned".hpc.llnl.gov.Retrieved10 April2019.
  50. ^"HPC Innovation Center".hpcinnovationcenter.llnl.gov.Retrieved13 October2017.
  51. ^"Lawrence Livermore's Vulcan brings 5 petaflops computing power to collaborations with industry and academia to advance science and technology".Llnl.gov.11 June 2013. Archived fromthe originalon 9 December 2013.Retrieved13 October2017.
  52. ^"Ibm-Fermi | Scai".Archived fromthe originalon 2013-10-30.Retrieved2013-05-13.
  53. ^"DiRAC BlueGene/Q".epcc.ed.ac.uk.
  54. ^"Rensselaer at Petascale: AMOS Among the World's Fastest and Most Powerful Supercomputers".News.rpi.edu.Retrieved13 October2017.
  55. ^Michael Mullaneyvar."AMOS Ranks 1st Among Supercomputers at Private American Universities".News.rpi.edi.Retrieved13 October2017.
  56. ^"World's greenest supercomputer comes to Melbourne - The Melbourne Engineer".Themelbourneengineer.eng.unimelb.edu.au/.16 February 2012. Archived fromthe originalon 2 October 2017.Retrieved13 October2017.
  57. ^"Melbourne Bioinformatics - For all researchers and students based in Melbourne's biomedical and bioscience research precinct".Melbourne Bioinformatics.Retrieved13 October2017.
  58. ^"Access to High-end Systems - Melbourne Bioinformatics".Vlsci.org.au.Retrieved13 October2017.
  59. ^"University of Rochester Inaugurates New Era of Health Care Research".Rochester.edu.Retrieved13 October2017.
  60. ^"Resources - Center for Integrated Research Computing".Circ.rochester.edu.Retrieved13 October2017.
  61. ^"EPFL BlueGene/L Homepage".Archived fromthe originalon 2007-12-10.Retrieved2021-03-10.
  62. ^Utilisateur, Super."À propos".Cadmos.org.Archived fromthe originalon 10 January 2016.Retrieved13 October2017.
  63. ^"A*STAR Computational Resource Centre".Acrc.a-star.edu.sg.Archived fromthe originalon 2016-12-20.Retrieved2016-08-24.
  64. ^S. Habib; V. Morozov; H. Finkel; A. Pope;K. Heitmann;K. Kumaran; T. Peterka; J. Insley; D. Daniel; P. Fasel; N. Frontiere & Z. Lukic (2012). "The Universe at Extreme Scale: Multi-Petaflop Sky Simulation on the BG/Q".arXiv:1211.4864[cs.DC].
  65. ^"Cardioid Cardiac Modeling Project".Researcher.watson.ibm.25 July 2016. Archived fromthe originalon 21 May 2013.Retrieved13 October2017.
  66. ^"Venturing into the Heart of High-Performance Computing Simulations".Str.llnl.gov.Archived fromthe originalon 14 February 2013.Retrieved13 October2017.
  67. ^Rossinelli, Diego; Hejazialhosseini, Babak; Hadjidoukas, Panagiotis; Bekas, Costas; Curioni, Alessandro; Bertsch, Adam; Futral, Scott; Schmidt, Steffen J.; Adams, Nikolaus A.; Koumoutsakos, Petros (17 November 2013)."11 PFLOP/S simulations of cloud cavitation collapse".Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis.SC '13. pp. 1–13.doi:10.1145/2503210.2504565.ISBN9781450323789.S2CID12651650.

External links

Records
World's most powerful supercomputer
  • Blue Gene/L (70.72–478.20 teraflops), November 2004 – November 2007. Preceded by NEC Earth Simulator (35.86 teraflops); succeeded by IBM Roadrunner (1.026 petaflops).
  • Blue Gene/Q (16.32 petaflops), June 2012 – November 2012. Preceded by Fujitsu K computer (10.51 petaflops); succeeded by Cray Titan (17.59 petaflops).