A Brief History of Supercomputing



The Evolution of the Supercomputer

Computers arose from the need to perform calculations at a pace faster than is possible by hand. Once that problem was solved, the race was on to pit computers against themselves and meet ever-increasing demands for processing power.

At first, the race was all about improving raw calculation speed and capabilities. Then, the challenge of solving more difficult problems led to improvements in programming models and software. Eventually, supercomputers were born, enabling scientists and engineers to solve highly complex technical problems.

Whereas once supercomputers were simply mammoth machines full of expensive processors, supercomputing today takes advantage of improvements in processor and network technology. Clusters, and now even clusters on the cloud, pool the power of thousands of commodity off-the-shelf (COTS) microprocessors into machines that are among the fastest in the world. Understanding how we got here requires a look back at the evolution of computing.

Early computing

The earliest computers were mechanical and electro-mechanical devices, but the first high-speed computers used vacuum-tube technology. Tubes were then replaced by transistors to create more reliable, general-purpose computers. The need for greater ease of use and the ability to solve a broader set of problems led to breakthroughs in programming models and languages, and eventually, to third-party application software solutions.

By 1954, IBM offered the IBM 650, the first mass-produced computer. FORTRAN, an important language for numeric or computational programs, was developed at this time by IBM’s John Backus. In the early 1960s, general-purpose computers appeared from several suppliers. The next step was to design systems to support parallel operations, in which calculations are performed independently of one another, improving performance for computationally intensive tasks. In 1964, Control Data Corp. (CDC) unveiled the 6600, considered by many to be the first supercomputer. It was capable of performing approximately 9 megaflops (Mflops), or 9 million floating point operations per second. In 1969, CDC offered the 7600, which performed at approximately 40 Mflops. In 1973, Burroughs offered the Illiac IV, an early parallel computer.

Birth of vector supercomputing

Many problems in science and engineering reduce to performing the same calculation across large, ordered sets of numbers, known as vectors. In the mid-1970s, vector processing made it possible to operate on an entire vector with a single instruction, completing these calculations in a fraction of the time previously required.
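
To make the idea concrete, here is a minimal sketch in C of the kind of loop vector hardware accelerates. The SAXPY operation shown is a classic vector kernel; the scalar code itself is illustrative rather than drawn from any particular machine.

    /* SAXPY (y = a*x + y): the same operation applied to every
       element of two arrays. A scalar processor executes one
       fetch-decode-execute cycle per element; a vector processor
       such as the Cray-1 issued a single instruction that streamed
       whole vector registers (64 elements on the Cray-1) through a
       pipelined functional unit. */
    #include <stddef.h>

    void saxpy(size_t n, float a, const float *x, float *y)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];  /* one operation, many data elements */
    }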

At this time, Cray Research, Inc. introduced very high-performance vector supercomputers that combined strong processors, high-speed memory systems and powerful I/O systems. In 1976, the Cray-1 became the supercomputer industry standard.

A short time later, Convex Computer Corp. combined the lower cost of minicomputers with the performance of vector processors to create the mini-supercomputer market. Vector processing became the supercomputer standard for the next 20 years.

In 1982, the Cray X-MP added the ability for multiple processors in one computer to share memory and I/O resources. In 1988, the Cray Y-MP provided 2.6 Gflops using 8 processors. In 1992, the Cray C90 followed, becoming the first computer whose individual processors each attained a speed of 1 gigaflops (Gflops). A single Cray C90 could hold up to 16 processors, for a total processing capacity of 16 Gflops.

In 1994, Cray shipped the first T90 vector supercomputer, with a peak performance of 58 Gflops. The T90 used up to 32 vector processors, the high point for a single vector computer. Constructing a 32-processor shared memory vector computer required an expensive memory subsystem, however, resulting in a system price of $35 million in the mid-1990s.

Highly scalable computers debut

Besides being very expensive, vector computers are difficult to scale much beyond the T90’s 32 processors. Yet many computations require hundreds or even thousands of processors. Massively parallel processing (MPP) computers addressed this need by using a large number of relatively low-cost processors connected to one another and to memory via lower-cost network connections. This approach enabled suppliers to build computers that could scale to any size the buyer could afford. It also opened the door to the “grand challenge” problems: science and engineering problems with widespread impact, such as weather prediction.

Early MPPs were based on a simple design, dubbed Single Instruction, Multiple Data (SIMD), in which the same instruction or calculation is performed on multiple data sets simultaneously. These systems fit certain applications very well, but the SIMD design was not flexible enough to address more than the easiest of parallel applications.
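
The lock-step nature of SIMD can be sketched in a few lines of C, assuming a compiler with GCC-style vector extensions; the type name below is an illustrative choice, not a standard one.

    /* One C expression, one instruction, four data lanes: each lane
       performs the identical addition on its own element, mirroring
       how SIMD machines broadcast a single instruction to many
       processing elements. */
    typedef float f32x4 __attribute__((vector_size(16)));  /* 4 floats */

    f32x4 simd_add(f32x4 a, f32x4 b)
    {
        return a + b;  /* same instruction applied to all lanes at once */
    }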

The introduction of Multiple Instruction, Multiple Data (MIMD) machines, which could address more complex problems by performing different calculations simultaneously on multiple data sets, led the way to more general-purpose, highly scalable systems. MIMD machines retained the advantages of distributed-memory architectures and nearly unlimited memory scaling. A MIMD supercomputer could also be built out of networks of workstations, and clusters of hundreds of Sun Microsystems, Inc., Silicon Graphics, Inc. and IBM workstations soon appeared, each sporting its vendor’s flavor of Unix. However, interoperability and maintenance problems limited how far these clusters could scale.
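
The contrast with SIMD can be sketched using POSIX threads, where each thread runs its own instruction stream on its own data; the two worker functions here are hypothetical stand-ins for genuinely different computations.

    /* A minimal MIMD sketch: two threads execute different
       calculations simultaneously on different data, which a
       SIMD machine cannot do within a single instruction stream. */
    #include <pthread.h>
    #include <stdio.h>

    static void *integrate(void *arg)      /* one instruction stream... */
    {
        double *sum = arg;
        const double h = 1e-6;
        for (double x = 0.0; x < 1.0; x += h)
            *sum += x * x * h;             /* crude integral of x^2 */
        return NULL;
    }

    static void *count_primes(void *arg)   /* ...and a different one */
    {
        int *count = arg;
        for (int n = 2; n < 100000; n++) {
            int prime = 1;
            for (int d = 2; d * d <= n; d++)
                if (n % d == 0) { prime = 0; break; }
            *count += prime;
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        double sum = 0.0;
        int primes = 0;

        pthread_create(&t1, NULL, integrate, &sum);
        pthread_create(&t2, NULL, count_primes, &primes);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("integral of x^2 on [0,1] ~= %f; primes below 100000: %d\n",
               sum, primes);
        return 0;
    }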

