The era of microminiaturization had begun. But it was still a substantial step from SLT to integrated circuits, in which all of the same elements -- resistors, capacitors and diodes -- were fabricated on a single slice of silicon. The resulting monolithic technology was an industry-wide development spanning the 1970s that opened the door to large-scale integration. In 1970, IBM rolled out a 128-bit bipolar chip that was used in the industry's first all-monolithic main memory. Introduced that year in the IBM System/370 Model 145, the chip measured less than 1/8-inch square. It launched IBM into a promising new technology.
Barely had the computer lexicon digested "monolithic" when it was embellished by "RAM" -- an expression coined some years before for random access memory. The chip's low power requirement and low cost helped make it the choice for main memory, where data is constantly in motion.
With the whole computer world trying to cram more and more circuits onto a chip, IBM led the industry when, in 1978, it became the first to mass-produce and use a RAM chip storing more than 64,000 bits of data.
The 64K chip was the first of a whole family of progressively larger capacity chips produced by a unique process known as SAMOS -- short for Silicon and Aluminum Metal Oxide Semiconductor. In 1982, IBM announced an experimental chip capable of storing more than 288,000 bits of information -- equivalent to four copies of the Declaration of Independence.
In April 1984, the most recent addition to the SAMOS family -- a one-megabit chip -- arrived. "Mega" means million, but the chip actually held 1,048,576 bits of information in a space smaller than a child's fingernail.
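The gap between the round marketing numbers and the actual capacities comes from binary addressing: memory sizes are powers of two, so the labels slightly undersell the chips. A quick check (my own illustration, not from the article):

```python
# Memory capacities are powers of two, so "64K" and "one megabit"
# hold a little more than their names suggest.
bits_64k = 2 ** 16   # the "64K" chip: 65,536 bits -- "more than 64,000"
bits_1meg = 2 ** 20  # the one-megabit chip: 1,048,576 bits

print(bits_64k)   # 65536
print(bits_1meg)  # 1048576
```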
Improvements over the years in the electronic devices used in mainframes could not have been realized without equally ingenious advancements in packaging. Because increasing the number of logic circuits on a chip increases the number of connections that must be made between them, IBM devised new multilayer ceramic packaging technology to create fine three-dimensional networks linking thousands of devices.
In the IBM 3081 processor, for example, the length of wiring between chips was about one-eighth that in the previous large-scale mainframe -- the IBM 3033 processor -- reducing the time it took for electric pulses to pass between components. The result was a twofold decrease in processor cycle time.
But greater densities created another challenge. Components jammed together give off a fair amount of heat. If not dissipated, the heat is enough to destroy the chips. The solution in the 3081 was to draw off the heat through a plunger surrounded by helium gas into a "hat," which, in turn, was cooled by chilled water circulating inside an attached conduit. The whole assembly was called the Thermal Conduction Module (below).
Over the years, advances in mainframe performance have rested not only on revolutionary developments in microelectronics but also on innovations in processor architecture and programming, which have played a significant role.
For example, computer scientists long sought to break down complicated tasks into simpler ones so that different parts of a problem could be worked on in parallel. A common name for the technique is "pipelining." The pioneering Stretch computer was among the first to overlap operations so that it could start processing a second set of numbers while the first was still in the "pipeline."
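The payoff of overlapping work this way can be sketched with a toy timing model. This is my own illustration, not Stretch's actual design; the stage count and the fetch/decode/execute breakdown are assumptions for the sake of the example:

```python
# Toy model of pipelining: with S stages and N items, overlapped
# execution finishes in N + S - 1 cycles instead of N * S.

def sequential_cycles(n_items, n_stages):
    # Without overlap, each item passes through every stage
    # before the next item may start.
    return n_items * n_stages

def pipelined_cycles(n_items, n_stages):
    # With overlap, a new item enters the pipeline every cycle;
    # after the first item fills the stages, one item finishes per cycle.
    return n_items + n_stages - 1

# Example: 100 sets of numbers through a 3-stage
# fetch / decode / execute pipeline.
print(sequential_cycles(100, 3))  # 300 cycles
print(pipelined_cycles(100, 3))   # 102 cycles
```

The second set of numbers starts down the pipeline while the first is still in flight, which is exactly the overlap the Stretch designers were after.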
Employing more than one processor was another departure that made mainframes more productive. Guided by a sophisticated operating system, the IBM 3084 processor complex, for example, could keep four processors busy at work -- all able to dip into the same pool of data and instructions. If one or more processors were shut down for maintenance, the rest could stay on the job.
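A loose modern analogy for that arrangement (my sketch, not IBM's operating system) is several workers drawing tasks from one shared pool, with the work still completing when fewer workers are available:

```python
from concurrent.futures import ThreadPoolExecutor

# One shared pool of work, analogous to a common pool of
# data and instructions. The task itself is a stand-in.
shared_pool = list(range(12))

def process(task):
    return task * task

# Four workers dip into the same pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, shared_pool))

# If one worker is taken down for maintenance, the remaining
# three still work through the identical pool.
with ThreadPoolExecutor(max_workers=3) as pool:
    results_degraded = list(pool.map(process, shared_pool))

print(results == results_degraded)  # True
```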