Sunday, May 31, 2009

Coronary artery bypass surgery




The invention: The most widely used procedure of its type, coronary
bypass surgery uses veins from the legs to improve circulation
to the heart.
The people behind the invention:
Rene Favaloro (1923-2000), a heart surgeon
Donald B. Effler (1915- ), a member of the surgical team
that performed the first coronary artery bypass operation
F. Mason Sones (1918- ), a physician who developed an
improved technique of X-raying the heart’s arteries
Fighting Heart Disease
In the mid-1960’s, the leading cause of death in the United States
was coronary artery disease, claiming nearly 250 deaths per 100,000
people. Because this number was so alarming, much research was
being conducted on the heart. Most of the public’s attention was focused
on heart transplants performed separately by the famous surgeons
Christiaan Barnard and Michael DeBakey. Yet other, less dramatic
procedures were being developed and studied.
A major problem with coronary artery disease, besides the threat
of death, is chest pain, or angina. In individuals whose arteries are
clogged with fat and cholesterol, the heart muscle frequently does not
receive enough oxygen-rich blood. The result may be angina, pain
severe enough to limit physical activity. Some of
the heart research in the mid-1960’s was an attempt to find a surgical
procedure that would eliminate angina in heart patients. The
various surgical procedures had varying success rates.
In the late 1950’s and early 1960’s, a team of physicians in Cleveland
was studying surgical procedures that would eliminate angina.
The team was composed of Rene Favaloro, Donald B. Effler, F.
Mason Sones, and Laurence Groves. They were working on the concept,
proposed by Dr. Arthur M. Vineberg from McGill University
in Montreal, of implanting a healthy artery from the chest into the
heart. This bypass procedure would provide the heart with another
source of blood, resulting in enough oxygen to overcome the angina.
Yet Vineberg's surgery was often ineffective because it was hard to
determine exactly where to implant the new artery.
New Techniques
In order to make Vineberg’s
proposed operation
successful, better diagnostic
tools were needed. This was
accomplished by the work
of Sones. He developed a diagnostic procedure, called “arteriography,”
whereby a catheter was inserted into an artery in the arm,
which he ran all the way into the heart. He then injected a dye into the
coronary arteries and photographed them with a high-speed motion-picture
camera. This provided an image of the heart, which made it
easy to determine where the blockages were in the coronary arteries.
Using this tool, the team tried several new techniques. First, the
surgeons tried to ream out the deposits found in the narrow portion
of the artery. They found, however, that this actually reduced
blood flow. Second, they tried slitting the length of the blocked
area of the artery and suturing a strip of tissue that would increase
the diameter of the opening. This was also ineffective because it often
resulted in turbulent blood flow. Finally, the team attempted to
reroute the flow of blood around the blockage by suturing in other
tissue, such as a portion of a vein from the upper leg. This bypass
procedure removed that part of the artery that was clogged and replaced
it with a clear vessel, thereby restoring blood flow through
the artery. This new method was introduced by Favaloro in 1967.
In order for Favaloro and other heart surgeons to perform coronary
artery surgery successfully, several other medical techniques
had to be developed. These included extracorporeal circulation and
microsurgical techniques.
Extracorporeal circulation is the process of diverting the patient’s
blood flow from the heart and into a heart-lung machine.
This procedure was developed in 1953 by U.S. surgeon John H.
Gibbon, Jr. Since the blood does not flow through the heart, the
heart can be temporarily stopped so that the surgeons can isolate
the artery and perform the surgery on motionless tissue.
Microsurgery is necessary because some of the coronary arteries
are less than 1.5 millimeters in diameter. Since these arteries
had to be sutured, optical magnification and very delicate and sophisticated
surgical tools were required. After performing this surgery
on numerous patients, follow-up studies were able to determine
the surgery’s effectiveness. Only then was coronary artery
bypass surgery recognized as an effective procedure for reducing angina
in heart patients.
Consequences
According to the American Heart Association, approximately
332,000 bypass surgeries were performed in the United States in
1987, an increase of 48,000 from 1986. These figures show that the
work by Favaloro and others has had a major impact on the
health of United States citizens. The future outlook is also positive.
It has been estimated that five million people had coronary
artery disease in 1987. Of this group, an estimated 1.5 million had
heart attacks and 500,000 died. Of those living, many experienced
angina. Research has developed new surgical procedures and
new drugs to help fight coronary artery disease. Yet coronary artery
bypass surgery is still a major form of treatment.

Thursday, May 28, 2009

Contact lenses






The invention: Small plastic devices that fit under the eyelids, contact
lenses, or “contacts,” frequently replace the more familiar
eyeglasses that many people wear to correct vision problems.
The people behind the invention:
Leonardo da Vinci (1452-1519), an Italian artist and scientist
Adolf Eugen Fick (1829-1901), a German ophthalmologist
Kevin Tuohy, an American optician
Otto Wichterle (1913- ), a Czech chemist
William Feinbloom (1904-1985), an American optometrist
An Old Idea
There are two main types of contact lenses: hard and soft. Both
types are made of synthetic polymers (plastics). The basic concept of
the contact lens was conceived by Leonardo da Vinci in 1508. He
proposed that vision could be improved if small glass ampules
filled with water were placed in front of each eye. Nothing came of
the idea until glass scleral lenses were invented by the German
ophthalmologist Adolf Fick. Fick’s large, heavy lenses covered the pupil
of the eye, its colored iris, and part of the sclera (the white of the
eye). Fick’s lenses were not useful, since they were painful to wear.
In the mid-1930’s, however, plastic scleral lenses were developed
by various organizations and people, including the German company
I. G. Farben and the American optometrist William Feinbloom.
These lenses were light and relatively comfortable; they
could be worn for several hours at a time.
In 1945, the American optician Kevin Tuohy developed corneal
lenses, which covered only the cornea of the eye. Reportedly,
Tuohy’s invention was inspired by the fact that his nearsighted wife
could not bear scleral lenses but hated to wear eyeglasses. Tuohy’s
lenses were hard contact lenses made of rigid plastic, but they were
much more comfortable than scleral lenses and could be worn for
longer periods of time. Soon after, other people developed soft contact
lenses, which cover both the cornea and the iris. At present, many kinds of contact lenses are available. Both hard and soft contact
lenses have advantages for particular uses.
Eyes, Tears, and Contact Lenses
The camera-like human eye automatically focuses itself and adjusts
to the prevailing light intensity. In addition, it never runs out of
“film” and makes a continuous series of visual images. In the process
of seeing, light enters the eye and passes through the clear,
dome-shaped cornea, through the hole (the pupil) in the colored
iris, and through the clear eye lens, which can change shape by
means of muscle contraction. The lens focuses the light, which next
passes across the jellylike “vitreous humor” and hits the retina.
There, light-sensitive retinal cells send visual images to the optic
nerve, which transmits them to the brain for interpretation.
Many people have 20/20 (normal) vision, which means that they
can clearly see letters on a designated line of a standard eye chart
placed 20 feet away. Nearsighted (myopic) people have vision of
20/40 or worse. This means that, 20 feet from the eye chart, they see
clearly what people with 20/20 vision can see clearly at a greater
distance.
Myopia (nearsightedness) is one of the four most common visual
defects. The others are hyperopia, astigmatism, and presbyopia. All
are called “refractive errors” and are corrected with appropriate
eyeglasses or contact lenses. Myopia, found in about 30 percent of
humans, occurs when the eyeball is too long for the lens’s focusing
ability, so images of distant objects focus before they reach the retina,
causing blurry vision. Hyperopia, or farsightedness, occurs
when the eyeballs are too short. In hyperopia, the eye’s lenses cannot
focus images of nearby objects by the time those images reach
the retina, resulting in blurry vision. A more common condition is
astigmatism, in which incorrectly shaped corneas make all objects
appear blurred. Finally, presbyopia, part of the aging process,
causes the lens of the eye to lose its elasticity. It causes progressive
difficulty in seeing nearby objects. In myopic, hyperopic, or astigmatic
people, bifocal (two-lens) systems are used to correct presbyopia,
whereas monofocal systems are used to correct presbyopia in
people whose vision is otherwise normal.
Modern contact lenses, which many people prefer to eyeglasses,
are used to correct all common eye defects as well as many others
not mentioned here. The lenses float on the layer of tears that is
made continuously to nourish the eye and keep it moist. They fit under
the eyelids and either over the cornea or over both the cornea
and the iris, and they correct visual errors by altering the eye’s focal
length enough to produce 20/20 vision. In addition to being more attractive
than eyeglasses, contact lenses correct visual defects more effectively
than eyeglasses can. Some soft contact lenses (all are made
of flexible plastics) can be worn almost continuously. Hard lenses are made of more rigid plastic and last longer, though they can usually be
worn only for six to nine hours at a time. The choice of hard or soft
lenses must be made on an individual basis.
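The focal-length adjustment mentioned above can be illustrated with a standard optometric calculation that is not part of this account: because a contact lens rests on the tear film instead of sitting roughly 12 millimeters in front of the eye like a spectacle lens, the power needed for the same correction shifts slightly. The short Python sketch below applies the usual vertex-distance formula to a made-up prescription.

def contact_lens_power(spectacle_power_d, vertex_mm=12.0):
    """Convert a spectacle prescription (in diopters) to the equivalent
    power at the cornea: P_contact = P_spectacle / (1 - d * P_spectacle),
    where d is the vertex distance in meters."""
    d = vertex_mm / 1000.0
    return spectacle_power_d / (1 - d * spectacle_power_d)

# A hypothetical nearsighted patient: -5.00 D eyeglasses call for a
# slightly weaker contact lens of about -4.72 D.
print(round(contact_lens_power(-5.00), 2))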
The disadvantages of contact lenses include the fact that they must
be cleaned frequently to prevent eye irritation. Furthermore, people
who do not produce adequate amounts of tears (a condition called
“dry eyes”) cannot wear them. Also, arthritis, many allergies, and
poor manual dexterity caused by old age or physical problems make
many people poor candidates for contact lenses.
Impact
The invention of Plexiglas hard scleral contact lenses set the stage
for the development of the widely used corneal hard lenses by Tuohy.
The development of soft contact lenses available to the general public
began in Czechoslovakia in the 1960’s. It led to the sale, starting in the
1970’s, of the popular soft contact lenses pioneered by Otto Wichterle.
The Wichterle lenses, which cover both the cornea and the iris, are
made of a plastic called HEMA (short for hydroxyethyl methacrylate).
These very thin lenses have disadvantages that include the requirement
of disinfection between uses, incomplete astigmatism correction, low
durability, and the possibility of chemical combination with some
medications, which can damage the eyes. Therefore, much research is
being carried out to improve them. For this reason, and because of the
continued popularity of hard lenses, new kinds of soft and hard lenses
are continually coming on the market.

Sunday, May 24, 2009

Computer chips






The invention: Also known as a microprocessor, a computer chip
combines the basic logic circuits of a computer on a single silicon
chip.
The people behind the invention:
Robert Norton Noyce (1927-1990), an American physicist
William Shockley (1910-1989), an American coinventor of the
transistor who was a cowinner of the 1956 Nobel Prize in
Physics
Marcian Edward Hoff, Jr. (1937- ), an American engineer
Jack St. Clair Kilby (1923- ), an American researcher and
assistant vice president of Texas Instruments
The Shockley Eight
The microelectronics industry began shortly after World War II
with the invention of the transistor. While radar was being developed
during the war, it was discovered that certain crystalline substances,
such as germanium and silicon, possess unique electrical
properties that make them excellent signal detectors. This class of
materials became known as “semiconductors,” because their ability to
conduct electricity lies between that of conductors and insulators.
Immediately after the war, scientists at Bell Telephone Laboratories
began to conduct research on semiconductors in the hope that
they might yield some benefits for communications. The Bell physicists
learned to control the electrical properties of semiconductor
crystals by “doping” (treating) them with minute impurities. When
two thin wires for current were attached to this material, a crude device
was obtained that could amplify the voice. The transistor, as
this device was called, was developed late in 1947. The transistor
duplicated many functions of vacuum tubes; it was also smaller, required
less power, and generated less heat. The three Bell Laboratories
scientists who guided its development—William Shockley,
Walter H. Brattain, and John Bardeen—won the 1956 Nobel Prize in
Physics for their work.
Shockley left Bell Laboratories and went to Palo Alto, California,
where he formed his own company, Shockley Semiconductor Laboratories,
which was a subsidiary of Beckman Instruments. Palo Alto
is the home of Stanford University, which, in 1954, set aside 655
acres of land for a high-technology industrial area known as Stanford
Research Park. One of the first small companies to lease a site
there was Hewlett-Packard. Many others followed, and the surrounding
area of Santa Clara County gave rise in the 1960’s and
1970’s to a booming community of electronics firms that became
known as “Silicon Valley.” On the strength of his prestige, Shockley
recruited eight young scientists from the eastern United States to
work for him. One was Robert Norton Noyce, an Iowa-bred physicist
with a doctorate from the Massachusetts Institute of Technology.
Noyce came to Shockley’s company in 1956.
The “Shockley Eight,” as they became known in the industry,
soon found themselves at odds with their boss over issues of research
and development. Seven of the dissenting scientists negotiated
with industrialist Sherman Fairchild, and they convinced the
remaining holdout, Noyce, to join them as their leader. The Shockley Eight defected in 1957 to form a new company, Fairchild Semiconductor,
in nearby Mountain View, California. Shockley’s company,
which never recovered from the loss of these scientists, soon
went out of business.
Integrating Circuits
Research efforts at Fairchild Semiconductor and Texas Instruments,
in Dallas, Texas, focused on putting several transistors on
one piece, or “chip,” of silicon. The first step involved making miniaturized
electrical circuits. Jack St. Clair Kilby, a researcher at Texas
Instruments, succeeded in making a circuit on a chip that consisted
of tiny resistors, transistors, and capacitors, all of which were connected
with gold wires. He and his company filed for a patent on
this “integrated circuit” in February, 1959. Noyce and his associates
at Fairchild Semiconductor followed in July of that year with an integrated
circuit manufactured by means of a “planar process,”
which involved laying down several layers of semiconductor that
were isolated by layers of insulating material. Although Kilby and
Noyce are generally recognized as coinventors of the integrated circuit,
Kilby alone received a membership in the National Inventors
Hall of Fame for his efforts.
Consequences
By 1968, Fairchild Semiconductor had grown to a point at which
many of its key Silicon Valley managers had major philosophical
differences with the East Coast management of their parent company.
This led to a major exodus of top-level management and engineers.
Many started their own companies. Noyce, Gordon E. Moore,
and Andrew Grove left Fairchild to form a new company in Santa
Clara called Intel with $2 million that had been provided by venture
capitalist Arthur Rock. Intel’s main business was the manufacture
of computer memory integrated circuit chips. By 1970, Intel was
able to develop and bring to market a random-access memory
(RAM) chip that was subsequently purchased in large quantities by
several major computer manufacturers, providing large profits for
Intel.
In 1969, Marcian Edward Hoff, Jr., an Intel research and development
engineer, met with engineers from Busicom, a Japanese firm.
These engineers wanted Intel to design a set of integrated circuits for
Busicom’s desktop calculators, but Hoff told them their specifications
were too complex. Nevertheless, Hoff began to think about the possibility of incorporating all the logic circuits of a computer central processing
unit (CPU) into one chip. He began to design a chip called a
“microprocessor,” which, when combined with a chip that would
hold a program and one that would hold data, would become a small,
general-purpose computer. Noyce encouraged Hoff and his associates
to continue his work on the microprocessor, and Busicom contracted
with Intel to produce the chip. Federico Faggin, who was hired from
Fairchild, did the chip layout and circuit drawings.
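Hoff's idea, putting a CPU's logic circuits on one chip that works alongside separate program and data chips, can be sketched in modern code. The toy interpreter below only illustrates that division of labor; its four-instruction set and layout are invented for the example and do not reflect the actual 4004 design.

PROGRAM = [            # stands in for the "program" chip
    ("LOAD", 0),       # copy data[0] into the accumulator
    ("ADD", 1),        # add data[1] to the accumulator
    ("STORE", 2),      # write the accumulator back to data[2]
    ("HALT", None),
]
DATA = [7, 35, 0]      # stands in for the "data" chip

def run(program, data):
    """Fetch each instruction, decode its opcode, and execute it,
    the basic loop that a microprocessor carries out in hardware."""
    accumulator = 0
    pc = 0                              # program counter
    while True:
        opcode, operand = program[pc]   # fetch
        pc += 1
        if opcode == "LOAD":            # decode and execute
            accumulator = data[operand]
        elif opcode == "ADD":
            accumulator += data[operand]
        elif opcode == "STORE":
            data[operand] = accumulator
        elif opcode == "HALT":
            return data

print(run(PROGRAM, DATA))   # prints [7, 35, 42]; the sum lands in data[2]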
In January, 1971, the Intel team finished its first working microprocessor,
the 4004. The following year, Intel made a higher-capacity
microprocessor, the 8008, for Computer Terminals Corporation.
That company contracted with Texas Instruments to produce a chip
with the same specifications as the 8008, which was produced in
June, 1972. Other manufacturers soon produced their own microprocessors.
The Intel microprocessor became the most widely used computer
chip in the budding personal computer industry and deserves much of
the credit for the PC “revolution” that soon followed.
Microprocessors have become so common that people use them every
day without realizing it. In addition to being used in computers, the microprocessor has found its way into automobiles, microwave
ovens, wristwatches, telephones, and many other ordinary items.

Thursday, May 21, 2009

Compressed-air-accumulating power plant





The invention: Plants that can be used to store energy in the form
of compressed air when electric power demand is low and use it
to produce energy when power demand is high.
The organization behind the invention:
Nordwestdeutsche Kraftwerke, a German company









Power, Energy Storage, and Compressed Air



Energy, which can be defined as the capacity to do work, is essential

to all aspects of modern life. One familiar kind of energy, which

is produced in huge amounts by power companies, is electrical energy,

or electricity. Most electricity is produced in a process that consists

of two steps. First, a fossil fuel such as coal is burned and the resulting

heat is used to make steam. Then, the steam is used to

operate a turbine system that produces electricity. Electricity has

myriad applications, including the operation of heaters, home appliances

of many kinds, industrial machinery, computers, and artificial

illumination systems.

An essential feature of electricity manufacture is the production

of the particular amount of electricity that is needed at a given time.

If moment-to-moment energy requirements are not met, the city or

locality involved will experience a “blackout,” the most obvious

feature of which is the loss of electrical lighting. To prevent blackouts,

it is essential to store extra electricity at times when power production

exceeds power demands. Then, when power demands exceed

the capacity to make energy by normal means, stored energy

can be used to make up the difference.

One successful modern procedure for such storage is the compressed-

air-accumulation process, pioneered by the Nordwestdeutsche

Kraftwerke company’s compressed-air-accumulating power

plant, which opened in December, 1978. The plant, which is

located in Huntorf, Germany (at the time, West Germany), makes

compressed air during periods of low electricity demand, stores the

air in an underground cavern, and uses it to produce extra electricity

during periods of high demand.



Plant Operation and Components



The German 300-megawatt compressed-air-accumulating power

plant in Huntorf produces extra electricity from stored compressed

air, enough to meet up to four hours per day of local peak electricity

needs. The energy-storage process, which is vital to meeting very

high peak electric power demands, is viable for electric power

plants whose total usual electric outputs range from 25 megawatts

to the 300 megawatts produced at Huntorf. It has been suggested,

however, that the process is most suitable for 25- to 50-megawatt

plants.

The energy-storage procedure used at Huntorf is quite simple.

All the surplus electricity that is made in nonpeak-demand periods

is utilized to drive an air compressor. The compressor pumps air

from the surrounding atmosphere into an airtight underground

storage cavern. When extra electricity is required, the stored compressed

air is released and passed through a heating unit to be

warmed, after which it is used to run gas-turbine systems that produce

electricity. This sequence of events is the same as that used in

any gas-turbine generating system; the only difference is that the

compressed air can be stored for any desired period of time rather

than having to be used immediately.
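The charge-and-discharge cycle just described can be sketched as a simple hourly dispatch calculation. The plant ratings below echo the Huntorf figures quoted in this entry (roughly 300 megawatts for up to four hours), but the demand curve, the 500-megawatt baseload figure, and the 75 percent round-trip efficiency are illustrative assumptions rather than data from the text.

TURBINE_MW = 300           # generating capacity when discharging
STORE_MWH = 300 * 4        # enough stored air for about 4 hours of peak output
ROUND_TRIP_EFF = 0.75      # assumed MWh recovered per MWh spent on compression

def dispatch(demand_mw, baseload_mw):
    """For each hour, charge the air store with surplus baseload power
    and discharge it when demand exceeds what the plant can generate."""
    stored = 0.0
    plan = []
    for hour, demand in enumerate(demand_mw):
        if demand < baseload_mw:                       # off-peak: compress air
            charge = min(baseload_mw - demand,
                         (STORE_MWH - stored) / ROUND_TRIP_EFF)
            stored += charge * ROUND_TRIP_EFF
            plan.append((hour, "charge", charge))
        else:                                          # peak: release air
            discharge = min(demand - baseload_mw, TURBINE_MW, stored)
            stored -= discharge
            plan.append((hour, "discharge", discharge))
    return plan

# A toy 24-hour demand curve (MW) served by a 500 MW baseload plant.
demand = [350] * 7 + [650] * 4 + [500] * 6 + [700] * 3 + [400] * 4
for hour, mode, mw in dispatch(demand, baseload_mw=500):
    print(f"{hour:02d}:00  {mode:9s} {mw:6.1f} MW")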

One requirement of any compressed-air-accumulating power

plant is an underground storage chamber. The Huntorf plant utilizes

a cavern that was hollowed out some 450 meters below the surface

of the earth. The cavern was created by drilling a hole into an

underground salt deposit and pumping in water. The water dissolved

the salt, and the resultant saltwater solution (brine) was

pumped out of the deposit. The process of pumping in water and removing

brine was continued until the cavern reached the desired

size. This type of storage cavern is virtually leak-free. The preparation

of such underground salt-dome caverns has been performed

roughly since the middle of the twentieth century. Until the Huntorf

endeavor, such caves were used to stockpile petroleum and natural

gas for later use. It is also possible to use mined, hard-rock caverns

for compressed-air accumulation when it is necessary to compress

air to pressures higher than those that can be maintained effectively

in a salt-dome cavern.

The essential machinery that must be added to conventional

power plants to turn them into compressed-air-accumulating power

plants consists of motor-driven air compressors and gas-turbine generating

systems. This equipment must be connected appropriately so that

in the storage mode, the overall system will compress air for storage

in the underground cavern, and in the power-production mode, the

system will produce electricity from the stored compressed air.

Large compressed-air-accumulating power plants require specially

constructed machinery. For example, the compressors that

are used at Huntorf were developed specifically for that plant by

Sulzer, a Swiss company. When the capacity of such plants is no

higher than 50 megawatts, however, standard, readily available

components can be used. This means that relatively small compressed-

air-accumulating power plants can be constructed for a reasonable

cost.





Consequences



The development of compressed-air-accumulating power plants

has had a significant impact on the electric power industry, adding to

its capacity to store energy. The main storage methods available prior

to the development of compressed-air-accumulation methodology

were batteries and water that was pumped uphill (hydro-storage). Battery

technology is expensive, and its capacity is insufficient for major,

long-term power storage. Hydro-storage is a more viable technology.

Compressed-air energy-storage systems have several advantages

over hydro-storage. First, they can be used in areas where flat terrain

makes it impossible to use hydro-storage. Second, compressed-air

storage is more efficient than hydro-storage. Finally, the fact that

standard plant components can be used, along with several other

factors, means that 25- to 50-megawatt compressed-air storage plants

can be constructed much more quickly and cheaply than comparable

hydro-storage plants.

The attractiveness of compressed-air-accumulating power plants

has motivated efforts to develop hard-rock cavern construction

techniques that cut costs and make it possible to use high-pressure

air storage. In addition, aquifers (underground strata of porous rock

that normally hold groundwater) have been used successfully for

compressed-air storage. It is expected that compressed-air-accumulating

power plants will be widely used in the future, which will

help to decrease pollution and cut the use of fossil fuels.





See also: Alkaline storage battery; Breeder reactor; Fuel cell; Geothermal
power; Heat pump; Nuclear power plant; Tidal power plant.

Saturday, May 16, 2009

Compact disc






The invention: A plastic disc on which digitized music or computer
data is stored.
The people behind the invention:
Akio Morita (1921- ), a Japanese physicist and engineer
who was a cofounder of Sony
Wisse Dekker (1924- ), a Dutch businessman who led the
Philips company
W. R. Bennett (1904-1983), an American engineer who was a
pioneer in digital communications and who played an
important part in the Bell Laboratories research program
Digital Recording
The digital system of sound recording, like the analog methods
that preceded it, was developed by the telephone companies to improve
the quality and speed of telephone transmissions. The system
of electrical recording introduced by Bell Laboratories in the 1920’s
was part of this effort. Even Edison’s famous invention of the phonograph
in 1877 was originally conceived as an accompaniment to
the telephone. Although developed within the framework of telephone
communications, these innovations found wide applications
in the entertainment industry.
The basis of the digital recording system was a technique of sampling
the electrical waveforms of sound called PCM, or pulse code
modulation. PCM measures the characteristics of these waves and
converts them into numbers. This technique was developed at Bell
Laboratories in the 1930’s to transmit speech. At the end of World
War II, engineers of the Bell System began to adapt PCM technology
for ordinary telephone communications.
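The sampling idea is easy to show in code. The short sketch below measures a stand-in waveform thousands of times per second and turns each measurement into a signed integer, which is all PCM does at heart; the 44,100-samples-per-second rate and 16-bit depth are the values later adopted for the compact disc, while the 440 Hz test tone is simply an illustration.

import math

SAMPLE_RATE = 44_100                    # samples taken per second
BIT_DEPTH = 16                          # bits used to store each sample
MAX_CODE = 2 ** (BIT_DEPTH - 1) - 1     # largest signed 16-bit value, 32767

def analog_signal(t):
    """Stand-in for the voltage arriving from a microphone: a 440 Hz tone."""
    return math.sin(2 * math.pi * 440 * t)

def pcm_encode(duration_s):
    """Sample the waveform at fixed intervals and convert each sample
    into a number ready to be written out as binary code."""
    n_samples = int(duration_s * SAMPLE_RATE)
    return [round(analog_signal(n / SAMPLE_RATE) * MAX_CODE)
            for n in range(n_samples)]

codes = pcm_encode(0.001)   # one millisecond of sound becomes 44 integers
print(codes[:5])            # the first few codes rise from 0 with the sine wave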
The problem of turning sound waves into numbers was that of
finding a method that could quickly and reliably manipulate millions
of them. The answer to this problem was found in electronic computers,
which used binary code to handle millions of computations in a
few seconds. The rapid advance of computer technology and the semiconductor circuits that gave computers the power to handle
complex calculations provided the means to bring digital sound technology
into commercial use. In the 1960’s, digital transmission and
switching systems were introduced to the telephone network.
Pulse code modulation of audio signals into digital code achieved
standards of reproduction that exceeded even the best analog system,
creating an enormous dynamic range of sounds with no distortion
or background noise. The importance of digital recording went
beyond the transmission of sound because it could be applied to all
types of magnetic recording in which the source signal is transformed
into an electric current. There were numerous commercial
applications for such a system, and several companies began to explore
the possibilities of digital recording in the 1970’s.
Researchers at the Sony, Matsushita, and Mitsubishi electronics
companies in Japan produced experimental digital recording systems.
Each developed its own PCM processor, an integrated circuit
that changes audio signals into digital code. It does not continuously
transform sound but instead samples it by analyzing thousands
of minute slices of it per second. Sony’s PCM-F1 was the first
analog-to-digital conversion chip to be produced. This gave Sony a
lead in the research into and development of digital recording.
All three companies had strong interests in both audio and video
electronics equipment and saw digital recording as a key technology
because it could deal with both types of information simultaneously.
They devised recorders for use in their manufacturing operations.
After using PCM techniques to turn sound into digital code, they recorded
this information onto tape, using not magnetic audio tape but
the more advanced video tape, which could handle much more information.
The experiments with digital recording occurred simultaneously
with the accelerated development of video recording technology
and owed much to the enhanced capabilities of video recorders.
At this time, videocassette recorders were being developed in
several corporate laboratories in Japan and Europe. The Sony Corporation
was one of the companies developing video recorders at this
time. Its U-matic machines were successfully used to record digitally.
In 1972, the Nippon Columbia Company began to make its master recordings
digitally on an Ampex video recording machine.
Links Among New Technologies
There were powerful links between the new sound recording
systems and the emerging technologies of storing and retrieving
video images. The television had proved to be the most widely used
and profitable electronic product of the 1950’s, but with the market
for color television saturated by the end of the 1960’s, manufacturers
had to look for a replacement product. A machine to save and replay
television images was seen as the ideal companion to the family
TV set. The great consumer electronics companies—General
Electric and RCA in the United States, Philips and Telefunken in Europe,
and Sony and Matsushita in Japan—began experimental programs
to find a way to save video images.
RCA’s experimental teams took the lead in developing a
videodisc system, called Selectavision, that used an electronic stylus
to read changes in capacitance on the disc. The greatest challenge to
them came from the Philips company of Holland. Its optical videodisc
used a laser beam to read information on a revolving disc, in
which a layer of plastic contained coded information. With the aid
of the engineering department of the Deutsche Grammophon record
company, Philips had an experimental laser disc in hand by
1964.
The Philips Laservision videodisc was not a commercial success,
but it carried forward an important idea. The research and engineering
work carried out in the laboratories at Eindhoven in Holland
proved that the laser reader could do the job. More important,
Philips engineers had found that this fragile device could be mass
produced as a cheap and reliable component of a commercial product.
The laser optical decoder was applied to reading the binary
codes of digital sound. By the end of the 1970’s, Philips engineers
had produced a working system.
Ten years of experimental work on the Laservision system proved
to be a valuable investment for the Philips corporation. Around
1979, it started to work on a digital audio disc (DAD) playback system.
This involved more than the basic idea of converting the output
of the PCM conversion chip onto a disc. The lines of pits on the
compact disc carry a great amount of information: the left- and
right-hand tracks of the stereo system are identified, and a sequence of pits also controls the motor speed and corrects any error in the laser
reading of the binary codes.
This research was carried out jointly with the Sony Corporation
of Japan, which had produced a superior method of encoding digital
sound with its PCM chips. The binary codes that carried the information
were manipulated by Sony’s sixteen-bit microprocessor.
Its PCM chip for analog-to-digital conversion was also employed.
Together, Philips and Sony produced a commercial digital playback
record that they named the compact disc. The name is significant, as
it does more than indicate the size of the disc—it indicates family
ties with the highly successful compact cassette. Philips and Sony
had already worked to establish this standard in the magnetic tape
format and aimed to make their compact disc the standard for digital
sound reproduction.
Philips and Sony began to demonstrate their compact digital disc
(CD) system to representatives of the audio industry in 1981. They
were not alone in digital recording. The Japanese Victor Company, a
subsidiary of Matsushita, had developed a version of digital recording
from its VHD video disc design. It was called audio high density
disc (AHD). Instead of the small CD disc, the AHD system used a
ten-inch vinyl disc. Each digital recording system used a different
PCM chip with a different rate of sampling the audio signal.
The recording and electronics industries’ decision to standardize on
the Philips/Sony CD system was therefore a major victory for these
companies and an important event in the digital era of sound recording.
Sony had found out the hard way that the technical performance of an
innovation is irrelevant when compared with the politics of turning it into
an industrywide standard. Although the pioneer in videocassette
recorders, Sony had been beaten by its rival, Matsushita, in establishing
the video recording standard. This mistake was not repeated
in the digital standards negotiations, and many companies were
persuaded to license the new technology. In 1982, the technology
was announced to the public. The following year, the compact disc
was on the market.
The Apex of Sound Technology
The compact disc represented the apex of recorded sound technology.
Simply put, here at last was a system of recording in which
there was no extraneous noise—no surface noise of scratches and
pops, no tape hiss, no background hum—and no damage was done
to the recording as it was played. In principle, a digital recording
will last forever, and each play will sound as pure as the first. The
compact disc could also play much longer than the vinyl record or
long-playing cassette tape.
Despite these obvious technical advantages, the commercial success
of digital recording was not ensured. There had been several
other advanced systems that had not fared well in the marketplace,
and the conspicuous failure of quadrophonic sound in the 1970’s
had not been forgotten within the industry of recorded sound. Historically,
there were two key factors in the rapid acceptance of a new
system of sound recording and reproduction: a library of prerecorded
music to tempt the listener into adopting the system and a
continual decrease in the price of the playing units to bring them
within the budgets of more buyers.
By 1984, there were about a thousand titles available on compact
disc in the United States; that number had doubled by 1985. Although
many of these selections were classical music—it was naturally
assumed that audiophiles would be the first to buy digital
equipment—popular music was well represented. The first CD available
for purchase was an album by popular entertainer Billy Joel.
The first CD-playing units cost more than $1,000, but Akio Morita
of Sony was determined that the company should reduce the
price of players even if it meant selling them below cost. Sony’s audio engineering department improved the performance of the
players while reducing size and cost. By 1984, Sony had a small CD
unit on the market for $300. Several of Sony’s competitors, including
Matsushita, had followed its lead into digital reproduction.
There were several compact disc players available in 1985 that cost
less than $500. Sony quickly applied digital technology to the popular
personal stereo and to automobile sound systems. Sales of CD
units increased roughly tenfold from 1983 to 1985.
Impact on Vinyl Recording
When the compact disc was announced in 1982, the vinyl record
was the leading form of recorded sound, with 273 million units sold
annually compared to 125 million prerecorded cassette tapes. The
compact disc sold slowly, beginning with 800,000 units shipped in
1983 and rising to 53 million in 1986. By that time, the cassette tape
had taken the lead, with slightly fewer than 350 million units. The
vinyl record was in decline, with only about 110 million units
shipped. Compact discs first outsold vinyl records in 1988. In the ten
years from 1979 to 1988, the sales of vinyl records dropped nearly 80
percent. In 1989, CDs accounted for more than 286 million sales, but
cassettes still led the field with total sales of 446 million. The compact
disc finally passed the cassette in total sales in 1992, when more
than 300 million CDs were shipped, an increase of 22 percent over
the figure for 1991.
The introduction of digital recording had an invigorating effect
on the industry of recorded sound, which had been unable to fully
recover from the slump of the late 1970’s. Sales of recorded music
had stagnated in the early 1980’s, and an industry accustomed to
steady increases in output became eager to find a new product or
style of music to boost its sales. The compact disc was the product to
revitalize the market for both recordings and players. During the
1980’s, worldwide sales of recorded music jumped from $12 billion
to $22 billion, with about half of the sales volume accounted for by
digital recordings by the end of the decade.
The success of digital recording served in the long run to undermine
the commercial viability of the compact disc. This was a play-only
technology, like the vinyl record before it. Once users had become accustomed to the pristine digital sound, they clamored for
digital recording capability. The alliance of Sony and Philips broke
down in the search for a digital tape technology for home use. Sony
produced a digital tape system called DAT, while Philips responded
with a digital version of its compact audio tape called DCC. Sony
answered the challenge of DCC with its Mini Disc (MD) product,
which can record and replay digitally.
The versatility of digital recording has opened up a wide range of
consumer products. Compact disc technology has been incorporated
into the computer, in which CD-ROM readers convert the digital
code of the disc into sound and images. Many home computers have
the capability to record and replay sound digitally. Digital recording
is the basis for interactive audio/video computer programs in which
the user can interface with recorded sound and images. Philips has
established a strong foothold in interactive digital technology with its
CD-I (compact disc interactive) system, which was introduced in
1990. This acts as a multimedia entertainer, providing sound, moving
images, games, and interactive sound and image publications such as
encyclopedias. The future of digital recording will be broad-based
systems that can record and replay a wide variety of sounds and images
and that can be manipulated by users of home computers.

Wednesday, May 13, 2009

Community antenna television








The invention: A system for connecting households in isolated areas
to common antennas to improve television reception, community
antenna television was a forerunner of modern cable television
systems.
The people behind the invention:
Robert J. Tarlton, the founder of CATV in eastern Pennsylvania
Ed Parsons, the founder of CATV in Oregon
Ted Turner (1938- ), founder of the first cable superstation, WTBS










Growing Demand for Television



Television broadcasting in the United States began in the late

1930’s. After delays resulting from World War II, it exploded into

the American public’s consciousness. The new medium relied primarily

on existing broadcasting stations that quickly converted

from radio to television formats. Consequently, the reception of television

signals was centralized in large cities. The demand for television

quickly swept across the country. Ownership of television receivers

increased dramatically, and those who could not afford their

own flocked to businesses, usually taverns, or to the homes of

friends with sets. People in urban areas had more opportunities to

view the new medium and had the advantage of more broadcasts

within the range of reception. Those in outlying regions were not so

fortunate, as they struggled to see fuzzy pictures and were, in some

cases, unable to receive a signal at all.

The situation for outlying areas worsened in 1948, when the Federal

Communications Commission (FCC) implemented a ban on all

new television stations while it considered how to expand the television

market and how to deal with a controversy over color reception.

This left areas without nearby stations in limbo, while people

in areas with established stations reaped the benefits of new programming.

The ban would remain in effect until 1952, when new

stations came under construction across the country.

Poor reception in some areas and the FCC ban on new station

construction together set the stage for the development of Community

Antenna Television (CATV). CATV did not have a glamorous

beginning. Late in 1949, two different men, frustrated by the slow

movement of television to outlying areas, set up what would become

the foundation of the multimillion-dollar cable industry.

Robert J. Tarlton was a radio salesman in Lansford, Pennsylvania,

about sixty-five miles from Philadelphia. He wanted to move

into television sales but lived in an area with poor reception. Together

with friends, he founded Panther Valley Television and set

up a master antenna in a mountain range that blocked the reception

of Philadelphia-based broadcasting. For an installation fee of $125

and a fee of $3 per month, Panther Valley Television offered residents

clear reception of the three Philadelphia stations via a coaxial

cable wired to their homes. At the same time, Ed Parsons, of KAST

radio in Astoria, Oregon, linked homes via coaxial cables to a master

antenna set up to receive remote broadcasts. Both systems offered

three channels, the major network affiliates, to subscribers. By

1952, when the FCC ban was lifted, some seventy CATV systems

provided small and rural communities with the wonders of television.

That same year, the National Cable Television Association was

formed to represent the interests of the young industry.

Early systems could carry only one to three channels. In 1953,

CATV began to use microwave relays, which could import distant

signals to add more variety and pushed system capability to twelve

channels. A system of towers began sprouting up across the country.

These towers could relay a television signal from a powerful

originating station to each cable system’s main antenna. This further

opened the reception available to subscribers.





Pay Television



The notion of pay television also began at this time. In 1951, the

FCC authorized a test of Zenith Radio Corporation’s Phonevision in

Chicago. Scrambled images could be sent as electronic impulses

over telephone lines, then unscrambled by devices placed in subscribers’

homes. Subscribers could order a film over the telephone

for a minimal cost, usually $1. Advertisers for the system promoted

the idea of films for the “sick, aged, and sitterless.” This early test

was a forerunner of the premium, or pay, channels of later decades.

Network opposition to CATV came in the late 1950’s. RCA chairman

David Sarnoff warned against a pay television system that

could soon fall under government regulation, as in the case of utilities.

In April, 1959, the FCC found no basis for asserting jurisdiction

or authority over CATV. This left the industry open to tremendous

growth.

By 1960, the industry included 640 systems with 700,000 subscribers.

Ten years later, 2,490 systems were in operation, serving

more than 4.5 million households. This accelerated growth came at

a price. In April, 1965, the FCC reversed itself and asserted authority

over microwave-fed CATV. A year later, the entire cable system

came under FCC control. The FCC quickly restricted the use of distant

signals in the largest hundred markets.

The FCC movement to control cable systems stemmed from the

agency’s desire to balance the television market. From the onset of

television broadcasting, the FCC strived to maintain a balanced programming

schedule. The goal was to create local markets in which

local affiliate stations prospered from advertising and other community

support and would not be unduly harmed by competition

from larger metropolitan stations. In addition, growth of the industry

ideally was to be uniform, with large and small cities receiving

equal consideration. Cable systems, particularly those that could receive

distant signals via microwave relay, upset the balance. For example,

a small Ohio town could receive New York channels as well

as Chicago channels via cable, as opposed to receiving only the

channels from one city.

The balance was further upset with the creation of the Communications

Satellite Corporation (COMSAT) in 1963. Satellite technology allowed a

signal to be sent to the satellite, retransmitted back to Earth, and

then picked up by a receiving station. This further increased the

range of cable offerings and moved the transmission of television

signals to a national scale, as microwave-relayed transmissions

worked best in a regional scope. These two factors led the FCC to

freeze the cable industry from new development and construction

in December, 1968. After 1972, when the cable freeze was lifted, the

greatest impact of CATV would be felt.



Impact



The founding of cable television had a two-tier effect on the

American public. The immediate impact of CATV was the opening

of television to areas cut off from network broadcasting as a result of

distance or topographical obstructions. Cable brought television to

those who would have otherwise missed the early years of the medium.

As technology furthered the capabilities of the industry, a second

impact emerged. Along with the 1972 lifting of the ban on cable expansion,

the FCC established strict guidelines for the advancement

of the industry. Issuing a 500-page blueprint for the expansion of cable,

the FCC included limits on the use of imported distant signals,

required the blacking out of some specific programs (films and serials,

for example), and limited pay cable to films that were more than

two years old and to sports.

Another component of the guidelines required all systems that

went into operation after March, 1972 (and all systems by March,

1977), to provide public access channels for education and local

government. In addition, channels were to be made available for

lease. These access channels opened information to subscribers that

would not normally be available. Local governments and school

boards began to broadcast meetings, and even high school athletics

soon appeared via public access channels. These channels also

provided space to local educational institutions for home-based

courses in a variety of disciplines.





Cable Communications Policy Act



Further FCC involvement came in the 1984 Cable Communications

Policy Act, which deregulated the industry and opened the

door for more expansion. This act removed local control over cable

service rates and virtually made monopolies out of local providers

by limiting competition. The late 1980’s brought a new technology,

fiber optics, which promised to further advance the industry by increasing

the quality of cable services and channel availability.

One area of the cable industry, pay television, took off in the

1970’s and early 1980’s. The first major pay channel was developed

by the media giant Time-Life. It inaugurated Home Box Office

(HBO) in 1975 as the first national satellite interconnected network.

Early HBO programming primarily featured films but included no

films less than two years old (meeting the 1972 FCC guidelines), no

serials, and no advertisements. Other premium movie channels followed,

including Showtime, Cinemax, and The Movie Channel. By

the late 1970’s, cable systems offered multiple premium channels to

their subscribers.

Superstations were another component of the cable industry that

boomed in the 1970’s and 1980’s. The first, WTBS, was owned and

operated by Ted Turner and broadcast from Atlanta, Georgia. It emphasized

films and reruns of old television series. Cable systems

that broadcast WTBS were asked to allocate the signal to channel 17,

thus creating uniformity across the country for the superstation.

Chicago’s WGN and New York City’s WOR soon followed, gaining

access to homes across the nation via cable. Both these superstations

emphasized sporting events in the early years and expanded to include

films and other entertainment in the 1980’s.

Both pay channels and superstations transmitted via satellites

(WTBS leased space from RCA, for example) and were picked up by

cable systems across the country. Other stations with broadcasts intended

solely for the cable industry opened in the 1980’s. Ted Turner

started the Cable News Network in 1980 and followed with the all-news

network Headline News. He added another channel with the

Turner Network Television (TNT) in 1988. Other 1980’s additions

included The Disney Channel, ESPN, The Entertainment Channel,

The Discovery Channel, and Lifetime. The Cable-Satellite Public Affairs

Network (C-SPAN) enhanced the cable industry’s presence in

Washington, D.C., by broadcasting sessions of the House of Representatives.

Specialized networks for particular audiences also developed.

Music Television (MTV), featuring songs played along with video

sequences, premiered in 1981. Nickelodeon, a children’s channel,

and VH-1, a music channel aimed at baby boomers rather than

MTV’s teenage audience, reflected the movement toward specialization.

Other specialized channels, such as the Sci-Fi Channel and the

Comedy Channel, went even further in targeting specific audiences.





Cable and the Public



The impact on the American public was tremendous. Information

and entertainment became available around the clock. Cable

provided a new level of service, information, and entertainment unavailable

to nonsubscribers. One phenomenon that exploded in the

late 1980’s was home shopping. Via The Home Shopping Club and

QVC, two shopping channels offered through cable television, the

American public could order a full range of products. Everything

from jewelry to tools and home cleaning supplies to clothing and

electronics was available to anyone with a credit card. Americans

could now go shopping from home.

The cable industry was not without its competitors and critics. In

the 1980’s, the videocassette recorder (VCR) opened the viewing

market. Prerecorded cassettes of recent film releases as well as classics

were made available for purchase or for a small rental fee. National

chains of video rental outlets, such as Blockbuster Video and

Video Towne, offered thousands of titles for rent. Libraries also began

to stock films. This created competition for the cable industry, in

particular the premium movie channels. To combat this competition,

channels began to offer original productions unavailable on

videocassette. The combined effect of the cable industry and the

videocassette market was devastating to the motion picture industry.

The wide variety of programming available at home encouraged

the American public, especially baby boomers with children,

to stay home and watch cable or rented films instead of going to theaters.

Critics of the cable industry seized on the violence, sexual content,

and graphic language found in some of cable’s offerings. One

parent responded by developing a lockout device that could make

certain channels unavailable to children. Some premium channels

developed an after-hours programming schedule that aired adult-theme

programming only late at night. Another criticism stemmed

from the repetition common on pay channels. As a result of the limited

supply of and large demand for films, pay channels were forced

to repeat programs several times within a month and to rebroadcast

films that were several years old. This led consumers to question the

value of the additional monthly fee paid for such channels. To combat

the problem, premium channels increased efforts aimed at original

production and added more films that had not been box-office hits.

By the early 1990’s, as some eleven thousand cable systems were

serving 56.2 million subscribers, a new cry for regulation began. Debates

over services and increasingly high rates led the FCC and

Congress to investigate the industry, opening the door for new

guidelines on the cable industry. The non-cable networks—American

Broadcasting Company (ABC), Columbia Broadcasting System

(CBS), National Broadcasting Company (NBC), and newcomer

Fox—stressed their concerns about the cable industry. These networks

provided free programming, and cable systems profited

from inclusion of network programming. Television industry representatives

expressed the opinion that cable providers should pay

for the privilege of retransmitting network broadcasts.

The impact on cable’s subscribers, especially concerning monthly

cable rates, came under heavy debate in public and government forums.

The administration in Washington, D.C., expressed concern

that cable rates had risen too quickly and for no obvious reason other

than profit-seeking by what were essentially monopolistic local cable

systems. What was clear was that the cable industry had transformed

the television experience and was going to remain a powerful force

within the medium. Regulators and television industry leaders were

left to determine how to maintain an equitable coexistence within the

medium.





See also: Color television; Communications satellite; Fiber-optics;
Telephone switching; Television.

Friday, May 8, 2009

Communications satellite







The invention: Telstar I, the world’s first commercial communications
satellite, opened the age of live, worldwide television by
connecting the United States and Europe.
The people behind the invention:
Arthur C. Clarke (1917- ), a British science-fiction writer
who in 1945 first proposed the idea of using satellites as
communications relays
John R. Pierce (1910- ), an American engineer who worked
on the Echo and Telstar satellite communications projects
Science Fiction?
In 1945, Arthur C. Clarke suggested that a satellite orbiting high
above the earth could relay television signals between different stations
on the ground, making for a much wider range of transmission
than that of the usual ground-based systems. Writing in the
February, 1945, issue of Wireless World, Clarke said that satellites
“could give television and microwave coverage to the entire
planet.”
In 1956, John R. Pierce at the Bell Telephone Laboratories of the
American Telephone & Telegraph Company (AT&T) began to urge
the development of communications satellites. He saw these satellites
as a replacement for the ocean-bottom cables then being used to
carry transatlantic telephone calls. In 1950, about one-and-a-half
million transatlantic calls were made, and that number was expected
to grow to three million by 1960, straining the capacity of the
existing cables; in 1970, twenty-one million calls were made.
Communications satellites offered a good, cost-effective alternative
to building more transatlantic telephone cables. On January 19,
1961, the Federal Communications Commission (FCC) gave permission
for AT&T to begin Project Telstar, the first commercial communications
satellite bridging the Atlantic Ocean. AT&T reached an
agreement with the National Aeronautics and Space Administration
(NASA) in July, 1961, in which AT&T would pay $3 million for each Telstar launch. The Telstar project involved about four hundred
scientists, engineers, and technicians at the Bell Telephone
Laboratories, twenty more technical personnel at AT&T headquarters,
and the efforts of more than eight hundred other companies
that provided equipment or services.
Telstar 1 was shaped like a faceted sphere, was 88 centimeters in
diameter, and weighed 80 kilograms. Most of its exterior surface
(sixty of the seventy-four facets) was covered by 3,600 solar cells to
convert sunlight into 15 watts of electricity to power the satellite.
Each solar cell was covered with artificial sapphire to reduce the
damage caused by radiation. The main instrument was a two-way
radio able to handle six hundred telephone calls at a time or one
television channel.
The signal that the radio would send back to Earth was very
weak—less than one-thirtieth the energy used by a household light
bulb. Large ground antennas were needed to receive Telstar’s faint
signal. The main ground station was built by AT&T in Andover,
Maine, on a hilltop informally called “Space Hill.” A horn-shaped
antenna, weighing 380 tons, with a length of 54 meters and an open
end with an area of 1,097 square meters, was mounted so that it
could rotate to track Telstar across the sky. To protect it from wind
and weather, the antenna was built inside an inflated dome, 64 meters
in diameter and 49 meters tall. It was, at the time, the largest inflatable
structure ever built. A second, smaller horn antenna in
Holmdel, New Jersey, was also used.
International Cooperation
In February, 1961, the governments of the United States and England
agreed to let the British Post Office and NASA work together
to test experimental communications satellites. The British Post Office
built a 26-meter-diameter steerable dish antenna of its own design
at Goonhilly Downs, near Cornwall, England. Under a similar
agreement, the French National Center for Telecommunications
Studies constructed a ground station, almost identical to the Andover
station, at Pleumeur-Bodou, Brittany, France.
After testing, Telstar 1 was moved to Cape Canaveral, Florida,
and attached to the Thor-Delta launch vehicle built by the Douglas Aircraft Company. The Thor-Delta was launched at 3:35 a.m. eastern
standard time (EST) on July 10, 1962. Once in orbit, Telstar 1 took
157.8 minutes to circle the globe. The satellite came within range of
the Andover station on its sixth orbit, and a television test pattern
was transmitted to the satellite at 6:26 p.m. EST. At 6:30 p.m. EST, a
tape-recorded black-and-white image of the American flag with the
Andover station in the background, transmitted from Andover to
Holmdel, opened the first television show ever broadcast by satellite.
Live pictures of U.S. vice president Lyndon B. Johnson and
other officials gathered at the Carnegie Institution in Washington, D.C.,
followed on the AT&T program carried live on all three American
networks.
Up to the moment of launch, it was uncertain if the French station
would be completed in time to participate in the initial test. At 6:47
p.m. EST, however, Telstar’s signal was picked up by the station in
Pleumeur-Bodou, and Johnson’s image became the first television
transmission to cross the Atlantic. Pictures received at the French
station were reported to be so clear that they looked like they had
been sent from only forty kilometers away. Because of technical difficulties,
the English station was unable to receive a clear signal.
The first formal exchange of programming between the United
States and Europe occurred on July 23, 1962. This special eighteen-minute
program, produced by the European Broadcasting Union,
consisted of live scenes from major cities throughout Europe and
was transmitted from Goonhilly Downs, where the technical difficulties
had been corrected, to Andover via Telstar.
On the previous orbit, a program entitled “America, July 23,
1962,” showing scenes from fifty television cameras around the
United States, was beamed from Andover to Pleumeur-Bodou and
seen by an estimated one hundred million viewers throughout Europe.
Consequences
Telstar 1 and the communications satellites that followed it revolutionized
the television news and sports industries. Previously, television networks had to ship film across the oceans, meaning delays of hours or days between the time an event occurred and the broadcast of pictures of that event on another continent. Now, news of major significance, as well as sporting events, could be viewed live around the world. The impact on international relations was also significant: world opinion could now influence the actions of governments and individuals, since those actions could be seen around the world while events were still in progress.
More powerful launch vehicles allowed new satellites to be placed in geosynchronous orbits, in which a satellite circles the earth once for each rotation of the earth. When viewed from the ground, these satellites
appeared to remain stationary in the sky. This allowed continuous
communications and greatly simplified the ground antenna
system. By the late 1970’s, private individuals had built small antennas
in their backyards to receive television signals directly from the
satellites.
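As a rough illustration of why such an orbit works (a back-of-the-envelope sketch using standard physical constants, not figures from the Telstar program), the following Python fragment applies Kepler's third law to find the orbital radius whose period equals one sidereal day:

    # Illustrative sketch: estimate the altitude of a geosynchronous orbit
    # from Kepler's third law, T^2 = 4 * pi^2 * r^3 / (G * M).
    # The constants are standard physical values, not data from the text.
    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24     # mass of the earth, kg
    R_EARTH = 6.371e6      # mean radius of the earth, m
    SIDEREAL_DAY = 86_164  # earth's rotation period, s

    # Solve T^2 = 4*pi^2 * r^3 / (G*M) for the orbital radius r.
    radius = (G * M_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
    altitude_km = (radius - R_EARTH) / 1000

    print(f"Geosynchronous altitude: about {altitude_km:,.0f} km")
    # Prints roughly 35,800 km, the altitude at which the satellite keeps
    # pace with the turning earth and so appears fixed to a ground antenna.

The result, roughly 35,800 kilometers above the equator, is the altitude at which later communications satellites were stationed.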

Monday, May 4, 2009

Colossus computer




The invention: The first all-electronic calculating device, the Colossus
computer was built to decipher German military codes
during World War II.
The people behind the invention:
Thomas H. Flowers, an electronics expert
Max H. A. Newman (1897-1984), a mathematician
Alan Mathison Turing (1912-1954), a mathematician
C. E. Wynn-Williams, a member of the Telecommunications
Research Establishment
An Undercover Operation
In 1939, during World War II (1939-1945), a team of scientists,
mathematicians, and engineers met at Bletchley Park, outside London,
to discuss the development of machines that would break the
secret code used in Nazi military communications. The Germans
were using a machine called “Enigma” to communicate in code between
headquarters and field units. Polish scientists, however, had
been able to examine a German Enigma and between 1928 and 1938
were able to break the codes by using electromechanical codebreaking
machines called “bombas.” In 1938, the Germans made the
Enigma more complicated, and the Polish were no longer able to
break the codes. In 1939, the Polish machines and codebreaking
knowledge passed to the British.
Alan Mathison Turing was one of the mathematicians gathered
at Bletchley Park to work on codebreaking machines. Turing was
one of the first people to conceive of the universality of digital computers.
He first mentioned the “Turing machine” in 1936 in an article
published in the Proceedings of the London Mathematical Society.
The Turing machine, a hypothetical device that can solve any
problem that involves mathematical computation, is not restricted
to only one task—hence the universality feature.
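The idea can be sketched in a few lines of modern code (an illustrative toy, not anything from Turing's 1936 paper): a tape, a read/write head, and a table of transition rules are enough to carry out a computation, and changing only the rule table changes what is computed. The hypothetical rule table below simply inverts a string of binary digits:

    # Minimal Turing machine simulator: a tape, a head, and a rule table.
    def run_turing_machine(tape, rules, state="start", blank="_"):
        # Sparse tape: position -> symbol; the head starts at position 0.
        cells = dict(enumerate(tape))
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            # Each rule maps (state, symbol) -> (next state, symbol to write, move).
            state, write, move = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Toy rule table: invert each binary digit, halt at the blank after the input.
    flip_rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "R"),
    }

    print(run_turing_machine("1011", flip_rules))  # prints 0100

The same simulator, given a different rule table, would perform a different computation; that interchangeability is the universality Turing described.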
Turing suggested an improvement to the Bletchley codebreaking
machine, the “Bombe,” which had been modeled on the Polish bomba. This improvement increased the computing power of the
machine. The new codebreaking machine replaced the tedious
method of decoding by hand, which, in addition to being slow,
was ineffective in dealing with complicated encryptions that were
changed daily.
Building a Better Mousetrap
The Bombe was very useful. In 1942, when the Germans started
using a more sophisticated cipher machine known as the “Fish,”
Max H. A. Newman, who was in charge of one subunit at Bletchley
Park, believed that an automated device could be designed to break
the codes produced by the Fish. Thomas H. Flowers, who was in
charge of a switching group at the Post Office Research Station at
Dollis Hill, had been approached to build a special-purpose electromechanical
device for Bletchley Park in 1941. The device was not
useful, and Flowers was assigned to other problems.
Flowers began to work closely with Turing, Newman, and C. E.
Wynn-Williams of the Telecommunications Research Establishment
(TRE) to develop a machine that could break the Fish codes. The
Dollis Hill team worked on the tape driving and reading problems,
and Wynn-Williams’s team at TRE worked on electronic counters
and the necessary circuitry. Their efforts produced the “Heath Robinson,”
which could read two thousand characters per second. The
Heath Robinson used vacuum tubes, an uncommon component in
the early 1940’s. The vacuum tubes performed more reliably and
rapidly than the relays that had been used for counters. Heath Robinson
and the companion machines proved that high-speed electronic
devices could successfully do cryptanalytic work (solve decoding
problems).
Entirely automatic in operation once started, the Heath Robinson
was put together at Bletchley Park in the spring of 1943. The Heath
Robinson became obsolete for codebreaking shortly after it was put
into use, so work began on a bigger, faster, and more powerful machine:
the Colossus.
Flowers led the team that designed and built the Colossus in
eleven months at Dollis Hill. The first Colossus (Mark I) was a bigger,
faster version of the Heath Robinson and read about five thousand characters per second. Colossus had approximately fifteen
hundred vacuum tubes, which was the largest number that had
ever been used at that time. Although Turing and Wynn-Williams
were not directly involved with the design of the Colossus, their
previous work on the Heath Robinson was crucial to the project,
since the first Colossus was based on the Heath Robinson.
Colossus became operational at Bletchley Park in December,
1943, and Flowers made arrangements for the manufacture of its
components in case other machines were required. The request for
additional machines came in March, 1944. The second Colossus, the
Mark II, was extensively redesigned and was able to read twenty-five
thousand characters per second because it was capable of performing
parallel operations (carrying out several different operations
at once, instead of one at a time); it also had a short-term
memory. The Mark II went into operation on June 1, 1944. More
machines were made, each with further modifications, until there
were ten. The Colossus machines were special-purpose, program-controlled
electronic digital computers, the only known electronic
programmable computers in existence in 1944. The use of electronics
allowed for a tremendous increase in the internal speed of the
machine.
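The following sketch is only a loose modern analogy for that kind of parallelism (Colossus achieved it in hardware, not with software threads): several independent pieces of a character stream are examined at once rather than one after another, and the partial results are combined at the end.

    # Loose analogy for parallel operations: independent checks run at once
    # instead of one after another. This models the idea only.
    from concurrent.futures import ThreadPoolExecutor

    def count_matches(chunk, target="X"):
        # One independent unit of work: count occurrences of a symbol.
        return sum(1 for ch in chunk if ch == target)

    stream = "XAXBXX" * 1000
    chunks = [stream[i:i + 1500] for i in range(0, len(stream), 1500)]

    # Serial: one chunk at a time.
    serial_total = sum(count_matches(c) for c in chunks)

    # Parallel: all chunks submitted at once, results combined at the end.
    with ThreadPoolExecutor() as pool:
        parallel_total = sum(pool.map(count_matches, chunks))

    assert serial_total == parallel_total  # same answer, work done concurrently
    print(serial_total)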
Impact
The Colossus machines gave Britain the best codebreaking machines
of World War II and provided information that was crucial
for the Allied victory. The information decoded by Colossus, the actual
messages, and their influence on military decisions would remain
classified for decades after the war.
The later work of several of the people involved with the Bletchley
Park projects was important in British computer development
after the war. Newman’s and Turing’s postwar careers were closely
tied to emerging computer advances. Newman, who was interested
in the impact of computers on mathematics, received a grant from
the Royal Society in 1946 to establish a calculating machine laboratory
at Manchester University. He was also involved with postwar
computer growth in Britain.
Several other members of the Bletchley Park team, including Turing, joined Newman at Manchester in 1948. Before going to Manchester
University, however, Turing joined Britain’s National Physical
Laboratory (NPL). At NPL, Turing worked on an advanced
computer known as the Pilot Automatic Computing Engine (Pilot
ACE). While at NPL, Turing proposed the concept of a stored program,
which was a controversial but extremely important idea in
computing. A “stored” program is one that remains in residence inside
the computer, making it possible for a particular program and
data to be fed through an input device simultaneously. (The Heath
Robinson and Colossus machines were limited by utilizing separate
input tapes, one for the program and one for the data to be analyzed.)
Turing was among the first to explain the stored-program
concept in print. He was also among the first to imagine how subroutines
could be included in a program. (A subroutine allows separate
tasks within a large program to be done in distinct modules; in
effect, it is a detour within a program. After the completion of the
subroutine, the main program takes control again.)
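A present-day sketch of the same idea (hypothetical example code, not anything written for the Pilot ACE) shows the detour explicitly: the main routine hands control to a helper, the helper completes its single task, and control returns to the line after the call.

    # A subroutine is a self-contained detour: the main program transfers
    # control to it, the subroutine finishes its task, and control returns
    # to the point just after the call.
    def average(values):               # the subroutine: one distinct task
        return sum(values) / len(values)

    def main():                        # the main program
        readings = [12.0, 15.5, 14.2]
        mean = average(readings)       # control detours into the subroutine...
        print(f"Mean reading: {mean:.2f}")  # ...and resumes here afterwards

    if __name__ == "__main__":
        main()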