Wednesday, December 9, 2009
Radar
The invention: An electronic system for detecting objects at great
distances, radar was a major factor in the Allied victory of World
War II and now pervades modern life, including scientific research.
The people behind the invention:
Sir Robert Watson-Watt (1892-1973), the father of radar who
proposed the chain air-warning system
Arnold F. Wilkins, the person who first calculated the intensity
of a radio wave
William C. Curtis (1914-1976), an American engineer
Looking for Thunder
Sir Robert Watson-Watt, a scientist with twenty years of experience
in government, led the development of the first radar, an acronym
for radio detection and ranging. “Radar” refers to any instrument
that uses the reflection of radio waves to determine the
distance, direction, and speed of an object.
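The "ranging" part of the name is easy to make concrete. The sketch below is purely illustrative, using standard physics rather than any detail of Watson-Watt's equipment: a pulse's round-trip delay gives the target's range, and the Doppler shift of the reflected wave gives its radial speed.

```python
# Illustrative sketch only: how echo delay and Doppler shift map to range and
# speed for a pulsed radar. The numbers are hypothetical, not Watson-Watt's system.
C = 3.0e8  # speed of light, m/s

def range_from_echo(delay_s):
    """Target range: the pulse travels out and back, so halve the round trip."""
    return C * delay_s / 2.0

def speed_from_doppler(shift_hz, carrier_hz):
    """Radial speed of the target from the Doppler shift of the reflection."""
    return C * shift_hz / (2.0 * carrier_hz)

print(range_from_echo(200e-6))          # 200 microseconds of delay -> ~30 km
print(speed_from_doppler(2000, 3.0e9))  # 2 kHz shift on a 3 GHz carrier -> ~100 m/s
```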
In 1915, during World War I (1914-1918), Watson-Watt joined
Great Britain’s Meteorological Office. He began work on the detection
and location of thunderstorms at the Royal Aircraft Establishment
in Farnborough and remained there throughout the
war. Thunderstorms were known to be a prolific source of “atmospherics”
(audible disturbances produced in radio receiving apparatus
by atmospheric electrical phenomena), and Watson-Watt
began the design of an elementary radio direction finder that
gave the general position of such storms.
Research continued after
the war and reached a high point in 1922 when sealed-off
cathode-ray tubes first became available. With assistance from
J. F. Herd, a fellow Scot who had joined him at Farnborough, he
constructed an instantaneous direction finder, using the new
cathode-ray tubes, that gave the direction of thunderstorm activity.
It was admittedly of low sensitivity, but it worked, and it was
the first of its kind.
Watson-Watt did much of this work at a new site at Ditton Park,
near Slough, where the National Physical Laboratory had a field
station devoted to radio research. In 1927, the two endeavors were
combined as the Radio Research Station; it came under the general
supervision of the National Physical Laboratory, with Watson-Watt
as the first superintendent. This became a center with unrivaled expertise
in direction finding using the cathode-ray tube and in studying
the ionosphere using radio waves. No doubt these facilities
were a factor when Watson-Watt invented radar in 1935.
As radar developed, its practical uses expanded. Meteorological
services around the world, using ground-based radar, gave warning
of approaching rainstorms. Airborne radars proved to be a great
help to aircraft by allowing them to recognize potentially hazardous
storm areas. This type of radar was used also to assist research into
cloud and rain physics. In this type of research, radar-equipped research
aircraft observe the radar echoes inside a cloud as rain develops,
and then fly through the cloud, using on-board instruments to
measure the water content.
Aiming Radar at the Moon
The principles of radar were further developed through the discipline
of radio astronomy. This field began with certain observations
made by the American electrical engineer Karl Jansky in 1933
at the Bell Laboratories at Holmdel, New Jersey. Radio astronomers
learn about objects in space by intercepting the radio waves that
these objects emit.
Jansky found that radio signals were coming to Earth from space.
He called these mysterious pulses “cosmic noise.” In particular, there
was an unusual amount of radio noise when the radio antennas were
pointed at the Sun, which increased at the time of sun-spot activity.
All this information lay dormant until after World War II (1939-
1945), at which time many investigators turned their attention to interpreting
the cosmic noise. The pioneers were Sir Bernard Lovell at
Manchester, England, Sir Martin Ryle at Cambridge, England, and
Joseph Pawsey of the Commonwealth Scientific and Industrial Research
Organization, in Australia. The intensity of these radio waves was
first calculated by Arnold F. Wilkins.
As more powerful tools became available toward the end of
World War II, curiosity caused experimenters to try to detect radio
signals from the Moon. This was accomplished successfully in the
late 1940’s and led to experiments on other objects in the solar system:
planets, satellites, comets, and asteroids.
Impact
Radar introduced some new and revolutionary concepts into warfare,
and in doing so gave birth to entirely new branches of technology.
In the application of radar to marine navigation, the long-range
navigation system developed during the war was taken up at once
by the merchant fleets that used military-style radar equipment
without modification. In addition, radar systems that could detect
buoys and other ships and obstructions in closed waters, particularly
under conditions of low visibility, proved particularly useful
to peacetime marine navigation.
In the same way, radar was adopted to assist in the navigation of
civil aircraft. The various types of track guidance systems developed after the war were aimed at guiding aircraft in the critical last
hundred kilometers or so of their run into an airport. Subsequent
improvements in the system meant that an aircraft could place itself
on an approach or landing path with great accuracy.
The ability of radar to measure distance to an extraordinary degree
of accuracy resulted in the development of an instrument that
provided pilots with a direct measurement of the distances between
airports. Along with these aids, ground-based radars were developed
for the control of aircraft along the air routes or in the airport
control area.
The development of electronic computers can be traced back to
the enormous advances in circuit design, which were an integral part
of radar research during the war. During that time, some elements
of electronic computing had been built into bombsights and other
weaponry; later, it was realized that a whole range of computing operations
could be performed electronically. By the end of the war,
many pulse-forming networks, pulse-counting circuits, and memory
circuits existed in the form needed for an electronic computer.
Finally, the developing radio technology has continued to help
astronomers explore the universe. Large radio telescopes exist in almost
every country and enable scientists to study the solar system
in great detail. Radar-assisted cosmic background radiation studies
have been a building block for the big bang theory of the origin of
the universe.
Wednesday, December 2, 2009
Pyrex glass
The invention: A superhard and durable glass product with widespread
uses in industry and home products.
The people behind the invention:
Jesse T. Littleton (1888-1966), the chief physicist of Corning
Glass Works’ research department
Eugene G. Sullivan (1872-1962), the founder of Corning’s
research laboratories
William C. Taylor (1886-1958), an assistant to Sullivan
Cooperating with Science
By the twentieth century, Corning Glass Works had a reputation
as a corporation that cooperated with the world of science to improve
existing products and develop new ones. In the 1870’s, the
company had hired university scientists to advise on improving the
optical quality of glasses, an early example of today’s common practice
of academics consulting for industry.
When Eugene G. Sullivan established Corning’s research laboratory
in 1908 (the first of its kind devoted to glass research), the task
that he undertook with William C. Taylor was that of making a heat-resistant
glass for railroad lantern lenses. The problem was that ordinary
flint glass (the kind in bottles and windows, made by melting
together silica sand, soda, and lime) has fairly high thermal expansion
and poor heat conductivity. The glass thus expands
unevenly when exposed to heat. This condition can cause the glass
to break, sometimes violently. Colored lenses for oil or gas railroad
signal lanterns sometimes shattered if they were heated too much
by the flame that produced the light and were then sprayed by rain
or wet snow. This changed a red “stop” light to a clear “proceed”
signal and caused many accidents or near misses in railroading in
the late nineteenth century.
Two solutions were possible: to improve the thermal conductivity
or reduce the thermal expansion. The first is what metals do:
When exposed to heat, most metals have an expansion much greater
than that of glass, but they conduct heat so quickly that they expand
nearly equally throughout and seldom lose structural integrity from
uneven expansion. Glass, however, is an inherently poor heat conductor,
so this approach was not possible.
Therefore, a formulation had to be found that had little or no
thermal expansivity. Pure silica (one example is quartz) fits this description,
but it is expensive and, with its high melting point, very
difficult to work.
The formulation that Sullivan and Taylor devised was a borosilicate
glass—essentially a soda-lime glass with the lime replaced by
borax, with a small amount of alumina added. This gave the low thermal
expansion needed for signal lenses. It also turned out to have
good acid-resistance, which led to its being used for the battery jars
required for railway telegraph systems and other applications. The
glass was marketed as “Nonex” (for “nonexpansion glass”).
From the Railroad to the Kitchen
Jesse T. Littleton joined Corning’s research laboratory in 1913.
The company had a very successful lens and battery jar material,
but no one had even considered it for cooking or other heat-transfer
applications, because the prevailing opinion was that glass absorbed
and conducted heat poorly. This meant that, in glass pans,
cakes, pies, and the like would cook on the top, where they were exposed
to hot air, but would remain cold and wet (or at least undercooked)
next to the glass surface. As a physicist, Littleton knew that
glass absorbed radiant energy very well. He thought that the heat-conduction
problem could be solved by using the glass vessel itself
to absorb and distribute heat. Glass also had a significant advantage
over metal in baking. Metal bakeware mostly reflects radiant energy
to the walls of the oven, where it is lost ultimately to the surroundings.
Glass would absorb this radiation energy and conduct it evenly to
the cake or pie, giving a better result than that of the metal bakeware.
Moreover, glass would not absorb and carry over flavors from
one baking effort to the next, as some metals do.
Littleton took a cut-off battery jar home and asked his wife to
bake a cake in it. He took it to the laboratory the next day, handing
pieces around and not disclosing the method of baking until all had
agreed that the results were excellent. With this agreement, he was
able to commit laboratory time to developing variations on the
Nonex formula that were more suitable for cooking. The result was
Pyrex, patented and trademarked in May of 1915.
Impact
In the 1930’s, Pyrex “Flameware” was introduced, with a new
glass formulation that could resist the increased heat of stovetop
cooking. In the half century since Flameware was introduced,
Corning went on to produce a variety of other products and materials:
tableware in tempered opal glass; cookware in Pyroceram, a
glass product that during heat treatment gained such mechanical
strength as to be virtually unbreakable; even hot plates and stoves
topped with Pyroceram.
In the same year that Pyrex was marketed for cooking, it was
also introduced for laboratory apparatus. Laboratory glassware
had been coming from Germany at the beginning of the twentieth
century; World War I cut off the supply. Corning filled the gap
with Pyrex beakers, flasks, and other items. The delicate blown-glass
equipment that came from Germany was completely displaced
by the more rugged and heat-resistant machine-made Pyrex
ware.
Any number of operations are possible with Pyrex that cannot
be performed safely in flint glass: Test tubes can be thrust directly
into burner flames, with no preliminary warming; beakers and
flasks can be heated on hot plates; and materials that dissolve
when exposed to heat can be made into solutions directly in Pyrex
storage bottles, a process that cannot be performed in regular
glass. The list of such applications is almost endless.
Pyrex has also proved to be the material of choice for mirrors in
the great reflector telescopes, beginning in 1934 with that at Mount
Palomar. By its nature, astronomical observation must be done
with the scope open to the weather. This means that the mirror
must not change shape with temperature variations, which rules
out metal mirrors. Silvered (or aluminized) Pyrex serves very well,
and Corning has developed great expertise in casting and machining
Pyrex blanks for mirrors of all sizes.
Propeller-coordinated machine gun
The invention: A mechanism that synchronized machine gun fire
with propeller movement to prevent World War I fighter plane
pilots from shooting off their own propellers during combat.
The people behind the invention:
Anthony Herman Gerard Fokker (1890-1939), a Dutch-born
American entrepreneur, pilot, aircraft designer, and
manufacturer
Roland Garros (1888-1918), a French aviator
Max Immelmann (1890-1916), a German aviator
Raymond Saulnier (1881-1964), a French aircraft designer and
manufacturer
French Innovation
The first true aerial combat of World War I took place in 1915. Before
then, weapons attached to airplanes were inadequate for any
real combat work. Hand-held weapons and clumsily mounted machine
guns were used by pilots and crew members in attempts to
convert their observation planes into fighters. On April 1, 1915, this
situation changed. From an airfield near Dunkerque, France, a
French airman, Lieutenant Roland Garros, took off in an airplane
equipped with a device that would make his plane the most feared
weapon in the air at that time.
During a visit to Paris, Garros met with Raymond Saulnier, a French
aircraft designer. In April of 1914, Saulnier had applied for a patent on
a device that mechanically linked the trigger of a machine
gun to a cam
on the engine shaft. Theoretically, such an assembly would allow the
gun to fire between the moving blades of the propeller. Unfortunately,
the available machine gun Saulnier used to test his device was a
Hotchkiss gun, which tended to fire at an uneven rate. On Garros’s arrival,
Saulnier showed him a new invention: a steel deflector shield
that, when fastened to the propeller, would deflect the small percentage
of mistimed bullets that would otherwise destroy the blade.
The first test-firing was a disaster, shooting the propeller off and
destroying the fuselage. Modifications were made to the deflector
braces, streamlining its form into a wedge shape with gutter channels
for deflected bullets. The invention was attached to a
Morane-Saulnier monoplane, and on April 1, Garros took off alone
toward the German lines. Success was immediate. Garros shot
down a German observation plane that morning. During the next
two weeks, Garros shot down five more German aircraft.
German Luck
The German high command, frantic over the effectiveness of the
French “secret weapon,” sent out spies to try to steal the secret and
also ordered engineers to develop a similar weapon. Luck was with
them. On April 18, 1915, despite warnings by his superiors not to fly
over enemy-held territory, Garros was forced to crash-land behind
German lines with engine trouble. Before he could destroy his aircraft,
Garros and his plane were captured by German troops. The secret
weapon was revealed.
The Germans were ecstatic about the opportunity to examine
the new French weapon. Unlike the French, the Germans had the
first air-cooled machine gun, the Parabellum, which shot continuous
bands of one hundred bullets and was reliable enough to be
adapted to a timing mechanism.
In May of 1915, Anthony Herman Gerard Fokker was shown
Garros’s captured plane and was ordered to copy the idea. Instead,
Fokker and his assistant designed a new firing system. It is unclear
whether Fokker and his team were already working on a synchronizer
or to what extent they knew of Saulnier’s previous work in
France.Within several days, however, they had constructed a working
prototype and attached it to a Fokker Eindecker 1 airplane. The
design consisted of a simple linkage of cams and push-rods connected
to the oil-pump drive of an Oberursel engine and the trigger
of a Parabellum machine gun. The firing of the gun had to be timed
precisely to fire its six hundred rounds per minute between the
twelve-hundred-revolutions-per-minute propeller blades.
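The timing margins implied by those two figures can be worked out directly. The little calculation below is only a back-of-the-envelope check using the rates quoted above; the two-bladed propeller, muzzle velocity, and gun-to-blade distance are assumed round numbers, not measured Fokker data.

```python
# Back-of-the-envelope check on the synchronizer figures quoted above.
# Assumes a two-bladed propeller; muzzle velocity and gun-to-blade distance
# are hypothetical round numbers for illustration.

gun_rpm = 600        # rounds per minute (as stated)
prop_rpm = 1200      # propeller revolutions per minute (as stated)
blades = 2

shots_per_s = gun_rpm / 60          # 10 shots per second
revs_per_s = prop_rpm / 60          # 20 revolutions per second
print(revs_per_s / shots_per_s)     # 2.0: the propeller turns twice between shots
print(blades * revs_per_s)          # 40 blade passages per second across the gun line

# While a bullet covers roughly 1 m from muzzle to the propeller plane at
# about 600 m/s, the propeller keeps turning:
flight_time = 1.0 / 600.0                 # ~1.7 milliseconds
print(360.0 * revs_per_s * flight_time)   # ~12 degrees of rotation during that flight
```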
Fokker took his invention to Doberitz air base, and after a series of exhausting trials before the German high command, both on the
ground and in the air, he was allowed to take two prototypes of the
machine-gun-mounted airplanes to Douai in German-held France.
At Douai, two German pilots crowded into the cockpit with Fokker
and were given demonstrations of the plane’s capabilities. The airmen
were Oswald Boelcke, a test pilot and veteran of forty reconnaissance
missions, and Max Immelmann, a young, skillful aviator
who was assigned to the front.
When the first combat-ready versions of Fokker’s Eindecker 1
were delivered to the front lines, one was assigned to Boelcke, the
other to Immelmann. On August 1, 1915, with their aerodrome under attack from nine English bombers, Boelcke and Immelmann
manned their aircraft and attacked. Boelcke’s gun jammed, and he
was forced to cut off his attack and return to the aerodrome. Immelmann,
however, succeeded in shooting down one of the bombers
with his synchronized machine gun. It was the first victory credited
to the Fokker-designed weapon system.
Impact
At the outbreak of World War I, military strategists and commanders
on both sides saw the wartime function of airplanes as a
means to supply intelligence information behind enemy lines or as
airborne artillery spotting platforms. As the war progressed and aircraft
flew more or less freely across the trenches, providing vital information
to both armies, it became apparent to ground commanders
that while it was important to obtain intelligence on enemy
movements, it was important also to deny the enemy similar information.
Early in the war, the French used airplanes as strategic bombing
platforms. As both armies began to use their air forces for strategic
bombing of troops, railways, ports, and airfields, it became evident
that aircraft would have to be employed against enemy aircraft to
prevent reconnaissance and bombing raids.
With the invention of the synchronized forward-firing machine
gun, pilots could use their aircraft as attack weapons. A pilot finally
could coordinate control of his aircraft and his armaments with
maximum efficiency. This conversion of aircraft from nearly passive
observation platforms to attack fighters is the single greatest innovation
in the history of aerial warfare. The development of fighter
aircraft forced a change in military strategy, tactics, and logistics and
ushered in the era of modern warfare. Fighter planes are responsible
for the battle-tested military adage: Whoever controls the sky controls
the battlefield.
Wednesday, November 18, 2009
Polystyrene
The invention: A clear, moldable polymer with many industrial
uses whose overuse has also threatened the environment.
The people behind the invention:
Edward Simon, an American chemist
Charles Gerhardt (1816-1856), a French chemist
Marcellin Pierre Berthelot (1827-1907), a French chemist
Polystyrene Is Characterized
In the late eighteenth century, a scientist by the name of Caspar
Neumann described the isolation of a chemical called “storax” from a
balsam tree that grew in Asia Minor. This isolation led to the first report
on the physical properties of the substance later known as “styrene.”
The work of Neuman was confirmed and expanded upon
years later, first in 1839 by Edward Simon, who evaluated the temperature
dependence of styrene, and later by Charles Gerhardt,
who proposed its molecular formula. The work of these two men
sparked an interest in styrene and its derivatives.
Polystyrene belongs to a special class of molecules known as
polymers. A polymer (the name means “many parts”) is a giant molecule
formed by combining small molecular units, called “monomers.”
This combination results in a macromolecule whose physical
properties—especially its strength and flexibility—are significantly
different from those of its monomer components. Such polymers are
often simply called “plastics.”
Polystyrene has become an important material in modern society
because it exhibits a variety of physical characteristics that can be
manipulated for the production of consumer products. Polystyrene
is a “thermoplastic,” which means that it can be softened by heat
and then reformed, after which it can be cooled to form a durable
and resilient product.
At 94 degrees Celsius, polystyrene softens; at room temperature,
however, it rings like a metal when struck. Because of the glasslike
nature and high refractive index of polystyrene, products made from it are known for their shine and attractive texture. In addition,
the material is characterized by a high level of water resistance and
by electrical insulating qualities. It is also flammable, can be dissolved
or softened by many solvents, and is sensitive to light. These
qualities make polystyrene a valuable material in the manufacture
of consumer products.
Plastics on the Market
In 1866, Marcellin Pierre Berthelot prepared styrene from ethylene
and benzene mixtures in a heated reaction flask. This was the
first synthetic preparation of styrene. In 1925, the Naugatuck
Chemical Company began to operate the first commercial styrene/
polystyrene manufacturing plant. In the 1930’s, the Dow Chemical
Company became involved in the manufacturing and marketing of
styrene/polystyrene products. Dow’s Styron 666 was first marketed
as a general-purpose polystyrene in 1938. This material was
the first plastic product to demonstrate polystyrene’s excellent mechanical
properties and ease of fabrication.
The advent of World War II increased the need for plastics. When
the Allies’ supply of natural rubber was interrupted, chemists sought
to develop synthetic substitutes. The use of additives with polymer
species was found to alter some of the physical properties of those
species. Adding substances called “elastomers” during the polymerization
process was shown to give a rubberlike quality to a normally
brittle species. An example of this is Dow’s Styron 475, which
was marketed in 1948 as the first “impact” polystyrene. It is called
an impact polystyrene because it also contains butadiene, which increases
the product’s resistance to breakage. The continued characterization
of polystyrene products has led to the development of a
worldwide industry that fills a wide range of consumer needs.
Following World War II, the plastics industry revolutionized
many aspects of modern society. Polystyrene is only one of the
many plastics involved in this process, but it has found its way into
a multitude of consumer products. Disposable kitchen utensils,
trays and packages, cups, videocassettes, insulating foams, egg cartons,
food wrappings, paints, and appliance parts are only a few of
the typical applications of polystyrenes. In fact, the production of polystyrene has grown to exceed 5 billion pounds per year.
The tremendous growth of this industry in the postwar era has
been fueled by a variety of factors. Having studied the physical
and chemical properties of polystyrene, chemists and engineers
were able to envision particular uses and to tailor the manufacture
of the product to fit those uses precisely. Because of its low cost of
production, superior performance, and light weight, polystyrene
has become the material of choice for the packaging industry. The
automobile industry also enjoys its benefits. Polystyrene’s lower
density compared to those of glass and steel makes it appropriate
for use in automobiles, since its light weight means that using
it can reduce the weight of automobiles, thereby increasing gas
efficiency.
Impact
There is no doubt that the marketing of polystyrene has greatly
affected almost every aspect of modern society. From computer keyboards
to food packaging, the use of polystyrene has had a powerful
impact on both the quality and the prices of products. Its use is not,
however, without drawbacks; it has also presented humankind
with a dilemma. The wholesale use of polystyrene has created an
environmental problem that represents a danger to wildlife, adds to
roadside pollution, and greatly contributes to the volume of solid
waste in landfills.
Polystyrene has become a household commodity because it lasts.
The flip side of this durability is that it may last virtually forever. Unlike natural
products, which decompose upon burial, polystyrene is very
difficult to convert into degradable forms. The newest challenge facing
engineers and chemists is to provide for the safe and efficient
disposal of plastic products. Thermoplastics such as polystyrene
can be melted down and remolded into new products, which makes
recycling and reuse of polystyrene a viable option, but this option
requires the cooperation of the same consumers who have benefited
from the production of polystyrene products.
Polyethylene
The invention: An artificial polymer with strong insulating properties
and many other applications.
The people behind the invention:
Karl Ziegler (1898-1973), a German chemist
Giulio Natta (1903-1979), an Italian chemist
August Wilhelm von Hofmann (1818-1892), a German chemist
The Development of Synthetic Polymers
In 1841, August Hofmann completed his Ph.D. with Justus von
Liebig, a German chemist and founding father of organic chemistry.
One of Hofmann’s students, William Henry Perkin, discovered that
coal tars could be used to produce brilliant dyes. The German chemical
industry, under Hofmann’s leadership, soon took the lead in
this field, primarily because the discipline of organic chemistry was
much more developed in Germany than elsewhere.
The realities of the early twentieth century found the chemical
industry struggling to produce synthetic substitutes for natural
materials that were in short supply, particularly rubber. Rubber is
a natural polymer, a material composed of a long chain of small
molecules that are linked chemically. An early synthetic rubber,
neoprene, was one of many synthetic polymers (some others were
Bakelite, polyvinyl chloride, and polystyrene) developed in the
1920’s and 1930’s. Another polymer, polyethylene, was developed
in 1936 by Imperial Chemical Industries. Polyethylene was a
tough, waxy material that was produced at high temperature and
at pressures of about one thousand atmospheres. Its method of
production made the material expensive, but it was useful as an insulating
material.
World War II and the material shortages associated with it brought
synthetic materials into the limelight. Many new uses for polymers
were discovered, and after the war they were in demand for the production
of a variety of consumer goods, although polyethylene was
still too expensive to be used widely.
Organometallics Provide the Key
Karl Ziegler, an organic chemist with an excellent international
reputation, spent most of his career in Germany. With his international
reputation and lack of political connections, he was a natural
candidate to take charge of the Kaiser Wilhelm Institute for Coal Research
(later renamed the Max Planck Institute) in 1943. Wise planners
saw him as a director who would be favored by the conquering
Allies. His appointment was a shrewd one, since he was allowed to
retain his position after World War II ended. Ziegler thus played a
key role in the resurgence of German chemical research after the war.
Before accepting the position at the Kaiser Wilhelm Institute,
Ziegler made it clear that he would take the job only if he could pursue
his own research interests in addition to conducting coal research.
The location of the institute in the Ruhr Valley meant that
abundant supplies of ethylene were available from the local coal industry,
so it is not surprising that Ziegler began experimenting with
that material.
Although Ziegler’s placement as head of the institute was an important
factor in his scientific breakthrough, his previous research
was no less significant. Ziegler devoted much time to the field of
organometallic compounds, which are compounds that contain a
metal atom that is bonded to one or more carbon atoms. Ziegler was
interested in organoaluminum compounds, which are compounds
that contain aluminum-carbon bonds.
Ziegler was also interested in polymerization reactions, which
involve the linking of thousands of smaller molecules into the single
long chain of a polymer. Several synthetic polymers were known,
but chemists could exert little control on the actual process. It was
impossible to regulate the length of the polymer chain, and the extent
of branching in the chain was unpredictable. It was as a result of
studying the effect of organoaluminum compounds on these chain
formation reactions that the key discovery was made.
Ziegler and his coworkers already knew that ethylene would react
with organoaluminum compounds to produce hydrocarbons,
which are compounds that contain only carbon and hydrogen and
that have varying chain lengths. Regulating the product chain length
continued to be a problem.
At this point, fate intervened in the form of a trace of nickel left in a
reactor from a previous experiment. The nickel caused the chain
lengthening to stop after two ethylene molecules had been linked.
Ziegler and his colleagues then tried to determine whether metals
other than nickel caused a similar effect with a longer polymeric
chain. Several metals were tested, and the most important finding
was that a trace of titanium chloride in the reactor caused the deposition
of large quantities of high-density polyethylene at low pressures.
Ziegler licensed the procedure, and within a year, Giulio Natta
had modified the catalysts to give high yields of polymers with
highly ordered side chains branching from the main chain. This
opened the door for the easy production of synthetic rubber. For
their discovery of Ziegler-Natta catalysts, Ziegler and Natta shared
the 1963 Nobel Prize in Chemistry.
Consequences
Ziegler’s process produced polyethylene that was much more
rigid than the material produced at high pressure. His product also
had a higher density and a higher softening temperature. Industrial
exploitation of the process was unusually rapid, and within ten years
more than twenty plants utilizing the process had been built throughout
Europe, producing more than 120,000 metric tons of polyethylene.
This rapid exploitation was one reason Ziegler and Natta were
awarded the Nobel Prize after such a relatively short time.
By the late 1980’s, total production stood at roughly 18 billion
pounds worldwide. Other polymeric materials, including polypropylene,
can be produced by similar means. The ready availability
and low cost of these versatile materials have radically transformed
the packaging industry. Polyethylene bottles are far lighter
than their glass counterparts; in addition, gases and liquids do not
diffuse into polyethylene very easily, and it does not break easily.
As a result, more and more products are bottled in containers
made of polyethylene or other polymers. Other novel materials
possessing properties unparalleled by any naturally occurring material
(Kevlar, for example, which is used to make bullet-resistant
vests) have also been an outgrowth of the availability of low-cost
polymeric materials.
Tuesday, November 3, 2009
Polyester
The invention: A synthetic fibrous polymer used especially in fabrics.
The people behind the invention:
Wallace H. Carothers (1896-1937), an American polymer
chemist
Hilaire de Chardonnet (1839-1924), a French polymer chemist
John R. Whinfield (1901-1966), a British polymer chemist
A Story About Threads
Human beings have worn clothing since prehistoric times. At
first, clothing consisted of animal skins sewed together. Later, people
learned to spin threads from the fibers in plant or animal materials
and to weave fabrics from the threads (for example, wool, silk,
and cotton). By the end of the nineteenth century, efforts were begun
to produce synthetic fibers for use in fabrics. These efforts were
motivated by two concerns. First, it seemed likely that natural materials
would become too scarce to meet the needs of a rapidly increasing
world population. Second, a series of natural disasters—
affecting the silk industry in particular—had demonstrated the
problems of relying solely on natural fibers for fabrics.
The first efforts to develop synthetic fabric focused on artificial
silk, because of the high cost of silk, its beauty, and the fact that silk
production had been interrupted by natural disasters more often
than the production of any other material. The first synthetic silk
was rayon, which was originally patented by a French count,
Hilaire de Chardonnet, and was later much improved by other
polymer chemists. Rayon is a semisynthetic material that is made
from wood pulp or cotton.
Because there was a need for synthetic fabrics whose manufacture
did not require natural materials, other avenues were explored. One
of these avenues led to the development of totally synthetic polyester
fibers. In the United States, the best-known of these is Dacron, which
is manufactured by E. I. Du Pont de Nemours. Easily made into threads, Dacron is widely used in clothing. It is also used to make audiotapes
and videotapes and in automobile and boat bodies.
From Polymers to Polyester
Dacron belongs to a group of chemicals known as “synthetic
polymers.” All polymers are made of giant molecules, each of
which is composed of a large number of simpler molecules (“monomers”)
that have been linked, chemically, to form long strings. Efforts
by industrial chemists to prepare synthetic polymers developed
in the twentieth century after it was discovered that many
natural building materials and fabrics (such as rubber, wood, wool,
silk, and cotton) were polymers, and as the ways in which monomers
could be joined to make polymers became better understood.
One group of chemists who studied polymers sought to make inexpensive
synthetic fibers to replace expensive silk and wool. Their efforts
led to the development of well-known synthetic fibers such as
nylon and Dacron.
Wallace H. Carothers of Du Pont pioneered the development of
polyamide polymers, collectively called “nylon,” and was the first
researcher to attempt to make polyester. It was British polymer
chemists John R. Whinfield and J. T. Dickson of Calico Printers Association
(CPA) Limited, however, who in 1941 perfected and patented
polyester that could be used to manufacture clothing. The
first polyester fiber products were produced in 1950 in Great Britain
by London’s British Imperial Chemical Industries, which had secured
the British patent rights from CPA. This polyester, which was
made of two monomers, terephthalic acid and ethylene glycol, was
called Terylene. In 1951, Du Pont, which had acquired Terylene patent
rights for the Western Hemisphere, began to market its own version
of this polyester, which was called Dacron. Soon, other companies
around the world were selling polyester materials of similar
composition.
Dacron and other polyesters are used in many items in the
United States. Made into fibers and woven, Dacron becomes cloth.
When pressed into thin sheets, it becomes Mylar, which is used in
videotapes and audiotapes. Dacron polyester, mixed with other materials,
is also used in many industrial items, including motor vehicle and boat bodies. Terylene and similar polyester preparations
serve the same purposes in other countries.
The production of polyester begins when monomers are mixed
in huge reactor tanks and heated, which causes them to form giant
polymer chains composed of thousands of alternating monomer
units. If T represents terephthalic acid and E represents ethylene glycol,
a small part of a necklace-like polymer can be shown in the following
way: (TETETETETE). Once each batch of polyester polymer
has the desired composition, it is processed for storage until it is
needed. In this procedure, the material, in liquid form in the high-temperature
reactor, is passed through a device that cools it and
forms solid strips. These strips are then diced, dried, and stored.
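The alternating T-E bookkeeping described above can be sketched in a couple of lines. This toy fragment only reproduces the necklace notation from the text (T for terephthalic acid, E for ethylene glycol); it says nothing about the actual condensation chemistry.

```python
# Toy illustration of the alternating-monomer notation used in the text:
# T = terephthalic acid, E = ethylene glycol, linked in strict alternation.
def polyester_chain(repeat_units):
    """Return a string such as 'TETETETETE' with the given number of T-E pairs."""
    return "TE" * repeat_units

print(polyester_chain(5))  # TETETETETE, the short fragment shown above
```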
When polyester fiber is desired, the diced polyester is melted and
then forced through tiny holes in a “spinneret” device; this process
is called “extruding.” The extruded polyester cools again, while
passing through the spinneret holes, and becomes fine fibers called
“filaments.” The filaments are immediately wound into threads that
are collected in rolls. These rolls of thread are then dyed and used to
weave various fabrics. If polyester sheets or other forms of polyester
are desired, the melted, diced polyester is processed in other ways.
Polyester preparations are often mixed with cotton, glass fibers, or
other synthetic polymers to produce various products.
Impact
The development of polyester was a natural consequence of the
search for synthetic fibers that developed from work on rayon. Once
polyester had been developed, its great utility led to its widespread
use in industry. In addition, the profitability of the material spurred
efforts to produce better synthetic fibers for specific uses. One example
is that of stretchy polymers such as Helanca, which is a form
of nylon. In addition, new chemical types of polymer fibers were developed,
including the polyurethane materials known collectively
as “spandex” (for example, Lycra and Vyrene).
The wide variety of uses for polyester is amazing. Mixed with
cotton, it becomes wash-and-wear clothing; mixed with glass, it is
used to make boat and motor vehicle bodies; combined with other
materials, it is used to make roofing materials, conveyor belts, hoses, and tire cords. In Europe, polyester has become the main
packaging material for consumer goods, and the United States does
not lag far behind in this area.
The future is sure to hold more uses for polyester and the invention
of new polymers. These spinoffs of polyester will be essential in
the development of high technology.
Wednesday, October 28, 2009
Polio vaccine (Salk)
The invention: Jonas Salk’s vaccine was the first that prevented polio, resulting in the virtual eradication of crippling polio epidemics.
The people behind the invention:
Jonas Edward Salk (1914-1995), an American physician,
immunologist, and virologist
Thomas Francis, Jr. (1900-1969), an
American microbiologist
Cause for Celebration
Poliomyelitis (polio) is an infectious disease that can adversely
affect the central nervous system, causing paralysis and great muscle
wasting due to the destruction of motor neurons (nerve cells) in
the spinal cord. Epidemiologists believe that polio has existed since
ancient times, and evidence of its presence in Egypt, circa 1400 b.c.e.,
has been presented. Fortunately, the Salk vaccine and the later vaccine
developed by the American virologist Albert Bruce Sabin can
prevent the disease. Consequently, except in underdeveloped nations,
polio is rare. Moreover, although there is still no cure once a person
develops polio, a large number of polio cases end without
paralysis or any observable effect.
Polio is often called “infantile paralysis.” This results from the
fact that it is seen most often in children. It is caused by a virus and
begins with body aches, a stiff neck, and other symptoms that are
very similar to those of a severe case of influenza. In some cases,
within two weeks after its onset, the course of polio begins to lead to
muscle wasting and paralysis.
On April 12, 1955, the world was thrilled with the announcement
that Jonas Edward Salk’s poliomyelitis vaccine could prevent the
disease. It was reported that schools were closed in celebration of
this event. Salk, the son of a New York City garment worker, has
since become one of the most well-known and publicly venerated
medical scientists in the world.
Vaccination is a method of disease prevention by immunization,
whereby a small amount of virus is injected into the body to prevent
a viral disease. The process depends on the production of antibodies
(body proteins that are specifically coded to prevent the disease
spread by the virus) in response to the vaccination. Vaccines are
made of weakened or killed virus preparations.
Electrifying Results
The Salk vaccine was produced in two steps. First, polio viruses
were grown in monkey kidney tissue cultures. These polio viruses
were then killed by treatment with the right amount of formaldehyde
to produce an effective vaccine. The killed-virus polio vaccine
was found to be safe and to cause the production of antibodies
against the disease, a sign that it should prevent polio.
In early 1952, Salk tested a prototype vaccine against Type I polio virus
on children who were afflicted with the disease and were thus
deemed safe from reinfection. This test showed that the vaccination greatly elevated the concentration of polio antibodies in these children.
On July 2, 1952, encouraged by these results, Salk vaccinated forty-three
children who had never had polio with vaccines against each of
the three virus types (Type I, Type II, and Type III). All inoculated children
produced high levels of polio antibodies, and none of them developed
the disease. Consequently, the vaccine appeared to be both safe in
humans and likely to become an effective public health tool.
In 1953, Salk reported these findings in the Journal of the American
Medical Association. In April, 1954, nationwide testing of the Salk
vaccine began, via the mass vaccination of American schoolchildren.
The results of the trial were electrifying. The vaccine was safe,
and it greatly reduced the incidence of the disease. In fact, it was estimated
that Salk’s vaccine gave schoolchildren 60 to 90 percent protection
against polio.
Salk was instantly praised. Then, however, several cases of polio
occurred as a consequence of the vaccine. Its use was immediately
suspended by the U.S. surgeon general, pending a complete examination.
Soon, it was evident that all the cases of vaccine-derived polio
were attributable to faulty batches of vaccine made by one
pharmaceutical company. Salk and his associates were in no way responsible
for the problem. Appropriate steps were taken to ensure
that such an error would not be repeated, and the Salk vaccine was
again released for use by the public.
Consequences
The first reports on the polio epidemic in the United States had
occurred on June 27, 1916, when one hundred residents of Brooklyn,
New York, were afflicted. Soon, the disease had spread. By August,
twenty-seven thousand people had developed polio. Nearly seven
thousand afflicted people died, and many survivors of the epidemic
were permanently paralyzed to varying extents. In New York City
alone, nine thousand people developed polio and two thousand
died. Chaos reigned as large numbers of terrified people attempted
to leave and were turned back by police. Smaller polio epidemics
occurred throughout the nation in the years that followed (for example,
the Catawba County, North Carolina, epidemic of 1944). A
particularly horrible aspect of polio was the fact that more than 70 percent of polio victims were small children. Adults caught it too;
the most famous of these adult polio victims was U.S. President
Franklin D. Roosevelt. There was no cure for the disease. The best
available treatment was physical therapy.
As of August, 1955, more than four million polio vaccines had
been given. The Salk vaccine appeared to work very well. There were
only half as many reported cases of polio in 1956 as there had been in
1955. It appeared that polio was being conquered. By 1957, the number
of cases reported nationwide had fallen below six thousand.
Thus, in two years, its incidence had dropped by about 80 percent.
This was very exciting, and soon other countries clamored for the
vaccine. By 1959, ninety other countries had been supplied with the
Salk vaccine. Worldwide, the disease was being eradicated. The introduction
of an oral polio vaccine by Albert Bruce Sabin supported
this progress.
Salk received many honors, including honorary degrees from
American and foreign universities, the Lasker Award, a Congressional
Medal for Distinguished Civilian Service, and membership in
the French Legion of Honor, yet he received neither the Nobel Prize
nor membership in the American National Academy of Sciences. It
is believed by many that this neglect was a result of the personal antagonism
of some of the members of the scientific community who
strongly disagreed with his theories of viral inactivation.
Polio vaccine (Sabin)
The invention: Albert Bruce Sabin’s vaccine was the first to stimulate
long-lasting immunity against polio without the risk of causing
paralytic disease.
The people behind the invention:
Albert Bruce Sabin (1906-1993), a Russian-born American
virologist
Jonas Edward Salk (1914-1995), an American physician,
immunologist, and virologist
Renato Dulbecco (1914- ), an Italian-born American
virologist who shared the 1975 Nobel Prize in Physiology or
Medicine
The Search for a Living Vaccine
Almost a century ago, the first major poliomyelitis (polio) epidemic
was recorded. Thereafter, epidemics of increasing
frequency
and severity struck the industrialized world. By the 1950’s, as many
as sixteen thousand individuals, most of them children, were being
paralyzed by the disease each year.
Poliovirus enters the body through the mouth. It
replicates in the throat and the intestines and establishes an infection
that normally is harmless. From there, the virus can enter the
bloodstream. In some individuals it makes its way to the nervous
system, where it attacks and destroys nerve cells crucial for muscle
movement. The presence of antibodies in the bloodstream will prevent
the virus from reaching the nervous system and causing paralysis.
Thus, the goal of vaccination is to administer poliovirus that
has been altered so that it cannot cause disease but nevertheless will
stimulate the production of antibodies to fight the disease.
Albert Bruce Sabin received his medical degree from New York
University College of Medicine in 1931. Polio was epidemic in 1931,
and for Sabin polio research became a lifelong interest. In 1936,
while working at the Rockefeller Institute, Sabin and Peter Olitsky
successfully grew poliovirus using tissues cultured in vitro. Tissue
culture proved to be an excellent source of virus. Jonas Edward Salk
soon developed an inactivated polio vaccine consisting of virus grown
from tissue culture that had been inactivated (killed) by chemical
treatment. This vaccine became available for general use in 1955, almost
fifty years after poliovirus had first been identified.
Sabin, however, was not convinced that an inactivated virus vaccine
was adequate. He believed that it would provide only temporary
protection and that individuals would have to be vaccinated
repeatedly in order to maintain protective levels of antibodies.
Knowing that natural infection with poliovirus induced lifelong immunity,
Sabin believed that a vaccine consisting of a living virus
was necessary to produce long-lasting immunity. Also, unlike the
inactivated vaccine, which is injected, a living virus (weakened so that
it would not cause disease) could be taken orally and would invade
the body and replicate of its own accord.
Sabin was not alone in his beliefs. Hilary Koprowski and Harold
Cox also favored a living virus vaccine and had, in fact, begun
searching for weakened strains of poliovirus as early as 1946 by repeatedly
growing the virus in rodents. When Sabin began his search
for weakened virus strains in 1953, a fiercely competitive contest ensued
to achieve an acceptable live virus vaccine.
Rare, Mutant Polioviruses
Sabin’s approach was based on the principle that, as viruses acquire
the ability to replicate in a foreign species or tissue (for example,
in mice), they become less able to replicate in humans and thus
less able to cause disease. Sabin used tissue culture techniques to
isolate those polioviruses that grew most rapidly in monkey kidney
cells. He then employed a technique developed by Renato Dulbecco
that allowed him to recover individual virus particles. The recovered
viruses were injected directly into the brains or spinal cords of
monkeys in order to identify those viruses that did not damage the
nervous system. These meticulously performed experiments, which
involved approximately nine thousand monkeys and more than
one hundred chimpanzees, finally enabled Sabin to isolate rare mutant
polioviruses that would replicate in the intestinal tract but not
in the nervous systems of chimpanzees or, it was hoped, of humans.
In addition, the weakened virus strains were shown to stimulate antibodies when they were fed to chimpanzees; this was a critical attribute
for a vaccine strain.
By 1957, Sabin had identified three strains of attenuated viruses that
were ready for small experimental trials in humans. A small group of
volunteers, including Sabin’s own wife and children, were fed the vaccine
with promising results. Sabin then gave his vaccine to virologists
in the Soviet Union, Eastern Europe, Mexico, and Holland for further
testing. Combined with smaller studies in the United States, these trials
established the effectiveness and safety of his oral vaccine.
During this period, the strains developed by Cox and by Koprowski
were also being tested in millions of persons in field trials
around the world. In 1958, two laboratories independently compared
the vaccine strains and concluded that the Sabin strains were
superior. In 1962, after four years of deliberation by the U.S. Public
Health Service, all three of Sabin’s vaccine strains were licensed for
general use.
Consequences
The development of polio vaccines ranks as one of the triumphs of
modern medicine. In the early 1950’s, paralytic polio struck 13,500
out of every 100 million Americans. The use of the Salk vaccine
greatly reduced the incidence of polio, but outbreaks of paralytic disease
continued to occur: Fifty-seven hundred cases were reported in
1959 and twenty-five hundred cases in 1960. In 1962, the oral Sabin
vaccine became the vaccine of choice in the United States. Since its
widespread use, the number of paralytic cases in the United States
has dropped precipitously, eventually averaging fewer than ten per
year. Worldwide, the oral vaccine prevented an estimated 5 million
cases of paralytic poliomyelitis between 1970 and 1990.
The oral vaccine is not without problems. Occasionally, the living
virus mutates to a disease-causing (virulent) form as it multiplies in
the vaccinated person. When this occurs, the person may develop
paralytic poliomyelitis. The inactive vaccine, in contrast, cannot
mutate to a virulent form. Ironically, nearly every case of polio
in the United States is caused by the vaccine itself.
In the developing countries of the world, the issue of vaccination is
more pressing. Millions receive neither form of polio vaccine; as a result,
at least 250,000 individuals are paralyzed or die each year. The World
Health Organization and other health providers continue to work toward
the very practical goal of completely eradicating this disease.
Wednesday, October 21, 2009
Pocket calculator
The invention: The first portable and reliable hand-held calculator
capable of performing a wide range of mathematical computations.
The people behind the invention:
Jack St. Clair Kilby (1923- ), the inventor of the
semiconductor microchip
Jerry D. Merryman (1932- ), the first project manager of the
team that invented the first portable calculator
James Van Tassel (1929- ), an inventor and expert on
semiconductor components
An Ancient Dream
In the earliest accounts of civilizations that developed number
systems to perform mathematical calculations, evidence has been
found of efforts to fashion a device that would permit people to perform
these calculations with reduced effort and increased accuracy.
The ancient Babylonians are regarded as the inventors of the first
abacus (or counting board, from the Greek abakos, meaning “board”
or “tablet”). It was originally little more than a row of shallow
grooves with pebbles or bone fragments as counters.
The next step in mechanical calculation did not occur until the
early seventeenth century. John Napier, a Scottish baron and mathematician,
originated the concept of “logarithms” as a mathematical
device to make calculating easier. This concept led to the first slide
rule, created by the English mathematician William Oughtred of
Cambridge. Oughtred’s invention consisted of two identical, circular
logarithmic scales held together and adjusted by hand. The slide
rule made it possible to perform rough but rapid multiplication and
division. Oughtred’s invention in 1623 was paralleled by the work
of a German professor, Wilhelm Schickard, who built a “calculating
clock” the same year. Because the record of Schickard’s work was
lost until 1935, however, the French mathematician Blaise Pascal
was generally thought to have built the first mechanical calculator,
the “Pascaline,” in 1645.
Other versions of mechanical calculators were built in later centuries,
but none was rapid or compact enough to be useful beyond specific
laboratory or mercantile situations. Meanwhile, the dream of
such a machine continued to fascinate scientists and mathematicians.
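The labor-saving idea behind Napier's logarithms, and hence behind Oughtred's slide rule, is easy to illustrate: because log(ab) = log a + log b, a multiplication can be replaced by an addition of logarithms followed by a single table or scale lookup. The minimal sketch below illustrates only that principle in modern notation; it is not a description of any particular historical device.

    import math

    # Multiply two numbers the way a slide rule does: add their logarithms,
    # then convert the sum back with the antilogarithm (10 ** x).
    a, b = 37.0, 214.0
    log_sum = math.log10(a) + math.log10(b)  # sliding the scales adds the logs
    product = 10 ** log_sum                  # reading the result off the other scale

    print(product)  # ~7918.0, matching 37 * 214 up to rounding
    print(a * b)    # 7918.0 for comparison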
The development that made a fast, small calculator possible did
not occur until the middle of the twentieth century, when Jack St.
Clair Kilby of Texas Instruments invented the silicon microchip (or
integrated circuit) in 1958. An integrated circuit is a tiny complex of
electronic components and their connections that is produced in or
on a small slice of semiconductor material such as silicon. Patrick
Haggerty, then president of Texas Instruments, wrote in 1964 that
“integrated electronics” would “remove limitations” that determined
the size of instruments, and he recognized that Kilby’s invention
of the microchip made possible the creation of a portable,
hand-held calculator. He challenged Kilby to put together a team to
design a calculator that would be as powerful as the large, electromechanical
models in use at the time but small enough to fit into a
coat pocket. Working with Jerry D. Merryman and James Van Tassel,
Kilby began to work on the project in October, 1965.
An Amazing Reality
At the outset, there were basically five elements that had to be designed.
These were the logic designs that enabled the machine to
perform the actual calculations, the keyboard or keypad, the power
supply, the readout display, and the outer case. Kilby recalls that
once a particular size for the unit had been determined (something
that could be easily held in the hand), project manager Merryman
was able to develop the initial logic designs in three days. Van Tassel
contributed his experience with semiconductor components to solve
the problems of packaging the integrated circuit. The display required
a thermal printer that would work on a low power source.
The machine also had to include a microencapsulated ink source so
that the paper readouts could be imprinted clearly. Then the paper
had to be advanced for the next calculation. Kilby, Merryman, and
Van Tassel filed for a patent on their work in 1967.
Although this relatively small working prototype of the minicalculator
rendered obsolete the transistor-operated design of the much larger desk calculators, the cost of setting up new production lines
and the need to develop a market made it impractical to begin production
immediately. Instead, Texas Instruments and Canon of Tokyo
formed a joint venture, which led to the introduction of the
Canon Pocketronic Printing Calculator in Japan in April, 1970, and
in the United States that fall. Built entirely of Texas Instruments
parts, this four-function machine with three metal oxide semiconductor (MOS) circuits was similar to the prototype designed in 1967.
The calculator was priced at $400, weighed 740 grams, and measured
101 millimeters wide by 208 millimeters long by 49 millimeters
high. It could perform twelve-digit calculations and worked up
to four decimal places.
In September, 1972, Texas Instruments put the Datamath, its first
commercial hand-held calculator using a single MOS chip, on the
retail market. It weighed 340 grams and measured 75 millimeters
wide by 137 millimeters long by 42 millimeters high. The Datamath
was priced at $120 and included a full-floating decimal point that
could appear anywhere among the numbers on its eight-digit, light-emitting
diode (LED) display. It came with a rechargeable battery
that could also be connected to a standard alternating current (AC)
outlet. The Datamath also had the ability to conserve power while
awaiting the next keyboard entry. Finally, the machine had a built-in
limited amount of memory storage.
Consequences
Prior to 1970, most calculating machines were of such dimensions
that professional mathematicians and engineers were either tied to
their desks or else carried slide rules whenever they had to be away
from their offices. By 1975, Keuffel & Esser, the largest slide rule manufacturer
in the world, was producing its last model, and mechanical
engineers found that problems that had previously taken a week
could now be solved in an hour using the new machines.
That year, the Smithsonian Institution accepted the world’s first
miniature electronic calculator for its permanent collection, noting
that it was the forerunner of more than one hundred million pocket
calculators then in use. By the 1990’s, more than fifty million portable
units were being sold each year in the United States. In general,
the electronic pocket calculator revolutionized the way in which
people related to the world of numbers.
Moreover, the portability of the hand-held calculator made it
ideal for use in remote locations, such as those a petroleum engineer
might have to explore. Its rapidity and reliability made it an indispensable
instrument for construction engineers, architects, and real
estate agents, who could figure the volume of a room and other
building dimensions almost instantly and then produce cost estimates
almost on the spot.
Wednesday, October 14, 2009
Plastic
The invention: The first totally synthetic thermosetting plastic,
which paved the way for modern materials science.
The people behind the invention:
John Wesley Hyatt (1837-1920), an American inventor
Leo Hendrik Baekeland (1863-1944), a Belgian-born chemist,
consultant, and inventor
Christian Friedrich Schönbein (1799-1868), a German chemist
who produced guncotton, the first artificial polymer
Adolf von Baeyer (1835-1917), a German chemist
Exploding Billiard Balls
In the 1860’s, the firm of Phelan and Collender offered a prize of
ten thousand dollars to anyone producing a substance that could
serve as an inexpensive substitute for ivory, which was somewhat
difficult to obtain in large quantities at reasonable prices. Earlier,
Christian Friedrich Schönbein had laid the groundwork for a breakthrough
in the quest for a new material in 1846 by the serendipitous
discovery of nitrocellulose, more commonly known as “guncotton,”
which was produced by the reaction of nitric acid with cotton.
An American inventor, John Wesley Hyatt, while looking for a
substitute for ivory as a material for making billiard balls, discovered
that the addition of camphor to nitrocellulose under certain
conditions led to the formation of a white material that could be
molded and machined. He dubbed this substance “celluloid,” and
this product is now acknowledged as the first synthetic plastic. Celluloid
won the prize for Hyatt, and he promptly set out to exploit his
product. Celluloid was used to make baby rattles, collars, dentures,
and other manufactured goods.
As a billiard ball substitute, however, it was not really adequate,
for various reasons. First, it was thermoplastic—in other words, a material
that softens when heated and can then be easily deformed or
molded. It was thus too soft for billiard ball use. Second, it was
highly flammable, hardly a desirable characteristic. A widely circulated, perhaps apocryphal, story claimed that celluloid billiard balls
detonated when they collided.
Truly Artificial
Since celluloid can be viewed as a derivative of a natural product,
it is not a completely synthetic substance. Leo Hendrik Baekeland
has the distinction of being the first to produce a completely artificial
plastic. Born in Ghent, Belgium, Baekeland emigrated to the
United States in 1889 to pursue applied research, a pursuit not encouraged
in Europe at the time. One area in which Baekeland hoped
to make an inroad was in the development of an artificial shellac.
Shellac at the time was a natural and therefore expensive product,
and there would be a wide market for any reasonably priced substitute.
Baekeland’s research scheme, begun in 1905, focused on finding
a solvent that could dissolve the resinous products from a certain
class of organic chemical reaction.
The particular resins he used had been reported in the mid-
1800’s by the German chemist Adolf von Baeyer. These resins were
produced by the condensation reaction of formaldehyde with a
class of chemicals called “phenols.” Baeyer found that frequently
the major product of such a reaction was a gummy residue that was
virtually impossible to remove from glassware. Baekeland focused
on finding a material that could dissolve these resinous products.
Such a substance would prove to be the shellac substitute he sought.
These efforts proved frustrating, as an adequate solvent for these
resins could not be found. After repeated attempts to dissolve these
residues, Baekeland shifted the orientation of his work. Abandoning
the quest to dissolve the resin, he set about trying to develop a resin
that would be impervious to any solvent, reasoning that such a material
would have useful applications.
Baekeland’s experiments involved the manipulation of phenol-formaldehyde
reactions through precise control of the temperature
and pressure at which the reactions were performed. Many of these
experiments were performed in a 1.5-meter-tall reactor vessel, which
he called a “Bakelizer.” In 1907, these meticulous experiments paid
off when Baekeland opened the reactor to reveal a clear solid that
was heat resistant, nonconducting, and machinable. Experimentation proved that the material could be dyed practically any color in
the manufacturing process, with no effect on the physical properties
of the solid.
Baekeland filed a patent for this new material in 1907. (This patent
was filed one day before that filed by James Swinburne, a British electrical engineer who had developed a similar material in his
quest to produce an insulating material.) Baekeland dubbed his new
creation “Bakelite” and announced its existence to the scientific
community on February 15, 1909, at the annual meeting of the American
Chemical Society. Among its first uses was in the manufacture
of ignition parts for the rapidly growing automobile industry.
Impact
Bakelite proved to be the first of a class of compounds called
“synthetic polymers.” Polymers are long chains of molecules chemically
linked together. There are many natural polymers, such as cotton.
The discovery of synthetic polymers led to vigorous research
into the field and attempts to produce other useful artificial materials.
These efforts met with a fair amount of success; by 1940, a multitude
of new products unlike anything found in nature had been discovered.
These included such items as polystyrene and low-density
polyethylene. In addition, artificial substitutes for natural polymers,
such as rubber, were a goal of polymer chemists. One of the results
of this research was the development of neoprene.
Industries also were interested in developing synthetic polymers
to produce materials that could be used in place of natural fibers
such as cotton. The most dramatic success in this area was achieved
by Du Pont chemist Wallace Carothers, who had also developed
neoprene. Carothers focused his energies on forming a synthetic fiber
similar to silk, resulting in the synthesis of nylon.
Synthetic polymers constitute one branch of a broad area known
as “materials science.” Novel, useful materials produced synthetically
from a variety of natural materials have allowed for tremendous
progress in many areas. Examples of these new materials include
high-temperature superconductors, composites, ceramics, and
plastics. These materials are used to make the structural components
of aircraft, artificial limbs and implants, tennis rackets, garbage
bags, and many other common objects.
Tuesday, October 13, 2009
Photovoltaic cell
The invention: Drawing their energy directly from the Sun, the
first photovoltaic cells powered instruments on early space vehicles
and held out hope for future uses of solar energy.
The people behind the invention:
Daryl M. Chapin (1906-1995), an American physicist
Calvin S. Fuller (1902-1994), an American chemist
Gerald L. Pearson (1905- ), an American physicist
Unlimited Energy Source
Nearly all the energy that the world has at its disposal ultimately comes
from the Sun. Some of this solar energy was trapped millions of years
ago in the form of vegetable and animal matter that became the coal,
oil, and natural gas that the world relies upon for energy. Some of this
fuel is used directly to heat homes and to power factories and gasoline
vehicles. Much of this fossil fuel, however, is burned to produce
the electricity on which modern society depends.
The amount of energy available from the Sun is difficult to imagine,
but some comparisons may be helpful. During each forty-hour
period, the Sun provides the earth with as much energy as the
earth’s total reserves of coal, oil, and natural gas. It has been estimated
that the amount of energy provided by the sun’s radiation
matches the earth’s reserves of nuclear fuel every forty days. The
annual solar radiation that falls on about twelve hundred square
miles of land in Arizona matched the world’s estimated total annual
energy requirement for 1960. Scientists have been searching for
many decades for inexpensive, efficient means of converting this
vast supply of solar radiation directly into electricity.
The Bell Solar Cell
Throughout its history, the Bell System has needed to be able to
transmit, modulate, and amplify electrical signals. Until the 1930’s,
these tasks were accomplished by using insulators and metallic conductors. At that time, semiconductors, which have electrical properties
that are between those of insulators and those of conductors,
were developed. One of the most important semiconductor materials
is silicon, which is one of the most common elements on the
earth. Unfortunately, silicon is usually found in the form of compounds
such as sand or quartz, and it must be refined and purified
before it can be used in electrical circuits. This process required
much initial research, and very pure silicon was not available until
the early 1950’s.
Electric conduction in silicon is the result of the movement of
negative charges (electrons) or positive charges (holes). One way of
accomplishing this is by deliberately adding to the silicon phosphorus
or arsenic atoms, which have five outer electrons. This addition
creates a type of semiconductor that has excess negative charges (an
n-type semiconductor). Adding boron atoms, which have three
outer electrons, creates a semiconductor that has excess positive
charges (a p-type semiconductor). Calvin Fuller made an important
study of the formation of p-n junctions, which are the points at
which p-type and n-type semiconductors meet, by using the process
of diffusing impurity atoms—that is, adding atoms of materials that
would increase the level of positive or negative charges, as described
above. Fuller’s work stimulated interest in using the process
of impurity diffusion to create cells that would turn solar energy
into electricity. Fuller and Gerald Pearson made the first large-area
p-n junction by using the diffusion process. Daryl Chapin,
Fuller, and Pearson made a similar p-n junction very close to the
surface of a silicon crystal, which was then exposed to sunlight.
The cell was constructed by first making an ingot of arsenic-doped
silicon that was then cut into very thin slices. Then a very
thin layer of p-type silicon was formed over the surface of the n-type
wafer, providing a p-n junction close to the surface of the cell. Once
the cell cooled, the p-type layer was removed from the back of the
cell and lead wires were attached to the two surfaces. When light
was absorbed at the p-n junction, electron-hole pairs were produced,
and the electric field that was present at the junction forced
the electrons to the n side and the holes to the p side.
The recombination of the electrons and holes takes place after the
electrons have traveled through the external wires, where they do useful work. Chapin, Fuller, and Pearson announced in 1954 that
the resulting photovoltaic cell was the most efficient (6 percent)
means then available for converting sunlight into electricity.
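The 6 percent figure is a conversion efficiency in the usual sense, which can be written as a simple ratio (a standard definition rather than one stated in the original account):

    \[
    \eta \;=\; \frac{P_{\text{electrical}}}{P_{\text{sunlight}}} \;\approx\; 0.06,
    \]

so a cell receiving 1 watt of sunlight delivered roughly 0.06 watt of electrical power.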
The first experimental use of the silicon solar battery was in amplifiers
for electrical telephone signals in rural areas. An array of 432
silicon cells, capable of supplying 9 watts of power in bright sunlight,
was used to charge a nickel-cadmium storage battery. This, in
turn, powered the amplifier for the telephone signal. The electrical
energy derived from sunlight during the day was sufficient to keep
the storage battery charged for continuous operation. The system
was successfully tested for six months of continuous use in Americus,
Georgia, in 1956. Although it was a technical success, the silicon solar
cell was not ready to compete economically with conventional
means of producing electrical power.
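The Americus figures also give a feel for the scale of the early cells. The sketch below assumes, purely for illustration, that the 9 watts were shared equally among the 432 cells; on that assumption each cell delivered on the order of 20 milliwatts in bright sunlight.

    # Back-of-the-envelope figures for the 1956 Americus, Georgia, array,
    # assuming the 9 W output was shared equally by the 432 cells.
    array_power_w = 9.0
    cell_count = 432
    efficiency = 0.06  # the 6 percent conversion efficiency reported in 1954

    power_per_cell_w = array_power_w / cell_count
    sunlight_per_cell_w = power_per_cell_w / efficiency  # sunlight each cell had to receive

    print(f"Electrical output per cell: {power_per_cell_w * 1000:.1f} mW")    # ~20.8 mW
    print(f"Implied sunlight per cell:  {sunlight_per_cell_w * 1000:.0f} mW") # ~347 mW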
Consequences
One of the immediate applications of the solar cell was to supply
electrical energy for Telstar satellites. These cells are used extensively
on all satellites to generate power. The success of the U.S. satellite program prompted serious suggestions in 1965 for the use of
an orbiting power satellite. A large satellite could be placed into a
synchronous orbit of the earth. It would collect sunlight, convert it
to microwave radiation, and beam the energy to an Earth-based receiving
station. Many technical problems must be solved, however,
before this dream can become a reality.
Solar cells are used in small-scale applications such as power
sources for calculators. Large-scale applications are still not economically
competitive with more traditional means of generating
electric power. The development of the Third World countries, however,
may provide the incentive to search for less-expensive solar
cells that can be used, for example, to provide energy in remote villages.
As the standards of living in such areas improve, the need for
electric power will grow. Solar cells may be able to provide the necessary
energy while safeguarding the environment for future generations.
Monday, October 12, 2009
Photoelectric cell
The invention: The first devices to make practical use of the photoelectric
effect, photoelectric cells were of decisive importance in
the electron theory of metals.
The people behind the invention:
Julius Elster (1854-1920), a German experimental physicist
Hans Friedrich Geitel (1855-1923), a German physicist
Wilhelm Hallwachs (1859-1922), a German physicist
Early Photoelectric Cells
The photoelectric effect was known to science in the early
nineteenth century when the French physicist Alexandre-Edmond
Becquerel wrote of it in connection with his work on glass-enclosed
primary batteries. He discovered that the voltage of his batteries increased
with intensified illumination and that green light produced
the highest voltage. Since Becquerel researched batteries exclusively,
however, the liquid-type photocell was not discovered until
1929, when the Wein and Arcturus cells were introduced commercially.
These cells were miniature voltaic cells arranged so that light
falling on one side of the front plate generated a considerable
amount of electrical energy. The cells had short lives, unfortunately;
when subjected to cold, the electrolyte froze, and when subjected to
heat, the gas generated would expand and explode the cells.
What came to be known as the photoelectric cell, a device connecting
light and electricity, had its beginnings in the 1880’s. At
that time, scientists noticed that a negatively charged metal plate
lost its charge much more quickly in the light (especially ultraviolet
light) than in the dark. Several years later, researchers demonstrated
that this phenomenon was not an “ionization” effect arising from
the air’s increased conductivity, since the phenomenon
took place in a vacuum but did not take place if the plate were positively
charged. Instead, the phenomenon had to be attributed to
the light that excited the electrons of the metal and caused them to
fly off: A neutral plate even acquired a slight positive charge under the influence of strong light. Study of this effect not only contributed
evidence to an electronic theory of matter—and, as a result of
some brilliant mathematical work by the physicist Albert Einstein,
later increased knowledge of the nature of radiant energy—but
also further linked the studies of light and electricity. It even explained
certain chemical phenomena, such as the process of photography.
It is important to note that all the experimental work on
photoelectricity accomplished prior to the work of Julius Elster
and Hans Friedrich Geitel was carried out before the existence of
the electron was known.
Explaining Photoelectric Emission
After the English physicist Sir Joseph John Thomson’s discovery
of the electron in 1897, investigators soon realized that the photoelectric
effect was caused by the emission of electrons under the influence
of radiation. The fundamental theory of photoelectric emission
was put forward by Einstein in 1905 on the basis of the German
physicist Max Planck’s quantum theory (1900). Thus, it was not surprising
that light was found to have an electronic effect. Since it was
known that the longer radio waves could shake electrons into resonant
oscillations and the shorter X rays could detach electrons from
the atoms of gases, the intermediate waves of visual light would
have been expected to have some effect upon electrons—such as detaching
them from metal plates and therefore setting up a difference
of potential. The photoelectric cell, developed by Elster and Geitel
in 1904, was a practical device that made use of this effect.
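Einstein's 1905 account can be summarized in one relation (standard physics, stated here for reference rather than quoted from the original): a light quantum of frequency ν carries energy hν, and an electron leaves the metal with at most

    \[
    E_{\max} \;=\; h\nu \;-\; W,
    \]

where h is Planck's constant and W is the work function, the minimum energy needed to free an electron from that particular metal. Light with frequency below W/h releases no electrons no matter how intense it is, which is exactly the behavior that a purely wave picture of light could not explain.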
In 1888, Wilhelm Hallwachs observed that an electrically charged
zinc electrode loses its charge when exposed to ultraviolet radiation
if the charge is negative, but is able to retain a positive charge under
the same conditions. The following year, Elster and Geitel discovered
a photoelectric effect caused by visible light; however, they
used the alkali metals potassium and sodium for their experiments
instead of zinc.
The Elster-Geitel photocell (a vacuum emission cell, as opposed to
a gas-filled cell) consisted of an evacuated glass bulb containing two
electrodes. The cathode consisted of a thin film of a rare, chemically
active metal (such as potassium) that lost its electrons fairly readily; the anode was simply a wire sealed in to complete the circuit. This anode
was maintained at a positive potential in order to collect the negative
charges released by light from the cathode. The Elster-Geitel
photocell resembled two other types of vacuum tubes in existence at
the time: the cathode-ray tube, in which the cathode emitted electrons
under the influence of a high potential, and the thermionic
valve (a valve that permits the passage of current in one direction only), in which the cathode emitted electrons under the influence of heat. Like
both of these vacuum tubes, the photoelectric cell could be classified
as an “electronic” device.
The new cell, then, emitted electrons when stimulated by light, and
at a rate proportional to the intensity of the light. Hence, a current
could be obtained from the cell. Yet Elster and Geitel found that their
photoelectric currents fell off gradually; they therefore spoke of “fatigue”
(instability). It was discovered later that most of this change was
not a direct effect of a photoelectric current’s passage; it was not even
an indirect effect but was caused by oxidation of the cathode by the air.
Since all modern cathodes are enclosed in sealed vessels, that source of
change has been completely abolished. Nevertheless, the changes that
persist in modern cathodes often are indirect effects of light that can be
produced independently of any photoelectric current.
Impact
The Elster-Geitel design was, for some twenty years, the basis of
all emission cells adapted for the visible spectrum, and throughout
the twentieth century, the photoelectric cell has had a wide variety
of applications in numerous fields. For example, if products leaving
a factory on a conveyor belt were passed between a light and a cell,
they could be counted as they interrupted the beam. Persons entering
a building could be counted also, and if invisible ultraviolet rays
were used, those persons could be detected without their knowledge.
Simple relay circuits could be arranged that would automatically
switch on street lamps when it grew dark. The sensitivity of
the cell with an amplifying circuit enabled it to “see” objects too
faint for the human eye, such as minor stars or certain lines in the
spectra of elements excited by a flame or discharge. The fact that the
current depended on the intensity of the light made it possible to
construct photoelectric meters that could judge the strength of illumination
without risking human error—for example, to determine
the right exposure for a photograph.
A further use for the cell was to make talking films possible. The
early “talkies” had depended on gramophone records, but it was very
difficult to keep the records in time with the film. Now, the waves of
speech and music could be recorded in a “sound track” by turning the sound first into current through a microphone and then into light with
a neon tube or magnetic shutter; next, the variations in the intensity of
this light on the side of the film were photographed. By reversing the
process and running the film between a light and a photoelectric cell,
the visual signals could be converted back to sound.
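A minimal sketch of that optical sound-track idea in modern terms: the sound amplitude modulates light intensity, and the photocell current, proportional to the intensity falling on it, recovers the waveform. The scaling and the proportionality constant below are illustrative assumptions, not a description of any historical apparatus.

    import math

    # Encode a tone as a light intensity, then "read" it back through a photocell
    # whose current is proportional to the light intensity it receives.
    SAMPLES = 16
    tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(SAMPLES)]  # 440 Hz at 8 kHz

    # Recording: shift and scale the waveform into a non-negative intensity (0..1).
    intensity = [(s + 1.0) / 2.0 for s in tone]

    # Playback: photocell current is proportional to intensity; undo the scaling.
    K = 1.0  # proportionality constant of the (hypothetical) cell, arbitrary units
    current = [K * i for i in intensity]
    recovered = [2.0 * c / K - 1.0 for c in current]

    assert all(abs(a - b) < 1e-9 for a, b in zip(tone, recovered))
    print("waveform recovered from light intensity")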