Sunday, October 21, 2012
Solar thermal engine
The invention:
The first commercially practical plant for generating
electricity from solar energy.
The people behind the invention:
Frank Shuman (1862-1918), an American inventor
John Ericsson (1803-1889), an American engineer
Augustin Mouchout (1825-1911), a French physics professor
Power from the Sun
According to tradition, the Greek scholar Archimedes used
reflective mirrors to concentrate the rays of the Sun and set afire
the ships of an attacking Roman fleet in 212 B.C.E. The story illustrates
the long tradition of using mirrors to concentrate solar energy
from a large area onto a small one, producing very high
temperatures.
With the backing of Napoleon III, the Frenchman Augustin
Mouchout built, between 1864 and 1872, several steam engines
that were powered by the Sun. Mirrors concentrated the sun’s rays
to a point, producing a temperature that would boil water. The
steam drove an engine that operated a water pump. The largest engine
had a cone-shaped collector, or “axicon,” lined with silver-plated
metal. The French government operated the engine for six
months but decided it was too expensive to be practical.
John Ericsson, the American famous for designing and building
the Civil War ironclad ship Monitor, built seven steam-driven
solar engines between 1871 and 1878. In Ericsson’s design,
rays were focused onto a line rather than a point. Long mirrors,
curved into a parabolic shape, tracked the Sun. The rays were focused
onto a water-filled tube mounted above the reflectors to
produce steam. The engineer’s largest engine, which used an 11- ×
16-foot trough-shaped mirror, delivered nearly 2 horsepower. Because
his solar engines were ten times more expensive than conventional
steam engines, Ericsson converted them to run on coal to
avoid financial loss.
Frank Shuman, a well-known inventor in Philadelphia, Pennsylvania,
entered the field of solar energy in 1906. The self-taught engineer
believed that curved, movable mirrors were too expensive. His
first large solar engine was a hot-box, or flat-plate, collector. It lay
flat on the ground and had blackened pipes filled with a liquid that
had a low boiling point. The solar-heated vapor ran a 3.5-horsepower
engine.
Shuman’s wealthy investors formed the Sun Power Company to
develop and construct the largest solar plant ever built. The site chosen
was in Egypt, but the plant was built near Shuman’s home for
testing before it was sent to Egypt.
When the inventor added ordinary flat mirrors to reflect more
sunlight into each collector, he doubled the heat production of the
collectors. The 572 trough-type collectors were assembled in twentysix
rows. Water was piped through the troughs and converted to
steam. A condenser converted the steam to water, which reentered
the collectors. The engine pumped 3,000 gallons of water per minute
and produced 14 horsepower; performance was expected to
improve 25 percent in the sunny climate of Egypt.
British investors requested that Professor C. V. Boys review the
solar plant before it was shipped to Egypt. Boys pointed out that the
bottom of each collector was not receiving any direct solar energy;
in fact, heat was being lost through the bottom. He suggested that
each row of flat mirrors be replaced by a single parabolic reflector,
and Shuman agreed. Shuman thought Boys’s idea was original, but
he later realized it was based on Ericsson’s design.
The company finally constructed the improved plant in Meadi,
Egypt, a farming district on the Nile River. Five solar collectors,
spaced 25 feet apart, were built in a north-south line. Each was
about 200 feet long and 10 feet wide. Trough-shaped reflectors were
made of mirrors held in place by brass springs that expanded
and contracted with changing temperatures. The parabolic mirrors
shifted automatically so that the rays were always focused on the
boiler. Inside the 15-inch boiler that ran down the middle of the collector,
water was heated and converted to steam. The engine produced
more than 55 horsepower, which was enough to pump 6,000
gallons of water per minute.
The purchase price of Shuman’s solar plant was twice as high as
that of a coal-fired plant, but its operating costs were far lower. In
Egypt, where coal was expensive, the entire purchase price would
be recouped in four years. Afterward, the plant would operate for
practically nothing. The first practical solar engine was now in operation,
providing enough energy to drive a large-scale irrigation system
in the floodplain of the Nile River.
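The four-year figure is straightforward payback arithmetic. The text gives ratios rather than prices, so the numbers in the sketch below are placeholder assumptions chosen only to show how the claim fits together.

```python
# Back-of-the-envelope payback estimate for the Meadi plant. All figures are
# assumed placeholders, not historical prices; only the ratios matter: the
# solar plant costs twice as much to buy but avoids the yearly coal bill.
coal_plant_price = 40_000      # assumed purchase price of a coal-fired plant
solar_plant_price = 2 * coal_plant_price
annual_coal_cost = 20_000      # assumed yearly fuel bill where coal is expensive
annual_solar_cost = 0          # "the plant would operate for practically nothing"

payback_years = solar_plant_price / (annual_coal_cost - annual_solar_cost)
print(payback_years)           # 4.0 with these placeholder numbers
```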
By 1914, Shuman’s work was enthusiastically supported, and solar
plants were planned for India and Africa. Shuman hoped to
build 20,000 reflectors in the Sahara Desert and generate energy
equal to all the coal mined in one year, but the outbreak of World
War I ended his dreams of large-scale solar developments. The
Meadi project was abandoned in 1915, and Shuman died before the
war ended. Powerful nations lost interest in solar power and began
to replace coal with oil. Rich oil reserves were discovered in many
desert zones that were ideal locations for solar power.
Impact
Although World War I ended Frank Shuman’s career, his breakthrough
proved to the world that solar power held great promise for
the future. His ideas were revived in 1957, when the Soviet Union
planned a huge solar project for Siberia. A large boiler was fixed on
a platform 140 feet high. Parabolic mirrors, mounted on 1,300 railroad
cars, revolved on circular tracks to focus light on the boiler. The
full-scale model was never built, but the design inspired the solar
power tower.
In the Mojave Desert near Barstow, California, an experimental
power tower, Solar One, began operation in 1982. The system collects
solar energy to deliver steam to turbines that produce electric
power. The 30-story tower is surrounded by more than 1,800 mirrors
that adjust continually to track the Sun. Solar One generates
about 10 megawatts of electric power, enough for 5,000 people.
Solar One was expensive, but future power towers will generate
electricity as cheaply as fossil fuels can. If the costs of the air and
water pollution caused by coal burning were considered, solar power
plants would already be recognized as cost effective. Meanwhile,
Frank Shuman’s success in establishing and operating a thoroughly
practical large-scale solar engine continues to inspire research and
development.
See also: Compressed-air-accumulating power plant; Fuel cell; Geothermal power; Nuclear power plant; Photoelectric cell; Photovoltaic cell; Solar power.
Wednesday, October 10, 2012
Silicones
The invention:
Synthetic polymers characterized by lubricity, extreme
water repellency, thermal stability, and inertness that are
widely used in lubricants, protective coatings, paints, adhesives,
electrical insulation, and prosthetic replacements for body parts.
The people behind the invention:
Eugene G. Rochow (1909-2002), an American research chemist
Frederic Stanley Kipping (1863-1949), an English chemist and
professor
James Franklin Hyde (1903- ), an American organic chemist
Synthesizing Silicones
Frederic Stanley Kipping, in the first four decades of the twentieth
century, made an extensive study of the organic (carbon-based)
chemistry of the element silicon. He had a distinguished academic
career and summarized his silicon work in a lecture in 1937 (“Organic
Derivatives of Silicon”). Since Kipping did not have available
any naturally occurring compounds with chemical bonds between
carbon and silicon atoms (organosilicon compounds), it was necessary
for him to find methods of establishing such bonds. The basic
method involved replacing atoms in naturally occurring silicon
compounds with carbon atoms from organic compounds.
While Kipping was probably the first to prepare a silicone and was
certainly the first to use the term silicone, he did not pursue the commercial
possibilities of silicones. Yet his careful experimental work was
a valuable starting point for all subsequent workers in organosilicon
chemistry, including those who later developed the silicone industry.
On May 10, 1940, chemist Eugene G. Rochow of the General
Electric (GE) Company’s corporate research laboratory in
Schenectady, New York, discovered that methyl chloride gas,
passed over a heated mixture of elemental silicon and copper, reacted
to form compounds with silicon-carbon bonds. Kipping
had shown that these silicon compounds react with water to form
silicones.
The importance of Rochow’s discovery was that it opened the
way to a continuous process that did not consume expensive metals,
such as magnesium, or flammable ether solvents, such as those
used by Kipping and other researchers. The copper acts as a catalyst,
and the desired silicon compounds are formed with only minor
quantities of by-products. This “direct synthesis,” as it came to be
called, is now done commercially on a large scale.
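Written schematically, the direct synthesis and the hydrolysis step mentioned above are usually summarized as follows (methyl chloride shown; these are the idealized textbook equations, not a claim about Rochow's exact conditions):

\[ \mathrm{Si} + 2\,\mathrm{CH_3Cl} \;\xrightarrow{\ \mathrm{Cu},\ \text{heat}\ }\; (\mathrm{CH_3})_2\mathrm{SiCl_2} \]
\[ n\,(\mathrm{CH_3})_2\mathrm{SiCl_2} + n\,\mathrm{H_2O} \;\longrightarrow\; [\,(\mathrm{CH_3})_2\mathrm{SiO}\,]_n + 2n\,\mathrm{HCl} \]

Other methylchlorosilanes form in smaller amounts, which is what is meant above by "only minor quantities of by-products."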
Silicone Structure
Silicones are examples of what chemists call polymers. Basically, a
polymer is a large molecule made up of many smaller molecules
that are linked together. At the molecular level, silicones consist of
long, repeating chains of atoms. In this molecular characteristic, silicones
resemble plastics and rubber.
Silicone molecules have a chain composed of alternate silicon and
oxygen atoms. Each silicon atom bears two organic groups as substituents,
while the oxygen atoms serve to link the silicon atoms into a
chain. The silicon-oxygen backbone of the silicones is responsible for
their unique and useful properties, such as the ability of a silicone oil
to remain liquid over an extremely broad temperature range and to
resist oxidative and thermal breakdown at high temperatures.
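Drawn out, the backbone described here is the repeating unit below, shown with methyl groups as the organic substituents (the most common case; other organic groups give other silicones):

\[ \cdots - \mathrm{Si(CH_3)_2} - \mathrm{O} - \mathrm{Si(CH_3)_2} - \mathrm{O} - \cdots \;=\; [\,\mathrm{R_2SiO}\,]_n \quad\text{with } \mathrm{R} = \mathrm{CH_3}. \]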
A fundamental scientific consideration with silicone, as with any
polymer, is to obtain the desired physical and chemical properties in
a product by closely controlling its chemical structure and molecular
weight. Oily silicones with thousands of alternating silicon and
oxygen atoms have been prepared. The average length of the molecular
chain determines the flow characteristics (viscosity) of the oil.
In samples with very long chains, rubber-like elasticity can be
achieved by cross-linking the silicone chains in a controlled manner
and adding a filler such as silica. High degrees of cross-linking
could produce a hard, intractable material instead of rubber.
The action of water on the compounds produced from Rochow’s
direct synthesis is a rapid method of obtaining silicones, but does
not provide much control of the molecular weight. Further development
work at GE and at the Dow-Corning company showed that
the best procedure for controlled formation of silicone polymers involved
treating the crude silicones with acid to produce a mixture
from which high yields of an intermediate called “D4” could be obtained
by distillation. The intermediate D4 could be polymerized in
a controlled manner by use of acidic or basic catalysts. Wilton I.
Patnode of GE and James F. Hyde of Dow-Corning made important
advances in this area. Hyde’s discovery of the use of traces of potassium
hydroxide as a polymerization catalyst for D4 made possible
the manufacture of silicone rubber, which is the most commercially
valuable of all the silicones.
Impact
Although Kipping’s discovery and naming of the silicones occurred
from 1901 to 1904, the practical use and impact of silicones
started in 1940, with Rochow’s discovery of direct synthesis.
Production of silicones in the United States came rapidly enough
to permit them to have some influence on military supplies for
World War II (1939-1945). In aircraft communication equipment, extensive
waterproofing of parts by silicones resulted in greater reliability
of the radios under tropical conditions of humidity, where
condensing water could be destructive. Silicone rubber, because
of its ability to withstand heat, was used in gaskets under high-temperature
conditions, in searchlights, and in the engines on B-29
bombers. Silicone grease applied to aircraft engines also helped to
protect spark plugs from moisture and promote easier starting.
After World War II, the uses for silicones multiplied. Silicone rubber
appeared in many products from caulking compounds to wire insulation
to breast implants for cosmetic surgery. Silicone rubber boots were
used on the moon walks where ordinary rubber would have failed.
Silicones in their present form owe much to years of patient developmental
work in industrial laboratories. Basic research, such as
that conducted by Kipping and others, served to point the way and
catalyzed the process of commercialization.
Eugene G. Rochow
Eugene George Rochow was born in 1909 and grew up in the
rural New Jersey town of Maplewood. There his father, who
worked in the tanning industry, and his big brother maintained
a small attic laboratory. They experimented with electricity, radio—Eugene put together his own crystal set in an oatmeal
box—and chemistry.
Rochow followed his brother to Cornell University in 1927.
The Great Depression began during his junior year, and although
he had to take jobs as lecture assistant to get by, he managed
to earn his bachelor’s degree in chemistry in 1931 and his
doctorate in 1935. Luck came his way in the extremely tight job
market: General Electric (GE) hired him for his expertise in inorganic chemistry.
In 1938 the automobile industry, among other manufacturers,
had a growing need for high-temperature-resistant insulators.
Scientists at Corning were convinced that silicone would
have the best properties for the purpose, but they could not find
a way to synthesize it cheaply and in large volume. When word
about their ideas got back to Rochow at GE, he reasoned that a
flexible silicone able to withstand temperatures of 200 to 300 degrees
Celsius could be made by bonding silicon to carbon. His
research succeeded in producing methyl silicone in 1939, and
he devised a way to make it cheaply in 1941. It was the first
commercially practical silicone. His process is still used.
After World War II GE asked him to work on an effort to
make aircraft carriers nuclear powered. However, Rochow was
a Quaker and pacifist, and he refused. Instead, he moved to
Harvard University as a chemistry professor in 1948 and remained
there until his retirement in 1970. As the founder of a
new branch of industrial chemistry, he received most of the discipline’s
awards and medals, including the Perkin Award, and
honorary doctorates.
See also: Buna rubber; Neoprene; Nylon; Plastic; Polyethylene.
Wednesday, October 3, 2012
Scanning tunneling microscope
The invention:
A major advance on the field ion microscope, the
scanning tunneling microscope has pointed toward new directions
in the visualization and control of matter at the atomic
level.
The people behind the invention:
Gerd Binnig (1947- ), a West German physicist who was a
cowinner of the 1986 Nobel Prize in Physics
Heinrich Rohrer (1933- ), a Swiss physicist who was a
cowinner of the 1986 Nobel Prize in Physics
Ernst Ruska (1906-1988), a West German engineer who was a
cowinner of the 1986 Nobel Prize in Physics
Antoni van Leeuwenhoek (1632-1723), a Dutch naturalist
The Limit of Light
The field of microscopy began at the end of the seventeenth century,
when Antoni van Leeuwenhoek developed the first optical microscope.
In this type of microscope, a magnified image of a sample
is obtained by directing light onto it and then taking the light
through a lens system. Van Leeuwenhoek’s microscope allowed
him to observe the existence of life on a scale that is invisible to the
naked eye. Since then, developments in the optical microscope have
revealed the existence of single cells, pathogenic agents, and bacteria.
There is a limit, however, to the resolving power of optical microscopes.
Known as “Abbe’s barrier,” after the German physicist and
lens maker Ernst Abbe, this limit means that objects smaller than
about 400 nanometers (about four ten-thousandths of a millimeter) cannot be
viewed by conventional microscopes.
In 1925, the physicist Louis de Broglie predicted that electrons
would exhibit wave behavior as well as particle behavior. This prediction
was confirmed by Clinton J. Davisson and Lester H. Germer
of Bell Telephone Laboratories in 1927. It was found that high-energy
electrons have shorter wavelengths than low-energy electrons
and that electrons with sufficient energies exhibit wavelengths
comparable to the diameter of the atom. In 1927, Hans
Busch showed in a mathematical analysis that current-carrying
coils behave like electron lenses and that they obey the same lens
equation that governs optical lenses. Using these findings, Ernst
Ruska developed the electron microscope in the early 1930’s.
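The relation behind these statements is de Broglie's formula. For an electron accelerated through a potential difference V, the nonrelativistic form is

\[ \lambda = \frac{h}{p} = \frac{h}{\sqrt{2\,m_e\,e\,V}} \approx \frac{1.23\ \text{nm}}{\sqrt{V/\text{volt}}}, \]

so a 100-volt electron has a wavelength of about 0.12 nanometer, roughly an atomic diameter, while the much higher voltages used in electron microscopes give wavelengths far smaller still.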
By 1944, the German corporation of Siemens and Halske had
manufactured electron microscopes with a resolution of 7 nanometers;
modern instruments are capable of resolving objects as
small as 0.5 nanometer. This development made it possible to view
structures as small as a few atoms across as well as large atoms and
large molecules.
The electron beam used in this type of microscope limits the usefulness
of the device. First, to avoid the scattering of the electrons,
the samples must be put in a vacuum, which limits the applicability
of the microscope to samples that can sustain such an environment.
Most important, some fragile samples, such as organic molecules,
are inevitably destroyed by the high-energy beams required for
high resolutions.
Viewing Atoms
From 1936 to 1955, Erwin Wilhelm Müller developed the field ion
microscope (FIM), which used an extremely sharp needle to hold the
sample. This was the first microscope to make possible the direct
viewing of atomic structures, but it was limited to samples capable of
sustaining the high electric fields necessary for its operation.
In the early 1970’s, Russell D. Young and Clayton Teague of the
National Bureau of Standards (NBS) developed the “topografiner,”
a new kind of FIM. In this microscope, the sample is placed at a large
distance from the tip of the needle. The tip is scanned across the surface
of the sample with a precision of about a nanometer. The precision
in the three-dimensional motion of the tip was obtained by using
three legs made of piezoelectric crystals. These materials change
shape in a reproducible manner when subjected to a voltage. The
extent of expansion or contraction of the crystal depends on the
amount of voltage that is applied. Thus, the operator can control the
motion of the probe by varying the voltage acting on the three legs.
The resolution of the topografiner is limited by the size of the probe.
The idea for the scanning tunneling microscope (STM) arose
when Heinrich Rohrer of the International Business Machines (IBM)
Corporation’s Zurich research laboratory met Gerd Binnig in Frankfurt
in 1978. The STM is very similar to the topografiner. In the STM,
however, the tip is kept at a height of less than a nanometer away
from the surface, and the voltage that is applied between the specimen
and the probe is low. Under these conditions, the electron
cloud of atoms at the end of the tip overlaps with the electron cloud
of atoms at the surface of the specimen. This overlapping results in a
measurable electrical current flowing through the vacuum or insulating
material existing between the tip and the sample. When the
probe is moved across the surface and the voltage between the
probe and sample is kept constant, the change in the distance between
the probe and the surface (caused by surface irregularities)
results in a change of the tunneling current.
Two methods are used to translate these changes into an image of
the surface. The first method involves changing the height of the
probe to keep the tunneling current constant; the voltage used to
change the height is translated by a computer into an image of the
surface. The second method scans the probe at a constant height
away from the sample; the voltage across the probe and sample is
changed to keep the tunneling current constant. These changes in
voltage are translated into the image of the surface. The main limitation
of the technique is that it is applicable only to conducting samples
or to samples with some surface treatment.
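A minimal numerical sketch may make the first (constant-current) mode concrete. The surface profile, feedback gain, and decay constant below are invented for illustration; the exponential fall-off of tunneling current with the tip-sample gap is the standard textbook approximation rather than a detail taken from the text above.

```python
import math

def tunneling_current(gap_nm: float, i0: float = 1.0, kappa_per_nm: float = 10.0) -> float:
    """Tunneling current falls off roughly exponentially with the gap."""
    return i0 * math.exp(-kappa_per_nm * gap_nm)

def surface_height(x_nm: float) -> float:
    """Toy corrugated surface standing in for atomic-scale roughness (in nm)."""
    return 0.05 * math.sin(2 * math.pi * x_nm / 0.3)

def constant_current_scan(setpoint: float = 0.01, gain: float = 0.05, steps: int = 300):
    tip_height = 1.0                   # nm above the nominal surface plane
    image = []
    for step in range(steps):
        x = step * 0.01                # lateral position in nm
        gap = tip_height - surface_height(x)
        current = tunneling_current(gap)
        # Feedback: raise the tip when the current is too high, lower it when
        # too low, so the current stays near the setpoint.
        tip_height += gain * math.log(current / setpoint)
        image.append((x, tip_height))  # the recorded heights form the image
    return image

print(constant_current_scan()[:3])
```

The list of recorded tip heights is, in effect, the topographic image; the second mode instead holds the height fixed and records the voltage adjustments needed to keep the current constant.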
Consequences
In October, 1989, the STM was successfully used in the manipulation
of matter at the atomic level. By letting the probe sink into the
surface of a metal-oxide crystal, researchers at Rutgers University
were able to dig a square hole about 250 atoms across and 10 atoms
deep. A more impressive feat was reported in the April 5, 1990, issue
of Nature: Donald M. Eigler and Erhard K. Schweizer of IBM’s Almaden Research
Center spelled out their employer’s three-letter acronym using
thirty-five atoms of xenon. This ability to move and place individual
atoms precisely raises several possibilities, which include the
creation of custom-made molecules, atomic-scale data storage, and
ultrasmall electrical logic circuits.
The success of the STM has led to the development of several
new microscopes that are designed to study other features of sample
surfaces. Although they all use the scanning probe technique to
make measurements, they use different techniques for the actual detection.
The most popular of these new devices is the atomic force
microscope (AFM). This device measures the tiny electric forces that
exist between the electrons of the probe and the electrons of the
sample without the need for electron flow, which makes the technique
particularly useful in imaging nonconducting surfaces. Other
scanned probe microscopes use physical properties such as temperature
and magnetism to probe the surfaces.
Gerd Binnig and Heinrich Rohrer
Both Gerd Binnig and Heinrich Rohrer believe an early and
pleasurable introduction to teamwork led to their later success
in inventing the scanning tunneling microscope, for which they
shared the 1986 Nobel Prize in Physics with Ernst Ruska.
Binnig was born in Frankfurt, Germany, in 1947. He acquired
an early interest in physics but was always deeply influenced
by classical music, introduced to him by his mother, and
the rock music that his younger brother played for him. Binnig
played in rock bands as a teenager and learned to enjoy the creative
interplay of teamwork. At J. W. Goethe University in
Frankfurt he earned a bachelor’s degree (1973) and doctorate
(1978) in physics and then took a position at International Business
Machines’ Zurich Research Laboratory. There he recaptured
the pleasures of working with a talented team after joining
Rohrer in research.
Rohrer had been at the Zurich facility since just after it
opened in 1963. He was born in Buch, Switzerland, in 1933, and
educated at the Swiss Federal Institute of Technology in Zurich,
where he completed his doctorate in 1960. After post-doctoral
work at Rutgers University, he joined the IBM research team, a
time that he describes as among the most enjoyable passages of
his career.
In addition to the Nobel Prize, the pair also received the German
Physics Prize, Otto Klung Prize, Hewlett Packard Prize,
and King Faisal Prize. Rohrer became an IBM Fellow in 1986
and was selected to manage the physical sciences department at
the Zurich Research Laboratory. He retired from IBM in July
1997. Binnig became an IBM Fellow in 1987.
See also: Electron microscope; Mass spectrograph; Neutrino detector.
Saturday, September 29, 2012
Salvarsan
The invention:
The first successful chemotherapeutic for the treatment
of syphilis
The people behind the invention:
Paul Ehrlich (1854-1915), a German research physician and
chemist
Wilhelm von Waldeyer (1836-1921), a German anatomist
Friedrich von Frerichs (1819-1885), a German physician and
professor
Sahachiro Hata (1872-1938), a Japanese physician and
bacteriologist
Fritz Schaudinn (1871-1906), a German zoologist
The Great Pox
The ravages of syphilis on humankind are seldom discussed
openly. A disease that struck all varieties of people and was transmitted
by direct and usually sexual contact, syphilis was both
feared and reviled. Many segments of society across all national
boundaries were secure in their belief that syphilis was divine punishment
of the wicked for their evil ways.
It was not until 1903 that bacteriologists Élie Metchnikoff and
Pierre-Paul-Émile Roux demonstrated the transmittal of syphilis to
apes, ending the long-held belief that syphilis was exclusively a human
disease. The disease destroyed families, careers, and lives,
driving its infected victims mad, destroying the brain, or destroying
the cardiovascular system. It was methodical and slow, but in every
case, it killed with singular precision. There was no hope of a safe
and effective cure prior to the discovery of Salvarsan.
Prior to 1910, conventional treatment consisted principally of
mercury or, later, potassium iodide. Mercury, however, administered
in large doses, led to severe ulcerations of the tongue, jaws,
and palate. Swelling of the gums and loosening of the teeth resulted.
Dribbling saliva and the attending fetid odor also occurred. These
side effects of mercury treatment were so severe that many preferred
to suffer the disease to the end rather than undergo the standard
cure. About 1906, Metchnikoff and Roux demonstrated that
mercurial ointments, applied very early, at the first appearance of
the primary lesion, were effective.
Once the spirochete-type bacteria invaded the bloodstream and
tissues, the infected person experienced symptoms of varying nature
and degree—high fever, intense headaches, and excruciating
pain. The patient’s skin often erupted in pustular lesions similar in
appearance to smallpox. It was the distinguishing feature of these
pustular lesions that gave syphilis its other name: the “Great Pox.”
Death brought the only relief then available.
Poison Dyes
Paul Ehrlich became fascinated by the reactions of dyes with biological
cells and tissues while a student at the University of Strasbourg
under Wilhelm von Waldeyer. It was von Waldeyer who
sparked Ehrlich’s interest in the chemical viewpoint of medicine.
Thus, as a student, Ehrlich spent hours at this laboratory experimenting
with different dyes on various tissues. In 1878, he published
a book that detailed the discriminate staining of cells and cellular
components by various dyes.
Ehrlich joined Friedrich von Frerichs at the Charité Hospital in
Berlin, where Frerichs allowed Ehrlich to do as much research as he
wanted. Ehrlich began studying atoxyl in 1908, the year he won
jointly with Metchnikoff the Nobel Prize in Physiology or Medicine
for his work on immunity. Atoxyl was effective against trypanosomes—
parasites responsible for a variety of infections, notably
sleeping sickness—but also imposed serious side effects upon the
patient, not the least of which was blindness. It was Ehrlich’s study
of atoxyl, and several hundred derivatives sought as alternatives to
atoxyl in trypanosome treatment, that led to the development of derivative
606 (Salvarsan). Although compound 606 was the first
chemotherapeutic to be used effectively against syphilis, it was discontinued
as an atoxyl alternative and shelved as useless for five
years.
The discovery and development of compound 606 was enhanced
by two critical events. First, the Germans Fritz Schaudinn and Erich
Hoffmann discovered that syphilis is a bacterially caused disease.
The causative microorganism is a spirochete so frail and gossameric
in substance that it is nearly impossible to detect by casual microscopic
examination; Schaudinn chanced upon it one day in March,
1905. This discovery led, in turn, to German bacteriologist August
von Wassermann’s development of the now famous test for syphilis:
the Wassermann test. Second, a Japanese bacteriologist, Sahachiro
Hata, came to Frankfurt in 1909 to study syphilis with
Ehrlich. Hata had studied syphilis in rabbits in Japan. Hata’s assignment
was to test every atoxyl derivative ever developed under
Ehrlich for its efficacy in syphilis treatment. After hundreds of tests
and clinical trials, Ehrlich and Hata announced Salvarsan as a
“magic bullet” that could cure syphilis, at the April, 1910, Congress
of Internal Medicine in Wiesbaden, Germany.
The announcement was electrifying. The remedy was immediately
and widely sought, but it was not without its problems. A few deaths
resulted from its use, and it was not safe for treatment of the gravely ill.
Some of the difficulties inherent in Salvarsan were overcome by the development
of neosalvarsan in 1912 and sodium salvarsan in 1913. Although
Ehrlich achieved much, he fell short of his own assigned goal, a
chemotherapeutic that would cure in one injection.
Impact
The significance of the development of Salvarsan as an antisyphilitic
chemotherapeutic agent cannot be overstated. Syphilis at
that time was as frightening and horrifying as leprosy and was a virtual
sentence of slow, torturous death. Salvarsan was such a significant
development that Ehrlich was recommended for a 1912 and
1913 Nobel Prize for his work in chemotherapy.
It was several decades before any further significant advances in
“wonder drugs” occurred, namely, the discovery of prontosil in 1932
and its first clinical use in 1935. On the heels of prontosil—a sulfa
drug—came other sulfa drugs. The sulfa drugs would remain supreme
in the fight against bacterial infection until the antibiotics, the
first being penicillin, were discovered in 1928; however, they were
not clinically recognized until World War II (1939-1945). With the discovery
of streptomycin in 1943 and Aureomycin in 1944, the assault
against bacteria was finally on a sound basis. Medicine possessed an
arsenal with which to combat the pathogenic microbes that for centuries
before had visited misery and death upon humankind.
See also: Abortion pill; Antibacterial drugs; Artificial insemination; Birth control pill; Penicillin; Reserpine; Arsphenamine.
Friday, September 28, 2012
SAINT
The invention:
Taking its name from the acronym for symbolic automatic
integrator, SAINT is recognized as the first “expert system”—
a computer program designed to perform mental tasks requiring
human expertise.
The person behind the invention:
James R. Slagle (1934-1994), an American computer scientist
The Advent of Artificial Intelligence
In 1944, the Harvard-IBM Mark I was completed. This was an
electromechanical (that is, not fully electronic) digital computer
that was operated by means of coding instructions punched into
paper tape. The machine took about six seconds to perform a multiplication
operation, twelve for a division operation. In the following
year, 1945, the world’s first fully electronic digital computer,
the Electronic Numerical Integrator and Calculator (ENIAC),
became operational. This machine, which was constructed at the
University of Pennsylvania, was thirty meters long, three meters
high, and one meter deep.
At the same time that these machines were being built, a similar
machine was being constructed in the United Kingdom: the Automatic
Computing Engine (ACE). A key figure in the British development
was Alan Turing, a mathematician who had used computers
to break German codes during World War II. After the war, Turing
became interested in the area of “computing machinery and intelligence.”
He posed the question “Can machines think?” and set the
following problem, which is known as the “Turing test.” This test
involves an interrogator who sits at a computer terminal and asks
questions on the terminal about a subject for which he or she seeks intelligent
answers. The interrogator does not know, however, whether
the system is linked to a human or if the responses are, in fact, generated
by a program that is acting intelligently. If the interrogator cannot
tell the difference between the human operator and the computer
system, then the system is said to have passed the Turing test
and has exhibited intelligent behavior.
SAINT: An Expert System
In the attempt to answer Turing’s question and create machines
that could pass the Turing test, researchers investigated techniques
for performing tasks that were considered to require expert levels of
knowledge. These tasks included games such as checkers, chess, and
poker. These games were chosen because the total possible number of
variations in each game was very large. This led the researchers to
several interesting questions for study. How do experts make a decision
in a particular set of circumstances? How can a problem such as
a game of chess be represented in terms of a computer program? Is it
possible to know why the system chose a particular solution?
One researcher, James R. Slagle at the Massachusetts Institute of
Technology, chose to develop a program that would be able to solve
elementary symbolic integration problems (involving the manipulation
of integrals in calculus) at the level of a good college freshman.
The program that Slagle constructed was known as SAINT, an
acronym for symbolic automatic integrator, and it is acknowledged
as the first “expert system.”
An expert system is a system that performs at the level of a human
expert. An expert system has three basic components: a knowledge
base, in which domain-specific information is held (for example, rules
on how best to perform certain types of integration problems); an inference
engine, which decides how to break down a given problem utilizing
the rules in the knowledge base; and a human-computer interface
that inputs data—in this case, the integral to be solved—and
outputs the result of performing the integration. Another feature of expert
systems is their ability to explain their reasoning.
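A toy sketch in Python (SAINT itself was written in LISP, as noted below) can make the three components and the explanation facility concrete. The rule set, data representation, and function names here are illustrative assumptions, not Slagle's design.

```python
# Toy expert-system skeleton: knowledge base, inference engine, interface.
# Integrands are plain strings in the variable x; the rule set is tiny and
# purely illustrative.

# 1. Knowledge base: domain-specific integration rules.
TABLE = {
    "sin(x)": "-cos(x)",
    "cos(x)": "sin(x)",
    "exp(x)": "exp(x)",
}

def inference_engine(expression):
    """2. Inference engine: choose a rule and keep a trace for explanations."""
    if expression.startswith("x**"):                  # power rule, n != -1
        n = int(expression[3:])
        return f"x**{n + 1}/{n + 1}", [f"applied the power rule with n = {n}"]
    if expression in TABLE:
        return TABLE[expression], [f"looked up {expression} in the rule table"]
    return None, ["no applicable rule"]

def interface(expression):
    """3. Human-computer interface: take a problem, report answer and reasoning."""
    answer, trace = inference_engine(expression)
    print(f"integral of {expression} dx = {answer}    reasoning: {'; '.join(trace)}")

for problem in ("x**3", "cos(x)", "tan(x)"):
    interface(problem)
```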
The integration problems that could be solved by SAINT were
in the form of elementary integral functions. SAINT could perform
indefinite integration (also called “antidifferentiation”) on these
functions. In addition, it was capable of performing definite and
indefinite integration on trivial extensions of indefinite integration.
SAINT was tested on a set of eighty-six problems, fifty-four of
which were drawn from the MIT final examinations in freshman
calculus; it succeeded in solving all but two. Slagle added more
rules to the knowledge base so that problems of the type it encountered
but could not solve could be solved in the future.
The power of the SAINT system was, in part, based on its ability
to perform integration through the adoption of a “heuristic” processing
system. A heuristic method is one that helps in discovering a
problem’s solution by making plausible but fallible guesses about
the best strategy to apply next to the current problem situation. A
heuristic is a rule of thumb that makes it possible to take short cuts
in reaching a solution, rather than having to go through every step
in a solution path. These heuristic rules are contained in the knowledge
base. SAINT was written in the LISP programming language
and ran on an IBM 7090 computer. The program and research were
Slagle’s doctoral dissertation.
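As a small illustration of a heuristic in this sense, a cheap estimate of a subproblem's difficulty lets a program try the most promising candidate first. The scoring rule below is invented for this sketch and is not the measure Slagle actually used.

```python
# Hypothetical rule of thumb: prefer shorter, less deeply nested integrands.
def difficulty(expression: str) -> int:
    return len(expression) + 5 * expression.count("(")

candidates = ["exp(sin(x))", "x**2", "sin(x)**2"]
for expr in sorted(candidates, key=difficulty):
    print(expr, difficulty(expr))   # tries x**2 first, exp(sin(x)) last
```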
Consequences
The SAINT system that Slagle developed was significant for several
reasons: First, it was the first serious attempt at producing a
program that could come close to passing the Turing test. Second, it
brought the idea of representing an expert’s knowledge in a computer
program together with strategies for solving complex and difficult
problems in an area that previously required human expertise.
Third, it identified the area of knowledge-based systems and
showed that computers could feasibly be used for programs that
did not relate to business data processing. Fourth, the SAINT system
showed how the use of heuristic rules and information could
lead to the solution of problems that could not have been solved
previously because of the amount of time needed to calculate a solution.
SAINT’s major impact was in outlining the uses of these techniques,
which led to continued research in the subfield of artificial
intelligence that became known as expert systems.
James R. Slagle
James R. Slagle was born in 1934 in Brooklyn, New York, and
attended nearby St. John’s University. He majored in mathematics
and graduated with a bachelor of science degree in 1955,
also winning the highest scholastic average award. While earning
his master’s degree (1957) and doctorate (1961) at the Massachusetts
Institute of Technology (MIT), he was a staff mathematician
in the university’s Lincoln Laboratory.
Slagle taught in MIT’s electrical engineering department
part-time after completing his dissertation on the first expert
computer system and then moved to Lawrence Livermore
National Laboratory near Berkeley, California. While working
there he also taught at the University of California. From 1967
until 1974 he was an adjunct member of the computer science
faculty of Johns Hopkins University in Baltimore, Maryland,
and then was appointed chief of the computer science laboratory
at the Naval Research Laboratory (NRL) in Washington, D.C., receiving
the Outstanding Handicapped Federal Employee of the
Year Award in 1979. In 1984 he was made a special assistant in
the Navy Center for Applied Research in Artificial Intelligence
at NRL but left in 1984 to become Distinguished Professor of
Computer Science at the University of Minnesota.
In these various positions Slagle helped mature the fledgling
discipline of artificial intelligence, publishing the influential
book Artificial Intelligence in 1971. He developed an expert system
designed to set up other expert systems—A Generalized
Network-based Expert System Shell, or AGNESS. He also worked
on parallel expert systems, artificial neural networks, time-based
logic, and methods for uncovering causal knowledge in
large databases. He died in 1994.
Wednesday, September 26, 2012
Rotary dial telephone
The invention:
The first device allowing callers to connect their
telephones to other parties without the aid of an operator, the rotary
dial telephone preceded the touch-tone phone.
The people behind the invention:
Alexander Graham Bell (1847-1922), an American inventor
Antoine Barnay (1883-1945), a French engineer
Elisha Gray (1835-1901), an American inventor
Rotary Telephone Dials Make Phone Linkups Automatic
The telephone uses electricity to carry sound messages over long
distances. When a call is made from a telephone set, the caller
speaks into a telephone transmitter and the resultant sound waves
are converted into electrical signals. The electrical signals are then
transported over a telephone line to the receiver of a second telephone
set that was designated when the call was initiated. This receiver
reverses the process, converting the electrical signals into the
sounds heard by the recipient of the call. The process continues as
the parties talk to each other.
The telephone was invented in the 1870’s and patented in 1876 by
Alexander Graham Bell. Bell’s patent application barely preceded
an application submitted by his competitor Elisha Gray. After a
heated patent battle between Bell and Gray, which Bell won, Bell
founded the Bell Telephone Company, which later came to be called
the American Telephone and Telegraph Company.
At first, the transmission of phone calls between callers and recipients
was carried out manually, by switchboard operators. In
1923, however, automation began with Antoine Barnay’s development
of the rotary telephone dial. This dial caused the emission of
variable electrical impulses that could be decoded automatically
and used to link the telephone sets of callers and call recipients. In
time, the rotary dial system gave way to push-button dialing and
other more modern networking techniques.
Telephones, Switchboards, and Automation
The carbon transmitter, which is still used in many modern telephone
sets, was the key to the development of the telephone by Alexander
Graham Bell. This type of transmitter—and its more modern
replacements—operates like an electric version of the human
ear. When a person talks into the telephone set in a carbon transmitter-
equipped telephone, the sound waves that are produced strike
an electrically connected metal diaphragm and cause it to vibrate.
The speed of vibration of this electric eardrum varies in accordance
with the changes in air pressure caused by the changing tones of the
speaker’s voice.
Behind the diaphragm of a carbon transmitter is a cup filled with
powdered carbon. As the vibrations cause the diaphragm to press
against the carbon, the electrical signals—electrical currents of varying
strength—pass out of the instrument through a telephone wire.
Once the electrical signals reach the receiver of the phone being
called, they activate electromagnets in the receiver that make a second
diaphragm vibrate. This vibration converts the electrical signals
into sounds that are very similar to the sounds made by the person
who is speaking. Therefore, a telephone receiver may be viewed
as an electric mouth.
In modern telephone systems, transportation of the electrical signals
between any two phone sets requires the passage of those signals
through vast telephone networks consisting of huge numbers
of wires, radio systems, and other media. The linkup of any two
phone sets was originally, however, accomplished manually—on a
relatively small scale—by a switchboard operator who made the
necessary connections by hand. In such switchboard systems, each
telephone set in the network was associated with a jack connector in
the switchboard. The operator observed all incoming calls, identified
the phone sets for which they were intended, and then used
wires to connect the appropriate jacks. At the end of the call, the
jacks were disconnected.
This cumbersome methodology limited the size and efficiency of
telephone networks and invaded the privacy of callers. The development
of automated switching systems soon solved these problems
and made switchboard operators obsolete. It was here that
Antoine Barnay’s rotary dial was used, making possible an exchange
that automatically linked the phone sets of callers and call
recipients in the following way.
First, a caller lifted a telephone “off the hook,” causing a switchhook,
like those used in modern phones, to close the circuit that connected
the telephone set to the telephone network. Immediately, a
dial tone (still familiar to callers) came on to indicate that the automatic
switching system could handle the planned call. When the
phone dial was used, each number or letter that was dialed produced
a fixed number of clicks. Every click indicated that an electrical
pulse had been sent to the network’s automatic switching system,
causing switches to change position slightly. Immediately after
a complete telephone number was dialed, the overall operation of
the automatic switchers connected the two telephone sets. This connection
was carried out much more quickly and accurately than had
been possible when telephone operators at manual switchboards
made the connection.
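The pulse scheme lends itself to a short sketch. On most rotary systems a dialed digit d was sent as d brief interruptions of the line current, with "0" sent as ten; the pulse and pause timings below are representative assumptions, since exact values varied from network to network.

```python
# Illustrative pulse-dialing model; timings are assumed, not standardized here.
def digit_to_pulses(digit: str) -> int:
    """Number of line-break pulses produced for one dialed digit."""
    return 10 if digit == "0" else int(digit)

def dial(number: str, pulse_period_ms: int = 100, inter_digit_pause_ms: int = 700):
    """Return a schedule of (digit, pulse_count, duration_ms) entries."""
    schedule = []
    for digit in number:
        if digit.isdigit():
            pulses = digit_to_pulses(digit)
            schedule.append((digit, pulses, pulses * pulse_period_ms + inter_digit_pause_ms))
    return schedule

for entry in dial("314-1592"):
    print(entry)
```

The automatic exchange simply counts the interruptions for each digit and steps its switches accordingly.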
Impact
The telephone has become the world’s most important communication
device. Most adults use it between six and eight times per
day, for personal and business calls. This widespread use has developed
because huge changes have occurred in telephones and telephone
networks. For example, automatic switching and the rotary
dial system were only the beginning of changes in phone calling.
Touch-tone dialing replaced Barnay’s electrical pulses with audio
tones outside the frequency of human speech. This much-improved
system can be used to send calls over much longer distances than
was possible with the rotary dial system, and it also interacts well
with both answering machines and computers.
Another advance in modern telephoning is the use of radio
transmission techniques in mobile phones, rendering telephone
cords obsolete. The mobile phone communicates with base stations
arranged in “cells” throughout the service area covered. As the user
changes location, the phone link automatically moves from cell to
cell in a cellular network.
In addition, the use of microwave, laser, and fiber-optic technologies
has helped to lengthen the distance over which phone calls can
be transmitted. These technologies have also increased the number
of messages that phone networks can handle simultaneously and
have made it possible to send radio and television programs (such
as cable television), scientific data (via modems), and written messages
(via facsimile, or “fax,” machines) over phone lines. Many
other advances in telephone technology are expected as society’s
needs change and new technology is developed.
Alexander Graham Bell
During the funeral for Alexander Graham Bell in 1922, telephone
service throughout the United States stopped for one
minute to honor him. To most people he was the inventor of the
telephone. In fact, his genius ranged much further.
Bell was born in Edinburgh, Scotland, in 1847. His father,
an elocutionist who invented a phonetic alphabet, and his
mother, who was deaf, imbued him with deep curiosity, especially
about sound. As a boy Bell became an exceptional pianist,
and he produced his first invention, for cleaning wheat, at
fourteen. After Edinburgh’s Royal High School, he attended
classes at Edinburgh University and University College, London,
but at the age of twenty-three, battling tuberculosis, he
left school to move with his parents to Ontario, Canada, to
convalesce. Meanwhile, he worked on his idea for a telegraph
capable of sending multiple messages at once. From it grew
the basic concept for the telephone. He developed it while
teaching Visible Speech at the Boston School for Deaf Mutes
after 1871. Assisted by Thomas Watson, he succeeded in sending
speech over a wire and was issued a patent for his device,
among the most valuable ever granted, in 1876. His demonstration
of the telephone later that year at Philadelphia’s
Centennial Exhibition and its subsequent development into a
household appliance brought him wealth and fame.
He moved to Nova Scotia, Canada, and continued inventing.
He created a photophone, tetrahedron modules for construction,
and an airplane, the Silver Dart, which flew in 1909.
Even though existing technology made them impracticable,
some of his ideas anticipated computers and magnetic sound
recording. His last patented invention, tested three years before
his death, was a hydrofoil. Capable of reaching seventy-one
miles per hour and freighting fourteen thousand pounds, the
HD-4 was then the fastest watercraft in the world.
Bell also helped found the National Geographic Society in
1888 and became its president in 1898. He hired Gilbert Grosvenor
to edit the society’s famous magazine, National Geographic,
and together they planned the format—breathtaking
photography and vivid writing—that made it one of the world’s
best known magazines.
Monday, September 24, 2012
Rocket
The invention: Liquid-fueled rockets developed by Robert H. Goddard
made possible all later developments in modern rocketry,
which in turn has made the exploration of space practical.
The person behind the invention:
Robert H. Goddard (1882-1945), an American physics professor
History in a Cabbage Patch
Just as the age of air travel began on an out-of-the-way shoreline
at Kitty Hawk, North Carolina, with the Wright brothers’ airplane
in 1903, so too the seemingly impossible dream of spaceflight
began in a cabbage patch in Auburn, Massachusetts, with
Robert H. Goddard’s launch of a liquid-fueled rocket on March 16,
1926. On that clear, cold day, with snow still on the ground, Goddard
launched a three-meter-long rocket using liquid oxygen and
gasoline. The flight lasted only about two and one-half seconds,
during which the rocket rose 12 meters and landed about 56 meters
away.
Although the launch was successful, the rocket’s design was
clumsy. At first, Goddard had thought that a rocket would be
steadier if the motor and nozzles were ahead of the fuel tanks,
rather like a horse and buggy. After this first launch, it was clear
that the motor needed to be placed at the rear of the rocket. Although
Goddard had spent several years working on different
pumps to control the flow of fuel to the motor, the first rocket had
no pumps or electrical system. Henry Sacks, a Clark University
machinist, launched the rocket by turning a valve, placing an alcohol
stove beneath the motor, and dashing for safety. Goddard and
his coworker Percy Roope watched the launch from behind an iron
wall.
Despite its humble setting, this simple event changed the course
of history. Many people saw in Goddard’s launch the possibilities
for high-altitude research, space travel, and modern weaponry. Although
Goddard invented and experimented mostly in private,
others in the United States, the Soviet Union, and Germany quickly
followed in his footsteps. The V-2 rockets used by Nazi Germany
in World War II (1939-1945) included many of Goddard’s designs
and ideas.
A Lifelong Interest
Goddard’s success was no accident. He had first become interested
in rockets and space travel when he was seventeen, no doubt
because of reading books such as H. G. Wells’s The War of the Worlds
(1898) and Garrett P. Serviss’s Edison’s Conquest of Mars (1898). In
1907, he sent to several scientific journals a paper describing his ideas
about traveling through a near vacuum. Although the essay was rejected,
Goddard began thinking about liquid fuels in 1909. After finishing
his doctorate in physics at Clark University and postdoctoral
studies at Princeton University, he began to experiment.
One of the things that made Goddard so successful was his ability
to combine things he had learned from chemistry, physics, and
engineering into rocket design. More than anyone else at the time,
Goddard had the ability to combine ideas with practice.
Goddard was convinced that the key for moving about in space
was the English physicist and mathematician Sir Isaac Newton’s
third law of motion (for every action there is an equal and opposite
reaction). To prove this, he showed that a gun recoiled when it was
fired in a vacuum. During World War I (1914-1918), Goddard
moved to the Mount Wilson Observatory in California, where he
investigated the use of black powder and smokeless powder as
rocket fuel. Goddard’s work led to the invention of the bazooka, a
weapon that was much used during World War II, as well as bombardment
and antiaircraft rockets.
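Expressed in modern textbook notation rather than Goddard's own, the third-law argument says that a rocket's thrust is the reaction to the momentum carried away by its exhaust:

\[ F = \dot{m}\,v_e, \]

where \(\dot{m}\) is the mass of propellant expelled per second and \(v_e\) is the exhaust velocity. Nothing in this relation requires air to push against, which is why the gun-fired-in-a-vacuum demonstration settled the point, and why propellants with high exhaust velocities, such as liquid oxygen burned with gasoline, mattered so much to Goddard.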
After World War I, Goddard returned to Clark University. By
1920, mostly because of the experiments he had done during the
war, he had decided that a liquid-fuel motor, with its smooth thrust,
had the best chance of boosting a rocket into space. The most powerful
fuel was hydrogen, but it is very difficult to handle. Oxygen had
many advantages, but it was hard to find and extremely dangerous,
since it boils at -183 degrees Celsius and explodes when it comes in
contact with oils, greases, and flames. Other possible fuels were propane,
ether, kerosene, or gasoline, but they all had serious disadvantages.
Finally, Goddard found a local source of oxygen and was able
to begin testing its thrust.
Another problem was designing a fuel pump. Goddard and his
assistant Nils Riffolt spent years on this problem before the historic
test flight of March, 1926. In the end, because of pressure from the
Smithsonian Institution and others who were funding his research,
Goddard decided to do without a pump and use an inert gas to
push the fuel into the explosion chamber.
Goddard worked without much funding between 1920 and 1925.
Riffolt helped him greatly in designing a pump, and Goddard’s
wife, Esther, photographed some of the tests and helped in other
ways. Clark University had granted him some research money in
1923, but by 1925 money was in short supply, and the Smithsonian
Institution did not seem willing to grant more. Goddard was convinced
that his research would be taken seriously if he could show
some serious results, so on March 16, 1926, he launched a rocket
even though his design was not yet perfect. The success of that
launch not only changed his career but also set the stage for rocketry
experiments both in the United States and in Europe.
Impact
Goddard was described as being secretive and a loner. He never
tried to cash in on his invention but continued his research during
the next three years. On July 17, 1929, Goddard launched a rocket
carrying a camera and instruments for measuring temperature
and air pressure. The New York Times published a story about the
noisy crash of this rocket and local officials’ concerns about public
safety. The article also mentioned Goddard’s idea that a similar
rocket might someday strike the Moon. When American aviation
hero Charles A. Lindbergh learned of Goddard’s work, Lindbergh
helped him to get grants from the Carnegie Institution and the
Guggenheim Foundation.
By the middle of 1930, Goddard and a small group of assistants
had established a full-time research program near Roswell, New
Mexico. Now that money was not so much of a problem, Goddard
began to make significant advances in almost every area of astronautics.
In 1941, Goddard launched a rocket to a height of 2,700 meters.
Flight stability was helped by a gyroscope, and he was finally
able to use a fuel pump.
During the 1920’s and 1930’s, members of the American Rocket
Society and the German Society for Space Travel continued their
own research. When World War II began, rocket research became a
high priority for the American and German governments.
Germany’s success with the V-2 rocket was a direct result of
Goddard’s research and inventions, but the United States did not
benefit fully from Goddard’s work until after his death. Nevertheless,
Goddard remains modern rocketry’s foremost pioneer—a scientist
with vision, understanding, and practical skill.
Robert H. Goddard
In 1920 The New York Times made fun of Robert Hutchings
Goddard (1882-1945) for claiming that rockets could travel
through outer space to the Moon. It was impossible, the newspaper’s
editorial writer confidently asserted, because in outer
space the engine would have no air to push against and so
could not move the rocket. A sensitive, quiet man, the Clark
University physics professor was stung by the public rebuke,
all the more so because it displayed ignorance of
basic physics. “Every vision is a joke,” Goddard
said, somewhat bitterly, “until the first man accomplishes
it.”
Goddard had already proved that a rocket could
move in a vacuum, but he refrained from rebutting
the Times article. In 1919 he had become the first
American to describe mathematically the theory of
rocket propulsion in his classic article “A Method of
Reaching Extreme Altitudes,” and during World War I
he had acquired experience designing solid-fuel rockets.
However, even though he was the world’s leading
expert on rocketry, he decided to seek privacy for
his experiments. His successful launch of a liquid-fuel
rocket in 1926, followed by new designs that reached ever
higher altitudes, was a source of satisfaction, as were his 214
patents, but real recognition of his achievements did not come
his way until World War II. In 1942 he was named director of research
at the U.S. Navy’s Bureau of Aeronautics, for which he
worked on jet-assisted takeoff rockets and variable-thrust liquid-
propellant rockets. In 1943 the Curtiss-Wright Corporation
hired him as a consulting engineer, and in 1945 he became director
of the American Rocket Society.
The New York Times finally apologized to Goddard for its
1920 article on the morning after Apollo 11 took off for the
Moon in 1969. However, Goddard, who battled tuberculosis
most of his life, had died twenty-four years earlier.