Saturday, October 27, 2012

Sonar















The invention:



A device that detects soundwaves transmitted

through water, sonar was originally developed to detect enemy

submarines but is also used in navigation, fish location, and

ocean mapping.



The people behind the invention:



Jacques Curie (1855-1941), a French physicist

Pierre Curie (1859-1906), a French physicist

Paul Langévin (1872-1946), a French physicist







Active Sonar, Submarines, and Piezoelectricity



Sonar, which stands for sound navigation and ranging, is the

American name for a device that the British call “asdic.” There are

two types of sonar. Active sonar, the more widely used of the two

types, detects and locates underwater objects when those objects reflect

sound pulses sent out by the sonar. Passive sonar merely listens

for sounds made by underwater objects. Passive sonar is used

mostly when the loud signals produced by active sonar cannot be

used (for example, in submarines).

The invention of active sonar was the result of American, British,

and French efforts, although it is often credited to Paul Langévin,

who built the first working active sonar system by 1917. Langévin’s

original reason for developing sonar was to locate icebergs, but the

horrors of German submarine warfare in World War I led to the new

goal of submarine detection. Both Langévin’s short-range system

and long-range modern sonar depend on the phenomenon of “piezoelectricity,”

which was discovered by Pierre and Jacques Curie in

1880. (Piezoelectricity is electricity that is produced by certain materials,

such as certain crystals, when they are subjected to pressure.)

Since its invention, active sonar has been improved and its capabilities

have been increased. Active sonar systems are used to detect

submarines, to navigate safely, to locate schools of fish, and to map

the oceans.









Sonar Theory, Development, and Use



Although active sonar had been developed by 1917, it was not

available for military use until World War II. An interesting major

use of sonar before that time was measuring the depth of the ocean.

That use began when the 1922 German Meteor Oceanographic Expedition

was equipped with an active sonar system. The system

was to be used to help pay German World War I debts by aiding in

the recovery of gold from wrecked vessels. It was not used successfully

to recover treasure, but the expedition’s use of sonar to determine

ocean depth led to the discovery of the Mid-Atlantic Ridge.

This development revolutionized underwater geology.

Active sonar operates by sending out sound pulses, often called

“pings,” that travel through water and are reflected as echoes when

they strike large objects. Echoes from these targets are received by

the system, amplified, and interpreted. Sound is used instead of

light or radar because its absorption by water is much lower. The

time that passes between ping transmission and the return of an

echo is used to identify the distance of a target from the system by

means of a method called “echo ranging.” The basis for echo ranging

is the normal speed of sound in seawater (5,000 feet per second).

The distance of the target from the sonar system is calculated by

means of a simple equation: range = speed of sound × 0.5 × elapsed

time. The elapsed time is halved because it is made up of the time

taken to reach the target and the time taken to return.
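
A minimal sketch of this echo-ranging arithmetic, assuming the nominal
5,000-feet-per-second sound speed quoted above:

```python
# Echo-ranging sketch: range = 0.5 * speed of sound * elapsed time.
# Assumes the nominal seawater sound speed quoted above (5,000 ft/s);
# real systems correct for temperature, pressure, and salinity.

SPEED_OF_SOUND_FT_PER_S = 5000.0

def echo_range_feet(elapsed_seconds: float) -> float:
    """Distance to the target, given the round-trip ping-to-echo time."""
    return 0.5 * SPEED_OF_SOUND_FT_PER_S * elapsed_seconds

# Example: an echo returning after 4 seconds places the target
# about 10,000 feet (roughly 1.9 miles) away.
print(echo_range_feet(4.0))  # 10000.0
```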

The ability of active sonar to show detail increases as the

wavelength of the transmitted sound pulses is decreased (that is, as

their frequency is raised). Interpreting active sonar data is complicated

by many factors. These include the roughness of the ocean, which

scatters sound and causes the strength of echoes to vary, making

it hard to estimate the size and identity of a target; the speed of

the sound wave, which changes in accordance with variations in

water temperature, pressure, and saltiness; and noise caused by

waves, sea animals, and ships, which limits the range of active sonar

systems.

A simple active pulse sonar system produces a piezoelectric signal

of a given frequency and time duration. Then, the signal is amplified

and turned into sound, which enters the water. Any echo that is produced

returns to the system to be amplified and used to determine the identity

and distance of the target.

Most active sonar systems are mounted near surface vessel keels

or on submarine hulls in one of three ways. The first and most popular

mounting method permits vertical rotation and scanning of a

section of the ocean whose center is the system’s location. The second

method, which is most often used in depth sounders, directs

the beam downward in order to measure ocean depth. The third

method, called wide scanning, involves the use of two sonar systems,

one mounted on each side of the vessel, in such a way that the

two beams that are produced scan the whole ocean at right angles to

the direction of the vessel’s movement.

Active single-beam sonar operation applies an alternating voltage

to a piezoelectric crystal, making it part of an underwater loudspeaker

(transducer) that creates a sound beam of a particular frequency.

When an echo returns, the system becomes an underwater

microphone (receiver) that identifies the target and determines its

range. The sound frequency that is used is determined by the sonar’s

purpose and the fact that the absorption of sound by water increases

with frequency. For example, long-range submarine-seeking sonar

systems (whose detection range is about ten miles) operate at 3 to 40

kilohertz. In contrast, short-range systems that work at about 500 feet

(in mine sweepers, for example) use 150 kilohertz to 2 megahertz.
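
Since wavelength equals the speed of sound divided by frequency, a short
sketch (again assuming the nominal 5,000-feet-per-second figure) shows the
physical scale behind these operating frequencies:

```python
# Wavelength = speed of sound / frequency.
# Uses the nominal 5,000 ft/s seawater figure quoted earlier; this is an
# illustrative sketch, not a statement about any particular sonar set.

SPEED_OF_SOUND_FT_PER_S = 5000.0

def wavelength_feet(frequency_hz: float) -> float:
    return SPEED_OF_SOUND_FT_PER_S / frequency_hz

# Long-range submarine-seeking sonar, 3-40 kHz:
print(wavelength_feet(3_000))    # about 1.7 feet
print(wavelength_feet(40_000))   # 0.125 feet (1.5 inches)

# Short-range (for example, minesweeping) sonar, 150 kHz-2 MHz:
print(wavelength_feet(150_000))    # about 0.033 feet (0.4 inch)
print(wavelength_feet(2_000_000))  # 0.0025 feet (0.03 inch)
```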



Impact



Modern active sonar has affected military and nonmilitary activities

ranging from submarine location to undersea mapping and

fish location. In all these uses, two very important goals have been

to increase the ability of sonar to identify a target and to increase the

effective range of sonar. Much work related to these two goals has

involved the development of new piezoelectric materials and the replacement

of natural minerals (such as quartz) with synthetic piezoelectric

ceramics.

Efforts have also been made to redesign the organization of sonar

systems. One very useful development has been changing beam-making

transducers from one-beam units to multibeam modules

made of many small piezoelectric elements. Systems that incorporate

these developments have many advantages, particularly the ability

to search simultaneously in many directions. In addition, systems

have been redesigned to be able to scan many echo beams simultaneously

with electronic scanners that feed into a central receiver.

These changes, along with computer-aided tracking and target

classification, have led to the development of greatly improved active

sonar systems. It is expected that sonar systems will become

even more powerful in the future, finding uses that have not yet

been imagined.



Paul Langévin









If he had not published the Special Theory of Relativity in

1905, Albert Einstein once said, Paul Langévin would have

done so not long afterward. Born in Paris in 1872, Langévin was

among the foremost physicists of his generation. He studied in

the best French schools of science—and with such teachers as

Pierre Curie and Jean Perrin—and became a professor of physics

at the College de France in 1904. He moved to the Sorbonne

in 1909.

Langévin’s research was always widely influential. In addition

to his invention of active sonar, he was especially noted for

his studies of the molecular structure of gases, analysis of secondary

X rays from irradiated metals, his theory of magnetism,

and work on piezoelectricity and piezoceramics. His suggestion

that magnetic properties are linked to the valence electrons of atoms

inspired Niels Bohr’s classic model of the atom. In his later

career, a champion of Einstein’s theories of relativity, Langévin

worked on the implications of the space-time continuum.

During World War II, Langévin, a pacifist, publicly denounced

the Nazis and their occupation of France. They jailed him for it.

He escaped to Switzerland in 1944, returning as soon as France

was liberated. He died in late 1946.







See also: Aqualung; Bathyscaphe; Bathysphere; Geiger counter;

Gyrocompass; Radar; Richter scale.









Sunday, October 21, 2012

Solar thermal engine







The invention:

The first commercially practical plant for generating
electricity from solar energy.


The people behind the invention:


Frank Shuman (1862-1918), an American inventor
John Ericsson (1803-1889), an American engineer
Augustin Mouchout (1825-1911), a French physics professor







Power from the Sun


According to tradition, the Greek scholar Archimedes used
reflective mirrors to concentrate the rays of the Sun and set afire
the ships of an attacking Roman fleet in 212 b.c.e. The story illustrates
the long tradition of using mirrors to concentrate solar energy
from a large area onto a small one, producing very high
temperatures.
With the backing of Napoleon III, the Frenchman Augustin
Mouchout built, between 1864 and 1872, several steam engines
that were powered by the Sun. Mirrors concentrated the sun’s rays
to a point, producing a temperature that would boil water. The
steam drove an engine that operated a water pump. The largest engine
had a cone-shaped collector, or “axicon,” lined with silver-plated
metal. The French government operated the engine for six
months but decided it was too expensive to be practical.
John Ericsson, the American famous for designing and building
the Civil War ironclad ship Monitor, built seven steam-driven
solar engines between 1871 and 1878. In Ericsson’s design,
rays were focused onto a line rather than a point. Long mirrors,
curved into a parabolic shape, tracked the Sun. The rays were focused
onto a water-filled tube mounted above the reflectors to
produce steam. The engineer’s largest engine, which used an 11- ×
16-foot trough-shaped mirror, delivered nearly 2 horsepower. Because
his solar engines were ten times more expensive than conventional
steam engines, Ericsson converted them to run on coal to
avoid financial loss.

 Frank Shuman, a well-known inventor in Philadelphia, Pennsylvania,
entered the field of solar energy in 1906. The self-taught engineer
believed that curved, movable mirrors were too expensive. His
first large solar engine was a hot-box, or flat-plate, collector. It lay
flat on the ground and had blackened pipes filled with a liquid that
had a low boiling point. The solar-heated vapor ran a 3.5-horsepower
engine.
Shuman’s wealthy investors formed the Sun Power Company to
develop and construct the largest solar plant ever built. The site chosen
was in Egypt, but the plant was built near Shuman’s home for
testing before it was sent to Egypt.
When the inventor added ordinary flat mirrors to reflect more
sunlight into each collector, he doubled the heat production of the
collectors. The 572 trough-type collectors were assembled in twenty-six
rows. Water was piped through the troughs and converted to
steam. A condenser converted the steam to water, which reentered
the collectors. The engine pumped 3,000 gallons of water per minute
and produced 14 horsepower per day; performance was expected to
improve 25 percent in the sunny climate of Egypt.
British investors requested that professor C. V. Boys review the
solar plant before it was shipped to Egypt. Boys pointed out that the
bottom of each collector was not receiving any direct solar energy;
in fact, heat was being lost through the bottom. He suggested that
each row of flat mirrors be replaced by a single parabolic reflector,
and Shuman agreed. Shuman thought Boys’s idea was original, but
he later realized it was based on Ericsson’s design.
The company finally constructed the improved plant in Meadi,
Egypt, a farming district on the Nile River. Five solar collectors,
spaced 25 feet apart, were built in a north-south line. Each was
about 200 feet long and 10 feet wide. Trough-shaped reflectors were
made of mirrors held in place by brass springs that expanded
and contracted with changing temperatures. The parabolic mirrors
shifted automatically so that the rays were always focused on the
boiler. Inside the 15-inch boiler that ran down the middle of the collector,
water was heated and converted to steam. The engine produced
more than 55 horsepower, which was enough to pump 6,000
gallons of water per minute.
The purchase price of Shuman’s solar plant was twice as high as

that of a coal-fired plant, but its operating costs were far lower. In
Egypt, where coal was expensive, the entire purchase price would
be recouped in four years. Afterward, the plant would operate for
practically nothing. The first practical solar engine was now in operation,
providing enough energy to drive a large-scale irrigation system
in the floodplain of the Nile River.
By 1914, Shuman’s work was enthusiastically supported, and solar
plants were planned for India and Africa. Shuman hoped to
build 20,000 reflectors in the Sahara Desert and generate energy
equal to all the coal mined in one year, but the outbreak of World

 War I ended his dreams of large-scale solar developments. The
Meadi project was abandoned in 1915, and Shuman died before the
war ended. Powerful nations lost interest in solar power and began
to replace coal with oil. Rich oil reserves were discovered in many
desert zones that were ideal locations for solar power.

Impact
Although World War I ended Frank Shuman’s career, his breakthrough
proved to the world that solar power held great promise for
the future. His ideas were revived in 1957, when the Soviet Union
planned a huge solar project for Siberia. A large boiler was fixed on
a platform 140 feet high. Parabolic mirrors, mounted on 1,300 railroad
cars, revolved on circular tracks to focus light on the boiler. The
full-scale model was never built, but the design inspired the solar
power tower.
In the Mojave Desert near Barstow, California, an experimental
power tower, Solar One, began operation in 1982. The system collects
solar energy to deliver steam to turbines that produce electric
power. The 30-story tower is surrounded by more than 1,800 mirrors
that adjust continually to track the Sun. Solar One generates
about 10 megawatts per day, enough power for 5,000 people.
Solar One was expensive, but future power towers will generate
electricity as cheaply as fossil fuels can. If the costs of the air and
water pollution caused by coal burning were considered, solar power
plants would already be recognized as cost effective. Meanwhile,
Frank Shuman’s success in establishing and operating a thoroughly
practical large-scale solar engine continues to inspire research and
development.



See also: Compressed-air-accumulating power plant; Fuel cell;
Geothermal power; Nuclear power plant; Photoelectric cell;
Photovoltaic cell; Solar power.

Wednesday, October 10, 2012

Silicones













The invention:



Synthetic polymers characterized by lubricity, extreme
water repellency, thermal stability, and inertness that are
widely used in lubricants, protective coatings, paints, adhesives,
electrical insulation, and prosthetic replacements for body parts.



The people behind the invention:
Eugene G. Rochow (1909-2002), an American research chemist
Frederic Stanley Kipping (1863-1949), a British chemist and
professor
James Franklin Hyde (1903- ), an American organic chemist










Synthesizing Silicones


Frederic Stanley Kipping, in the first four decades of the twentieth
century, made an extensive study of the organic (carbon-based)
chemistry of the element silicon. He had a distinguished academic
career and summarized his silicon work in a lecture in 1937 (“Organic
Derivatives of Silicon”). Since Kipping did not have available
any naturally occurring compounds with chemical bonds between
carbon and silicon atoms (organosilicon compounds), it was necessary
for him to find methods of establishing such bonds. The basic
method involved replacing atoms in naturally occurring silicon
compounds with carbon atoms from organic compounds.
While Kipping was probably the first to prepare a silicone and was
certainly the first to use the term silicone, he did not pursue the commercial
possibilities of silicones. Yet his careful experimental work was
a valuable starting point for all subsequent workers in organosilicon
chemistry, including those who later developed the silicone industry.
On May 10, 1940, chemist Eugene G. Rochow of the General
Electric (GE) Company’s corporate research laboratory in
Schenectady, New York, discovered that methyl chloride gas,
passed over a heated mixture of elemental silicon and copper, reacted
to form compounds with silicon-carbon bonds. Kipping
had shown that these silicon compounds react with water to form
silicones.

The importance of Rochow’s discovery was that it opened the
way to a continuous process that did not consume expensive metals,
such as magnesium, or flammable ether solvents, such as those
used by Kipping and other researchers. The copper acts as a catalyst,
and the desired silicon compounds are formed with only minor
quantities of by-products. This “direct synthesis,” as it came to be
called, is now done commercially on a large scale.





Silicone Structure


Silicones are examples of what chemists call polymers. Basically, a
polymer is a large molecule made up of many smaller molecules
that are linked together. At the molecular level, silicones consist of
long, repeating chains of atoms. In this molecular characteristic, silicones
resemble plastics and rubber.
Silicone molecules have a chain composed of alternate silicon and
oxygen atoms. Each silicon atom bears two organic groups as substituents,
while the oxygen atoms serve to link the silicon atoms into a
chain. The silicon-oxygen backbone of the silicones is responsible for
their unique and useful properties, such as the ability of a silicone oil
to remain liquid over an extremely broad temperature range and to
resist oxidative and thermal breakdown at high temperatures.
A fundamental scientific consideration with silicone, as with any
polymer, is to obtain the desired physical and chemical properties in
a product by closely controlling its chemical structure and molecular
weight. Oily silicones with thousands of alternating silicon and
oxygen atoms have been prepared. The average length of the molecular
chain determines the flow characteristics (viscosity) of the oil.
In samples with very long chains, rubber-like elasticity can be
achieved by cross-linking the silicone chains in a controlled manner
and adding a filler such as silica. High degrees of cross-linking
could produce a hard, intractable material instead of rubber.
The action of water on the compounds produced from Rochow’s
direct synthesis is a rapid method of obtaining silicones, but does
not provide much control of the molecular weight. Further development
work at GE and at the Dow-Corning company showed that
the best procedure for controlled formation of silicone polymers involved
treating the crude silicones with acid to produce a mixture

from which high yields of an intermediate called “D4” could be obtained
by distillation. The intermediate D4 could be polymerized in
a controlled manner by use of acidic or basic catalysts. Wilton I.
Patnode of GE and James F. Hyde of Dow-Corning made important
advances in this area. Hyde’s discovery of the use of traces of potassium
hydroxide as a polymerization catalyst for D4 made possible

the manufacture of silicone rubber, which is the most commercially
valuable of all the silicones.





Impact


Although Kipping’s discovery and naming of the silicones occurred
from 1901 to 1904, the practical use and impact of silicones
started in 1940, with Rochow’s discovery of direct synthesis.
Production of silicones in the United States came rapidly enough
to permit them to have some influence on military supplies for
World War II (1939-1945). In aircraft communication equipment, extensive
waterproofing of parts by silicones resulted in greater reliability
of the radios under tropical conditions of humidity, where
condensing water could be destructive. Silicone rubber, because
of its ability to withstand heat, was used in gaskets under hightemperature
conditions, in searchlights, and in the engines on B-29
bombers. Silicone grease applied to aircraft engines also helped to
protect spark plugs from moisture and promote easier starting.
After World War II, the uses for silicones multiplied. Silicone rubber
appeared in many products from caulking compounds to wire insulation
to breast implants for cosmetic surgery. Silicone rubber boots were
used on the moon walks where ordinary rubber would have failed.
Silicones in their present form owe much to years of patient developmental
work in industrial laboratories. Basic research, such as
that conducted by Kipping and others, served to point the way and
catalyzed the process of commercialization.







                     



                                                                   Eugene G. Rochow














Eugene George Rochow was born in 1909 and grew up in the
rural New Jersey town of Maplewood. There his father, who
worked in the tanning industry, and his big brother maintained
a small attic laboratory. They experimented with electricity, radio—Eugene put together his own crystal set in an oatmeal
box—and chemistry.
Rochow followed his brother to Cornell University in 1927.
The Great Depression began during his junior year, and although
he had to take jobs as lecture assistant to get by, he managed
to earn his bachelor’s degree in chemistry in 1931 and his
doctorate in 1935. Luck came his way in the extremely tight job
market: General Electric (GE) hired him for his expertise in inorganic chemistry.


In 1938 the automobile industry, among other manufacturers,
had a growing need for high-temperature-resistant insulators.
Scientists at Corning were convinced that silicone would
have the best properties for the purpose, but they could not find
a way to synthesize it cheaply and in large volume. When word
about their ideas got back to Rochow at GE, he reasoned that a
flexible silicone able to withstand temperatures of 200 to 300 degrees
Celsius could be made by bonding silicon to carbon. His
research succeeded in producing methyl silicone in 1939, and
he devised a way to make it cheaply in 1941. It was the first
commercially practical silicone. His process is still used.
After World War II GE asked him to work on an effort to
make aircraft carriers nuclear powered. However, Rochow was
a Quaker and pacifist, and he refused. Instead, he moved to
Harvard University as a chemistry professor in 1948 and remained
there until his retirement in 1970. As the founder of a
new branch of industrial chemistry, he received most of the discipline’s
awards and medals, including the Perkin Award, and
honorary doctorates.







See also: Buna rubber; Neoprene; Nylon; Plastic; Polyethylene.



Wednesday, October 3, 2012

Scanning tunneling microscope







The invention:



A major advance on the field ion microscope, the

scanning tunneling microscope has pointed toward new directions

in the visualization and control of matter at the atomic

level.





The people behind the invention:



Gerd Binnig (1947- ), a West German physicist who was a

cowinner of the 1986 Nobel Prize in Physics

Heinrich Rohrer (1933- ), a Swiss physicist who was a

cowinner of the 1986 Nobel Prize in Physics

Ernst Ruska (1906-1988), a West German engineer who was a

cowinner of the 1986 Nobel Prize in Physics

Antoni van Leeuwenhoek (1632-1723), a Dutch naturalist









The Limit of Light



The field of microscopy began at the end of the seventeenth century,

when Antoni van Leeuwenhoek developed the first optical microscope.

In this type of microscope, a magnified image of a sample

is obtained by directing light onto it and then taking the light

through a lens system. Van Leeuwenhoek’s microscope allowed

him to observe the existence of life on a scale that is invisible to the

naked eye. Since then, developments in the optical microscope have

revealed the existence of single cells, pathogenic agents, and bacteria.

There is a limit, however, to the resolving power of optical microscopes.

Known as “Abbe’s barrier,” after the German physicist and

lens maker Ernst Abbe, this limit means that objects smaller than

about 400 nanometers (about 0.4 micrometer) cannot be

viewed by conventional microscopes.

In 1925, the physicist Louis de Broglie predicted that electrons

would exhibit wave behavior as well as particle behavior. This prediction

was confirmed by Clinton J. Davisson and Lester H. Germer

of Bell Telephone Laboratories in 1927. It was found that high-energy

electrons have shorter wavelengths than low-energy electrons

and that electrons with sufficient energies exhibit wavelengths

comparable to the diameter of the atom. In 1927, Hans

Busch showed in a mathematical analysis that current-carrying

coils behave like electron lenses and that they obey the same lens

equation that governs optical lenses. Using these findings, Ernst

Ruska developed the electron microscope in the early 1930’s.

By 1944, the German corporation of Siemens and Halske had

manufactured electron microscopes with a resolution of 7 nanometers;

modern instruments are capable of resolving objects as

small as 0.5 nanometer. This development made it possible to view

structures as small as a few atoms across as well as large atoms and

large molecules.

The electron beam used in this type of microscope limits the usefulness

of the device. First, to avoid the scattering of the electrons,

the samples must be put in a vacuum, which limits the applicability

of the microscope to samples that can sustain such an environment.

Most important, some fragile samples, such as organic molecules,

are inevitably destroyed by the high-energy beams required for

high resolutions.





Viewing Atoms



From 1936 to 1955, Erwin Wilhelm Müller developed the field ion

microscope (FIM), which used an extremely sharp needle to hold the

sample. This was the first microscope to make possible the direct

viewing of atomic structures, but it was limited to samples capable of

sustaining the high electric fields necessary for its operation.

In the early 1970’s, Russell D. Young and Clayton Teague of the

National Bureau of Standards (NBS) developed the “topografiner,”

a new kind of FIM. In this microscope, the sample is placed at a large

distance from the tip of the needle. The tip is scanned across the surface

of the sample with a precision of about a nanometer. The precision

in the three-dimensional motion of the tip was obtained by using

three legs made of piezoelectric crystals. These materials change

shape in a reproducible manner when subjected to a voltage. The

extent of expansion or contraction of the crystal depends on the

amount of voltage that is applied. Thus, the operator can control the

motion of the probe by varying the voltage acting on the three legs.

The resolution of the topografiner is limited by the size of the probe.
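
The voltage-to-motion relationship can be illustrated with a minimal sketch;
the expansion coefficient used here is purely hypothetical and serves only
to show the idea of proportional, voltage-controlled displacement along
three axes:

```python
# Piezoelectric positioning sketch: each leg expands or contracts roughly in
# proportion to the voltage applied to it, so three legs give controlled
# x/y/z motion of the tip. The coefficient below is a made-up illustrative
# value, not a property of any real topografiner or STM.

NM_PER_VOLT = 2.0   # hypothetical expansion per volt, in nanometers

def leg_displacement_nm(voltage: float) -> float:
    return NM_PER_VOLT * voltage

def tip_position_nm(vx: float, vy: float, vz: float):
    """Tip displacement along x, y, z for the voltages on the three legs."""
    return (leg_displacement_nm(vx),
            leg_displacement_nm(vy),
            leg_displacement_nm(vz))

print(tip_position_nm(0.5, -0.25, 0.1))   # (1.0, -0.5, 0.2) nanometers
```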

The idea for the scanning tunneling microscope (STM) arose

when Heinrich Rohrer of the International Business Machines (IBM)

Corporation’s Zurich research laboratory met Gerd Binnig in Frankfurt

in 1978. The STM is very similar to the topografiner. In the STM,

however, the tip is kept at a height of less than a nanometer away

from the surface, and the voltage that is applied between the specimen

and the probe is low. Under these conditions, the electron

cloud of atoms at the end of the tip overlaps with the electron cloud

of atoms at the surface of the specimen. This overlapping results in a

measurable electrical current flowing through the vacuum or insulating

material existing between the tip and the sample. When the

probe is moved across the surface and the voltage between the

probe and sample is kept constant, the change in the distance between

the probe and the surface (caused by surface irregularities)

results in a change of the tunneling current.

Two methods are used to translate these changes into an image of

the surface. The first method involves changing the height of the

probe to keep the tunneling current constant; the voltage used to

change the height is translated by a computer into an image of the

surface. The second method scans the probe at a constant height

away from the sample; the voltage across the probe and sample is

changed to keep the tunneling current constant. These changes in

voltage are translated into the image of the surface. The main limitation

of the technique is that it is applicable only to conducting samples

or to samples with some surface treatment.
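
A minimal sketch of the first (constant-current) method follows; the
setpoint, gain, and toy current model are illustrative numbers rather than
values from any real instrument:

```python
# Constant-current STM imaging, schematically: at each scan position the
# tip height is adjusted until the tunneling current returns to a setpoint,
# and the record of height adjustments forms the image.
# All numbers (setpoint, gain, the toy current model) are illustrative only.

import math

SETPOINT = 1.0        # target tunneling current (arbitrary units)
GAIN = 0.1            # feedback gain (illustrative)

def tunneling_current(gap: float) -> float:
    # Tunneling current falls off roughly exponentially with tip-sample gap.
    return 5.0 * math.exp(-2.0 * gap)

def scan_line(surface_heights):
    """Return the tip-height record (the 'image') for one scan line."""
    tip_height = 1.0
    image = []
    for z_surface in surface_heights:
        for _ in range(200):  # feedback loop settles at each point
            current = tunneling_current(tip_height - z_surface)
            tip_height += GAIN * (current - SETPOINT)
        image.append(tip_height)
    return image

# The recorded tip heights track the surface profile.
print(scan_line([0.0, 0.05, 0.10, 0.05, 0.0]))
```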





Consequences



In October, 1989, the STM was successfully used in the manipulation

of matter at the atomic level. By letting the probe sink into the

surface of a metal-oxide crystal, researchers at Rutgers University

were able to dig a square hole about 250 atoms across and 10 atoms

deep. A more impressive feat was reported in the April 5, 1990, issue

of Nature; Donald M. Eigler and Erhard K. Schweizer of IBM’s Almaden Research

Center spelled out their employer’s three-letter acronym using

thirty-five atoms of xenon. This ability to move and place individual

atoms precisely raises several possibilities, which include the

creation of custom-made molecules, atomic-scale data storage, and

ultrasmall electrical logic circuits.

The success of the STM has led to the development of several

new microscopes that are designed to study other features of sample

surfaces. Although they all use the scanning probe technique to

make measurements, they use different techniques for the actual detection.

The most popular of these new devices is the atomic force

microscope (AFM). This device measures the tiny electric forces that

exist between the electrons of the probe and the electrons of the

sample without the need for electron flow, which makes the technique

particularly useful in imaging nonconducting surfaces. Other

scanned probe microscopes use physical properties such as temperature

and magnetism to probe the surfaces.







                                                   Gerd Binnig and Heinrich Rohrer









Both Gerd Binnig and Heinrich Rohrer believe an early and

pleasurable introduction to teamwork led to their later success

in inventing the scanning tunneling microscope, for which they

shared the 1986 Nobel Prize in Physics with Ernst Ruska.

Binnig was born in Frankfurt, Germany, in 1947. He acquired

an early interest in physics but was always deeply influenced

by classical music, introduced to him by his mother, and

the rock music that his younger brother played for him. Binnig

played in rock bands as a teenager and learned to enjoy the creative

interplay of teamwork. At J. W. Goethe University in

Frankfurt he earned a bachelor’s degree (1973) and doctorate

(1978) in physics and then took a position at International Business

Machines’ Zurich Research Laboratory. There he recaptured

the pleasures of working with a talented team after joining

Rohrer in research.

Rohrer had been at the Zurich facility since just after it

opened in 1963. He was born in Buch, Switzerland, in 1933, and

educated at the Swiss Federal Institute of Technology in Zurich,

where he completed his doctorate in 1960. After post-doctoral

work at Rutgers University, he joined the IBM research team, a

time that he describes as among the most enjoyable passages of

his career.

In addition to the Nobel Prize, the pair also received the German

Physics Prize, Otto Klung Prize, Hewlett Packard Prize,

and King Faisal Prize. Rohrer became an IBM Fellow in 1986

and was selected to manage the physical sciences department at

the Zurich Research Laboratory. He retired from IBM in July

1997. Binnig became an IBM Fellow in 1987.





See also: Electron microscope; Mass spectrograph; Neutrino detector.


Saturday, September 29, 2012

Salvarsan













The invention:



The first successful chemotherapeutic for the treatment
of syphilis.


The people behind the invention:


Paul Ehrlich (1854-1915), a German research physician and
chemist
Wilhelm von Waldeyer (1836-1921), a German anatomist
Friedrich von Frerichs (1819-1885), a German physician and
professor
Sahachiro Hata (1872-1938), a Japanese physician and
bacteriologist
Fritz Schaudinn (1871-1906), a German zoologist






The Great Pox


The ravages of syphilis on humankind are seldom discussed
openly. A disease that struck all varieties of people and was transmitted
by direct and usually sexual contact, syphilis was both
feared and reviled. Many segments of society across all national
boundaries were secure in their belief that syphilis was divine punishment
of the wicked for their evil ways.
It was not until 1903 that bacteriologists Élie Metchnikoff and
Pierre-Paul-Émile Roux demonstrated the transmittal of syphilis to
apes, ending the long-held belief that syphilis was exclusively a human
disease. The disease destroyed families, careers, and lives,
driving its infected victims mad, destroying the brain, or destroying
the cardiovascular system. It was methodical and slow, but in every
case, it killed with singular precision. There was no hope of a safe
and effective cure prior to the discovery of Salvarsan.
Prior to 1910, conventional treatment consisted principally of
mercury or, later, potassium iodide. Mercury, however, administered
in large doses, led to severe ulcerations of the tongue, jaws,
and palate. Swelling of the gums and loosening of the teeth resulted.
Dribbling saliva and the attending fetid odor also occurred. These
side effects of mercury treatment were so severe that many preferred
to suffer the disease to the end rather than undergo the standard
cure. About 1906, Metchnikoff and Roux demonstrated that
mercurial ointments, applied very early, at the first appearance of
the primary lesion, were effective.
Once the spirochete-type bacteria invaded the bloodstream and
tissues, the infected person experienced symptoms of varying nature
and degree—high fever, intense headaches, and excruciating
pain. The patient’s skin often erupted in pustular lesions similar in
appearance to smallpox. It was the distinguishing feature of these
pustular lesions that gave syphilis its other name: the “Great Pox.”
Death brought the only relief then available.





Poison Dyes


Paul Ehrlich became fascinated by the reactions of dyes with biological
cells and tissues while a student at the University of Strasbourg
under Wilhelm von Waldeyer. It was von Waldeyer who
sparked Ehrlich’s interest in the chemical viewpoint of medicine.
Thus, as a student, Ehrlich spent hours at this laboratory experimenting
with different dyes on various tissues. In 1878, he published
a book that detailed the discriminate staining of cells and cellular
components by various dyes.
Ehrlich joined Friedrich von Frerichs at the Charité Hospital in
Berlin, where Frerichs allowed Ehrlich to do as much research as he
wanted. Ehrlich began studying atoxyl in 1908, the year he won
jointly with Metchnikoff the Nobel Prize in Physiology or Medicine
for his work on immunity. Atoxyl was effective against trypanosome—
a parasite responsible for a variety of infections, notably
sleeping sickness—but also imposed serious side effects upon the
patient, not the least of which was blindness. It was Ehrlich’s study
of atoxyl, and several hundred derivatives sought as alternatives to
atoxyl in trypanosome treatment, that led to the development of derivative
606 (Salvarsan). Although compound 606 was the first
chemotherapeutic to be used effectively against syphilis, it was discontinued
as an atoxyl alternative and shelved as useless for five
years.
The discovery and development of compound 606 was enhanced
by two critical events. First, the Germans Fritz Schaudinn and Erich

Hoffmann discovered that syphilis is a bacterially caused disease.
The causative microorganism is a spirochete so frail and gossameric
in substance that it is nearly impossible to detect by casual microscopic
examination; Schaudinn chanced upon it one day in March,
1905. This discovery led, in turn, to German bacteriologist August
von Wassermann’s development of the now famous test for syphilis:
the Wassermann test. Second, a Japanese bacteriologist, Sahachiro
Hata, came to Frankfurt in 1909 to study syphilis with
Ehrlich. Hata had studied syphilis in rabbits in Japan. Hata’s assignment
was to test every atoxyl derivative ever developed under
Ehrlich for its efficacy in syphilis treatment. After hundreds of tests
and clinical trials, Ehrlich and Hata announced Salvarsan as a
“magic bullet” that could cure syphilis, at the April, 1910, Congress
of Internal Medicine in Wiesbaden, Germany.
The announcement was electrifying. The remedy was immediately
and widely sought, but it was not without its problems. A few deaths
resulted from its use, and it was not safe for treatment of the gravely ill.
Some of the difficulties inherent in Salvarsan were overcome by the development
of neosalvarsan in 1912 and sodium salvarsan in 1913. Although
Ehrlich achieved much, he fell short of his own assigned goal, a
chemotherapeutic that would cure in one injection.





Impact


The significance of the development of Salvarsan as an antisyphilitic
chemotherapeutic agent cannot be overstated. Syphilis at
that time was as frightening and horrifying as leprosy and was a virtual
sentence of slow, torturous death. Salvarsan was such a significant
development that Ehrlich was recommended for a 1912 and
1913 Nobel Prize for his work in chemotherapy.
It was several decades before any further significant advances in
“wonder drugs” occurred, namely, the discovery of prontosil in 1932
and its first clinical use in 1935. On the heels of prontosil—a sulfa
drug—came other sulfa drugs. The sulfa drugs would remain supreme
in the fight against bacterial infection until the antibiotics, the
first being penicillin, were discovered in 1928; however, they were
not clinically recognized until World War II (1939-1945). With the discovery
of streptomycin in 1943 and Aureomycin in 1944, the assault

against bacteria was finally on a sound basis. Medicine possessed an
arsenal with which to combat the pathogenic microbes that for centuries
before had visited misery and death upon humankind.





See also: Abortion pill; Antibacterial drugs; Artificial insemination; Birth control pill; Penicillin; Reserpine.









Friday, September 28, 2012

SAINT





The invention:



Taking its name from the acronym for symbolic automatic

integrator, SAINT is recognized as the first “expert system”—

a computer program designed to perform mental tasks requiring

human expertise.



The person behind the invention:



James R. Slagle (1934-1994), an American computer scientist









 The Advent of Artificial Intelligence



In 1944, the Harvard-IBM Mark I was completed. This was an

electromechanical (that is, not fully electronic) digital computer

that was operated by means of coding instructions punched into

paper tape. The machine took about six seconds to perform a multiplication

operation, twelve for a division operation. In the following

year, 1945, the world’s first fully electronic digital computer,

the Electronic Numerical Integrator and Calculator (ENIAC),

became operational. This machine, which was constructed at the

University of Pennsylvania, was thirty meters long, three meters

high, and one meter deep.

At the same time that these machines were being built, a similar

machine was being constructed in the United Kingdom: the Automatic

Computing Engine (ACE). A key figure in the British development

was Alan Turing, a mathematician who had used computers

to break German codes during World War II. After the war, Turing

became interested in the area of “computing machinery and intelligence.”

He posed the question “Can machines think?” and set the

following problem, which is known as the “Turing test.” This test

involves an interrogator who sits at a computer terminal and asks

questions on the terminal about a subject for which he or she seeks intelligent

answers. The interrogator does not know, however, whether

the system is linked to a human or if the responses are, in fact, generated

by a program that is acting intelligently. If the interrogator cannot

tell the difference between the human operator and the computer

system, then the system is said to have passed the Turing test

and has exhibited intelligent behavior.





SAINT: An Expert System





In the attempt to answer Turing’s question and create machines

that could pass the Turing test, researchers investigated techniques

for performing tasks that were considered to require expert levels of

knowledge. These tasks included games such as checkers, chess, and

poker. These games were chosen because the total possible number of

variations in each game was very large. This led the researchers to

several interesting questions for study. How do experts make a decision

in a particular set of circumstances? How can a problem such as

a game of chess be represented in terms of a computer program? Is it

possible to know why the system chose a particular solution?

One researcher, James R. Slagle at the Massachusetts Institute of

Technology, chose to develop a program that would be able to solve

elementary symbolic integration problems (involving the manipulation

of integrals in calculus) at the level of a good college freshman.

The program that Slagle constructed was known as SAINT, an

acronym for symbolic automatic integrator, and it is acknowledged

as the first “expert system.”

An expert system is a system that performs at the level of a human

expert. An expert system has three basic components: a knowledge

base, in which domain-specific information is held (for example, rules

on how best to perform certain types of integration problems); an inference

engine, which decides how to break down a given problem utilizing

the rules in the knowledge base; and a human-computer interface

that inputs data—in this case, the integral to be solved—and

outputs the result of performing the integration. Another feature of expert

systems is their ability to explain their reasoning.
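
A minimal sketch of this three-part structure follows, using a generic
power rule of integration as a stand-in for the domain-specific rules;
SAINT itself was written in LISP, as noted below, and the rule shown here
is illustrative rather than one of Slagle’s actual rules:

```python
# Minimal expert-system skeleton: a knowledge base of rules, an inference
# engine that tries them, and a simple input/output interface.
# The single rule here (a power rule for integration) is illustrative only.

def power_rule(expr):
    """If the integrand looks like x**n, return its antiderivative."""
    if expr.startswith("x**"):
        n = int(expr[3:])
        return f"x**{n + 1}/{n + 1}"
    return None

KNOWLEDGE_BASE = [power_rule]          # domain-specific rules live here

def inference_engine(expr):
    """Try each rule in the knowledge base until one applies."""
    for rule in KNOWLEDGE_BASE:
        result = rule(expr)
        if result is not None:
            return result, rule.__name__   # also report *why* (explanation)
    return None, "no applicable rule"

# Human-computer interface: read a problem, print the answer and reasoning.
answer, reason = inference_engine("x**3")
print(answer, "via", reason)   # x**4/4 via power_rule
```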

The integration problems that could be solved by SAINT were

in the form of elementary integral functions. SAINT could perform

indefinite integration (also called “antidifferentiation”) on these

functions. In addition, it was capable of performing definite and

indefinite integration on trivial extensions of indefinite integration.

SAINT was tested on a set of eighty-six problems, fifty-four of

which were drawn from the MIT final examinations in freshman

calculus; it succeeded in solving all but two. Slagle added more

rules to the knowledge base so that problems of the type it encountered

but could not solve could be solved in the future.

   The power of the SAINT system was, in part, based on its ability

to perform integration through the adoption of a “heuristic” processing

system. A heuristic method is one that helps in discovering a

problem’s solution by making plausible but fallible guesses about

the best strategy to apply next to the current problem situation. A

heuristic is a rule of thumb that makes it possible to take short cuts

in reaching a solution, rather than having to go through every step

in a solution path. These heuristic rules are contained in the knowledge

base. SAINT was written in the LISP programming language

and ran on an IBM 7090 computer. The program and research were

Slagle’s doctoral dissertation.
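
A minimal sketch of a heuristic in this sense follows; the cues and
suggested steps are hypothetical illustrations, not SAINT’s actual rules:

```python
# A heuristic is a plausible-but-fallible shortcut: inspect the problem and
# guess the most promising next step rather than trying every alternative.
# The cues and suggestions below are hypothetical, not SAINT's real rule set.

def suggest_next_step(integrand: str) -> str:
    if "sin" in integrand and "cos" in integrand:
        return "try the substitution u = sin(x)"
    if "exp" in integrand:
        return "try integration by parts"
    if "+" in integrand:
        return "split the integral into a sum of simpler integrals"
    return "fall back to exhaustive search of known transformations"

print(suggest_next_step("sin(x)*cos(x)"))   # try the substitution u = sin(x)
print(suggest_next_step("x*exp(x)"))        # try integration by parts
```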



 Consequences



 The SAINT system that Slagle developed was significant for several

reasons: First, it was the first serious attempt at producing a

program that could come close to passing the Turing test. Second, it

brought the idea of representing an expert’s knowledge in a computer

program together with strategies for solving complex and difficult

problems in an area that previously required human expertise.

Third, it identified the area of knowledge-based systems and

 showed that computers could feasibly be used for programs that

did not relate to business data processing. Fourth, the SAINT system

showed how the use of heuristic rules and information could

lead to the solution of problems that could not have been solved

previously because of the amount of time needed to calculate a solution.

SAINT’s major impact was in outlining the uses of these techniques,

which led to continued research in the subfield of artificial

intelligence that became known as expert systems.





  

James R. Slagle



James R. Slagle was born in 1934 in Brooklyn, New York, and

attended nearby St. John’s University. He majored in mathematics

and graduated with a bachelor of science degree in 1955,

also winning the highest scholastic average award. While earning

his master’s degree (1957) and doctorate (1961) at the Massachusetts

Institute of Technology (MIT), he was a staff mathematician

in the university’s Lincoln Laboratory.

Slagle taught in MIT’s electrical engineering department

part-time after completing his dissertation on the first expert

computer system and then moved to Lawrence Livermore

National Laboratory near Berkeley, California. While working

there he also taught at the University of California. From 1967

until 1974 he was an adjunct member of the computer science

faculty of Johns Hopkins University in Baltimore, Maryland,

and then was appointed chief of the computer science laboratory

at the Naval Research Laboratory (NRL) in Washington, D.C., receiving

the Outstanding Handicapped Federal Employee of the

Year Award in 1979. In 1984 he was made a special assistant in

the Navy Center for Applied Research in Artificial Intelligence

at NRL but left in 1984 to become Distinguished Professor of

Computer Science at the University of Minnesota.

In these various positions Slagle helped mature the fledgling

discipline of artificial intelligence, publishing the influential

book Artificial Intelligence in 1971. He developed an expert system

designed to set up other expert systems—A Generalized

Network-based Expert System Shell, or AGNESS. He also worked

on parallel expert systems, artificial neural networks, time-based

logic, and methods for uncovering causal knowledge in

large databases. He died in 1994.






Wednesday, September 26, 2012

Rotary dial telephone







The invention:



The first device allowing callers to connect their

telephones to other parties without the aid of an operator, the rotary

dial telephone preceded the touch-tone phone.



The people behind the invention:



Alexander Graham Bell (1847-1922), an American inventor

Antoine Barnay (1883-1945), a French engineer

Elisha Gray (1835-1901), an American inventor







Rotary Telephone Dials Make Phone Linkups Automatic





The telephone uses electricity to carry sound messages over long

distances. When a call is made from a telephone set, the caller

speaks into a telephone transmitter and the resultant sound waves

are converted into electrical signals. The electrical signals are then

transported over a telephone line to the receiver of a second telephone

set that was designated when the call was initiated. This receiver

reverses the process, converting the electrical signals into the

sounds heard by the recipient of the call. The process continues as

the parties talk to each other.

The telephone was invented in the 1870’s and patented in 1876 by

Alexander Graham Bell. Bell’s patent application barely preceded

an application submitted by his competitor Elisha Gray. After a

heated patent battle between Bell and Gray, which Bell won, Bell

founded the Bell Telephone Company, which later came to be called

the American Telephone and Telegraph Company.

At first, the transmission of phone calls between callers and recipients

was carried out manually, by switchboard operators. In

1923, however, automation began with Antoine Barnay’s development

of the rotary telephone dial. This dial caused the emission of

variable electrical impulses that could be decoded automatically

and used to link the telephone sets of callers and call recipients. In

time, the rotary dial system gave way to push-button dialing and

other more modern networking techniques.





Telephones, Switchboards, and Automation



The carbon transmitter, which is still used in many modern telephone

sets, was the key to the development of the telephone by Alexander

Graham Bell. This type of transmitter—and its more modern

replacements—operates like an electric version of the human

ear. When a person talks into the telephone set in a carbon transmitter-

equipped telephone, the sound waves that are produced strike

an electrically connected metal diaphragm and cause it to vibrate.

The speed of vibration of this electric eardrum varies in accordance

with the changes in air pressure caused by the changing tones of the

speaker’s voice.

Behind the diaphragm of a carbon transmitter is a cup filled with

powdered carbon. As the vibrations cause the diaphragm to press

against the carbon, the electrical signals—electrical currents of varying

strength—pass out of the instrument through a telephone wire.

Once the electrical signals reach the receiver of the phone being

called, they activate electromagnets in the receiver that make a second

diaphragm vibrate. This vibration converts the electrical signals

into sounds that are very similar to the sounds made by the person

who is speaking. Therefore, a telephone receiver may be viewed

as an electric mouth.

In modern telephone systems, transportation of the electrical signals

between any two phone sets requires the passage of those signals

through vast telephone networks consisting of huge numbers

of wires, radio systems, and other media. The linkup of any two

phone sets was originally, however, accomplished manually—on a

relatively small scale—by a switchboard operator who made the

necessary connections by hand. In such switchboard systems, each

telephone set in the network was associated with a jack connector in

the switchboard. The operator observed all incoming calls, identified

the phone sets for which they were intended, and then used

wires to connect the appropriate jacks. At the end of the call, the

jacks were disconnected.

This cumbersome methodology limited the size and efficiency of

telephone networks and invaded the privacy of callers. The development

of automated switching systems soon solved these problems

and made switchboard operators obsolete. It was here that

Antoine Barnay’s rotary dial was used, making possible an exchange

that automatically linked the phone sets of callers and call

recipients in the following way.

First, a caller lifted a telephone “off the hook,” causing a switchhook,

like those used in modern phones, to close the circuit that connected

the telephone set to the telephone network. Immediately, a

dial tone (still familiar to callers) came on to indicate that the automatic

switching system could handle the planned call. When the

phone dial was used, each number or letter that was dialed produced

a fixed number of clicks. Every click indicated that an electrical

pulse had been sent to the network’s automatic switching system,

causing switches to change position slightly. Immediately after

a complete telephone number was dialed, the overall operation of

the automatic switchers connected the two telephone sets. This connection

was carried out much more quickly and accurately than had

been possible when telephone operators at manual switchboards

made the connection.
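
A minimal sketch of this digit-to-pulse idea follows, assuming the common
convention of one pulse for “1” through ten pulses for “0”; the code is an
illustration of the principle rather than a description of any particular
exchange:

```python
# Pulse (rotary) dialing sketch: each dialed digit is sent as a burst of
# electrical pulses that step the exchange's automatic switches.
# Mapping assumed here (most systems): digit N -> N pulses, "0" -> 10 pulses.

def pulses_for_digit(digit: str) -> int:
    return 10 if digit == "0" else int(digit)

def dial(number: str):
    """Return the pulse bursts the exchange would count for each digit."""
    return [pulses_for_digit(d) for d in number if d.isdigit()]

print(dial("416-555-0123"))
# [4, 1, 6, 5, 5, 5, 10, 1, 2, 3]
```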





 Impact



The telephone has become the world’s most important communication

device. Most adults use it between six and eight times per

day, for personal and business calls. This widespread use has developed

because huge changes have occurred in telephones and telephone

networks. For example, automatic switching and the rotary

dial system were only the beginning of changes in phone calling.



Touch-tone dialing replaced Barnay’s electrical pulses with audio

tones outside the frequency of human speech. This much-improved

system can be used to send calls over much longer distances than

was possible with the rotary dial system, and it also interacts well

with both answering machines and computers.

Another advance in modern telephoning is the use of radio

transmission techniques in mobile phones, rendering telephone

cords obsolete. The mobile phone communicates with base stations

arranged in “cells” throughout the service area covered. As the user

changes location, the phone link automatically moves from cell to

cell in a cellular network.

In addition, the use of microwave, laser, and fiber-optic technologies

has helped to lengthen the distance over which phone calls can

be transmitted. These technologies have also increased the number

of messages that phone networks can handle simultaneously and

have made it possible to send radio and television programs (such

as cable television), scientific data (via modems), and written messages

(via facsimile, or “fax,” machines) over phone lines. Many

other advances in telephone technology are expected as society’s

needs change and new technology is developed.







                                                       Alexander Graham Bell











 During the funeral for Alexander Graham Bell in 1922, telephone

service throughout the United States stopped for one

minute to honor him. To most people he was the inventor of the

telephone. In fact, his genius ranged much further.

Bell was born in Edinburgh, Scotland, in 1847. His father,

an elocutionist who invented a phonetic alphabet, and his

mother, who was deaf, imbued him with deep curiosity, especially

about sound. As a boy Bell became an exceptional pianist,

and he produced his first invention, for cleaning wheat, at

fourteen. After Edinburgh’s Royal High School, he attended

classes at Edinburgh University and University College, London,

but at the age of twenty-three, battling tuberculosis, he

left school to move with his parents to Ontario, Canada, to

convalesce. Meanwhile, he worked on his idea for a telegraph

capable of sending multiple messages at once. From it grew

the basic concept for the telephone. He developed it while

teaching Visible Speech at the Boston School for Deaf Mutes

after 1871. Assisted by Thomas Watson, he succeeded in sending

speech over a wire and was issued a patent for his device,

among the most valuable ever granted, in 1876. His demonstration

of the telephone later that year at Philadelphia’s

Centennial Exhibition and its subsequent development into a

household appliance brought him wealth and fame.

He moved to Nova Scotia, Canada, and continued inventing.

He created a photophone, tetrahedron modules for construction,

and an airplane, the Silver Dart, which flew in 1909.

Even though existing technology made them impracticable,

some of his ideas anticipated computers and magnetic sound

recording. His last patented invention, tested three years before

his death, was a hydrofoil. Capable of reaching seventy-one

miles per hour and freighting fourteen thousand pounds, the

HD-4 was then the fastest watercraft in the world.

Bell also helped found the National Geographic Society in

1888 and became its president in 1898. He hired Gilbert Grosvenor

to edit the society’s famous magazine, National Geographic,

and together they planned the format—breathtaking

photography and vivid writing—that made it one of the world’s

best known magazines.




