Wednesday, September 30, 2009
Pap test
The invention: A cytologic technique for diagnosing uterine cancer,
the second most common fatal cancer in American women.
The people behind the invention:
George N. Papanicolaou (1883-1962), a Greek-born American
physician and anatomist
Charles Stockard (1879-1939), an American anatomist
Herbert Traut (1894-1972), an American gynecologist
Cancer in History
Cancer, first named by the ancient Greek physician Hippocrates
of Cos, is one of the most painful and dreaded forms of human disease.
It occurs when body cells proliferate uncontrollably and interfere with the normal
activities of the body. Early diagnosis is extremely important
because it often makes successful treatment possible. The modern detection of cancer is usually done by
the microscopic examination of cancer cells, using the techniques
of the branch of biology called “cytology,” or cell biology.
Development of cancer cytology began in 1867, after L. S. Beale
reported tumor cells in the saliva from
a patient who was afflicted
with cancer of the pharynx. Beale recommended that cancer detection
make use of the microscopic examination of cells shed or removed (exfoliated)
from organs including the digestive, the urinary, and the
reproductive tracts. Soon, other scientists identified numerous striking
differences between cancerous and normal cells, including cell size and shape
and the size and complexity of cell nuclei.
Modern cytologic detection of cancer evolved from the work of
George N. Papanicolaou, a Greek physician who trained at the University
of Athens Medical School. In 1913, he emigrated to the
United States.
In 1917, he began studying sex determination of guinea pigs with
Charles Stockard at New York’s Cornell Medical College. Papanicolaou’s
efforts required him to obtain ova (egg cells) at a precise
period in their maturation cycle, a process that required an indicator
of the time at which the animals ovulated. In search of this indicator,
Papanicolaou designed a method that involved microscopic examination
of the vaginal discharges from female guinea pigs.
Initially, Papanicolaou sought traces of blood, such as those
seen in the menstrual discharges of primates, including humans.
Papanicolaou found no blood in the guinea pig vaginal discharges.
Instead, he noticed changes in the size and the shape of the uterine
cells shed in these discharges. These changes recurred in a fifteen-
to sixteen-day cycle that correlated well with the guinea pig estrous
cycle.
“New Cancer Detection Method”
Papanicolaou next extended his efforts to the study of humans.
This endeavor was designed originally to identify whether comparable
changes in the exfoliated cells of the human vagina occurred
in women. Its goal was to gain an understanding of the human menstrual
cycle. In the course of this work, Papanicolaou observed distinctive
abnormal cells in the vaginal fluid from a woman afflicted
with cancer of the cervix. This observation led him to attempt to develop
a cytologic method for the detection of uterine cancer, the second
most common type of fatal cancer in American women of the
time.
In 1928, Papanicolaou published his cytologic method of cancer
detection in the Proceedings of the Third Race Betterment Conference,
held in Battle Creek, Michigan. The work was received well by the
news media (for example, the January 5, 1928, New YorkWorld credited
him with a “new cancer detection method”). Nevertheless, the
publication—and others he produced over the next ten years—was
not very interesting to gynecologists of the time. Rather, they preferred
use of the standard methodology of uterine cancer diagnosis
(cervical biopsy and curettage).
Consequently, in 1932, Papanicolaou turned his energy toward
studying human reproductive endocrinology problems related to
the effects of hormones on cells of the reproductive system. One example
of this work was published in a 1933 issue of The American
Journal of Anatomy, where he described “the sexual cycle in the human
female.” Other such efforts led to a better understanding of reproductive conditions such as amenorrhea and menopause.
It was not until Papanicolaou’s collaboration with gynecologist
Herbert Traut (beginning in 1939), which led to the publication of
Diagnosis of Uterine Cancer by the Vaginal Smear (1943), that clinical
acceptance of the method began to develop. Their monograph documented
an impressive, irrefutable group of studies of both normal
and disease states that included nearly two hundred cases of cancer
of the uterus.
Soon, many other researchers began to confirm these findings;
by 1948, the newly named American Cancer Society noted that the
“Pap” smear seemed to be a very valuable tool for detecting uterine
cancer. Wide acceptance of the Pap test followed, and, beginning
in 1947, hundreds of physicians from all over the world
flocked to Papanicolaou’s course on the subject. They learned his
smear/diagnosis techniques and disseminated them around the
world.
Impact
The Pap test has been cited by many physicians as being the most
significant and useful modern discovery in the field of cancer research.
One way of measuring its impact is the realization that the
test allows the identification of uterine cancer in the earliest stages,
long before other detection methods can be used. Moreover, because
of resultant early diagnosis, the disease can be cured in more
than 80 percent of all cases identified by the test. In addition, Pap
testing allows the identification of cancer of the uterine cervix so
early that its cure rate can be nearly 100 percent.
Papanicolaou extended the use of the smear technique from
examination of vaginal discharges to diagnosis of cancer in many
other organs from which scrapings, washings, and discharges
can be obtained. These tissues include the colon, the kidney, the
bladder, the prostate, the lung, the breast, and the sinuses. In
most cases, such examination of these tissues has made it possible
to diagnose cancer much sooner than is possible by using
other existing methods. As a result, the smear method has become
a basis of cancer control in national health programs throughout the
world.
Tuesday, September 29, 2009
Pacemaker
The invention: A small device using transistor circuitry that regulates
the heartbeat of the patient in whom it is surgically implanted.
The people behind the invention:
Ake Senning (1915- ), a Swedish physician
Rune Elmquist, co-inventor of the first pacemaker
Paul Maurice Zoll (1911- ), an American cardiologist
Cardiac Pacing
The fundamentals of cardiac electrophysiology (the electrical activity
of the heart) were determined during the eighteenth century;
the first successful cardiac resuscitation by electrical stimulation occurred
in 1774. The use of artificial pacemakers for resuscitation was
demonstrated in 1929 by Mark Lidwell. Lidwell and his
coworkers
developed a portable apparatus that could be connected to a power
source. The pacemaker was used successfully on several stillborn
infants after other methods of resuscitation failed. Nevertheless,
these early machines were unreliable.
Ake Senning’s first experience with the effect of electrical stimulation
on cardiac physiology was memorable; grasping a radio
ground wire, Senning felt a brief episode of ventricular arrhythmia
(irregular heartbeat). Later, he was able to apply a similar electrical
stimulation to control a heartbeat during surgery.
The principle of electrical regulation of the heart was valid. It was
shown that pacemakers introduced intravenously into the sinus
node area of a dog’s heart could be used to control the heartbeat
rate. Although Paul Maurice Zoll utilized a similar apparatus in
several patients with cardiac arrhythmia, it was not appropriate for
extensive clinical use; it was large and often caused unpleasant sensations
or burns. In 1957, however, Ake Senning observed that attaching
stainless steel electrodes to a child’s heart made it possible
to regulate the heart’s rate of contraction. Senning considered this to
represent the beginning of the era of clinical pacing.
Development of Cardiac Pacemakers
Senning’s observations of the successful use of the cardiac pacemaker
had allowed him to identify the problems inherent in the device.
He realized that the attachment of the device to the lower, ventricular
region of the heart made possible more reliable control, but
other problems remained unsolved. It was inconvenient, for example,
to carry the machine externally; a cord was wrapped around the
patient that allowed the pacemaker to be recharged, which had to be
done frequently. Also, for unknown reasons, heart resistance would
increase with use of the pacemaker, which meant that increasingly
large voltages had to be used to stimulate the heart. Levels as high
as 20 volts could cause quite a “start” in the patient. Furthermore,
there was a continuous threat of infection.
In 1957, Senning and his colleague Rune Elmquist developed a
pacemaker that was powered by rechargeable nickel-cadmium batteries,
which had to be recharged once a month. Although Senning
and Elmquist did not yet consider the pacemaker ready for human
testing, fate intervened. A forty-three-year-old man was admitted to
the hospital suffering from an atrioventricular block, an inability of
the electrical stimulus to travel along the conductive fibers of the
“bundle of His” (a band of cardiac muscle fibers). As a result of this
condition, the patient required repeated cardiac resuscitation. Similar
types of heart block were associated with a mortality rate higher
than 50 percent per year and nearly 95 percent over five years.
Senning implanted two pacemakers (one failed) into the myocardium
of the patient’s heart, one of which provided a regulatory
rate of 64 beats per minute. Although the pacemakers required periodic
replacement, the patient remained alive and active for twenty
years. (He later became president of the Swedish Association for
Heart and Lung Disease.)
During the next five years, the development of more reliable and
more complex pacemakers continued, and implanting the pacemaker
through the vein rather than through the thorax made it simpler
to use the procedure. The first pacemakers were of the “asynchronous”
type, which generated a regular charge that overrode the
natural pacemaker in the heart. The rate could be set by the physician
but could not be altered if the need arose. In 1963, an atrial-triggered synchronous pacemaker was installed by a Swedish team.
The advantage of this apparatus lay in its ability to trigger a heart
contraction only when the normal heart rhythm was interrupted.
Most of these pacemakers contained a sensing device that detected
the atrial impulse and generated an electrical discharge only when
the heart rate fell below 68 to 72 beats per minute.
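The demand-pacing behavior described above can be sketched as a small program. The Python fragment below is only an illustrative model under stated assumptions (a 70-beats-per-minute threshold inside the 68-to-72 range quoted above, and invented function names); it is not the logic of any actual device.

# Illustrative model of "demand" pacing: a pulse is delivered only when
# no natural beat has been sensed within the escape interval that
# corresponds to the minimum allowed heart rate.
MIN_RATE_BPM = 70                        # assumed threshold within the 68-72 bpm range
ESCAPE_INTERVAL = 60.0 / MIN_RATE_BPM    # longest allowed gap between beats, in seconds

def demand_pacer(sensed_beat_times):
    """Return the times at which the modeled pacemaker would fire,
    given the times (in seconds) of naturally sensed beats."""
    paced = []
    last_event = 0.0
    for t in sorted(sensed_beat_times):
        # Fill any gap longer than the escape interval with paced pulses.
        while t - last_event > ESCAPE_INTERVAL:
            last_event += ESCAPE_INTERVAL
            paced.append(round(last_event, 3))
        last_event = t
    return paced

# Example: the natural rhythm pauses between 2.0 s and 6.0 s,
# so the model paces through the gap at roughly 0.86 s intervals.
print(demand_pacer([1.0, 2.0, 6.0, 7.0]))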
The biggest problems during this period lay in the size of the
pacemaker and the short life of the battery. When a battery ran down and
the pacing impulse ceased, the patient sometimes died. In addition,
the most reliable method of checking the energy level of the
battery was to watch for a decreased pulse rate. As improvements
were made in electronics, the pacemaker became smaller, and in
1972, the more reliable lithium-iodine batteries were introduced.
These batteries made it possible to store more energy and to monitor
the energy level more effectively. The use of this type of power
source essentially eliminated the battery as the limiting factor in the
longevity of the pacemaker. The period of time that a pacemaker
could operate continuously in the body increased from a period of
days in 1958 to five to ten years by the 1970’s.
Consequences
The development of electronic heart pacemakers revolutionized
cardiology. Although the initial machines were used primarily to
control cardiac bradycardia, the often life-threatening slowing of
the heartbeat, a wide variety of arrhythmias and problems with cardiac
output can now be controlled through the use of these devices.
The success associated with the surgical implantation of pacemakers
is attested by the frequency of its use. Prior to 1960, only three
pacemakers had been implanted. During the 1990’s, however, some
300,000 were implanted each year throughout the world. In the
United States, the prevalence of implants is on the order of 1 per
1,000 persons in the population.
Pacemaker technology continues to improve. Newer models can
sense pH and oxygen levels in the blood, as well as respiratory rate.
They have become further sensitized to minor electrical disturbances
and can adjust accordingly. The use of easily sterilized circuitry
has eliminated the danger of infection. Once the pacemaker has been installed in the patient, the basic electronics require no additional
attention. With the use of modern pacemakers, many forms
of electrical arrhythmias need no longer be life-threatening.
Monday, September 28, 2009
Orlon
The invention: A synthetic fiber made from polyacrylonitrile that
has become widely used in textiles and in the preparation of
high-strength carbon fibers.
The people behind the invention:
Herbert Rein (1899-1955), a German chemist
Ray C. Houtz (1907- ), an American chemist
A Difficult Plastic
“Polymers” are large molecules that are made up of chains of
many smaller molecules, called “monomers.” Materials that are
made of polymers are also called polymers,
and some polymers,
such as proteins, cellulose, and starch, occur in nature. Most polymers,
however, are synthetic materials, which means that they were
created by scientists.
The twenty-year period beginning in 1930 was the age of great
discoveries in polymers by both chemists and engineers. During
this time, many of the synthetic polymers, which are also known as
plastics, were first made and their uses found. Among these polymers
were nylon, polyester, and polyacrylonitrile. The last of these
materials, polyacrylonitrile (PAN), was first synthesized by German
chemists in the late 1920’s. They linked more than one thousand
of the small, organic molecules of acrylonitrile to make a polymer.
The polymer chains of this material had the properties that
were needed to form strong fibers, but there was one problem. Instead
of melting when heated to a high temperature, PAN simply
decomposed. This made it impossible, with the technology that existed
then, to make fibers.
The best method available to industry at that time was the process
of melt spinning, in which fibers were made by forcing molten
polymer through small holes and allowing it to cool. Researchers realized
that, if PAN could be put into a solution, the same apparatus
could be used to spin PAN fibers. Scientists in Germany and the
United States tried to find a solvent or liquid that would dissolve
PAN, but they were unsuccessful until World War II began.
Fibers for War
In 1938, the German chemist Walter Reppe developed a new
class of organic solvents called “amides.” These new liquids were
able to dissolve many materials, including some of the recently discovered
polymers. When World War II began in 1939, both the Germans
and the Allies needed to develop new materials for the war effort.
Materials such as rubber and fibers were in short supply. Thus,
there was increased governmental support for chemical and industrial
research on both sides of the war. This support was to result in
two independent solutions to the PAN problem.
In 1942, Herbert Rein, while working for I. G. Farben in Germany,
discovered that PAN fibers could be produced from a solution of
polyacrylonitrile dissolved in the newly synthesized solvent dimethylformamide.
At the same time Ray C. Houtz, who was working for E.
I. Du Pont de Nemours in Wilmington, Delaware, found that the related
solvent dimethylacetamide would also form excellent PAN fibers.
His work was patented, and some fibers were produced for use
by the military during the war. In 1950, Du Pont began commercial
production of a form of polyacrylonitrile fibers called Orlon. The
Monsanto Company followed with a fiber called Acrilon in 1952, and
other companies began to make similar products in 1958.
There are two ways to produce PAN fibers. In both methods,
polyacrylonitrile is first dissolved in a suitable solvent. The solution
is next forced through small holes in a device called a “spinneret.”
The solution emerges from the spinneret as thin streams of a thick,
gooey liquid. In the “wet spinning method,” the streams then enter
another liquid (usually water or alcohol), which extracts the solvent
from the solution, leaving behind the pure PAN fiber. After air drying,
the fiber can be treated like any other fiber. The “dry spinning
method” uses no liquid. Instead, the solvent is evaporated from the
emerging streams by means of hot air, and again the PAN fiber is left
behind.
In 1944, another discovery was made that is an important part of
the polyacrylonitrile fiber story. W. P. Coxe of Du Pont and L. L.
Winter at Union Carbide Corporation found that, when PAN fibers
are heated under certain conditions, the polymer decomposes and
changes into graphite (one of the elemental forms of carbon) but still
keeps its fiber form. In contrast to most forms of graphite, these fibers
were exceptionally strong. These were the first carbon fibers
ever made. Originally known as “black Orlon,” they were first produced
commercially by the Japanese in 1964, but they were too
weak to find many uses. After new methods of graphitization were
developed jointly by labs in Japan, Great Britain, and the United
States, the strength of the carbon fibers was increased, and the fibers
began to be used in many fields.
Impact
As had been predicted earlier, PAN fibers were found to have
some very useful properties. Their discovery and commercialization
helped pave the way for the acceptance and wide use of polymers.
The fibers derive their properties from the stiff, rodlike structure
of polyacrylonitrile. Known as acrylics, these fibers are more
durable than cotton, and they are the best alternative to wool for
sweaters. Acrylics are resistant to heat and chemicals, can be dyed
easily, resist fading or wrinkling, and are mildew-resistant. Thus, after
their introduction, PAN fibers were very quickly made into
yarns, blankets, draperies, carpets, rugs, sportswear, and various
items of clothing. Often, the fibers contain small amounts of other
polymers that give them additional useful properties.
A significant amount of PAN fiber is used in making carbon fibers.
These lightweight fibers are stronger for their weight than any
known material, and they are used to make high-strength composites
for applications in aerospace, the military, and sports. A “fiber
composite” is a material made from two parts: a fiber, such as carbon
or glass, and something to hold the fibers together, which is
usually a plastic called an “epoxy.” Fiber composites are used in
products that require great strength and light weight. Their applications
can be as ordinary as a tennis racket or fishing pole or as exotic
as an airplane tail or the body of a spacecraft.
Thursday, September 24, 2009
Optical disk
The invention: A nonmagnetic storage medium for computers that
can hold much greater quantities of data than similar-size magnetic
media, such as hard and floppy disks.
The people behind the invention:
Klaas Compaan, a Dutch physicist
Piet Kramer, head of Philips’ optical research laboratory
Lou F. Ottens, director of product development for Philips’
musical equipment division
George T. de Kruiff, manager of Philips’ audio-product
development department
Joop Sinjou, a Philips project leader
Holograms Can Be Copied Inexpensively
Holography is a lensless photographic method that uses laser
light to produce three-dimensional images. This is done by splitting
a laser beam into two beams. One of the beams
is aimed at the object
whose image is being reproduced so that the laser light will reflect
from the object and strike a photographic plate or film. The second
beam of light is reflected from a mirror near the object and also
strikes the photographic plate or film. The “interference pattern,”
which is simply the pattern created by the differences between the
two reflected beams of light, is recorded on the photographic surface.
The recording that is made in this way is called a “hologram.”
When laser light or white light strikes the hologram, an image is created
that appears to be a three-dimensional object.
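The interference pattern mentioned above can be written compactly. The relation below is the standard textbook description of hologram recording rather than anything stated in this article; O and R stand for the object and reference waves arriving at the photographic plate.

\[
I \;=\; |O + R|^{2} \;=\; |O|^{2} + |R|^{2} + O R^{*} + O^{*} R .
\]

The two cross terms record the phase of the object wave. When the developed plate is later illuminated by the reference beam, the transmitted light contains a term proportional to O|R|^{2}, which re-creates the original object wave and therefore the three-dimensional image.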
Early in 1969, Radio Corporation of America (RCA) engineers
found a way to copy holograms inexpensively by impressing interference
patterns on a nickel sheet that then became a mold from
which copies could be made. Klaas Compaan, a Dutch physicist,
learned of this method and had the idea that images could be recorded
in a similar way and reproduced on a disk the size of a phonograph
record. Once the images were on the disk, they could be
projected onto a screen in any sequence. Compaan saw the possibilities
of such a technology in the fields of training and education.
Computer Data Storage Breakthrough
In 1969, Compaan shared his idea with Piet Kramer, who was the
head of Philips’ optical research laboratory. The idea intrigued
Kramer. Between 1969 and 1971, Compaan spent much of his time
working on the development of a prototype.
By September, 1971, Compaan and Kramer, together with a handful
of others, had assembled a prototype that could read a black-and-white
video signal from a spinning glass disk. Three months
later, they demonstrated it for senior managers at Philips. In July,
1972, a color prototype was demonstrated publicly. After the demonstration,
Philips began to consider putting sound, rather than images,
on the disks. The main attraction of that idea was that the 12-
inch (305-millimeter) disks would hold up to forty-eight hours of
music. Very quickly, however, Lou F. Ottens, director of product development
for Philips’ musical equipment division, put an end to
any talk of a long-playing audio disk.
Ottens had developed the cassette-tape cartridge in the 1960’s.
He had plenty of experience with the recording industry, and he had
no illusions that the industry would embrace that new medium. He
was convinced that the recording companies would consider forty-eight
hours of music unmarketable. He also knew that any new
medium would have to offer a dramatic improvement over existing
vinyl records.
In 1974, only three years after the first microprocessor (the basic
element of computers) was invented, designing a digital consumer
product—rather than an analog product such as those that were already
commonly accepted—was risky. (Digital technology uses
numbers to represent information, whereas analog technology represents
information by mechanical or physical means.) When
George T. de Kruiff became Ottens’s manager of audio-product
development in June, 1974, he was amazed that there were no
digital circuit specialists in the audio department. De Kruiff recruited
new digital engineers, bought computer-aided design
tools, and decided that the project should go digital.
Within a few months, Ottens’s engineers had rigged up a digital
system. They used an audio signal that was representative of an
acoustical wave, sampled it to change it to digital form,
and encoded it as a series of pulses.
On the disk itself, they varied the
length of the “dimples” that were used to represent the sound so
that the rising and falling edges of the series of pulses corresponded
to the dimples’ walls. A helium-neon laser was reflected from
the dimples to photodetectors that were connected to a digital-toanalog
converter.
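The sampling and pulse-encoding step described above is what is now called pulse-code modulation. The short Python sketch below is an illustration only; the 44,100-hertz sampling rate and 16-bit resolution are the figures later adopted for the compact disc standard, not details given in this article, and the function names are invented.

import math

SAMPLE_RATE = 44_100   # samples per second (assumed; the rate later chosen for CD audio)
BITS = 16              # bits per sample (assumed)
LEVELS = 2 ** BITS

def encode_pcm(signal, duration):
    """Sample an analog signal (a function of time returning values in -1..1)
    and quantize each sample to one of 2**BITS integer levels."""
    n_samples = int(SAMPLE_RATE * duration)
    codes = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        value = max(-1.0, min(1.0, signal(t)))
        # Map the range -1..1 onto the available integer levels.
        codes.append(int((value + 1.0) / 2.0 * (LEVELS - 1)))
    return codes

# Example: one millisecond of a 1,000 Hz test tone.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
print(encode_pcm(tone, 0.001)[:8])

On the disc itself, each resulting string of bits is then represented physically: as the passage above explains, the lengths of the dimples are chosen so that their walls line up with the rising and falling edges of the pulse train.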
In 1978, Philips demonstrated a prototype for Polygram (a West
German company) and persuaded Polygram to develop an inexpensive
disk material with the appropriate optical qualities. Most
important was that the material could not warp. Polygram spent
about $150,000 and three months to develop the disk. In addition, it
was determined that the gallium-arsenide (GaAs) laser would be
used in the project. Sharp Corporation agreed to manufacture a
long-life GaAs diode laser to Philips’ specifications.
The optical-system designers wanted to reduce the number
of parts in order to decrease manufacturing costs and improve
reliability. Therefore, the lenses were simplified and considerable
work was devoted to developing an error-correction code.
Philips and Sony engineers also worked together to create a standard
format. In 1983, Philips made almost 100,000 units of optical
disks.
Consequences
In 1983, one of the most successful consumer products of all time
was introduced: the optical-disk system. The overwhelming success
of optical-disk reproduction led to the growth of a multibillion-dollar
industry around optical information and laid the groundwork
for a whole crop of technologies that promise to revolutionize computer
data storage. Common optical-disk products are the compact
disc (CD), the compact disc read-only memory (CD-ROM), the
write-once, read-many (WORM) disk, the erasable optical disk, and CD-I (interactive CD).
The CD-ROM, the WORM, and the erasable optical disk, all of
which are used in computer applications, can hold more than 550
megabytes, from 200 to 800 megabytes, and 650 megabytes of data,
respectively.
The CD-ROM is a nonerasable disc that is used to store computer
data. After the write-once operation is performed, a WORM becomes
a read-only optical disk. An erasable optical disk can be
erased and rewritten easily. CD-ROMs, coupled with expert-system
technology, are expected to make data retrieval easier. The CD-ROM,
the WORM, and the erasable optical disk may replace magnetic
hard and floppy disks as computer data storage devices.
Tuesday, September 22, 2009
Oil-well drill bit
The invention: A rotary drill bit with cone-shaped rolling cutters that made it possible for oil-well drilling to penetrate hard rock formations.
The people behind the invention:
Howard R. Hughes (1869-1924), an American lawyer, drilling
engineer, and inventor
Walter B. Sharp (1860-1912), an American drilling engineer,
inventor, and partner to Hughes
Digging for Oil
A rotary drill rig of the 1990’s is basically unchanged in its essential
components from its earlier versions of the 1900’s. A drill bit is
attached to a line of hollow drill pipe. The latter passes through a
hole on a rotary table, which acts essentially as a horizontal gear
wheel and is driven by an engine. As the rotary table turns, so do the
pipe and drill bit.
During drilling operations, mud-laden water is pumped under
high pressure down the sides of the drill pipe and jets out with great
force through the small holes in the rotary drill bit against the bottom
of the borehole. This fluid then returns outside the drill pipe to
the surface, carrying with it rock material cuttings from below. Circulated
rock cuttings and fluids are regularly examined at the surface
to determine the precise type and age of rock formation and for
signs of oil and gas.
A key part of the total rotary drilling system is the drill bit, which
has sharp cutting edges that make direct contact with the geologic
formations to be drilled. The first bits used in rotary drilling were
paddlelike “fishtail” bits, fairly successful for softer formations, and
tubular coring bits for harder surfaces. In 1893, M. C. Baker and C. E.
Baker brought a rotary water-well drill rig to Corsicana, Texas, for
modification to deeper oil drilling. This rig led to the discovery of
the large Corsicana-Powell oil field in Navarro County, Texas. This
success also motivated its operators, the American Well and Prospecting
Company, to begin the first large-scale manufacture of rotary
drilling rigs for commercial sale. In the earliest rotary drilling for oil, short fishtail bits were the
tool of choice, insofar as they were at that time the best at being able
to bore through a wide range of geologic strata without needing frequent
replacement. Even so, many bits were typically required in the course
of any given oil well, particularly in coastal drilling in the Gulf of
Mexico. Especially when encountering locally harder rock units
such as limestone, dolomite, or gravel beds, fishtail bits would typically
either curl backward or break off in the hole, requiring the
time-consuming work of pulling out all drill pipe and “fishing” to
retrieve fragments and clear the hole.
Because of the frequent bit wear and damage, numerous small
blacksmith shops established themselves near drill rigs, dressing or
sharpening bits with a hand forge and hammer. Each bit-forging
shop had its own particular way of shaping bits, producing a wide
variety of designs. Nonstandard bit designs were frequently modified
further as experiments to meet the specific requests of local drillers
encountering specific drilling difficulties in given rock layers.
Speeding the Process
In 1907 and 1908, patents were obtained in New Jersey and
Texas for steel, cone-shaped drill bits incorporating a roller-type
coring device with many serrated teeth. Later in 1908, both patents
were bought by lawyer Howard R. Hughes.
Although comparatively weak rocks such as sands, clays, and
soft shales could be drilled rapidly (at rates exceeding 30 meters per
hour), in harder shales, lime-dolostones, and gravels, drill rates of 1
meter per hour or less were not uncommon. Conventional drill bits
of the time had average operating lives of three to twelve hours.
Economic drilling mandated increases in both bit life and drilling
rate. Directly motivated by his petroleum prospecting interests,
Hughes and his partner, Walter B. Sharp, undertook what were
probably the first recorded systematic studies of drill bit performance
while matched against specific rock layers.
Although many improvements in detail and materials have been
made to the Hughes cone bit since its inception in 1908, its basic design
is still used in rotary drilling. One of Hughes’s major innovations
was the much larger size of the cutters, symmetrically distributed as a large number of small individual teeth on the outer face of
two or more cantilevered bearing pins. In addition, “hard facing”
was applied to the drill bit teeth to increase their usable life. Hard facing is
a metallurgical process that basically consists of welding a thin layer
of a hard metal or alloy of special composition to a metal surface to
increase its resistance to abrasion and heat. A less noticeable but
equally essential innovation, not included in other drill bit patents, was an ingeniously designed gauge surface that provided strong
uniform support for all the drill teeth. The force-fed oil lubrication
was another new feature included in Hughes’s patent and prototypes,
reducing the power necessary to rotate the bit by 50 percent
over that of prior mud or water lubricant designs.
Impact
In 1925, the first superhard facing was used on cone drill bits. In
addition, the first so-called self-cleaning rock bits appeared from
Hughes, with significant advances in roller bearings and bit tooth
shape translating into increased drilling efficiency. The much larger
teeth were more adaptable to drilling in a wider variety of geological
formations than earlier models. In 1928, tungsten carbide was
introduced as an additional bit facing hardener by Hughes metallurgists.
This, together with other improvements, resulted in the
Hughes ACME tooth form, which has been in almost continuous
use since 1926.
Many other drilling support technologies, such as drilling mud,
mud circulation pumps, blowout detectors and preventers, and
pipe properties and connectors have enabled rotary drilling rigs to
reach new depths (exceeding 5 kilometers in 1990). The successful
experiments by Hughes in 1908 were critical initiators of these developments.
Nylon
The invention: A resilient, high-strength polymer with applications
ranging from women’s hose to safety nets used in space flights.
The people behind the invention:
Wallace Hume Carothers (1896-1937), an American organic chemist
Charles M. A. Stine (1882-1954), an American chemist and director of chemical research at Du Pont
Elmer Keiser Bolton (1886-1968), an American industrial chemist
Pure Research
In the twentieth century, American corporations created industrial research laboratories.
Their directors became the organizers of inventions,
and their scientists served as the sources of creativity.
The research program of
E. I. Du Pont de Nemours and Company
(Du Pont), through its most famous invention—nylon—became the
model for scientifically based industrial research in the chemical
industry.
During World War I (1914-1918), Du Pont tried to diversify,
concerned that after the war it would not be able to expand with
only explosives as a product. Charles M. A. Stine, Du Pont’s director
of chemical research, proposed that Du Pont should move
into fundamental research by hiring first-rate academic scientists
and giving them freedom to work on important problems in
organic chemistry. He convinced company executives that a program
to explore the fundamental science underlying Du Pont’s
technology would ultimately result in discoveries of value to the
company. In 1927, Du Pont gave him a new laboratory for research.
Stine visited universities in search of brilliant, but not-yet-established,
young scientists. He hired Wallace Hume Carothers.
Stine suggested that Carothers do fundamental research in polymer
chemistry. Before the 1920’s, polymers were a mystery to chemists. Polymeric
materials were the result of ingenious laboratory practice,
and this practice ran far ahead of theory and understanding. German
chemists debated whether polymers were aggregates of smaller
units held together by some unknown special force or genuine molecules
held together by ordinary chemical bonds.
German chemist Hermann Staudinger asserted that they were
large molecules with endlessly repeating units. Carothers shared
this view, and he devised a scheme to prove it by synthesizing very
large molecules by simple reactions in such a way as to leave no
doubt about their structure. Carothers’s synthesis of polymers revealed
that they were ordinary molecules but giant in size.
The Longest Molecule
In April, 1930, Carothers’s research group produced two major
innovations: neoprene synthetic rubber and the first laboratory-synthesized
fiber. Neither result was the goal of their research. Neoprene
was an incidental discovery during a project to study short
polymers of acetylene. During experimentation, an unexpected substance
appeared that polymerized spontaneously. Carothers studied
its chemistry and developed the process into the first successful synthetic
rubber made in the United States.
The other discovery was an unexpected outcome of the group’s
project to synthesize polyesters by the reaction of acids and alcohols.
Their goal was to create a polyester that could react indefinitely
to form a substance with high molecular weight. The scientists
encountered a molecular weight limit of about 5,000 units to the
size of the polyesters, until Carothers realized that the reaction also
produced water, which was decomposing polyesters back into acid
and alcohol. Carothers and his associate Julian Hill devised an apparatus
to remove the water as it formed. The result was a polyester
with a molecular weight of more than 12,000, far higher than any
previous polymer.
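The molecular-weight ceiling, and the effect of removing the water, can be made quantitative with what is now called the Carothers equation, a standard result of polymer chemistry that this account does not quote. If a fraction p of the acid and alcohol end groups has reacted, the average number of monomer units per chain is

\[
\bar{X}_n \;=\; \frac{1}{1 - p}.
\]

An average chain of one hundred units therefore requires p = 0.99, that is, virtually complete reaction of the end groups. As long as the water produced by the reaction remains in the mixture and reverses the condensation, p stays lower and the chains stay short, which is why removing the water as it formed pushed the molecular weight so much higher.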
Hill, while removing a sample from the apparatus, found that he
could draw it out into filaments that on cooling could be stretched to
form very strong fibers. This procedure, called “cold-drawing,” oriented
the molecules from a random arrangement into a long, linear one of great strength. The polyester fiber, however, was unsuitable
for textiles because of its low melting point.
In June, 1930, Du Pont promoted Stine; his replacement as research
director was Elmer Keiser Bolton. Bolton wanted to control
fundamental research more closely, relating it to projects that would
pay off and not allowing the research group freedom to pursue
purely theoretical questions.
Despite their differences, Carothers and Bolton shared an interest
in fiber research. On May 24, 1934, Bolton’s assistant Donald
Coffman “drew” a strong fiber from a new polyamide. This was the
first nylon fiber, although not the one commercialized by Du Pont.
The nylon fiber was high-melting and tough, and it seemed that a
practical synthetic fiber might be feasible.
By summer of 1934, the fiber project was the heart of the research
group’s activity. The polyamide with the best fiber properties was nylon
5-10, the numbers referring to the numbers of carbon atoms in the
amine and acid components. Yet the nylon 6-6 prepared on February 28,
1935, became Du Pont’s nylon. Nylon 5-10 had some advantages,
but Bolton realized that its components would be unsuitable for
commercial production, whereas those of nylon 6-6 could be obtained
from chemicals in coal.
A determined Bolton pursued nylon’s practical development,
a process that required nearly four years. Finally, in April, 1937,
Du Pont filed a patent for synthetic fibers, which included a statement
by Carothers that there was no previous work on polyamides;
this was a major breakthrough. After Carothers’s death
on April 29, 1937, the patent was issued posthumously and assigned
to Du Pont. Du Pont made the first public announcement
of nylon on October 27, 1938.
Impact
Nylon was a generic term for polyamides, and several types of
nylon became commercially important in addition to nylon 6-6.
These nylons found widespread use as both a fiber and a moldable
plastic. Since it resisted abrasion and crushing, was nonabsorbent,
was stronger than steel on a weight-for-weight basis, and was almost
nonflammable, it found an astonishing range of uses: laces, screens, surgical sutures, paint, toothbrushes, violin strings,
coatings for electrical wires, lingerie, evening gowns, leotards, athletic
equipment, outdoor furniture, shower curtains, handbags, sails,
luggage, fish nets, carpets, slip covers, bus seats, and even safety
nets on the space shuttle.
The invention of nylon stimulated notable advances in the chemistry
and technology of polymers. Some historians of technology
have even dubbed the postwar period as the “age of plastics,” the
age of synthetic products based on the chemistry of giant molecules
made by ingenious chemists and engineers.
The success of nylon and other synthetics, however, has come at
a cost. Several environmental problems have surfaced, such as those
created by the nondegradable feature of some plastics, and there is
the problem of the increasing utilization of valuable, vanishing resources,
such as petroleum, which contains the essential chemicals
needed to make polymers. The challenge to reuse and recycle these
polymers is being addressed by both scientists and policymakers.
Tuesday, September 8, 2009
Nuclear reactor
The invention:
The first nuclear reactor to produce substantial
quantities of plutonium, making it practical to produce usable
amounts of energy from a chain reaction.
The people behind the invention:
Enrico Fermi (1901-1954), an American physicist
Martin D. Whitaker (1902-1960), the first director of Oak Ridge
National Laboratory
Eugene Paul Wigner (1902-1995), the director of research and
development at Oak Ridge
The Technology to End a War
The construction of the nuclear reactor at Oak Ridge National
Laboratory in 1943 was a vital part of the Manhattan Project, the effort
by the United States during World War II (1939-1945) to develop
an atomic bomb. The successful operation of that reactor
was a major achievement not only for the project itself but also for
the general development and application of nuclear technology.
The first director of the Oak Ridge National Laboratory was Martin
D. Whitaker; the director of research and development was Eugene
Paul Wigner.
The nucleus of an atom is made up of protons and neutrons. “Fission”
is the process by which the nucleus of certain elements is split
in two by a neutron from some material that emits an occasional
neutron naturally. When an atom splits, two things happen: A tremendous
amount of thermal energy is released, and two or three
neutrons, on the average, escape from the nucleus. If all the atoms in
a kilogram of “uranium 235” were to fission, they would produce as
much heat energy as the burning of 3 million kilograms of coal. The
neutrons that are released are important, because if at least one of
them hits another atom and causes it to fission (and thus to release
more energy and more neutrons), the process will continue. It will
become a self-sustaining chain reaction that will produce a continuing
supply of heat.
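The comparison with coal can be checked by a rough calculation; the numbers below are standard physical constants and typical fuel values, not figures taken from this article. Each fission releases about 200 million electronvolts, or roughly 3.2 × 10^-11 joule, and a kilogram of uranium 235 contains about 2.6 × 10^24 atoms:

\[
E \;\approx\; \frac{1000\ \text{g}}{235\ \text{g/mol}} \times 6.0\times10^{23}\ \text{mol}^{-1} \times 3.2\times10^{-11}\ \text{J} \;\approx\; 8\times10^{13}\ \text{J}.
\]

Since burning coal yields roughly 3 × 10^7 joules per kilogram, this is the energy of about (8 × 10^13) / (3 × 10^7) ≈ 3 × 10^6 kilograms of coal, in agreement with the 3 million kilograms cited above.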
Inside a reactor, a nuclear chain reaction is controlled so that it
proceeds relatively slowly. The most familiar use for the heat thus
released is to boil water and make steam to turn the turbine generators
that produce electricity to serve industrial, commercial, and
residential needs. The fissioning process in a weapon, however, proceeds
very rapidly, so that all the energy in the atoms is produced
and released virtually at once. The first application of nuclear technology,
which used a rapid chain reaction, was to produce the two
atomic bombs that ended World War II.
Breeding Bomb Fuel
The work that began at Oak Ridge in 1943 was made possible by a
major event that took place in 1942. At the University of Chicago,
Enrico Fermi had demonstrated for the first time that it was possible to
achieve a self-sustaining atomic chain reaction. More important, the reaction
could be controlled: It could be started up, it could generate heat
and sufficient neutrons to keep itself going, and it could be turned off.
That first chain reaction was very slow, and it generated very little heat;
but it demonstrated that controlled fission was possible.
Any heat-producing nuclear reaction is an energy conversion
process that requires fuel. There is only one readily fissionable element
that occurs naturally and can be used as fuel. It is a form of
uranium called uranium 235. It makes up less than 1 percent of all
naturally occurring uranium. The remainder is uranium 238, which
does not fission readily. Even uranium 235, however, must be enriched
before it can be used as fuel.
The process of enrichment increases the concentration of uranium
235 sufficiently for a chain reaction to occur. Enriched uranium is used
to fuel the reactors used by electric utilities. Also, the much more plentiful
uranium 238 can be converted into plutonium 239, a form of the
human-made element plutonium, which does fission readily. That
conversion process is the way fuel is produced for a nuclear weapon.
Therefore, the major objective of the Oak Ridge effort was to develop a
pilot operation for separating plutonium from the uranium in which it
was produced. Large-scale plutonium production, which had never
been attempted before, eventually would be done at the Hanford Engineer
Works in Washington. First, however, plutonium had to be produced successfully on a small scale at Oak Ridge.
The reactor was started up on November 4, 1943. By March 1,
1944, the Oak Ridge laboratory had produced several grams of plutonium.
The material was sent to the Los Alamos laboratory in New
Mexico for testing. By July, 1944, the reactor operated at four times
its original power level. By the end of that year, however, plutonium
production at Oak Ridge had ceased, and the reactor thereafter was
used principally to produce radioisotopes for physical and biological
research and for medical treatment. Ultimately, the Hanford Engineer
Works’ reactors produced the plutonium for the bomb that
was dropped on Nagasaki, Japan, on August 9, 1945.
The original objectives for which Oak Ridge had been built had
been achieved, and subsequent activity at the facility was directed
toward peacetime missions that included basic studies of the structure
of matter.
Impact
The most immediate impact of the work done at Oak Ridge was
its contribution to ending World War II. When the atomic bombs
were dropped, the war ended, and the United States emerged intact.
The immediate and long-range devastation to the people of Japan,
however, opened the public’s eyes to the almost unimaginable
death and destruction that could be caused by a nuclear war. Fears
of such a war remain to this day, especially as more and more nations
develop the technology to build nuclear weapons.
On the other hand, great contributions to human civilization
have resulted from the development of nuclear energy. Electric
power generation, nuclear medicine, spacecraft power, and ship
propulsion have all profited from the pioneering efforts at the Oak
Ridge National Laboratory. Currently, the primary use of nuclear
energy is to produce electric power. Handled properly, nuclear energy
may help to solve the pollution problems caused by the burning
of fossil fuels.
See also: Breeder reactor; Compressed-air-accumulating power plant; Fuel cell;
Geothermal power; Heat pump; Nuclear power plant; Solar thermal engine.
Nuclear power plant
The invention:
The first full-scale commercial nuclear power plant,
which gave birth to the nuclear power industry.
The people behind the invention:
Enrico Fermi (1901-1954), an Italian American physicist who
won the 1938 Nobel Prize in Physics
Otto Hahn (1879-1968), a German physical chemist who won the
1944 Nobel Prize in Chemistry
Lise Meitner (1878-1968), an Austrian-Swedish physicist
Hyman G. Rickover (1898-1986), a Polish American naval officer
Discovering Fission
Nuclear fission involves the splitting of an atomic nucleus, leading
to the release of large amounts of energy. Nuclear fission was
discovered in Germany in 1938 by Otto Hahn after he had bombarded
uranium with neutrons and observed traces of radioactive
barium. When Hahn’s former associate, Lise Meitner, heard of this,
she realized that the neutrons may have split the uranium nuclei
(each of which holds 92 protons) into two smaller nuclei to produce
barium (56 protons) and krypton (36 protons). Meitner and her
nephew, Otto Robert Frisch, were able to calculate the enormous energy
that would be released in this type of reaction. They published
their results early in 1939.
Nuclear fission was quickly verified in several laboratories, and
the Danish physicist Niels Bohr soon demonstrated that the rare uranium
235 (U-235) isotope is much more likely to fission than the common
uranium 238 (U-238) isotope, which makes up 99.3 percent of
natural uranium. It was also recognized that fission would produce
additional neutrons that could cause new fissions, producing even
more neutrons and thus creating a self-sustaining chain reaction. In
this process, the fissioning of one gram of U-235 would release about
as much energy as the burning of three tons of coal.
The first controlled chain reaction was demonstrated on December
2, 1942, in a nuclear reactor at the University of Chicago, under
the leadership of Enrico Fermi. He used a graphite moderator to
slow the neutrons by collisions with carbon atoms. “Critical mass”
was achieved when the mass of graphite and uranium assembled
was large enough that the number of neutrons not escaping from
the pile would be sufficient to sustain a U-235 chain reaction. Cadmium
control rods could be inserted to absorb neutrons and slow
the reaction.
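The job of the control rods can be illustrated with a toy model of the neutron population. In the Python sketch below, k is the effective multiplication factor, the average number of neutrons from one fission that go on to cause another fission: the reaction is self-sustaining at k = 1, grows for k > 1, and dies out for k < 1. The specific numbers and names are illustrative assumptions, not data from this article.

def neutron_population(k, generations, start=1000):
    """Toy generation-by-generation model of a chain reaction.
    k is set by the fuel, the geometry of the pile, and how far the
    neutron-absorbing control rods are inserted."""
    counts = [start]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

# Rods withdrawn slightly (k just above 1): the reaction grows slowly.
print(neutron_population(1.005, 5))
# Rods pushed in (k below 1): the reaction dies away.
print(neutron_population(0.95, 5))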
It was also recognized that the U-238 in the reactor would absorb
accelerated neutrons to produce the new element plutonium, which
is also fissionable. During World War II (1939-1945), large reactors
were built to “breed” plutonium, which was easier to separate than
U-235. An experimental breeder reactor at Arco, Idaho, was the first
to use the energy of nuclear fission to produce a small amount of
electricity (about 100 watts) on December 20, 1951.
Nuclear Electricity
Power reactors designed to produce substantial amounts of
electricity use the heat generated by fission to produce steam or
hot gas to drive a turbine connected to an ordinary electric generator.
The first power reactor design to be developed in the United
States was the pressurized water reactor (PWR). In the PWR, water
under high pressure is used both as the moderator and as the coolant.
After circulating through the reactor core, the hot pressurized
water flows through a heat exchanger to produce steam. Reactors
moderated by “heavy water” (in which the hydrogen in the water
is replaced with deuterium, which contains an extra neutron) can
operate with natural uranium.
The pressurized water system was used in the first reactor to
produce substantial amounts of power, the experimental Mark I
reactor. It was started up on May 31, 1953, at the Idaho National
Engineering Laboratory. The Mark I became the prototype for the
reactor used in the first nuclear-powered submarine. Under the
leadership of Hyman G. Rickover, who was head of the Division of
Naval Reactors of the Atomic Energy Commission (AEC), Westinghouse
Electric Corporation was engaged to build a PWR system
to power the submarine USS Nautilus. It began sea trials in January
of 1955 and ran for two years before refueling.
In the meantime, the first experimental nuclear power plant for
generating electricity was completed in the Soviet Union in June of
1954, under the direction of the Soviet physicist Igor Kurchatov. It
produced 5 megawatts of electric power. The first full-scale nuclear
power plant was built in England under the direction of the British
nuclear engineer Sir Christopher Hinton. It began producing about
90 megawatts of electric power in October, 1956.
On December 2, 1957, on the fifteenth anniversary of the first controlled
nuclear chain reaction, the Shippingport Atomic Power Station
in Shippingport, Pennsylvania, became the first full-scale commercial
nuclear power plant in the United States. It produced about
60 megawatts of electric power for the Duquesne Light Company until
1964, when its reactor core was replaced, increasing its power to
100 megawatts with a maximum capacity of 150 megawatts.
Consequences
The opening of the Shippingport Atomic Power Station marked
the beginning of the nuclear power industry in the United States,
with all of its glowing promise and eventual problems. It was predicted
that electrical energy would become too cheap to meter. The
AEC hoped to encourage the participation of industry, with government
support limited to research and development. They encouraged
a variety of reactor types in the hope of extending technical
knowledge.
The Dresden Nuclear Power Station, completed by Commonwealth
Edison in September, 1959, at Morris, Illinois, near Chicago,
was the first full-scale privately financed nuclear power station in
the United States. By 1973, forty-two plants were in operation producing
26,000 megawatts, fifty more were under construction, and
about one hundred were on order. Industry officials predicted that
50 percent of the nation’s electric power would be nuclear by the
end of the twentieth century.
The promise of nuclear energy has not been completely fulfilled.
Growing concerns about safety and waste disposal have led to increased
efforts to delay or block the construction of new plants. The
cost of nuclear plants rose as legal delays and inflation pushed costs
higher, so that many in the planning stages could no longer be competitive.
The 1979 Three Mile Island accident in Pennsylvania and
the much more serious 1986 Chernobyl accident in the Soviet Union
increased concerns about the safety of nuclear power. Nevertheless,
by 1986, more than one hundred nuclear power plants were operating
in the United States, producing about 60,000 megawatts of
power. More than three hundred reactors in twenty-five countries
provide about 200,000 megawatts of electric power worldwide.
Many believe that, properly controlled, nuclear energy offers a
clean-energy solution to the problem of environmental pollution.
See also: Breeder reactor; Compressed-air-accumulating power
plant; Fuel cell; Geothermal power; Nuclear reactor; Solar thermal
engine.
Friday, September 4, 2009
Nuclear magnetic resonance
The invention: A procedure that uses hydrogen atoms in the human body, strong electromagnets, radio waves, and detection equipment to produce images of sections of the brain.
The people behind the invention:
Raymond Damadian (1936- ), an American physicist and inventor
Paul C. Lauterbur (1929- ), an American chemist
Peter Mansfield (1933- ), a scientist at the University of Nottingham, England
Peering into the Brain
Doctors have always wanted the ability to look into the skull and see the human brain without harming the patient who is being examined. Over the years, various attempts were made to achieve this ability. At one time, the use of X rays, which were first used by Wilhelm Conrad Röntgen in 1895, seemed to be an option, but it was found that X rays are absorbed by bone, so the skull made it impossible to use X-ray technology to view the brain. The relatively recent use of computed tomography (CT) scanning, a computer-assisted imaging technology, made it possible to view sections of the head and other areas of the body, but the technique requires that the part of the body being “imaged,” or viewed, be subjected to a small amount of radiation, thereby putting the patient at risk. Positron emission tomography (PET) could also be used, but it requires that small amounts of radiation be injected into the patient, which also puts the patient at risk.
Since the early 1940’s, however, a new technology had been developing. This technology, which appears to pose no risk to patients, is called “nuclear magnetic resonance spectroscopy.” It was first used to study the molecular structures of pure samples of chemicals. This method developed until it could be used to follow one chemical as it changed into another, and then another, in a living cell. By 1971, Raymond Damadian had proposed that body images that were more vivid and more useful than X rays could be produced by means of nuclear magnetic resonance spectroscopy. In 1978, he founded his own company, FONAR, which manufactured the scanners that are necessary for the technique.
Magnetic Resonance Images
The first nuclear magnetic resonance images (MRIs) were published by Paul Lauterbur in 1973. Although there seemed to be no possibility that MRI could be harmful to patients, everyone involved in MRI research was very cautious. In 1976, Peter Mansfield, at the University of Nottingham, England, obtained an MRI of his partner’s finger. The next year, Paul Bottomley, a member of Waldo Hinshaw’s research group at the same university, put his left wrist into an experimental machine that the group had developed. A vivid cross section that showed layers of skin, muscle, bone, muscle, and skin, in that order, appeared on the machine’s monitor. Studies with animals showed no apparent memory or other brain problems. In 1978, Electrical and Musical Industries (EMI), a British corporate pioneer in electronics that merged with Thorn in 1980, obtained the first MRI of the human head. It took six minutes.
An MRI of the brain, or any other part of the body, is made possible by the water content of the body. The gray matter of the brain contains more water than the white matter does. The blood vessels and the blood itself also have water contents that are different from those of other parts of the brain. Therefore, the different structures and areas of the brain can be seen clearly in an MRI. Bone contains very little water, so it does not appear on the monitor. This is why the skull and the backbone cause no interference when the brain or the spinal cord is viewed.
Every water molecule contains two hydrogen atoms and one oxygen atom. A strong electromagnetic field causes the hydrogen molecules to line up like marchers in a parade. Radio waves can be used to change the position of these parallel hydrogen molecules. When the radio waves are discontinued, a small radio signal is produced as the molecules return to their marching position. This distinct radio signal is the basis for the production of the image on a computer screen.
Hydrogen was selected for use in MRI work because it is very abundant in the human body, it is part of the water molecule, and it has the proper magnetic qualities. The nucleus of the hydrogen atom consists of a single proton, a particle with a positive charge. The signal from the hydrogen’s proton is comparatively strong. There are several methods by which the radio signal from the hydrogen atom can be converted into an image. Each method uses a computer to create first a two-dimensional, then a three-dimensional, image.
Peter Mansfield’s team at the University of Nottingham holds the patent for the slice-selection technique that makes it possible to excite and image selectively a specific cross section of the brain or any other part of the body. This is the key patent in MRI technology. Damadian was granted a patent that described the use of two coils, one to drive and one to pick up signals across selected portions of the human body. EMI, the company that introduced the X-ray scanner for CT images, developed a commercial prototype for the MRI. The British Technology Group, a state-owned company that helps to bring innovations to the marketplace, has sixteen separate MRI-related patents. Ten years after EMI produced the first image of the human brain, patents and royalties were still being sorted out.
Consequences
MRI technology has revolutionized medical diagnosis, especially in regard to the brain and the spinal cord. For example, in multiple sclerosis, the loss of the covering on nerve cells can be detected. Tumors can be identified accurately. The painless and noninvasive use of MRI has almost completely replaced the myelogram, which involves using a needle to inject dye into the spine. Although there is every indication that the use of MRI is very safe, there are some people who cannot benefit from this valuable tool. Those whose bodies contain metal cannot be placed into the MRI machine. No one instrument can meet everyone’s needs.
The development of MRI stands as an example of the interaction of achievements in various fields of science. Fundamental physics, biochemistry, physiology, electronic image reconstruction, advances in superconducting wires, the development of computers, and advancements in anatomy all contributed to the development of MRI. Its development is also the result of international efforts. Scientists and laboratories in England and the United States pioneered the technology, but contributions were also made by scientists in France, Switzerland, and Scotland. This kind of interaction and cooperation can only lead to greater understanding of the human brain.
Labels: magnetic, Nuclear, Nuclear magnetic resonance, resonance
Neutrino detector
The invention: A device that provided the first direct evidence that the Sun runs on thermonuclear power and challenged existing models of the Sun.
The people behind the invention:
Raymond Davis, Jr. (1914- ), an American chemist
John Norris Bahcall (1934- ), an American astrophysicist
Missing Energy
In 1871, Hermann von Helmholtz, the German physicist, anatomist, and physiologist, suggested that no ordinary chemical reaction could be responsible for the enormous energy output of the Sun. By the 1920's, astrophysicists had realized that the energy radiated by the Sun must come from nuclear fusion, in which protons or nuclei combine to form larger nuclei and release energy.
These reactions were assumed to be taking place deep in the interior of the Sun, in an immense thermonuclear furnace where the pressures and temperatures were high enough to allow fusion to proceed. Conventional astronomical observations could record only the light emitted by the much cooler outer layers of the Sun and could not provide evidence for the existence of a thermonuclear furnace in the interior. Then scientists realized that the neutrino might be used to prove that this huge furnace existed. Of all the particles released in the fusion process, only one type, the neutrino, interacts so infrequently with matter that it can pass through the Sun and reach the earth. These neutrinos provide a way to verify directly the hypothesis that thermonuclear energy is generated in stars.
The neutrino was "invented" in 1930 by the Austrian-born physicist Wolfgang Pauli to account for the energy apparently missing in beta decay, the emission of an electron from a radioactive nucleus. He proposed that an unseen nuclear particle, which he called a neutrino, was also emitted in beta decay and carried off the "missing" energy. To balance the energy but not be observed in the decay process, Pauli's hypothetical particle had to have no electrical charge, have little or no mass, and interact only very weakly with ordinary matter. Typical neutrinos would have to be able to pass through millions of miles of ordinary matter in order to reach the earth. Scientists' detectors, and even the whole earth or Sun, were essentially transparent as far as Pauli's neutrinos were concerned.
Because the neutrino is so difficult to detect, it took more than twenty-five years to confirm its existence. In 1956, Clyde Cowan and Frederick Reines, both physicists at the Los Alamos National Laboratory, built the world's largest scintillation counter, a device that detects the small flash of light given off when a neutrino strikes ("interacts" with) a certain substance in the apparatus. They placed this scintillation counter near the Savannah River Nuclear Reactor, which was producing about 1 trillion neutrinos every second. Although only one neutrino interaction was observed in their detector every twenty minutes, Cowan and Reines were able to confirm the existence of Pauli's elusive particle. The task of detecting solar neutrinos was even more formidable: if an apparatus similar to the Cowan and Reines detector were used to search for neutrinos from the Sun, only one interaction could be expected every few thousand years.
Missing Neutrinos
At about the same time that Cowan and Reines performed their experiment, another type of neutrino detector was under development by Raymond Davis, Jr., a chemist at the Brookhaven National Laboratory. Davis employed an idea, originally suggested in 1948 by the nuclear physicist Bruno Pontecorvo, that when a neutrino interacts with a chlorine-37 nucleus, it produces a nucleus of argon 37. Any argon so produced could then be extracted from large volumes of chlorine-rich liquid by passing helium gas through the liquid. Since argon 37 is radioactive, it is relatively easy to detect. Davis tested a version of this neutrino detector, containing about 3,785 liters of carbon tetrachloride, near a nuclear reactor at the Brookhaven National Laboratory from 1954 to 1956. In the scientific paper describing his results, Davis suggested that this type of neutrino detector could be made large enough to permit detection of solar neutrinos.
Although Davis's first attempt to detect solar neutrinos, from a limestone mine at Barberton, Ohio, failed, he continued his search with a much larger detector 1,478 meters underground in the Homestake Gold Mine in Lead, South Dakota. The cylindrical tank, 6.1 meters in diameter and 16 meters long and containing 378,540 liters of perchloroethylene, was surrounded by water to shield the detector from neutrons emitted by trace quantities of uranium and thorium in the walls of the mine. The experiment was conducted underground to shield it from cosmic radiation.
To describe his results, Davis coined a new unit, the "solar neutrino unit" (SNU), with 1 SNU indicating the production of one atom of argon 37 every six days. Astrophysicist John Norris Bahcall, using the best available models of the nuclear reactions going on in the Sun's interior, as well as the physical properties of the neutrinos, had predicted a capture rate of 50 SNUs in 1963. The 1967 results from Davis's detector, however, set an upper limit of only 3 SNUs.
The main significance of Davis's detection of solar neutrinos was the direct confirmation that thermonuclear fusion must be occurring at the center of the Sun. The low number of solar neutrinos Davis detected, however, has called into question some of the fundamental beliefs of astrophysics. As Bahcall explained: "We know more about the Sun than about any other star. . . . The Sun is also in what is believed to be the best-understood stage of stellar evolution. . . . If we are to have confidence in the many astronomical and cosmological applications of the theory of stellar evolution, it ought at least to give the right answers about the Sun."
Many solutions to the problem of the "missing" solar neutrinos have been proposed. Most of them fall into two broad classes: those that challenge the model of the Sun's interior and those that challenge the understanding of the behavior of the neutrino. Since the number of neutrinos produced is very sensitive to the temperature of the Sun's interior, some astrophysicists have suggested that the true central temperature may be lower than expected. Others suggest that the Sun's outer layers may absorb more neutrinos than expected. Some physicists believe that neutrinos occur in several different forms, only one of which can be detected by chlorine detectors.
Davis's discovery of the low number of neutrinos reaching Earth has focused years of attention on gaining a better understanding of how the Sun generates its energy and how the neutrino behaves. New and more elaborate solar neutrino detectors have been built with the aim of understanding stars, including the Sun, as well as the physics and behavior of the elusive neutrino.
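The unit defined above allows a rough, back-of-the-envelope look at what Davis was trying to measure. Taking 1 SNU as one argon-37 atom produced in the tank every six days, and assuming the roughly 35-day half-life of argon 37 (a standard value, not quoted in this entry), the short Python sketch below compares Bahcall's predicted 50 SNUs with the 3-SNU upper limit of 1967, expressed as atoms produced per day and as the equilibrium number of atoms that could ever accumulate in the tank.

import math

# 1 SNU, as defined in this entry: one argon-37 atom produced in the
# Homestake tank every six days.
ATOMS_PER_DAY_PER_SNU = 1.0 / 6.0

# Argon 37 decays back to chlorine 37 with a half-life of about 35 days
# (a standard value, assumed here rather than taken from the entry).
HALF_LIFE_DAYS = 35.0
DECAY_CONSTANT = math.log(2) / HALF_LIFE_DAYS   # fraction decaying per day

def equilibrium_atoms(snu):
    """Steady-state argon-37 atoms, where production balances decay:
    N = (production rate) / (decay constant)."""
    return (snu * ATOMS_PER_DAY_PER_SNU) / DECAY_CONSTANT

for label, snu in [("Bahcall's 1963 prediction", 50.0),
                   ("Davis's 1967 upper limit", 3.0)]:
    rate = snu * ATOMS_PER_DAY_PER_SNU
    print(f"{label}: {snu:.0f} SNU -> {rate:.2f} atoms/day, "
          f"about {equilibrium_atoms(snu):.0f} atoms at equilibrium")

Even at the predicted rate, only a few hundred argon-37 atoms would ever be present at one time in 378,540 liters of liquid, which is why the helium purge and the radioactivity of argon 37 were essential to the measurement.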
Neoprene
The invention: The first commercially practical synthetic rubber, Neoprene gave a boost to polymer chemistry and the search for new materials.
The people behind the invention:
Wallace Hume Carothers (1896-1937), an American chemist
Arnold Miller Collins (1899- ), an American chemist
Elmer Keiser Bolton (1886-1968), an American chemist
Julius Arthur Nieuwland (1879-1936), a Belgian American priest, botanist, and chemist
Synthetic Rubber: A Mirage?
The growing dependence of the industrialized nations upon elastomers (elastic substances) and the shortcomings of natural rubber motivated the twentieth century quest for rubber substitutes. By 1914, rubber had become nearly as indispensable as coal or iron. The rise of the automobile industry, in particular, had created a strong demand for rubber. Unfortunately, the availability of rubber was limited by periodic shortages and spiraling prices. Furthermore, certain properties of natural rubber, such as its lack of resistance to oxygen, oils, and extreme temperatures, restrict its usefulness in some applications. These limitations stimulated a search for special-purpose rubber substitutes.
Interest in synthetic rubber dates back to the 1860 discovery by the English chemist Greville Williams that the main constituent of rubber is isoprene, a liquid hydrocarbon. Nineteenth century chemists attempted unsuccessfully to transform isoprene into rubber. The first large-scale production of a rubber substitute occurred during World War I. A British blockade forced Germany to begin manufacturing methyl rubber in 1916, but methyl rubber turned out to be a poor substitute for natural rubber. When the war ended in 1918, a practical synthetic rubber was still only a mirage. Nevertheless, a breakthrough was on the horizon.
Mirage Becomes Reality
In 1930, chemists at E. I. Du Pont de Nemours discovered the elastomer known as neoprene. Of the more than twenty chemists who helped to make this discovery possible, four stand out: Elmer Bolton, Julius Nieuwland, Wallace Carothers, and Arnold Collins.
Bolton directed Du Pont's dyestuffs department in the mid-1920's. Largely because of the rapidly increasing price of rubber, he initiated a project to synthesize an elastomer from acetylene, a gaseous hydrocarbon. In December, 1925, Bolton attended the American Chemical Society's convention in Rochester, New York, and heard a presentation dealing with acetylene reactions. The presenter was Julius Nieuwland, the foremost authority on the chemistry of acetylene.
Nieuwland was a professor of organic chemistry at the University of Notre Dame. (One of his students was the legendary football coach Knute Rockne.) The priest-scientist had been investigating acetylene reactions for more than twenty years. Using a copper chloride catalyst he had discovered, he isolated a new compound, divinylacetylene (DVA). He later treated DVA with a vulcanizing (hardening) agent and succeeded in producing a rubberlike substance, but the substance proved to be too soft for practical use.
Bolton immediately recognized the importance of Nieuwland's discoveries and discussed with him the possibility of using DVA as a raw material for a synthetic rubber. Seven months later, an alliance was formed that permitted Du Pont researchers to use Nieuwland's copper catalyst. Bolton hoped that the catalyst would be the key to making an elastomer from acetylene. As it turned out, Nieuwland's catalyst proved indispensable for manufacturing neoprene.
Over the next several years, Du Pont scientists tried unsuccessfully to produce rubberlike materials. Using Nieuwland's catalyst, they managed to prepare DVA and also to isolate monovinylacetylene (MVA), a new compound that eventually proved to be the vital intermediate in the making of neoprene. Reactions of MVA and DVA, however, produced only hard, brittle materials.
In 1928, Du Pont hired a thirty-one-year-old Harvard instructor, Wallace Carothers, to direct the organic chemicals group. He began a systematic exploration of polymers (complex molecules). In early 1930, he accepted an assignment to investigate the chemistry of DVA. He appointed one of his assistants, Arnold Collins, to conduct the laboratory experiments. Carothers suggested that Collins explore the reaction between MVA and hydrogen chloride. His suggestion would lead to the discovery of neoprene.
One of Collins's experiments yielded a new liquid, and on April 17, 1930, he recorded in his laboratory notebook that the liquid had solidified into a rubbery substance. When he dropped it on a bench, it bounced. This was the first batch of neoprene. Carothers named Collins's liquid "chloroprene." Chloroprene is structurally analogous to isoprene, but it polymerizes much more rapidly. Carothers conducted extensive investigations of the chemistry of chloroprene and related compounds. His studies were the foundation for Du Pont's development of an elastomer superior to all previously known synthetic rubbers. Du Pont chemists, including Carothers and Collins, formally introduced neoprene (originally called "DuPrene") on November 3, 1931, at the meeting of the American Chemical Society in Akron, Ohio. Nine months later, the new elastomer went on sale.
Impact
The introduction of neoprene was a milestone in humankind's development of new materials. It was the first synthetic rubber worthy of the name. Neoprene possessed higher tensile strength than rubber and much better resistance to abrasion, oxygen, heat, oils, and chemicals. Its main applications included jacketing for electric wires and cables, work-shoe soles, gasoline hoses, and conveyor and power-transmission belting. By 1939, when Adolf Hitler's troops invaded Poland, nearly every major industry in America was using neoprene. After the Japanese bombing of Pearl Harbor, in 1941, the elastomer became even more valuable to the United States. It helped the United States and its allies survive the critical shortage of natural rubber that resulted when Japan seized the Malayan rubber plantations.
A scientifically and technologically significant side effect of the introduction of neoprene was the stimulus that the breakthrough gave to polymer research. Chemists had long debated whether polymers were mysterious aggregates of smaller units or genuine molecules. Carothers ended the debate by demonstrating in a series of now-classic papers that polymers were indeed ordinary, though very large, molecules. In the 1930's, he put polymer studies on a firm footing. The advance of polymer science led, in turn, to the development of additional elastomers and synthetic fibers, including nylon, which Carothers himself invented in 1935.