Wednesday, September 30, 2009

Pap test







The invention: A cytologic technique for diagnosing uterine cancer,

the second most common fatal cancer in American women.

The people behind the invention:

George N. Papanicolaou (1883-1962), a Greek-born American

physician and anatomist

Charles Stockard (1879-1939), an American anatomist

Herbert Traut (1894-1972), an American gynecologist

Cancer in History

Cancer, first named by the ancient Greek physician Hippocrates

of Cos, is one of the most painful and dreaded forms of human disease.

It occurs when body cells run wild and interfere with the normal

activities of the body. The early diagnosis of cancer is extremely

important because early detection often makes it possible to effect

successful cures. The modern detection of cancer is usually done by

the microscopic examination of the cancer cells, using the techniques

of the area of biology called “cytology,” or cell biology.

Development of cancer cytology began in 1867, after L. S. Beale

reported tumor cells in the saliva from

a patient who was afflicted

with cancer of the pharynx. Beale recommended microscopic examination
of cells shed or removed (exfoliated) from organs, including the digestive,
urinary, and reproductive tracts, as a means of cancer detection. Soon,
other scientists identified numerous striking differences between cancerous
and normal cells, including cell size and shape and the size and complexity
of cell nuclei.

Modern cytologic detection of cancer evolved from the work of

George N. Papanicolaou, a Greek physician who trained at the University

of Athens Medical School. In 1913, he emigrated to the

United States.

In 1917, he began studying sex determination of guinea pigs with

Charles Stockard at New York’s Cornell Medical College. Papanicolaou’s

efforts required him to obtain ova (egg cells) at a precise

period in their maturation cycle, a process that required an indicator

of the time at which the animals ovulated. In search of this indicator,

Papanicolaou designed a method that involved microscopic examination

of the vaginal discharges from female guinea pigs.

Initially, Papanicolaou sought traces of blood, such as those

seen in the menstrual discharges from both primates and humans.

Papanicolaou found no blood in the guinea pig vaginal discharges.

Instead, he noticed changes in the size and the shape of the uterine

cells shed in these discharges. These changes recurred in a fifteen-to-

sixteen-day cycle that correlated well with the guinea pig menstrual

cycle.

“New Cancer Detection Method”

Papanicolaou next extended his efforts to the study of humans.

This endeavor was designed originally to identify whether comparable

changes in the exfoliated cells of the human vagina occurred

in women. Its goal was to gain an understanding of the human menstrual

cycle. In the course of this work, Papanicolaou observed distinctive

abnormal cells in the vaginal fluid from a woman afflicted

with cancer of the cervix. This led him to begin to attempt to develop

a cytologic method for the detection of uterine cancer, the second

most common type of fatal cancer in American women of the

time.

In 1928, Papanicolaou published his cytologic method of cancer

detection in the Proceedings of the Third Race Betterment Conference,

held in Battle Creek, Michigan. The work was received well by the

news media (for example, the January 5, 1928, New York World credited

him with a “new cancer detection method”). Nevertheless, the

publication—and others he produced over the next ten years—attracted
little interest from gynecologists of the time, who preferred
the standard methodology of uterine cancer diagnosis

(cervical biopsy and curettage).

Consequently, in 1932, Papanicolaou turned his energy toward

studying human reproductive endocrinology problems related to

the effects of hormones on cells of the reproductive system. One example

of this work was published in a 1933 issue of The American

Journal of Anatomy, where he described “the sexual cycle in the human

female.” Other such efforts resulted in a better understanding of reproductive problems, including amenorrhea and menopause.

It was not until Papanicolaou’s collaboration with gynecologist

Herbert Traut (beginning in 1939), which led to the publication of

Diagnosis of Uterine Cancer by the Vaginal Smear (1943), that clinical

acceptance of the method began to develop. Their monograph documented

an impressive, irrefutable group of studies of both normal

and disease states that included nearly two hundred cases of cancer

of the uterus.

Soon, many other researchers began to confirm these findings;

by 1948, the newly named American Cancer Society noted that the

“Pap” smear seemed to be a very valuable tool for detecting vaginal

cancer. Wide acceptance of the Pap test followed, and, beginning

in 1947, hundreds of physicians from all over the world

flocked to Papanicolaou’s course on the subject. They learned his

smear/diagnosis techniques and disseminated them around the

world.

Impact

The Pap test has been cited by many physicians as being the most

significant and useful modern discovery in the field of cancer research.

One way of measuring its impact is the realization that the

test allows the identification of uterine cancer in the earliest stages,

long before other detection methods can be used. Moreover, because

of resultant early diagnosis, the disease can be cured in more

than 80 percent of all cases identified by the test. In addition, Pap

testing allows the identification of cancer of the uterine cervix so

early that its cure rate can be nearly 100 percent.

Papanicolaou extended the use of the smear technique from

examination of vaginal discharges to diagnosis of cancer in many

other organs from which scrapings, washings, and discharges

can be obtained. These tissues include the colon, the kidney, the

bladder, the prostate, the lung, the breast, and the sinuses. In

most cases, such examination of these tissues has made it possible

to diagnose cancer much sooner than is possible by using

other existing methods. As a result, the smear method has become

a basis of cancer control in national health programs throughout the

world.

Tuesday, September 29, 2009

Pacemaker











The invention: A small device using transistor circuitry that regulates

the heartbeat of the patient in whom it is surgically emplaced.

The people behind the invention:

Ake Senning (1915- ), a Swedish physician

Rune Elmquist, co-inventor of the first pacemaker

Paul Maurice Zoll (1911- ), an American cardiologist

Cardiac Pacing

The fundamentals of cardiac electrophysiology (the electrical activity

of the heart) were determined during the eighteenth century;

the first successful cardiac resuscitation by electrical stimulation occurred

in 1774. The use of artificial pacemakers for resuscitation was

demonstrated in 1929 by Mark Lidwell. Lidwell and his

coworkers

developed a portable apparatus that could be connected to a power

source. The pacemaker was used successfully on several stillborn

infants after other methods of resuscitation failed. Nevertheless,

these early machines were unreliable.

Ake Senning’s first experience with the effect of electrical stimulation

on cardiac physiology was memorable; grasping a radio

ground wire, Senning felt a brief episode of ventricular arrhythmia

(irregular heartbeat). Later, he was able to apply a similar electrical

stimulation to control a heartbeat during surgery.

The principle of electrical regulation of the heart was valid. It was

shown that pacemakers introduced intravenously into the sinus

node area of a dog’s heart could be used to control the heartbeat

rate. Although Paul Maurice Zoll utilized a similar apparatus in

several patients with cardiac arrhythmia, it was not appropriate for

extensive clinical use; it was large and often caused unpleasant sensations

or burns. In 1957, however, Ake Senning observed that attaching

stainless steel electrodes to a child’s heart made it possible

to regulate the heart’s rate of contraction. Senning considered this to

represent the beginning of the era of clinical pacing.

Development of Cardiac Pacemakers

Senning’s observations of the successful use of the cardiac pacemaker

had allowed him to identify the problems inherent in the device.

He realized that the attachment of the device to the lower, ventricular

region of the heart made possible more reliable control, but

other problems remained unsolved. It was inconvenient, for example,

to carry the machine externally; a cord was wrapped around the

patient that allowed the pacemaker to be recharged, which had to be

done frequently. Also, for unknown reasons, heart resistance would

increase with use of the pacemaker, which meant that increasingly

large voltages had to be used to stimulate the heart. Levels as high

as 20 volts could cause quite a “start” in the patient. Furthermore,

there was a continuous threat of infection.

In 1957, Senning and his colleague Rune Elmquist developed a

pacemaker that was powered by rechargeable nickel-cadmium batteries,

which had to be recharged once a month. Although Senning

and Elmquist did not yet consider the pacemaker ready for human

testing, fate intervened. A forty-three-year-old man was admitted to

the hospital suffering from an atrioventricular block, an inability of

the electrical stimulus to travel along the conductive fibers of the

“bundle of His” (a band of cardiac muscle fibers). As a result of this

condition, the patient required repeated cardiac resuscitation. Similar

types of heart block were associated with a mortality rate higher

than 50 percent per year and nearly 95 percent over five years.

Senning implanted two pacemakers (one failed) into the myocardium

of the patient’s heart, one of which provided a regulatory

rate of 64 beats per minute. Although the pacemakers required periodic

replacement, the patient remained alive and active for twenty

years. (He later became president of the Swedish Association for

Heart and Lung Disease.)

During the next five years, the development of more reliable and

more complex pacemakers continued, and implanting the pacemaker

through the vein rather than through the thorax simplified the
procedure. The first pacemakers were of the “asynchronous”

type, which generated a regular charge that overrode the

natural pacemaker in the heart. The rate could be set by the physician

but could not be altered if the need arose. In 1963, an atrial-triggered synchronous pacemaker was installed by a Swedish team.

The advantage of this apparatus lay in its ability to trigger a heart

contraction only when the normal heart rhythm was interrupted.

Most of these pacemakers contained a sensing device that detected

the atrial impulse and generated an electrical discharge only when

the heart rate fell below 68 to 72 beats per minute.
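
The demand logic just described can be sketched in a few lines of code. This is only an illustration of the decision rule; the early devices implemented it in analog circuitry, and the 70-beats-per-minute limit, the sensing interface, and the function names below are hypothetical, chosen to match the 68-to-72 range cited above.

# Illustrative sketch of the "demand" pacing rule described above. Early
# pacemakers implemented this in analog hardware; the sensing interface,
# names, and the 70 bpm limit here are hypothetical illustration only.

LOWER_RATE_LIMIT_BPM = 70                        # within the cited 68-72 range
ESCAPE_INTERVAL_S = 60.0 / LOWER_RATE_LIMIT_BPM  # max wait between beats (~0.86 s)

def pace_decisions(sensed_beat_times, total_time, step=0.01):
    """Yield (time, "pace") whenever no natural beat has occurred within
    the escape interval; a sensed beat resets the timer and inhibits pacing."""
    sensed = sorted(sensed_beat_times)
    last_event = 0.0        # time of the most recent sensed or paced beat
    i = 0
    t = 0.0
    while t <= total_time:
        while i < len(sensed) and sensed[i] <= t:
            last_event = sensed[i]              # natural beat detected
            i += 1
        if t - last_event >= ESCAPE_INTERVAL_S:
            yield (round(t, 2), "pace")         # deliver a stimulus
            last_event = t
        t += step

# Example: the natural rhythm stops after 1.6 seconds, so pacing takes over.
natural_beats = [0.0, 0.8, 1.6]
for event in pace_decisions(natural_beats, total_time=5.0):
    print(event)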

The biggest problems during this period lay in the size of the

pacemaker and the short life of the battery. Failure of the
electrical impulse sometimes caused the death of the patient. In addition,

the most reliable method of checking the energy level of the

battery was to watch for a decreased pulse rate. As improvements

were made in electronics, the pacemaker became smaller, and in

1972, the more reliable lithium-iodine batteries were introduced.

These batteries made it possible to store more energy and to monitor

the energy level more effectively. The use of this type of power

source essentially eliminated the battery as the limiting factor in the

longevity of the pacemaker. The period of time that a pacemaker

could operate continuously in the body increased from a period of

days in 1958 to five to ten years by the 1970’s.

Consequences

The development of electronic heart pacemakers revolutionized

cardiology. Although the initial machines were used primarily to

control cardiac bradycardia, the often life-threatening slowing of

the heartbeat, a wide variety of arrhythmias and problems with cardiac

output can now be controlled through the use of these devices.

The success associated with the surgical implantation of pacemakers

is attested by the frequency of its use. Prior to 1960, only three

pacemakers had been implanted. During the 1990’s, however, some

300,000 were implanted each year throughout the world. In the

United States, the prevalence of implants is on the order of 1 per

1,000 persons in the population.

Pacemaker technology continues to improve. Newer models can

sense pH and oxygen levels in the blood, as well as respiratory rate.

They have become further sensitized to minor electrical disturbances

and can adjust accordingly. The use of easily sterilized circuitry

has eliminated the danger of infection. Once the pacemaker has been installed in the patient, the basic electronics require no additional

attention. With the use of modern pacemakers, many forms

of electrical arrhythmias need no longer be life-threatening.

Monday, September 28, 2009

Orlon





The invention: A synthetic fiber made from polyacrylonitrile that

has become widely used in textiles and in the preparation of

high-strength carbon fibers.

The people behind the invention:

Herbert Rein (1899-1955), a German chemist

Ray C. Houtz (1907- ), an American chemist

A Difficult Plastic

“Polymers” are large molecules that are made up of chains of

many smaller molecules, called “monomers.” Materials that are

made of polymers are also called polymers,

and some polymers,

such as proteins, cellulose, and starch, occur in nature. Most polymers,

however, are synthetic materials, which means that they were

created by scientists.

The twenty-year period beginning in 1930 was the age of great

discoveries in polymers by both chemists and engineers. During

this time, many of the synthetic polymers, which are also known as

plastics, were first made and their uses found. Among these polymers

were nylon, polyester, and polyacrylonitrile. The last of these

materials, polyacrylonitrile (PAN), was first synthesized by German

chemists in the late 1920’s. They linked more than one thousand

of the small, organic molecules of acrylonitrile to make a polymer.

The polymer chains of this material had the properties that

were needed to form strong fibers, but there was one problem. Instead

of melting when heated to a high temperature, PAN simply

decomposed. This made it impossible, with the technology that existed

then, to make fibers.

The best method available to industry at that time was the process

of melt spinning, in which fibers were made by forcing molten

polymer through small holes and allowing it to cool. Researchers realized

that, if PAN could be put into a solution, the same apparatus

could be used to spin PAN fibers. Scientists in Germany and the

United States tried to find a solvent or liquid that would dissolve

PAN, but they were unsuccessful until World War II began.

Fibers for War

In 1938, the German chemist Walter Reppe developed a new

class of organic solvents called “amides.” These new liquids were

able to dissolve many materials, including some of the recently discovered

polymers. When World War II began in 1939, both the Germans

and the Allies needed to develop new materials for the war effort.

Materials such as rubber and fibers were in short supply. Thus,

there was increased governmental support for chemical and industrial

research on both sides of the war. This support was to result in

two independent solutions to the PAN problem.

In 1942, Herbert Rein, while working for I. G. Farben in Germany,

discovered that PAN fibers could be produced from a solution of

polyacrylonitrile dissolved in the newly synthesized solvent dimethylformamide.

At the same time Ray C. Houtz, who was working for E.

I. Du Pont de Nemours in Wilmington, Delaware, found that the related

solvent dimethylacetamide would also form excellent PAN fibers.

His work was patented, and some fibers were produced for use

by the military during the war. In 1950, Du Pont began commercial

production of a form of polyacrylonitrile fibers called Orlon. The

Monsanto Company followed with a fiber called Acrilon in 1952, and

other companies began to make similar products in 1958.

There are two ways to produce PAN fibers. In both methods,

polyacrylonitrile is first dissolved in a suitable solvent. The solution

is next forced through small holes in a device called a “spinneret.”

The solution emerges from the spinneret as thin streams of a thick,

gooey liquid. In the “wet spinning method,” the streams then enter

another liquid (usually water or alcohol), which extracts the solvent

from the solution, leaving behind the pure PAN fiber. After air drying,

the fiber can be treated like any other fiber. The “dry spinning

method” uses no liquid. Instead, the solvent is evaporated from the

emerging streams by means of hot air, and again the PAN fiber is left

behind.

In 1944, another discovery was made that is an important part of

the polyacrylonitrile fiber story. W. P. Coxe of Du Pont and L. L.

Winter at Union Carbide Corporation found that, when PAN fibers

are heated under certain conditions, the polymer decomposes and

changes into graphite (one of the elemental forms of carbon) but still

keeps its fiber form. In contrast to most forms of graphite, these fibers

were exceptionally strong. These were the first carbon fibers

ever made. Originally known as “black Orlon,” they were first produced

commercially by the Japanese in 1964, but they were too

weak to find many uses. After new methods of graphitization were

developed jointly by labs in Japan, Great Britain, and the United

States, the strength of the carbon fibers was increased, and the fibers

began to be used in many fields.

Impact

As had been predicted earlier, PAN fibers were found to have

some very useful properties. Their discovery and commercialization

helped pave the way for the acceptance and wide use of polymers.

The fibers derive their properties from the stiff, rodlike structure

of polyacrylonitrile. Known as acrylics, these fibers are more

durable than cotton, and they are the best alternative to wool for

sweaters. Acrylics are resistant to heat and chemicals, can be dyed

easily, resist fading or wrinkling, and are mildew-resistant. Thus, after

their introduction, PAN fibers were very quickly made into

yarns, blankets, draperies, carpets, rugs, sportswear, and various

items of clothing. Often, the fibers contain small amounts of other

polymers that give them additional useful properties.

A significant amount of PAN fiber is used in making carbon fibers.

These lightweight fibers are stronger for their weight than any

known material, and they are used to make high-strength composites

for applications in aerospace, the military, and sports. A “fiber

composite” is a material made from two parts: a fiber, such as carbon

or glass, and something to hold the fibers together, which is

usually a plastic called an “epoxy.” Fiber composites are used in

products that require great strength and light weight. Their applications

can be as ordinary as a tennis racket or fishing pole or as exotic

as an airplane tail or the body of a spacecraft.

Thursday, September 24, 2009

Optical disk







The invention: A nonmagnetic storage medium for computers that
can hold much greater quantities of data than similarly sized magnetic

media, such as hard and floppy disks.

The people behind the invention:

Klaas Compaan, a Dutch physicist

Piet Kramer, head of Philips’ optical research laboratory

Lou F. Ottens, director of product development for Philips’

musical equipment division

George T. de Kruiff, manager of Philips’ audio-product

development department

Joop Sinjou, a Philips project leader

Holograms Can Be Copied Inexpensively

Holography is a lensless photographic method that uses laser

light to produce three-dimensional images. This is done by splitting

a laser beam into two beams. One of the beams

is aimed at the object

whose image is being reproduced so that the laser light will reflect

from the object and strike a photographic plate or film. The second

beam of light is reflected from a mirror near the object and also

strikes the photographic plate or film. The “interference pattern,”

which is simply the pattern created by the differences between the

two reflected beams of light, is recorded on the photographic surface.

The recording that is made in this way is called a “hologram.”

When laser light or white light strikes the hologram, an image is created

that appears to be a three-dimensional object.

Early in 1969, Radio Corporation of America (RCA) engineers

found a way to copy holograms inexpensively by impressing interference

patterns on a nickel sheet that then became a mold from

which copies could be made. Klaas Compaan, a Dutch physicist,

learned of this method and had the idea that images could be recorded

in a similar way and reproduced on a disk the size of a phonograph

record. Once the images were on the disk, they could be

projected onto a screen in any sequence. Compaan saw the possibilities

of such a technology in the fields of training and education.

Computer Data Storage Breakthrough

In 1969, Compaan shared his idea with Piet Kramer, who was the

head of Philips’ optical research laboratory. The idea intrigued

Kramer. Between 1969 and 1971, Compaan spent much of his time

working on the development of a prototype.

By September, 1971, Compaan and Kramer, together with a handful

of others, had assembled a prototype that could read a black-and-

white video signal from a spinning glass disk. Three months

later, they demonstrated it for senior managers at Philips. In July,

1972, a color prototype was demonstrated publicly. After the demonstration,

Philips began to consider putting sound, rather than images,

on the disks. The main attraction of that idea was that the 12-

inch (305-millimeter) disks would hold up to forty-eight hours of

music. Very quickly, however, Lou F. Ottens, director of product development

for Philips’ musical equipment division, put an end to

any talk of a long-playing audio disk.

Ottens had developed the cassette-tape cartridge in the 1960’s.

He had plenty of experience with the recording industry, and he had

no illusions that the industry would embrace that new medium. He

was convinced that the recording companies would consider forty-eight

hours of music unmarketable. He also knew that any new

medium would have to offer a dramatic improvement over existing

vinyl records.

In 1974, only three years after the first microprocessor (the basic

element of computers) was invented, designing a digital consumer

product—rather than an analog product such as those that were already

commonly accepted—was risky. (Digital technology uses

numbers to represent information, whereas analog technology represents

information by mechanical or physical means.) When

George T. de Kruiff became Ottens’s manager of audio-product

development in June, 1974, he was amazed that there were no

digital circuit specialists in the audio department. De Kruiff recruited

new digital engineers, bought computer-aided design

tools, and decided that the project should go digital.

Within a few months, Ottens’s engineers had rigged up a digital

system. They used an audio signal that was representative of an

acoustical wave, sampled it to change it to digital form,

and encoded it as a series of pulses.

On the disk itself, they varied the

length of the “dimples” that were used to represent the sound so

that the rising and falling edges of the series of pulses corresponded

to the dimples’ walls. A helium-neon laser was reflected from

the dimples to photodetectors that were connected to a digital-to-analog

converter.
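
The encoding idea in this paragraph can be illustrated with a short sketch: sample a waveform, serialize the samples into a pulse train, and read off the run lengths between edges, which correspond to the lengths of the dimples and the flats between them. The sample rate, word size, and function names below are hypothetical, and the sketch deliberately omits the modulation and error-correction stages of the finished format.

import math

# Sketch of the edge-coded idea described above: quantized samples become a
# pulse train, and the run lengths between edges give the dimple/flat lengths.
# The sample rate and word size are hypothetical illustration only.

SAMPLE_RATE = 8000     # samples per second (illustrative value)
BITS = 8               # bits per sample (illustrative value)

def sample_and_quantize(duration_s, freq_hz):
    """Sample a test tone and quantize each sample to an unsigned integer."""
    n = int(duration_s * SAMPLE_RATE)
    levels = 2 ** BITS
    return [int((math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) + 1) / 2
                * (levels - 1)) for i in range(n)]

def to_pulse_train(samples):
    """Serialize the samples into one long series of 0/1 pulses."""
    bits = []
    for s in samples:
        bits.extend((s >> b) & 1 for b in reversed(range(BITS)))
    return bits

def run_lengths(bits):
    """Lengths between edges; each edge marks a dimple wall on the disk."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return runs

tone = sample_and_quantize(0.002, freq_hz=440)
print(run_lengths(to_pulse_train(tone))[:20])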

In 1978, Philips demonstrated a prototype for Polygram (a West

German company) and persuaded Polygram to develop an inexpensive

disk material with the appropriate optical qualities. Most

important was that the material could not warp. Polygram spent

about $150,000 and three months to develop the disk. In addition, it

was determined that the gallium-arsenide (GaAs) laser would be

used in the project. Sharp Corporation agreed to manufacture a

long-life GaAs diode laser to Philips’ specifications.

The optical-system designers wanted to reduce the number

of parts in order to decrease manufacturing costs and improve

reliability. Therefore, the lenses were simplified and considerable

work was devoted to developing an error-correction code.

Philips and Sony engineers also worked together to create a standard

format. In 1983, Philips made almost 100,000 units of optical

disks.

Consequences

In 1983, one of the most successful consumer products of all time

was introduced: the optical-disk system. The overwhelming success

of optical-disk reproduction led to the growth of a multibillion-dollar

industry around optical information and laid the groundwork

for a whole crop of technologies that promise to revolutionize computer

data storage. Common optical-disk products are the compact

disc (CD), the compact disc read-only memory (CD-ROM), the

write-once, read-many (WORM) disk, the erasable disk, and CD-I (interactive

CD).

The CD-ROM, the WORM, and the erasable optical disk, all of

which are used in computer applications, can hold more than 550

megabytes, from 200 to 800 megabytes, and 650 megabytes of data,

respectively.

The CD-ROM is a nonerasable disc that is used to store computer

data. After the write-once operation is performed, a WORM becomes

a read-only optical disk. An erasable optical disk can be

erased and rewritten easily. CD-ROMs, coupled with expert-system

technology, are expected to make data retrieval easier. The CD-ROM,

the WORM, and the erasable optical disk may replace magnetic

hard and floppy disks as computer data storage devices.

Tuesday, September 22, 2009

Oil-well drill bit







The invention: A rotary cone drill bit that enabled oil-well drillers

to penetrate hard rock formations.

The people behind the invention:

Howard R. Hughes (1869-1924), an American lawyer, drilling

engineer, and inventor

Walter B. Sharp (1860-1912), an American drilling engineer,

inventor, and partner to Hughes

Digging for Oil

A rotary drill rig of the 1990’s is basically unchanged in its essential

components from its earlier versions of the 1900’s. A drill bit is

attached to a line of hollow drill pipe. The latter passes through a

hole on a rotary table, which acts essentially as a horizontal gear

wheel and is driven by an engine. As the rotary table turns, so do the

pipe and drill bit.

During drilling operations, mud-laden water is pumped under

high pressure down the sides of the drill pipe and jets out with great

force through the small holes in the rotary drill bit against the bottom

of the borehole. This fluid then returns outside the drill pipe to

the surface, carrying with it rock material cuttings from below. Circulated

rock cuttings and fluids are regularly examined at the surface

to determine the precise type and age of rock formation and for

signs of oil and gas.

A key part of the total rotary drilling system is the drill bit, which

has sharp cutting edges that make direct contact with the geologic

formations to be drilled. The first bits used in rotary drilling were

paddlelike “fishtail” bits, fairly successful for softer formations, and

tubular coring bits for harder surfaces. In 1893, M. C. Baker and C. E.

Baker brought a rotary water-well drill rig to Corsicana, Texas, for

modification to deeper oil drilling. This rig led to the discovery of

the large Corsicana-Powell oil field in Navarro County, Texas. This

success also motivated its operators, the American Well and Prospecting

Company, to begin the first large-scale manufacture of rotary

drilling rigs for commercial sale.

In the earliest rotary drilling for oil, short fishtail bits were the

tool of choice, since they were at that time the best able
to bore through a wide range of geologic strata without needing frequent
replacement. Even so, many bits were typically required in the course of
any given oil well, particularly in coastal drilling in the Gulf of

Mexico. Especially when encountering locally harder rock units

such as limestone, dolomite, or gravel beds, fishtail bits would typically

either curl backward or break off in the hole, requiring the

time-consuming work of pulling out all drill pipe and “fishing” to

retrieve fragments and clear the hole.

Because of the frequent bit wear and damage, numerous small

blacksmith shops established themselves near drill rigs, dressing or

sharpening bits with a hand forge and hammer. Each bit-forging

shop had its own particular way of shaping bits, producing a wide

variety of designs. Nonstandard bit designs were frequently modified

further as experiments to meet the specific requests of local drillers

encountering specific drilling difficulties in given rock layers.

Speeding the Process

In 1907 and 1908, patents were obtained in New Jersey and

Texas for steel, cone-shaped drill bits incorporating a roller-type

coring device with many serrated teeth. Later in 1908, both patents

were bought by lawyer Howard R. Hughes.

Although comparatively weak rocks such as sands, clays, and

soft shales could be drilled rapidly (at rates exceeding 30 meters per

hour), in harder shales, lime-dolostones, and gravels, drill rates of 1

meter per hour or less were not uncommon. Conventional drill bits

of the time had average operating lives of three to twelve hours.

Economic drilling mandated increases in both bit life and drilling

rate. Directly motivated by his petroleum prospecting interests,

Hughes and his partner, Walter B. Sharp, undertook what were

probably the first recorded systematic studies of drill bit performance

while matched against specific rock layers.

Although many improvements in detail and materials have been

made to the Hughes cone bit since its inception in 1908, its basic design

is still used in rotary drilling. One of Hughes’s major innovations

was the much larger size of the cutters, symmetrically distributed as a large number of small individual teeth on the outer face of

two or more cantilevered bearing pins. In addition, “hard facing”

was applied to the drill bit teeth to increase usable life. Hard facing is

a metallurgical process basically consisting of welding a thin layer

of a hard metal or alloy of special composition to a metal surface to

increase its resistance to abrasion and heat. A less noticeable but

equally essential innovation, not included in other drill bit patents, was an ingeniously designed gauge surface that provided strong

uniform support for all the drill teeth. The force-fed oil lubrication

was another new feature included in Hughes’s patent and prototypes,

reducing the power necessary to rotate the bit by 50 percent

over that of prior mud or water lubricant designs.

Impact

In 1925, the first superhard facing was used on cone drill bits. In

addition, the first so-called self-cleaning rock bits appeared from

Hughes, with significant advances in roller bearings and bit tooth

shape translating into increased drilling efficiency. The much larger

teeth were more adaptable to drilling in a wider variety of geological

formations than earlier models. In 1928, tungsten carbide was

introduced as an additional bit facing hardener by Hughes metallurgists.

This, together with other improvements, resulted in the

Hughes ACME tooth form, which has been in almost continuous

use since 1926.

Many other drilling support technologies, such as drilling mud,

mud circulation pumps, blowout detectors and preventers, and

pipe properties and connectors have enabled rotary drilling rigs to

reach new depths (exceeding 5 kilometers in 1990). The successful

experiments by Hughes in 1908 were critical initiators of these developments.

Nylon















The invention: A resilient, high-strength polymer with applications

ranging from women’s hose to safety nets used in space flights.

The people behind the invention:

Wallace Hume Carothers (1896-1937), an American organic chemist

Charles M. A. Stine (1882-1954), an American chemist and director of chemical research at Du Pont

Elmer Keiser Bolton (1886-1968), an American industrial chemist

Pure Research

In the twentieth century, American corporations created industrial research laboratories.

Their directors became the organizers of inventions,

and their scientists served as the sources of creativity.

The research program of

E. I. Du Pont de Nemours and Company

(Du Pont), through its most famous invention—nylon—became the

model for scientifically based industrial research in the chemical

industry.

During World War I (1914-1918), Du Pont tried to diversify,

concerned that after the war it would not be able to expand with

only explosives as a product. Charles M. A. Stine, Du Pont’s director

of chemical research, proposed that Du Pont should move

into fundamental research by hiring first-rate academic scientists

and giving them freedom to work on important problems in

organic chemistry. He convinced company executives that a program

to explore the fundamental science underlying Du Pont’s

technology would ultimately result in discoveries of value to the

company. In 1927, Du Pont gave him a new laboratory for research.

Stine visited universities in search of brilliant, but not-yet-established,

young scientists. He hired Wallace Hume Carothers.

Stine suggested that Carothers do fundamental research in polymer

chemistry.

Before the 1920’s, polymers were a mystery to chemists. Polymeric

materials were the result of ingenious laboratory practice,

and this practice ran far ahead of theory and understanding. German

chemists debated whether polymers were aggregates of smaller

units held together by some unknown special force or genuine molecules

held together by ordinary chemical bonds.

German chemist Hermann Staudinger asserted that they were

large molecules with endlessly repeating units. Carothers shared

this view, and he devised a scheme to prove it by synthesizing very

large molecules by simple reactions in such a way as to leave no

doubt about their structure. Carothers’s synthesis of polymers revealed

that they were ordinary molecules but giant in size.

The Longest Molecule

In April, 1930, Carothers’s research group produced two major

innovations: neoprene synthetic rubber and the first laboratory-synthesized

fiber. Neither result was the goal of their research. Neoprene

was an incidental discovery during a project to study short

polymers of acetylene. During experimentation, an unexpected substance

appeared that polymerized spontaneously. Carothers studied

its chemistry and developed the process into the first successful synthetic

rubber made in the United States.

The other discovery was an unexpected outcome of the group’s

project to synthesize polyesters by the reaction of acids and alcohols.

Their goal was to create a polyester that could react indefinitely

to form a substance with high molecular weight. The scientists

encountered a molecular weight limit of about 5,000 units to the

size of the polyesters, until Carothers realized that the reaction also

produced water, which was decomposing polyesters back into acid

and alcohol. Carothers and his associate Julian Hill devised an apparatus

to remove the water as it formed. The result was a polyester

with a molecular weight of more than 12,000, far higher than any

previous polymer.
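
The effect of removing the water can be illustrated with the standard step-growth relation now known as the Carothers equation, Xn = 1/(1 - p), where p is the fraction of acid and alcohol end groups that have reacted. The repeat-unit mass in the sketch below is a hypothetical round figure, not a value from this account.

# Illustration of why removing water mattered, using the step-growth relation
# now known as the Carothers equation: Xn = 1 / (1 - p), where p is the
# fraction of end groups that have reacted. The repeat-unit mass is a
# hypothetical round figure chosen only for illustration.

REPEAT_UNIT_MASS = 200.0   # g/mol, assumed

def degree_of_polymerization(p):
    """Number-average degree of polymerization at fractional conversion p."""
    return 1.0 / (1.0 - p)

def conversion_needed(target_mw):
    """Conversion p required to reach a given number-average molecular weight."""
    xn = target_mw / REPEAT_UNIT_MASS
    return 1.0 - 1.0 / xn

for mw in (5000, 12000):
    p = conversion_needed(mw)
    print(f"M_n ~ {mw:>6} g/mol: p ~ {p:.3f}, Xn ~ {degree_of_polymerization(p):.0f}")

# Driving off the water pushes the esterification toward higher conversion;
# raising p from about 0.96 to about 0.98 roughly doubles the chain length.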

Hill, while removing a sample from the apparatus, found that he

could draw it out into filaments that on cooling could be stretched to

form very strong fibers. This procedure, called “cold-drawing,” oriented

the molecules from a random arrangement into a long, linear one of great strength. The polyester fiber, however, was unsuitable

for textiles because of its low melting point.

In June, 1930, Du Pont promoted Stine; his replacement as research

director was Elmer Keiser Bolton. Bolton wanted to control

fundamental research more closely, relating it to projects that would

pay off and not allowing the research group freedom to pursue

purely theoretical questions.

Despite their differences, Carothers and Bolton shared an interest

in fiber research. On May 24, 1934, Bolton’s assistant Donald

Coffman “drew” a strong fiber from a new polyamide. This was the

first nylon fiber, although not the one commercialized by Du Pont.

The nylon fiber was high-melting and tough, and it seemed that a

practical synthetic fiber might be feasible.

By summer of 1934, the fiber project was the heart of the research

group’s activity. The one that had the best fiber properties was nylon

5-10, the number referring to the number of carbon atoms in the

amine and acid chains. Yet the nylon 6-6 prepared on February 28,

1935, became Du Pont’s nylon. Nylon 5-10 had some advantages,

but Bolton realized that its components would be unsuitable for

commercial production, whereas those of nylon 6-6 could be obtained

from chemicals in coal.

A determined Bolton pursued nylon’s practical development,

a process that required nearly four years. Finally, in April, 1937,

Du Pont filed a patent for synthetic fibers, which included a statement

by Carothers that there was no previous work on polyamides;

this was a major breakthrough. After Carothers’s death

on April 29, 1937, the patent was issued posthumously and assigned

to Du Pont. Du Pont made the first public announcement

of nylon on October 27, 1938.

Impact

Nylon was a generic term for polyamides, and several types of

nylon became commercially important in addition to nylon 6-6.

These nylons found widespread use as both a fiber and a moldable

plastic. Since it resisted abrasion and crushing, was nonabsorbent,

was stronger than steel on a weight-for-weight basis, and was almost

nonflammable, it embraced an astonishing range of uses: in laces, screens, surgical sutures, paint, toothbrushes, violin strings,

coatings for electrical wires, lingerie, evening gowns, leotards, athletic

equipment, outdoor furniture, shower curtains, handbags, sails,

luggage, fish nets, carpets, slip covers, bus seats, and even safety

nets on the space shuttle.

The invention of nylon stimulated notable advances in the chemistry

and technology of polymers. Some historians of technology

have even dubbed the postwar period as the “age of plastics,” the

age of synthetic products based on the chemistry of giant molecules

made by ingenious chemists and engineers.

The success of nylon and other synthetics, however, has come at

a cost. Several environmental problems have surfaced, such as those

created by the nondegradable feature of some plastics, and there is

the problem of the increasing utilization of valuable, vanishing resources,

such as petroleum, which contains the essential chemicals

needed to make polymers. The challenge to reuse and recycle these

polymers is being addressed by both scientists and policymakers.

Tuesday, September 8, 2009

Nuclear reactor







The invention: 



The first nuclear reactor to produce substantial

quantities of plutonium, making it practical to produce usable

amounts of energy from a chain reaction.



The people behind the invention:



Enrico Fermi (1901-1954), an American physicist

Martin D. Whitaker (1902-1960), the first director of Oak Ridge

National Laboratory

Eugene Paul Wigner (1902-1995), the director of research and

development at Oak Ridge









The Technology to End a War



The construction of the nuclear reactor at Oak Ridge National

Laboratory in 1943 was a vital part of the Manhattan Project, the effort

by the United States during World War II (1939-1945) to develop

an atomic bomb. The successful operation of that reactor

was a major achievement not only for the project itself but also for

the general development and application of nuclear technology.

The first director of the Oak Ridge National Laboratory was Martin

D. Whitaker; the director of research and development was Eugene

Paul Wigner.

The nucleus of an atom is made up of protons and neutrons. “Fission”

is the process by which the nuclei of certain elements are split

in two by a neutron from some material that emits an occasional

neutron naturally. When an atom splits, two things happen: A tremendous

amount of thermal energy is released, and two or three

neutrons, on the average, escape from the nucleus. If all the atoms in

a kilogram of “uranium 235” were to fission, they would produce as

much heat energy as the burning of 3 million kilograms of coal. The

neutrons that are released are important, because if at least one of

them hits another atom and causes it to fission (and thus to release

more energy and more neutrons), the process will continue. It will

become a self-sustaining chain reaction that will produce a continuing

supply of heat.
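
The coal comparison above can be checked with rough arithmetic. The figures assumed below, about 200 MeV released per fission and a coal heating value near 27 MJ per kilogram, are standard textbook values rather than numbers taken from this account.

# Rough check of the uranium-versus-coal comparison above. The ~200 MeV per
# fission and the coal heating value are standard textbook figures assumed
# here; they are not given in the text.

AVOGADRO = 6.022e23            # atoms per mole
U235_MOLAR_MASS_G = 235.0      # grams per mole
MEV_PER_FISSION = 200.0        # approximate energy release per fission
JOULES_PER_MEV = 1.602e-13
COAL_HEAT_J_PER_KG = 27e6      # about 27 MJ/kg, typical for coal

atoms_per_kg = 1000.0 / U235_MOLAR_MASS_G * AVOGADRO
energy_per_kg_u235 = atoms_per_kg * MEV_PER_FISSION * JOULES_PER_MEV
equivalent_coal_kg = energy_per_kg_u235 / COAL_HEAT_J_PER_KG

print(f"Complete fission of 1 kg U-235: about {energy_per_kg_u235:.1e} J")
print(f"Coal needed for the same heat:  about {equivalent_coal_kg:.1e} kg")
# Prints roughly 8e13 J and about 3 million kg of coal, consistent with the text.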

Inside a reactor, a nuclear chain reaction is controlled so that it

proceeds relatively slowly. The most familiar use for the heat thus

released is to boil water and make steam to turn the turbine generators

that produce electricity to serve industrial, commercial, and

residential needs. The fissioning process in a weapon, however, proceeds

very rapidly, so that all the energy in the atoms is produced

and released virtually at once. The first application of nuclear technology,

which used a rapid chain reaction, was to produce the two

atomic bombs that ended World War II.
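
The difference between a controlled reactor and a weapon comes down to how many of the two or three released neutrons go on to cause another fission. The sketch below uses the standard reactor-physics shorthand of a multiplication factor k, a term not used in this article, to show the three regimes.

# Sketch of the chain-reaction bookkeeping described above. "k" is the
# average number of released neutrons that go on to cause another fission
# (standard reactor-physics shorthand, not a term used in this article).

def neutron_population(k, generations, start=1000):
    """Neutron count per generation for a given multiplication factor k."""
    counts = [start]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return [round(c) for c in counts]

print("dies out   (k = 0.9):", neutron_population(0.9, 8))
print("controlled (k = 1.0):", neutron_population(1.0, 8))
print("runaway    (k = 1.5):", neutron_population(1.5, 8))

# A reactor holds k at about 1, so heat is released at a steady rate; a
# weapon is arranged so k stays well above 1 and the energy is released
# almost at once.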





Breeding Bomb Fuel



The work that began at Oak Ridge in 1943 was made possible by a

major event that took place in 1942. At the University of Chicago,

Enrico Fermi had demonstrated for the first time that it was possible to

achieve a self-sustaining atomic chain reaction. More important, the reaction

could be controlled: It could be started up, it could generate heat

and sufficient neutrons to keep itself going, and it could be turned off.

That first chain reaction was very slow, and it generated very little heat;

but it demonstrated that controlled fission was possible.

Any heat-producing nuclear reaction is an energy conversion

process that requires fuel. There is only one readily fissionable element

that occurs naturally and can be used as fuel. It is a form of

uranium called uranium 235. It makes up less than 1 percent of all

naturally occurring uranium. The remainder is uranium 238, which

does not fission readily. Even uranium 235, however, must be enriched

before it can be used as fuel.

The process of enrichment increases the concentration of uranium

235 sufficiently for a chain reaction to occur. Enriched uranium is used

to fuel the reactors used by electric utilities. Also, the much more plentiful

uranium 238 can be converted into plutonium 239, a form of the

human-made element plutonium, which does fission readily. That

conversion process is the way fuel is produced for a nuclear weapon.

Therefore, the major objective of the Oak Ridge effort was to develop a

pilot operation for separating plutonium from the uranium in which it

was produced. Large-scale plutonium production, which had never

been attempted before, eventually would be done at the Hanford Engineer

Works in Washington. First, however, plutonium had to be pro-

duced successfully on a small scale at Oak Ridge.

The reactor was started up on November 4, 1943. By March 1,

1944, the Oak Ridge laboratory had produced several grams of plutonium.

The material was sent to the Los Alamos laboratory in New

Mexico for testing. By July, 1944, the reactor operated at four times

its original power level. By the end of that year, however, plutonium

production at Oak Ridge had ceased, and the reactor thereafter was

used principally to produce radioisotopes for physical and biological

research and for medical treatment. Ultimately, the Hanford Engineer

Works’ reactors produced the plutonium for the bomb that

was dropped on Nagasaki, Japan, on August 9, 1945.

The original objectives for which Oak Ridge had been built had

been achieved, and subsequent activity at the facility was directed

toward peacetime missions that included basic studies of the structure

of matter.



Impact



The most immediate impact of the work done at Oak Ridge was

its contribution to ending World War II. When the atomic bombs

were dropped, the war ended, and the United States emerged intact.

The immediate and long-range devastation to the people of Japan,

however, opened the public’s eyes to the almost unimaginable

death and destruction that could be caused by a nuclear war. Fears

of such a war remain to this day, especially as more and more nations

develop the technology to build nuclear weapons.

On the other hand, great contributions to human civilization

have resulted from the development of nuclear energy. Electric

power generation, nuclear medicine, spacecraft power, and ship

propulsion have all profited from the pioneering efforts at the Oak

Ridge National Laboratory. Currently, the primary use of nuclear

energy is to produce electric power. Handled properly, nuclear energy

may help to solve the pollution problems caused by the burning

of fossil fuels.



See also Breeder reactor; Compressed-air-accumulating power plant; Fuel cell;
Geothermal power; Heat pump; Nuclear power plant; Solar thermal engine.