Wednesday, March 25, 2009

Cloning




The invention: Experimental technique for creating exact duplicates
of living organisms by recreating their DNA.
The people behind the invention:
Ian Wilmut, an embryologist with the Roslin Institute
Keith H. S. Campbell, an experiment supervisor with the Roslin
Institute
J. McWhir, a researcher with the Roslin Institute
W. A. Ritchie, a researcher with the Roslin Institute
Making Copies
On February 22, 1997, officials of the Roslin Institute, a biological
research institution near Edinburgh, Scotland, held a press conference
to announce startling news: They had succeeded in creating
a clone—a biologically identical copy—from cells taken from
an adult sheep. Although cloning had been performed previously
with simpler organisms, the Roslin Institute experiment marked
the first time that a large, complex mammal had been successfully
cloned.
Cloning, or the production of genetically identical individuals,
has long been a staple of science fiction and other popular literature.
Clones do exist naturally, as in the example of identical twins. Scientists
have long understood the process by which identical twins
are created, and agricultural researchers have often dreamed of a
method by which cheap identical copies of superior livestock could
be created.
The discovery of the double helix structure of deoxyribonucleic
acid (DNA), or the genetic code, by James Watson and Francis Crick
in the 1950’s led to extensive research into cloning and genetic engineering.
Using the discoveries of Watson and Crick, scientists were
soon able to develop techniques to clone laboratory mice; however,
the cloning of complex, valuable animals such as livestock proved
to be hard going.
Early versions of livestock cloning were technical attempts at duplicating the natural process of fertilized egg splitting that leads to the
birth of identical twins. Artificially inseminated eggs were removed,
split, and then reinserted into surrogate mothers. This method proved
to be overly costly for commercial purposes, a situation aggravated by
a low success rate.
Nuclear Transfer
Researchers at the Roslin Institute found these earlier attempts to
be fundamentally flawed. Even if the success rate could be improved,
the number of clones created (of sheep, in this case) would
still be limited. The Scots, led by embryologist Ian Wilmut and experiment
supervisor Keith Campbell, decided to take an entirely
different approach. The result was the first live birth of a mammal
produced through a process known as “nuclear transfer.”
Nuclear transfer involves the replacement of the nucleus of an
immature egg with a nucleus taken from another cell. In previous attempts
at nuclear transfer, the cells of a single embryo had been divided
up and implanted into eggs. Because a sheep embryo has only
about forty usable cells, this method also proved limiting.
The Roslin team therefore decided to grow their own cells in a
laboratory culture. They took more mature embryonic cells than
those previously used, and they experimented with the use of a nutrient
mixture. One of their breakthroughs occurred when they discovered
that these “cell lines” grew much more quickly when certain
nutrients were absent.
Using this technique, the Scots were able to produce a theoretically
unlimited number of genetically identical cell lines. The next
step was to transfer nuclei from these cultured cells into unfertilized
sheep eggs.
First, 277 nuclei with a full set of chromosomes were transferred
to the unfertilized eggs. An electric shock was then used to cause the
eggs to begin development, the shock performing the duty of fertilization.
Of these eggs, twenty-nine developed enough to be inserted
into surrogate mothers.
All the embryos died before birth except one: a ewe the scientists
named “Dolly.” Her birth on July 5, 1996, was witnessed by only a
veterinarian and a few researchers. Not until the clone had survived
the critical earliest stages of life was the success of the experiment
disclosed; Dolly was more than seven months old by the time her
birth was announced to a startled world.
Impact
The news that the cloning of sophisticated organisms had left the
realm of science fiction and become a matter of accomplished scientific
fact set off an immediate uproar. Ethicists and media commentators
quickly began to debate the moral consequences of the use—
and potential misuse—of the technology. Politicians in numerous
countries responded to the news by calling for legal restrictions on
cloning research. Scientists, meanwhile, speculated about the possible
benefits and practical limitations of the process.
The issue that stirred the imagination of the broader public and
sparked the most spirited debate was the possibility that similar experiments
might soon be performed using human embryos. Although
most commentators seemed to agree that such efforts would
be profoundly immoral, many experts observed that they would be
virtually impossible to prevent. “Could someone do this tomorrow
morning on a human embryo?” Arthur L. Caplan, the director of the
University of Pennsylvania’s bioethics center, asked reporters. “Yes.
It would not even take too much science. The embryos are out
there.”
Such observations conjured visions of a future that seemed marvelous
to some, nightmarish to others. Optimists suggested that the best and brightest of humanity could be forever perpetuated, creating
an endless supply of Albert Einsteins and Wolfgang Amadeus
Mozarts. Pessimists warned of a world overrun by clones of self-serving
narcissists and petty despots, or of the creation of a secondary
class of humans to serve as organ donors for their progenitors.
The Roslin Institute’s researchers steadfastly proclaimed their
own opposition to human experimentation. Moreover, most scientists
were quick to point out that such scenarios were far from realization,
noting the extremely high failure rate involved in the creation
of even a single sheep. In addition, most experts emphasized
more practical possible uses of the technology: improving agricultural
stock by cloning productive and disease-resistant animals, for
example, or regenerating endangered or even extinct species. Even
such apparently benign schemes had their detractors, however, as
other observers remarked on the potential dangers of thus narrowing
a species’ genetic pool.
Even prior to the Roslin Institute’s announcement, most European
nations had adopted a bioethics code that flatly prohibited genetic
experiments on human subjects. Ten days after the announcement,
U.S. president Bill Clinton issued an executive order that
banned the use of federal money for human cloning research, and
he called on researchers in the private sector to refrain from such experiments
voluntarily. Nevertheless, few observers doubted that
Dolly’s birth marked only the beginning of an intriguing—and possibly
frightening—new chapter in the history of science.

Friday, March 20, 2009

Cell phone









The invention: Mobile telephone system controlled by computers
to use a region’s radio frequencies, or channels, repeatedly,
thereby accommodating large numbers of users.
The people behind the invention:
William Oliver Baker (1915- ), the president of Bell
Laboratories
Richard H. Frenkiel, the head of the mobile systems
engineering department at Bell Laboratories

The First Radio Telephones

The first recorded attempt to use radio technology to provide direct access to a telephone system took place in 1920. It was not until 1946, however, that Bell Telephone established the first such commercial system in St. Louis. The system had a number of disadvantages: users had to contact an operator who did the dialing and the connecting, and the use of a single radio frequency prevented simultaneous talking and listening. In 1949, a system was developed that used two radio frequencies (a “duplex pair”), permitting both the mobile unit and the base station to transmit and receive simultaneously and making a more normal sort of telephone conversation possible. This type of service, known as Mobile Telephone Service (MTS), was the norm in the field for many years.

The history of MTS is one of continuously increasing business usage. The development of the transistor made possible the design and manufacture of reasonably light, compact, and reliable equipment, but the expansion of MTS was slowed by the limited number of radio frequencies; there is nowhere near enough space on the radio spectrum for each user to have a separate frequency. In New York City, for example, New York Telephone Company was limited to just twelve channels for its more than seven hundred mobile subscribers, meaning that only twelve conversations could be carried on at once. In addition, because of possible interference, none of those channels could be reused in nearby cities; only fifty-four channels were available nationwide. By the late 1970’s, most of the systems in major cities were considered full, and new subscribers were placed on a waiting list; some people had been waiting for as long as ten years to become subscribers. Mobile phone users commonly experienced long delays in getting a channel, and the channels they did get were often of poor quality.



The Cellular Breakthrough



In 1968, the Federal Communications Commission (FCC) requested proposals for the creation of high-capacity, spectrum-efficient mobile systems. Bell Telephone had already been lobbying for the creation of such a system for some years. In the early 1970’s, both Motorola and Bell Telephone proposed the use of cellular technology to solve the problems posed by mobile telephone service. Cellular systems involve the use of a computer to make it possible to use an area’s frequencies, or channels, repeatedly, allowing such systems to accommodate many more users.

A two-thousand-customer, 2,100-square-mile cellular telephone system called the Advanced Mobile Phone Service (AMPS), built by the AMPS Corporation, an AT&T subsidiary, became operational in Chicago in 1978. The Illinois Bell Telephone Company was allowed to make a limited commercial offering and obtained about fourteen hundred subscribers. American Radio Telephone Service was allowed to conduct a similar test in the Baltimore/Washington area. These first systems showed the technological feasibility and affordability of cellular service.

In 1979, Bell Labs of Murray Hill, New Jersey, received a patent for such a system. The inventor was Richard H. Frenkiel, head of the mobile systems engineering department under the leadership of Labs president William Baker. The patented method divides a city into small coverage areas called “cells,” each served by low-power transmitter-receivers. When a vehicle leaves the coverage of one cell, calls are switched to the antenna and channels of an adjacent cell; a conversation underway is automatically transferred and continues without interruption. A channel used in one cell can be reused a few cells away for a different conversation. In this way, a few hundred channels can serve hundreds of thousands of users. Computers control the call-transfer process, effectively reducing the amount of radio spectrum required. Cellular systems thus actually use radio frequencies to transmit conversations, but because the equipment is so telephone-like, “cellular telephone” (or “cell phone”) became the accepted term for the new technology.
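The capacity arithmetic behind that claim can be sketched with a short calculation. The numbers below (channel count, cluster size, subscriber ratio) are invented for illustration and are not figures from the AMPS system:

```python
# Toy illustration of cellular frequency reuse; every number here is an assumption.
total_channels = 400       # channels licensed to the whole metropolitan system
cluster_size = 7           # cells per reuse cluster; each cluster uses every channel once
cells_in_city = 210        # cells covering the service area

channels_per_cell = total_channels // cluster_size       # about 57 channels per cell
simultaneous_calls = channels_per_cell * cells_in_city   # roughly 12,000 calls at once

# Only a small fraction of subscribers are on a call at any given moment,
# so those simultaneous calls can support a much larger subscriber base.
subscribers_per_busy_channel = 25
supported_subscribers = simultaneous_calls * subscribers_per_busy_channel
print(channels_per_cell, simultaneous_calls, supported_subscribers)  # 57 11970 299250
```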

Each AMPS cell station is connected by wire to a central switching office, which determines when a mobile phone should be transferred to another cell as the transmitter moves out of range during a conversation. It does this by monitoring the strength of signals received from the mobile unit by adjacent cells, “handing off” the call when a new cell receives a stronger signal; this change is imperceptible to the user.
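A minimal sketch of that hand-off rule, assuming the switching office does nothing more than compare the signal strengths reported by neighboring cell sites (the hysteresis margin and the data layout are invented for illustration):

```python
# Hand-off decision sketch: keep a call on its current cell until an adjacent
# cell reports a clearly stronger signal (a hysteresis margin avoids flip-flopping).
HYSTERESIS_DB = 3.0  # assumed margin required before switching

def choose_cell(current_cell, signal_dbm):
    """Return the cell that should carry the call, given per-cell signal readings."""
    best_cell = max(signal_dbm, key=signal_dbm.get)
    if best_cell != current_cell and signal_dbm[best_cell] >= signal_dbm[current_cell] + HYSTERESIS_DB:
        return best_cell   # hand the call off to the stronger adjacent cell
    return current_cell    # otherwise stay on the current cell

# Example: the mobile drives from cell "A" toward cell "B".
readings = {"A": -95.0, "B": -88.0, "C": -104.0}
print(choose_cell("A", readings))  # -> "B"
```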



Impact



In 1982, the FCC began accepting applications for cellular system licenses in the thirty largest U.S. cities. By the end of 1984, there were about forty thousand cellular customers in nearly two dozen cities. Cellular telephone ownership boomed to 9 million by 1992. As cellular telephones became more common, they also became cheaper and more convenient to buy and to use. New systems developed in the 1990’s continued to make smaller, lighter, and cheaper cellular phones even more accessible. Since the cellular telephone was made possible by the marriage of communications and computers, advances in both these fields have continued to change the industry at a rapid rate.

Cellular phones have proven ideal for many people who need or want to keep in touch with others at all times. They also provide convenient emergency communication devices for travelers and field-workers. On the other hand, ownership of a cellular phone can also have its drawbacks; many users have found that they can never be out of touch—even when they would rather be.



William Oliver Baker



For great discoveries and inventions to be possible in the world of high technology, inventors need great facilities—laboratories and workshops—with brilliant colleagues. These must be managed by imaginative administrators.

One of the best was William Oliver Baker (b. 1915), who rose to become president of the legendary Bell Labs. Baker started out as one of the most promising scientists of his generation. After earning a Ph.D. in chemistry at Princeton University, he joined the research section at Bell Telephone Laboratories in 1939. He studied the physics and chemistry of polymers, especially for use in electronics and telecommunications. During his research career he helped develop synthetic rubber and radar, found uses for polymers in communications and power cables, and participated in the discovery of microgels. In 1954 he ranked among the top-ten scientists in American industry and was asked to chair a National Research Council committee studying heat shields for missiles and satellites.

Administration suited him. The following year he took over as leader of research at Bell Labs and served as president from 1973 until 1979. Under his direction, basic discoveries and inventions poured out of the lab that later transformed the way people live and work: satellite communications, principles for programming high-speed computers, the technology for modern electronic communications, the superconducting solenoid, the maser, and the laser. His scientists won Nobel Prizes and legions of other honors, as did Baker himself, who received dozens of medals, awards, and honorary degrees. Moreover, he was an original member of the President’s Science Advisory Board, became the first chair of the National Science Information Council, and served on the National Science Board. His influence on American science and technology was deep and lasting.



See also: Internet; Long-distance telephone; Rotary dial telephone; Telephone switching; Touch-tone telephone.







Tuesday, March 10, 2009

CAT scanner





The invention: A technique that collects X-ray data from solid,
opaque masses such as human bodies and uses a computer to
construct a three-dimensional image.
The people behind the invention:
Godfrey Newbold Hounsfield (1919- ), an English
electronics engineer who shared the 1979 Nobel Prize in
Physiology or Medicine
Allan M. Cormack (1924-1998), a South African-born American
physicist who shared the 1979 Nobel Prize in Physiology or
Medicine
James Ambrose, an English radiologist
A Significant Merger
Computerized axial tomography (CAT) is a technique that collects
X-ray data from an opaque, solid mass such as a human body
and uses a sophisticated computer to assemble those data into a
three-dimensional image. This sophisticated merger of separate
technologies led to another name for CAT, computer-assisted tomography
(it came to be called computed tomography, or CT). CAT
is a technique of medical radiology, an area of medicine that began
after the German physicist Wilhelm Conrad Röntgen’s 1895 discovery
of the high-energy electromagnetic radiations he named “X
rays.” Röntgen and others soon produced X-ray images of parts of
the human body, and physicians were quick to learn that these images
were valuable diagnostic aids.
In the late 1950’s and early 1960’s, Allan M. Cormack, a physicist
at Tufts University in Massachusetts, pioneered a mathematical
method for obtaining detailed X-ray absorption patterns in opaque
samples meant to model biological samples. His studies used narrow
X-ray beams that were passed through samples at many different angles.
Because the technique probed test samples from many different
points of reference, it became possible—by using the proper mathematics—
to reconstruct the interior structure of a thin slice of the object
being studied. At the time, computers that could analyze the data in an effective fashion
had not yet been developed. Nevertheless, X-ray tomography—
the process of using X-rays to produce detailed images of thin
sections of solid objects—had been born. It remained for Godfrey
Newbold Hounsfield of England’s Electric and Musical Industries
(EMI) Limited (independently, and reportedly with no
knowledge of Cormack’s work) to design the first practical CAT
scanner.
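The absorption measurements at the heart of both approaches obey the standard attenuation relation for a narrow X-ray beam (textbook physics rather than anything particular to these experiments). A beam entering with intensity $I_0$ and crossing a slice whose attenuation coefficient is $\mu(x,y)$ along a line $L$ emerges with intensity

$$I = I_0 \exp\!\Big(-\int_L \mu(x,y)\,ds\Big), \qquad \text{so} \qquad \ln\frac{I_0}{I} = \int_L \mu(x,y)\,ds.$$

Each reading therefore supplies one line integral of the unknown attenuation map, and collecting such integrals at many angles is what makes the mathematical reconstruction of a thin slice possible.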
A Series of Thin Slices
Hounsfield, like Cormack, realized that X-ray tomography was
the most practical approach to developing a medical body imager. It
could be used to divide any three-dimensional object into a series of
thin slices that could be reconstructed into images by using appropriate
computers. Hounsfield developed another mathematical approach
to the method. He estimated that the technique would make
possible the very accurate reconstruction of images of thin body sections
with a sensitivity well above that of the X-ray methodology
then in use. Moreover, he proposed that his method would enable researchers and physicians to distinguish between normal and diseased
tissue. Hounsfield was correct about that.
The prototype instrument that Hounsfield developed was quite
slow, requiring nine days to scan an object. Soon, he modified the
scanner so that its use took only nine hours, and he obtained successful
tomograms of preserved human brains and the fresh brains
of cattle. The further development of the CAT scanner then proceeded quickly, yielding an instrument that required four and one-half
minutes to gather tomographic data and twenty minutes to
produce the tomographic image.
In late 1971, the first clinical CAT scanner was installed at Atkinson
Morley’s Hospital in Wimbledon, England. By early 1972,
the first patient, a woman with a suspected brain tumor, had been
examined, and the resultant tomogram identified a dark, circular
cyst in her brain. Additional data collection from other patients
soon validated the technique. Hounsfield and EMI patented the
CAT scanner in 1972, and the findings were reported at that year’s
annual meeting of the British Institute of Radiology.
Hounsfield published a detailed description of the instrument in
1973. Hounsfield’s clinical collaborator, James Ambrose, published
on the clinical aspects of the technique. Neurologists all around the
world were ecstatic about the new tool that allowed them to locate
tissue abnormalities with great precision.
The CAT scanner consisted of an X-ray generator, a scanner unit
composed of an X-ray tube and a detector in a circular chamber
about which they could be rotated, a computer that could process
all the data obtained, and a cathode-ray tube on which tomograms
were viewed. To produce tomograms, the patient was placed on a
couch, head inside the scanner chamber, and the emitter-detector
was rotated 1 degree at a time. At each position, 160 readings were
taken, converted to electrical signals, and fed into the computer. In
the 180 degrees traversed, 28,800 readings were taken and processed.
The computer then converted the data into a tomogram (a
cross-sectional representation of the brain that shows the differences
in tissue density). A Polaroid picture of the tomogram was
then taken and interpreted by the physician in charge.
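The reconstruction step can be illustrated with a deliberately crude sketch. The code below performs plain unfiltered back-projection, which is not the algorithm the EMI machine actually used; the array sizes simply echo the figures quoted above (160 readings at each of 180 one-degree positions):

```python
import numpy as np

def backproject(sinogram, size=160):
    """Crude unfiltered back-projection: smear each one-degree projection
    back across the image plane and sum. sinogram has shape (180, 160)."""
    n_angles, n_rays = sinogram.shape
    image = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size] - size / 2.0   # pixel coordinates, slice-centered
    for i in range(n_angles):
        theta = np.deg2rad(i)                        # one projection per degree
        t = xs * np.cos(theta) + ys * np.sin(theta)  # distance along the detector axis
        ray = np.clip(np.round(t + n_rays / 2).astype(int), 0, n_rays - 1)
        image += sinogram[i, ray]                    # add this projection's contribution
    return image / n_angles

# 180 angles x 160 readings = 28,800 values, matching the count in the text.
sinogram = np.zeros((180, 160))
print(backproject(sinogram).shape)  # (160, 160)
```

Consequences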
Many neurologists agree that CAT is the most important method
developed in the twentieth century to facilitate diagnosis of disorders
of the brain. Even the first scanners could distinguish between
brain tumors and blood clots and help physicians to diagnose a variety
of brain-related birth defects. In addition, the scanners are believed
to have saved many lives by allowing physicians to avoid the dangerous exploratory brain surgery once required in many
cases and by replacing more dangerous techniques, such as pneumoencephalography,
which required a physician to puncture the
head for diagnostic purposes.
By 1975, improvements, including quicker reaction time and
more complex emitter-detector systems, made it possible for EMI to
introduce full-body CAT scanners to the world market. Then it became
possible to examine other parts of the body—including the
lungs, the heart, and the abdominal organs—for cardiovascular
problems, tumors, and other structural health disorders. The technique
became so ubiquitous that many departments of radiology
changed their names to departments of medical imaging.
The use of CAT scanners has not been problem-free. Part of
the reason for this is the high cost of the devices—ranging from
about $300,000 for early models to $1 million for modern instruments—
and resultant claims by consumer advocacy groups that
the scanners are unnecessarily expensive toys for physicians.
Still, CAT scanners have become important everyday diagnostic
tools in many areas of medicine. Furthermore, continuation of the
efforts of Hounsfield and others has led to more improvements of
CAT scanners and to the use of nonradiologic nuclear magnetic resonance
imaging in such diagnoses.

Cassette recording





The invention: Self-contained system making it possible to record
and repeatedly play back sound without having to thread tape
through a machine.
The person behind the invention:
Fritz Pfleumer, a German engineer whose work on audiotapes
paved the way for audiocassette production
Smaller Is Better
The introduction of magnetic audio recording tape in 1929 was
met with great enthusiasm, particularly in the entertainment industry,
and specifically among radio broadcasters. Although somewhat
practical methods for recording and storing sound for later playback
had been around for some time, audiotape was much easier to
use, store, and edit, and much less expensive to produce.
It was Fritz Pfleumer, a German engineer, who in 1929 filed the
first audiotape patent. His detailed specifications indicated that
tape could be made by bonding a thin coating of oxide to strips of either
paper or film. Pfleumer also suggested that audiotape could be
attached to filmstrips to provide higher-quality sound than was
available with the film sound technologies in use at that time. In
1935, the German electronics firm AEG produced a reliable prototype
of a record-playback machine based on Pfleumer’s idea. By
1947, the American company 3M had refined the concept to the
point where it was able to produce a high-quality tape using a plastic-
based backing and red oxide. The tape recorded and reproduced
sound with a high degree of clarity and dynamic range and would
soon become the standard in the industry.
Still, the tape was sold and used in a somewhat inconvenient
open-reel format. The user had to thread it through a machine and
onto a take-up reel. This process was somewhat cumbersome and
complicated for the layperson. For many years, sound-recording
technology remained a tool mostly for professionals.
In 1963, the first audiocassette was introduced by the Netherlands-based Philips NV company. This device could be inserted into
a machine without threading. Rewind and fast-forward were faster,
and it made no difference where the tape was stopped prior to the
ejection of the cassette. By contrast, open-reel audiotape required
that the tape be wound fully onto one or the other of the two reels
before it could be taken off the machine.
Technical advances allowed the cassette tape to be much narrower
than the tape used in open reels and also allowed the tape
speed to be reduced without sacrificing sound quality. Thus, the
cassette was easier to carry around, and more sound could be recorded
on a cassette tape. In addition, the enclosed cassette decreased
wear and tear on the tape and protected it from contamination.
Creating a Market
One of the most popular uses for audiocassettes was to record
music from radios and other audio sources for later playback. During
the 1970’s, many radio stations developed “all music” formats
in which entire albums were often played without interruption.
That gave listeners an opportunity to record the music for later
playback. At first, the music recording industry complained about
this practice, charging that unauthorized recording of music from
the radio was a violation of copyright laws. Eventually, the issue
died down as the same companies began to recognize this new, untapped
market for recorded music on cassette.
Audiocassettes, all based on the original Philips design, were being
manufactured by more than sixty companies within only a few
years of their introduction. In addition, spin-offs of that design were
being used in many specialized applications, including dictation,
storage of computer information, and surveillance. The emergence
of videotape resulted in a number of formats for recording and
playing back video based on the same principle. Although each is
characterized by different widths of tape, each uses the same technique
for tape storage and transport.
The cassette has remained a popular means of storing and retrieving
information on magnetic tape for more than a quarter of a
century. During the early 1990’s, digital technologies such as audio
CDs (compact discs) and the more advanced CD-ROM (compact discs that reproduce sound, text, and images via computer) were beginning
to store information in revolutionary new ways. With the
development of this increasingly sophisticated technology, need for
the audiocassette, once the most versatile, reliable, portable, and
economical means of recording, storing, and playing back sound,
became more limited.
Consequences
The cassette represented a new level of convenience for the audiophile,
resulting in a significant increase in the use of recording
technology in all walks of life. Even small children could operate
cassette recorders and players, which led to their use in schools for a
variety of instructional tasks and in the home for entertainment. The
recording industry realized that audiotape cassettes would allow
consumers to listen to recorded music in places where record players
were impractical: in automobiles, at the beach, even while camping.
The industry also saw the need for widespread availability of
music and information on cassette tape. It soon began distributing
albums on audiocassette in addition to the long-play vinyl discs,
and recording sales increased substantially. This new technology
put recorded music into automobiles for the first time, again resulting
in a surge in sales for recorded music. Eventually, information,
including language instruction and books-on-tape, became popular
commuter fare.
With the invention of the microchip, audiotape players became
available in smaller and smaller sizes, making them truly portable.
Audiocassettes underwent another explosion in popularity during
the early 1980’s, when the Sony Corporation introduced the
Walkman, an extremely compact, almost weightless cassette player
that could be attached to clothing and used with lightweight earphones
virtually anywhere. At the same time, cassettes were suddenly
being used with microcomputers for backing up magnetic
data files.
Home video soon exploded onto the scene, bringing with it new
applications for cassettes. As had happened with audiotape, video
camera-recorder units, called “camcorders,” were miniaturized to
the point where 8-millimeter videocassettes capable of recording up to 90 minutes of live action and sound were widely available. These
cassettes closely resembled the audiocassette first introduced in
1963.

Carbon dating







The invention: A technique that measures the radioactive decay of
carbon 14 in organic substances to determine the ages of artifacts
as old as ten thousand years.
The people behind the invention:
Willard Frank Libby (1908-1980), an American chemist who won
the 1960 Nobel Prize in Chemistry
Charles Wesley Ferguson (1922-1986), a scientist who
demonstrated that carbon 14 dates before 1500 b.c.e. needed to
be corrected
One in a Trillion
Carbon dioxide in the earth’s atmosphere contains a mixture of
three carbon isotopes (isotopes are atoms of the same element that
contain different numbers of neutrons), which occur in the following
percentages: about 99 percent carbon 12, about 1 percent carbon
13, and approximately one atom in a trillion of radioactive carbon
14. Plants absorb carbon dioxide from the atmosphere during photosynthesis,
and then animals eat the plants, so all living plants and
animals contain a small amount of radioactive carbon.
When a plant or animal dies, its radioactivity slowly decreases as
the radioactive carbon 14 decays. The time it takes for half of any radioactive
substance to decay is known as its “half-life.” The half-life
for carbon 14 is known to be about fifty-seven hundred years. The
carbon 14 activity will drop to one-half after one half-life, one-fourth
after two half-lives, one-eighth after three half-lives, and so
forth. After ten or twenty half-lives, the activity becomes too low to
be measurable. Coal and oil, which were formed from organic matter
millions of years ago, have long since lost any carbon 14 activity.
Wood samples from an Egyptian tomb or charcoal from a prehistoric
fireplace a few thousand years ago, however, can be dated with
good reliability from the leftover radioactivity.
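The age calculation itself is straightforward exponential-decay arithmetic. A short sketch using the roughly fifty-seven-hundred-year half-life quoted above (the sample ratios below are purely illustrative):

```python
import math

HALF_LIFE_YEARS = 5_700  # approximate carbon-14 half-life used in the text

def radiocarbon_age(activity_ratio):
    """Age in years of a sample whose carbon-14 activity is the given
    fraction of the activity of modern (living) organic carbon."""
    return HALF_LIFE_YEARS * math.log(1.0 / activity_ratio, 2)

# Half the modern activity corresponds to one half-life, and so on.
for ratio in (0.5, 0.25, 0.125):
    print(ratio, round(radiocarbon_age(ratio)))   # 5700, 11400, 17100 years
```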
In the 1940’s, the properties of radioactive elements were still
being discovered and were just beginning to be used to solve problems.
Scientists still did not know the half-life of carbon 14, and archaeologists still depended mainly on historical evidence to determine
the ages of ancient objects.
In early 1947, Willard Frank Libby started a crucial experiment in
testing for radioactive carbon. He decided to test samples of methane
gas from two different sources. One group of samples came
from the sewage disposal plant at Baltimore, Maryland, which was
rich in fresh organic matter. The other sample of methane came from
an oil refinery, which should have contained only ancient carbon
from fossils whose radioactivity should have completely decayed.
The experimental results confirmed Libby’s suspicions: The methane
from fresh sewage was radioactive, but the methane from oil
was not. Evidently, radioactive carbon was present in fresh organic
material, but it decays away eventually.
Tree-Ring Dating
In order to establish the validity of radiocarbon dating, Libby analyzed
known samples of varying ages. These included tree-ring
samples from the years 575 and 1075 and one redwood from 979
b.c.e., as well as artifacts from Egyptian tombs going back to about
3000 b.c.e. In 1949, he published an article in the journal Science that
contained a graph comparing the historical ages and the measured
radiocarbon ages of eleven objects. The results were accurate within
10 percent, which meant that the general method was sound.
The first archaeological object analyzed by carbon dating, obtained
from the Metropolitan Museum of Art in New York, was a
piece of cypress wood from the tomb of King Djoser of Egypt. Based
on historical evidence, the age of this piece of wood was about forty-six
hundred years. A small sample of carbon obtained from this
wood was deposited on the inside of Libby’s radiation counter, giving
a count rate that was about 40 percent lower than that of modern
organic carbon. The resulting age of the wood calculated from its residual
radioactivity was about thirty-eight hundred years, a difference
of eight hundred years. Considering that this was the first object
to be analyzed, even such a rough agreement with the historic
age was considered to be encouraging.
The validity of radiocarbon dating depends on an important assumption—
namely, that the abundance of carbon 14 in nature has been constant for many thousands of years. If carbon 14 was less
abundant at some point in history, organic samples from that era
would have started with less radioactivity. When analyzed today,
their reduced activity would make them appear to be older than
they really are.
Charles Wesley Ferguson from the Tree-Ring Research Laboratory
at the University of Arizona tackled this problem. He measured
the age of bristlecone pine trees both by counting the rings and by
using carbon 14 methods. He found that carbon 14 dates before
1500 b.c.e. needed to be corrected. The results show that radiocarbon
dates are older than tree-ring counting dates by as much as several
hundred years for the oldest samples. He knew that the number
of tree rings had given him the correct age of the pines, because trees
accumulate one ring of growth for every year of life. Apparently, the
carbon 14 content in the atmosphere has not been constant. Fortunately,
tree-ring counting gives reliable dates that can be used to
correct radiocarbon measurements back to about 6000 b.c.e.
Impact
Some interesting samples were dated by Libby’s group. The
Dead Sea Scrolls had been found in a cave by an Arab shepherd in
1947, but some Bible scholars at first questioned whether they were
genuine. The linen wrapping from the Book of Isaiah was tested for
carbon 14, giving a date of 100 b.c.e., which helped to establish its
authenticity. Human hair from an Egyptian tomb was determined
to be nearly five thousand years old. Well-preserved sandals from a
cave in eastern Oregon were determined to be ninety-three hundred
years old. A charcoal sample from a prehistoric site in western
South Dakota was found to be about seven thousand years old.
The Shroud of Turin, located in Turin, Italy, has been a controversial
object for many years. It is a linen cloth, more than four meters
long, which shows the image of a man’s body, both front and back.
Some people think it may have been the burial shroud of Jesus
Christ after his crucifixion. A team of scientists in 1978 was permitted
to study the shroud, using infrared photography, analysis of
possible blood stains, microscopic examination of the linen fibers,
and other methods. The results were ambiguous. A carbon 14 test
was not permitted because it would have required cutting a piece
about the size of a handkerchief from the shroud.
A new method of measuring carbon 14 was developed in the late
1980’s. It is called “accelerator mass spectrometry,” or AMS. Unlike
Libby’s method, it does not count the radioactivity of carbon. Instead, a mass spectrometer directly measures the ratio of carbon 14
to ordinary carbon. The main advantage of this method is that the
sample size needed for analysis is about a thousand times smaller
than before. The archbishop of Turin permitted three laboratories
with the appropriate AMS apparatus to test the shroud material.
The results agreed that the material was from the fourteenth century,
not from the time of Christ. The figure on the shroud may be a
watercolor painting on linen.
Since Libby’s pioneering experiments in the late 1940’s, carbon
14 dating has established itself as a reliable dating technique for archaeologists
and cultural historians. Further improvements are expected
to increase precision, to make it possible to use smaller samples,
and to extend the effective time range of the method back to
fifty thousand years or earlier.

Thursday, March 5, 2009

CAD/CAM





The invention: Computer-Aided Design (CAD) and Computer-
Aided Manufacturing (CAM) enhanced flexibility in engineering
design, leading to higher quality and reduced time for manufacturing.
The people behind the invention:
Patrick Hanratty, a General Motors Research Laboratory
worker who developed graphics programs
Jack St. Clair Kilby (1923- ), a Texas Instruments employee
who first conceived of the idea of the integrated circuit
Robert Noyce (1927-1990), an Intel Corporation employee who
developed an improved process of manufacturing
integrated circuits on microchips
Don Halliday, an early user of CAD/CAM who created the
Made-in-America car in only four months by using CAD
and project management software
Fred Borsini, an early user of CAD/CAM who demonstrated
its power
Summary of Event
Computer-Aided Design (CAD) is a technique whereby geometrical
descriptions of two-dimensional (2-D) or three-dimensional (3-
D) objects can be created and stored, in the form of mathematical
models, in a computer system. Points, lines, and curves are represented
as graphical coordinates. When a drawing is requested from
the computer, transformations are performed on the stored data,
and the geometry of a part or a full view from either a two- or a
three-dimensional perspective is shown. CAD systems replace the
tedious process of manual drafting, and computer-aided drawing
and redrawing that can be retrieved when needed has improved
drafting efficiency. A CAD system is a combination of computer
hardware and software that facilitates the construction of geometric
models and, in many cases, their analysis. It allows a wide variety of
visual representations of those models to be displayed.
Computer-Aided Manufacturing (CAM) refers to the use of computers
to control, wholly or partly, manufacturing processes. In
practice, the term is most often applied to computer-based developments
of numerical control technology; robots and flexible manufacturing
systems (FMS) are included in the broader use of CAM
systems. A CAD/CAM interface is envisioned as a computerized
database that can be accessed and enriched by either design or manufacturing
professionals during various stages of the product development
and production cycle.
In CAD systems of the early 1990’s, the ability to model solid objects
became widely available. The use of graphic elements such as
lines and arcs and the ability to create a model by adding and subtracting
solids such as cubes and cylinders are the basic principles of
CAD and of simulating objects within a computer. CAD systems enable
computers to simulate both taking things apart (sectioning)
and putting things together for assembly. In addition to being able
to construct prototypes and store images of different models, CAD
systems can be used for simulating the behavior of machines, parts,
and components. These abilities enable CAD to construct models
that can be subjected to nondestructive testing; that is, even before
engineers build a physical prototype, the CAD model can be subjected
to testing and the results can be analyzed. As another example,
designers of printed circuit boards have the ability to test their
circuits on a CAD system by simulating the electrical properties of
components.
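As a deliberately tiny illustration of the point that a CAD model is stored geometry plus transformations, the sketch below keeps a 2-D part outline as coordinate points and rotates it to produce a new view. It mimics no particular CAD package:

```python
import math

# A 2-D part outline stored as (x, y) coordinate pairs: the "mathematical model".
part_outline = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (1.0, 2.0), (0.0, 1.0)]

def rotate(points, degrees):
    """Return the outline rotated about the origin, one of the transformations
    a CAD system applies when a new view of the stored model is requested."""
    a = math.radians(degrees)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(x * cos_a - y * sin_a, x * sin_a + y * cos_a) for x, y in points]

# Request a view of the same part rotated 90 degrees; the stored data never changes.
for x, y in rotate(part_outline, 90):
    print(f"({x:6.2f}, {y:6.2f})")
```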
During the 1950’s, the U.S. Air Force recognized the need for reducing
the development time for special aircraft equipment. As a
result, the Air Force commissioned the Massachusetts Institute of
Technology to develop numerically controlled (NC) machines that
were programmable. A workable demonstration of NC machines
was made in 1952; this began a new era for manufacturing. As the
speed of an aircraft increased, the cost of manufacturing also increased
because of stricter technical requirements. This higher cost
provided a stimulus for the further development of NC technology,
which promised to reduce errors in design before the prototype
stage.
The early 1960’s saw the development of mainframe computers.
Many industries valued computing technology for its speed and for its accuracy in lengthy and tedious numerical operations in design,
manufacturing, and other business functional areas. Patrick
Hanratty, working for General Motors Research Laboratory, saw
other potential applications and developed graphics programs for
use on mainframe computers. The use of graphics in software aided
the development of CAD/CAM, allowing visual representations of
models to be presented on computer screens and printers.
The 1970’s saw an important development in computer hardware,
namely the development and growth of personal computers
(PCs). Personal computers became smaller as a result of the development
of integrated circuits. Jack St. Clair Kilby, working for Texas
Instruments, first conceived of the integrated circuit; later, Robert
Noyce, working for Intel Corporation, developed an improved process
of manufacturing integrated circuits on microchips. Personal
computers using these microchips offered both speed and accuracy
at costs much lower than those of mainframe computers.
Five companies offered integrated commercial computer-aided
design and computer-aided manufacturing systems by the first half
of 1973. Integration meant that both design and manufacturing
were contained in one system. Of these five companies—Applicon,
Computervision, Gerber Scientific, Manufacturing and Consulting
Services (MCS), and United Computing—four offered turnkey systems
exclusively. Turnkey systems provide design, development,
training, and implementation for each customer (company) based
on the contractual agreement; they are meant to be used as delivered,
with no need for the purchaser to make significant adjustments
or perform programming.
The 1980’s saw a proliferation of mini- and microcomputers with
a variety of platforms (processors) with increased speed and better
graphical resolution. This made the widespread development of
computer-aided design and computer-aided manufacturing possible
and practical. Major corporations spent large research and development
budgets developing CAD/CAM systems that would
automate manual drafting and machine tool movements. Don Halliday,
working for Truesports Inc., provided an early example of the
benefits of CAD/CAM. He created the Made-in-America car in only
four months by using CAD and project management software. In
the late 1980’s, Fred Borsini, the president of Leap Technologies in Michigan, brought various products to market in record time through
the use of CAD/CAM.
In the early 1980’s, much of the CAD/CAM industry consisted of
software companies. The cost for a relatively slow interactive system
in 1980 was close to $100,000. The late 1980’s saw the demise of
minicomputer-based systems in favor of Unix work stations and
PCs based on 386 and 486 microchips produced by Intel. By the time
of the International Manufacturing Technology show in September,
1992, the industry could show numerous CAD/CAM innovations
including tools, CAD/CAM models to evaluate manufacturability
in early design phases, and systems that allowed use of the same
data for a full range of manufacturing functions.
Impact
In 1990, CAD/CAM hardware sales by U.S. vendors reached
$2.68 billion. In software alone, $1.42 billion worth of CAD/CAM
products and systems were sold worldwide by U.S. vendors, according
to International Data Corporation figures for 1990. CAD/
CAM systems were in widespread use throughout the industrial
world. Development lagged in advanced software applications,
particularly in image processing, and in the communications software
and hardware that ties processes together.
A reevaluation of CAD/CAM systems was being driven by the
industry trend toward increased functionality of computer-driven
numerically controlled machines. Numerical control (NC) software
enables users to graphically define the geometry of the parts in a
product, develop paths that machine tools will follow, and exchange
data among machines on the shop floor. In 1991, NC configuration
software represented 86 percent of total CAM sales. In 1992,
the market shares of the five largest companies in the CAD/CAM
market were 29 percent for International Business Machines, 17 percent
for Intergraph, 11 percent for Computervision, 9 percent for
Hewlett-Packard, and 6 percent for Mentor Graphics.
General Motors formed a joint venture with Ford and Chrysler to
develop a common computer language in order to make the next
generation of CAD/CAM systems easier to use. The venture was
aimed particularly at problems that posed barriers to speeding up the design of new automobiles. The three car companies all had sophisticated
computer systems that allowed engineers to design
parts on computers and then electronically transmit specifications
to tools that make parts or dies.
CAD/CAM technology was expected to advance on many fronts.
As of the early 1990’s, different CAD/CAM vendors had developed
systems that were often incompatible with one another, making it
difficult to transfer data from one system to another. Large corporations,
such as the major automakers, developed their own interfaces
and network capabilities to allow different systems to communicate.
Major users of CAD/CAM saw consolidation in the industry
through the establishment of standards as being in their interests.
Resellers of CAD/CAM products also attempted to redefine
their markets. These vendors provide technical support and service
to users. The sale of CAD/CAM products and systems offered substantial
opportunities, since demand remained strong. Resellers
worked most effectively with small and medium-sized companies,
which often were neglected by the primary sellers of CAD/CAM
equipment because they did not generate a large volume of business.
Some projections held that by 1995 half of all CAD/CAM systems
would be sold through resellers, at a cost of $10,000 or less for
each system. The CAD/CAM market thus was in the process of dividing
into two markets: large customers (such as aerospace firms
and automobile manufacturers) that would be served by primary
vendors, and small and medium-sized customers that would be serviced
by resellers.
CAD will find future applications in marketing, the construction
industry, production planning, and large-scale projects such as shipbuilding
and aerospace. Other likely CAD markets include hospitals,
the apparel industry, colleges and universities, food product
manufacturers, and equipment manufacturers. As the linkage between
CAD and CAM is enhanced, systems will become more productive.
The geometrical data from CAD will be put to greater use
by CAM systems.
CAD/CAM already had proved that it could make a big difference
in productivity and quality. Customer orders could be changed
much faster and more accurately than in the past, when a change
could require a manual redrafting of a design. Computers could do automatically in minutes what once took hours manually. CAD/
CAM saved time by reducing, and in some cases eliminating, human
error. Many flexible manufacturing systems (FMS) had machining
centers equipped with sensing probes to check the accuracy
of the machining process. These self-checks can be made part of numerical
control (NC) programs. With the technology of the early
1990’s, some experts estimated that CAD/CAM systems were in
many cases twice as productive as the systems they replaced; in the
long run, productivity is likely to improve even more, perhaps up to
three times that of older systems or even higher. As costs for CAD/
CAM systems concurrently fall, the investment in a system will be
recovered more quickly. Some analysts estimated that by the mid-
1990’s, the recovery time for an average system would be about
three years.
Another frontier in the development of CAD/CAM systems is
expert (or knowledge-based) systems, which combine data with a
human expert’s knowledge, expressed in the form of rules that the
computer follows. Such a system will analyze data in a manner
mimicking intelligence. For example, a 3-D model might be created
from standard 2-D drawings. Expert systems will likely play a
pivotal role in CAM applications. For example, an expert system
could determine the best sequence of machining operations to produce
a component.
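A toy version of that rule-based idea might look like the following; the rules and part attributes are invented for the example and are not drawn from any real CAM system:

```python
# Toy "expert system": ordered rules map part attributes to machining operations.
RULES = [
    (lambda part: part["has_holes"],         "drill holes"),
    (lambda part: part["surface"] == "fine", "finish-mill surfaces"),
    (lambda part: True,                      "rough-mill stock"),
]

def plan_operations(part):
    """Collect every operation whose rule matches, then order them roughing-first."""
    operations = [action for condition, action in RULES if condition(part)]
    return list(reversed(operations))

print(plan_operations({"has_holes": True, "surface": "fine"}))
# -> ['rough-mill stock', 'finish-mill surfaces', 'drill holes']
```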
Continuing improvements in hardware, especially increased
speed, will benefit CAD/CAM systems. Software developments,
however, may produce greater benefits. Wider use of CAD/CAM
systems will depend on the cost savings from improvements in
hardware and software as well as on the productivity of the systems
and the quality of their product. The construction, apparel,
automobile, and aerospace industries have already experienced
increases in productivity, quality, and profitability through the use
of CAD/CAM. A case in point is Boeing, which used CAD from
start to finish in the design of the 757.

Buna rubber





The invention: The first practical synthetic rubber product developed,
Buna inspired the creation of other synthetic substances
that eventually replaced natural rubber in industrial applications.
The people behind the invention:
Charles de la Condamine (1701-1774), a French naturalist
Charles Goodyear (1800-1860), an American inventor
Joseph Priestley (1733-1804), an English chemist
Charles Greville Williams (1829-1910), an English chemist
A New Synthetic Rubber
The discovery of natural rubber is often credited to the French
scientist Charles de la Condamine, who, in 1736, sent the French
Academy of Science samples of an elastic material used by Peruvian
Indians to make balls that bounced. The material was primarily a
curiosity until 1770, when Joseph Priestley, an English chemist, discovered
that it rubbed out pencil marks, after which he called it
“rubber.” Natural rubber, made from the sap of the rubber tree
(Hevea brasiliensis), became important after Charles Goodyear discovered
in 1839 that heating rubber with sulfur (a process called
“vulcanization”) made it more elastic and easier to use. Vulcanized
natural rubber came to be used to make raincoats, rubber bands,
and motor vehicle tires.
Natural rubber is difficult to obtain (making one tire requires
the amount of rubber produced by one tree in two years), and wars
have often cut off supplies of this material to various countries.
Therefore, efforts to manufacture synthetic rubber began in the
late nineteenth century. Those efforts followed the discovery by
English chemist Charles Greville Williams and others in the 1860’s
that natural rubber was composed of thousands of molecules of a
chemical called isoprene that had been joined to form giant, necklace-
like molecules. The first successful synthetic rubber, Buna,
was patented by Germany’s I. G. Farben Industrie in 1926. The success of this rubber led to the development of many other synthetic
rubbers, which are now used in place of natural rubber in many
applications.
From Erasers to Gas Pumps
Natural rubber belongs to the group of chemicals called “polymers.”
A polymer is a giant molecule that is made up of many simpler
chemical units (“monomers”) that are attached chemically to
form long strings. In natural rubber, the monomer is isoprene
(dimethylbutadiene). The first efforts to make a synthetic rubber
used the discovery that isoprene could be made and converted
into an elastic polymer. The synthetic rubber that was created from
isoprene was, however, inferior to natural rubber. The first Buna
rubber, which was patented by I. G. Farben in 1926, was better, but it
was still less than ideal. Buna rubber was made by polymerizing the
monomer butadiene in the presence of sodium. The name Buna
comes from the first two letters of the words “butadiene” and “natrium”
(German for sodium). Natural and Buna rubbers are called
homopolymers because they contain only one kind of monomer.
The ability of chemists to make Buna rubber, along with its successful
use, led to experimentation with the addition of other monomers
to isoprene-like chemicals used to make synthetic rubber.
Among the first great successes were materials that contained two
alternating monomers; such materials are called “copolymers.” If
the two monomers are designated A and B, part of a polymer molecule
can be represented as (ABABABABABABABABAB). Numerous
synthetic copolymers, which are often called “elastomers,” now
replace natural rubber in applications where they have superior
properties. All elastomers are rubbers, since objects made from
them both stretch greatly when pulled and return quickly to their
original shape when the tension is released.
Two other well-known rubbers developed by I. G. Farben are the
copolymers called Buna-N and Buna-S. These materials combine butadiene
and the monomers acrylonitrile and styrene, respectively.
Many modern motor vehicle tires are made of synthetic rubber that
differs little from Buna-S rubber. This rubber was developed after
the United States was cut off in the 1940’s, during World War II,
from its Asian source of natural rubber. The solution to this problem
was the development of a synthetic rubber industry based on GR-S
rubber (government rubber plus styrene), which was essentially
Buna-S rubber. This rubber is still widely used.
Buna-S rubber is often made by mixing butadiene and styrene in
huge tanks of soapy water, stirring vigorously, and heating the mixture.
The polymer contains equal amounts of butadiene and styrene
(BSBSBSBSBSBSBSBS). When the molecules of the Buna-S polymer
reach the desired size, the polymerization is stopped and the rubber
is coagulated (solidified) chemically. Then, water and all the unused
starting materials are removed, after which the rubber is dried and
shipped to various plants for use in tires and other products. The
major difference between Buna-S and GR-S rubber is that the method
of making GR-S rubber involves the use of low temperatures.
Buna-N rubber is made in a fashion similar to that used for Buna-
S, using butadiene and acrylonitrile. Both Buna-N and the related
neoprene rubber, invented by Du Pont, are very resistant to gasoline
and other liquid vehicle fuels. For this reason, they can be used in
gas-pump hoses. All synthetic rubbers are vulcanized before they
are used in industry.
Impact
Buna rubber became the basis for the development of the other
modern synthetic rubbers. These rubbers have special properties
that make them suitable for specific applications. One developmental
approach involved the use of chemically modified butadiene in
homopolymers such as neoprene. Made of chloroprene (chlorobutadiene),
neoprene is extremely resistant to sun, air, and chemicals.
It is so widely used in machine parts, shoe soles, and hoses that
more than 400 million pounds are produced annually.
Another developmental approach involved copolymers that alternated
butadiene with other monomers. For example, the successful
Buna-N rubber (butadiene and acrylonitrile) has properties
similar to those of neoprene. It differs sufficiently from neoprene,
however, to be used to make items such as printing press rollers.
About 200 million pounds of Buna-N are produced annually. Some
4 billion pounds of the even more widely used polymer Buna-S/
GR-S are produced annually, most of which is used to make tires.
Several other synthetic rubbers have significant industrial applications,
and efforts to make copolymers for still other purposes continue.