Monday, June 29, 2009
Genetic “fingerprinting”
The invention: A technique for using the unique characteristics of
each human being’s DNA to identify individuals, establish connections
among relatives, and identify criminals.
The people behind the invention:
Alec Jeffreys (1950- ), an English geneticist
Victoria Wilson (1950- ), an English geneticist
Swee Lay Thein (1951- ), a biochemical geneticist
Microscopic Fingerprints
In 1985, Alec Jeffreys, a geneticist at the University of Leicester in
England, developed a method of deoxyribonucleic acid (DNA)
analysis that provides a visual representation of the human genetic
structure. Jeffreys’s discovery had an immediate, revolutionary impact
on problems of human identification, especially the identification
of criminals. Whereas earlier techniques, such as conventional
blood typing, provide evidence that is merely exclusionary (indicating
only whether a suspect could or could not be the perpetrator of a
crime), DNA fingerprinting provides positive identification.
For example, under favorable conditions, the technique can establish
with virtual certainty whether a given individual is a murderer
or rapist. The applications are not limited to forensic science;
DNA fingerprinting can also establish definitive proof of parenthood
(paternity or maternity), and it is invaluable in providing
markers for mapping disease-causing genes on chromosomes. In
addition, the technique is utilized by animal geneticists to establish
paternity and to detect genetic relatedness between social groups.
DNA fingerprinting (also referred to as “genetic fingerprinting”)
is a sophisticated technique that must be executed carefully to produce
valid results. The technical difficulties arise partly from the
complex nature of DNA. DNA, the genetic material responsible for
heredity in all higher forms of life, is an enormously long, double-stranded
molecule composed of four different units called “bases.”
The bases on one strand of DNA pair with complementary bases on the other strand. A human being contains twenty-three pairs of
chromosomes; one member of each chromosome pair is inherited
from the mother, the other from the father. The order, or sequence, of
bases forms the genetic message, which is called the “genome.” Scientists
did not know the sequence of bases in any sizable stretch of
DNA prior to the 1970’s because they lacked the molecular tools to
split DNA into fragments that could be analyzed. This situation
changed with the advent of biotechnology in the mid-1970’s.
The door to DNA analysis was opened with the discovery of bacterial
enzymes called “DNA restriction enzymes.” A restriction enzyme
binds to DNA whenever it finds a specific short sequence of
base pairs (analogous to a code word), and it splits the DNA at a defined
site within that sequence. A single enzyme finds millions of
cutting sites in human DNA, and the resulting fragments range in
size from tens of base pairs to hundreds or thousands. The fragments
are exposed to a radioactive DNA probe, which can bind to
specific complementary DNA sequences in the fragments. X-ray
film detects the radioactive pattern. The developed film, called an
“autoradiograph,” shows a pattern of DNA fragments, which is
similar to a bar code and can be compared with patterns from
known subjects.
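To make the cutting step concrete, here is a minimal Python sketch (not part of the original account; the GAATTC recognition site of the enzyme EcoRI is used only as a familiar example). It locates every occurrence of a recognition sequence in a DNA string and reports the lengths of the resulting fragments, which is the information the autoradiograph ultimately displays.

    def digest(dna, site="GAATTC", cut_offset=1):
        """Return fragment lengths after cutting dna at every recognition site.
        cut_offset=1 mimics an enzyme such as EcoRI, which cuts G^AATTC."""
        cuts, start = [], 0
        while True:
            i = dna.find(site, start)
            if i == -1:
                break
            cuts.append(i + cut_offset)
            start = i + 1
        fragments, prev = [], 0
        for c in cuts + [len(dna)]:
            fragments.append(len(dna[prev:c]))
            prev = c
        return fragments

    print(digest("ATGAATTCGGCTAGAATTCTTAA"))  # two cut sites -> [3, 11, 9]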
The Presence of Minisatellites
The uniqueness of a DNA fingerprint depends on the fact that,
with the exception of identical twins, no two human beings have
identical DNA sequences. Of the three billion base pairs in human
DNA, many will differ from one person to another.
In 1985, Jeffreys and his coworkers, Victoria Wilson at the University
of Leicester and Swee Lay Thein at the John Radcliffe Hospital
in Oxford, discovered a way to produce a DNA fingerprint.
Jeffreys had found previously that human DNA contains many repeated
minisequences called “minisatellites.” Minisatellites consist
of sequences of base pairs repeated in tandem, and the number of
repeated units varies widely from one individual to another. Every
person, with the exception of identical twins, has a different number
of tandem repeats and, hence, different lengths of minisatellite
DNA. By using two labeled DNA probes to detect two different minisatellite sequences, Jeffreys obtained a unique fragment band
pattern that was completely specific for an individual.
The power of the technique derives from the laws of probability,
which hold that the chance that two or more independent
events will all occur is the product of
their separate probabilities. As Jeffreys discovered,
the likelihood of two unrelated people having completely
identical DNA fingerprints is extremely small—less than one in ten
trillion. Given the population of the world, it is clear that the technique
can distinguish any one person from everyone else. Jeffreys
called his band patterns “DNA fingerprints” because of their ability
to individualize. As he stated in his landmark research paper, published
in the English scientific journal Nature in 1985, probes to
minisatellite regions of human DNA produce “DNA ‘fingerprints’
which are completely specific to an individual (or to his or her identical
twin) and can be applied directly to problems of human identification,
including parenthood testing.”
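The multiplication rule can be made concrete with a small back-of-the-envelope calculation (the per-band match probability below is an assumed, illustrative figure, not one reported by Jeffreys):

    # If each of n scored bands matches between two unrelated people with
    # independent probability p, the chance that every band matches is p**n.
    p = 0.25          # assumed probability that one band matches by chance
    for n in (10, 20, 30):
        print(n, p ** n)   # 10 bands ~1e-6, 20 bands ~1e-12, 30 bands ~1e-18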
Consequences
In addition to being used in human identification, DNA fingerprinting
has found applications in medical genetics. In the search
for the cause of, a diagnostic test for, and ultimately a treatment for an
inherited disease, it is necessary to locate the defective gene on a human
chromosome. Gene location is accomplished by a technique
called “linkage analysis,” in which geneticists use marker sections
of DNA as reference points to pinpoint the position of a defective
gene on a chromosome. The minisatellite DNA probes developed
by Jeffreys provide a potent set of markers that are of
great value in locating disease-causing genes. Soon after its discovery,
DNA fingerprinting was used to locate the defective genes responsible
for several diseases, including fetal hemoglobin abnormality
and Huntington’s disease.
Genetic fingerprinting also has had a major impact on genetic
studies of higher animals. Because DNA sequences are conserved in
evolution, humans and other vertebrates have many sequences in
common. This commonality enabled Jeffreys to use his probes to
human minisatellites to bind to the DNA of many different vertebrates, ranging from mammals to birds, reptiles, amphibians, and
fish; this made it possible for him to produce DNA fingerprints of
these vertebrates. In addition, the technique has been used to discern
the mating behavior of birds, to determine paternity in zoo primates,
and to detect inbreeding in imperiled wildlife. DNA fingerprinting
can also be applied to animal breeding problems, such as
the identification of stolen animals, the verification of semen samples
for artificial insemination, and the determination of pedigree.
The technique is not foolproof, however, and results may be far
from ideal. Especially in the area of forensic science, there was a
rush to use the tremendous power of DNA fingerprinting to identify
a purported murderer or rapist, and the need for scientific standards
was often neglected. Some problems arose because forensic
DNA fingerprinting in the United States is generally conducted in
private, unregulated laboratories. In the absence of rigorous scientific
controls, the DNA fingerprint bands of two completely unknown
samples cannot be matched precisely, and the results may be
unreliable.
Geiger counter
The invention: The first electronic device able to detect and measure
radioactivity in atomic particles.
The people behind the invention:
Hans Geiger (1882-1945), a German physicist
Ernest Rutherford (1871-1937), a British physicist
Sir John Sealy Edward Townsend (1868-1957), an Irish physicist
Sir William Crookes (1832-1919), an English physicist
Wilhelm Conrad Röntgen (1845-1923), a German physicist
Antoine-Henri Becquerel (1852-1908), a French physicist
Discovering Natural Radiation
When radioactivity was discovered and first studied, the work
was done with rather simple devices. In the 1870’s, Sir William
Crookes learned how to create a very good vacuum in a glass tube.
He placed electrodes in each end of the tube and studied the passage
of electricity through the tube. This simple device became known as
the “Crookes tube.” In 1895, Wilhelm Conrad Röntgen was experimenting
with a Crookes tube. It was known that when electricity
went through a Crookes tube, one end of the glass tube might glow.
Certain mineral salts placed near the tube would also glow. In order
to observe carefully the glowing salts, Röntgen had darkened the
room and covered most of the Crookes tube with dark paper. Suddenly,
a flash of light caught his eye. It came from a mineral sample
placed some distance from the tube and shielded by the dark paper;
yet when the tube was switched off, the mineral sample went dark.
Experimenting further, Röntgen became convinced that some ray
from the Crookes tube had penetrated the mineral and caused it to
glow. Since light rays were blocked by the black paper, he called the
mystery ray an “X ray,” with “X” standing for unknown.
Antoine-Henri Becquerel heard of the discovery of X rays and, in
February, 1896, set out to discover if glowing minerals themselves
emitted X rays. Some minerals, called “phosphorescent,” begin to
glow when activated by sunlight. Becquerel’s experiment involved wrapping photographic film in black paper and setting various
phosphorescent minerals on top and leaving them in the sun. He
soon learned that phosphorescent minerals containing uranium
would expose the film.
A series of cloudy days, however, brought a great surprise. Anxious
to continue his experiments, Becquerel decided to develop film
that had not been exposed to sunlight. He was astonished to discover
that the film was deeply exposed. Some emanations must be
coming from the uranium, he realized, and they had nothing to do
with sunlight. Thus, natural radioactivity was discovered by accident
with a simple piece of photographic film.
Rutherford and Geiger
Ernest Rutherford joined the world of international physics at
about the same time that radioactivity was discovered. Studying the
“Becquerel rays” emitted by uranium, Rutherford eventually distinguished
three different types of radiation, which he named “alpha,”
“beta,” and “gamma” after the first three letters of the Greek alphabet.
He showed that alpha particles, the least penetrating of the three, are
the nuclei of helium atoms (a group of two protons and two neutrons
tightly bound together). It was later shown that beta particles are electrons.
Gamma rays, which are far more penetrating than either alpha
or beta particles, were shown to be similar to X rays, but with higher
energies.
Rutherford became director of the associated research laboratory
at Manchester University in 1907. Hans Geiger became an assistant.
At this time, Rutherford was trying to prove that alpha particles
carry a double positive charge. The best way to do this was to measure
the electric charge that a stream of alpha particles would bring
to a target. By dividing that charge by the total number of alpha particles
that fell on the target, one could calculate the charge of a single
alpha particle. The problem lay in counting the particles and in
proving that every particle had been counted.
Basing their design upon work done by Sir John Sealy Edward
Townsend, a former colleague of Rutherford, Geiger and Rutherford
constructed an electronic counter. It consisted of a long brass
tube sealed at both ends from which most of the air had been pumped. A thin wire, insulated from the brass, was suspended
down the middle of the tube. This wire was connected to batteries
producing about thirteen hundred volts and to an electrometer, a
device that could measure the voltage of the wire. This voltage
could be increased until a spark jumped between the wire and the
tube. If the voltage was turned down a little, the tube was ready to
operate. An alpha particle entering the tube would ionize (knock
some electrons away from) at least a few atoms. These electrons
would be accelerated by the high voltage and, in turn, would ionize
more atoms, freeing more electrons. This process would continue
until an avalanche of electrons struck the central wire and the electrometer
registered the voltage change. Since the tube was nearly
ready to arc because of the high voltage, every alpha particle, even if
it had very little energy, would initiate a discharge. The most complex
of the early radiation detection devices—the forerunner of the
Geiger counter—had just been developed. The two physicists reported
their findings in February, 1908.
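The avalanche described above is often summarized by Townsend's growth law, n(x) = n0 * exp(alpha * x), where alpha is the ionization (Townsend) coefficient of the gas. The short sketch below uses assumed values purely to show how a handful of initial electrons becomes a detectable pulse; the numbers are illustrative, not taken from Geiger and Rutherford's paper.

    import math

    alpha_per_cm = 10.0   # assumed Townsend coefficient for this gas and field
    n0 = 5                # a few electrons freed directly by the alpha particle
    for x_cm in (0.5, 1.0, 2.0):
        n = n0 * math.exp(alpha_per_cm * x_cm)
        print(f"{x_cm} cm: about {n:.0f} electrons")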
Impact
Their first measurements showed that one gram of radium
emitted 34 thousand million alpha particles per second. Soon, the
number was refined to 32.8 thousand million per second. Next,
Geiger and Rutherford measured the amount of charge emitted
by radium each second. Dividing this number by the previous
number gave them the charge on a single alpha particle. Just as
Rutherford had anticipated, the charge was double that of a hydrogen
ion (a proton). This proved to be the most accurate determination
of the fundamental charge until the American physicist
Robert Andrews Millikan conducted his classic oil-drop experiment
in 1911.
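A rough reconstruction of that arithmetic, using the particle count quoted above and a total-charge figure chosen here only to be consistent with the stated result (it is not the historical measurement), looks like this:

    alphas_per_second = 3.28e10      # alpha particles from one gram of radium per second (quoted above)
    coulombs_per_second = 1.05e-8    # assumed total charge collected per second, for illustration
    charge_per_alpha = coulombs_per_second / alphas_per_second
    print(charge_per_alpha)            # ~3.2e-19 coulomb
    print(charge_per_alpha / 1.6e-19)  # ~2 times the charge of a hydrogen ion (proton)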
Another fundamental result came from a careful measurement of
the volume of helium emitted by radium each second. Using that
value, other properties of gases, and the number of helium nuclei
emitted each second, they were able to calculate Avogadro’s number
more directly and accurately than had previously been possible.
(Avogadro’s number enables one to calculate the number of atoms
in a given amount of material.)
The true Geiger counter evolved when Geiger replaced the central
wire of the tube with a needle whose point lay just inside a thin
entrance window. This counter was much more sensitive to alpha
and beta particles and also to gamma rays. By 1928, with the assistance
of Walther Müller, Geiger made his counter much more efficient,
responsive, durable, and portable. There are probably few radiation
facilities in the world that do not have at least one Geiger
counter or one of its compact modern relatives.
Friday, June 26, 2009
Gas-electric car
The invention:
A hybrid automobile with both an internal combustion
engine and an electric motor.
The people behind the invention:
Victor Wouk - an American engineer
Tom Elliott - executive vice president of American Honda Motor Company
Hiroyuki Yoshino - president and chief executive officer of
Honda Motor Company
Fujio Cho - president of Toyota Motor Corporation
Announcing Hybrid Vehicles
At the 2000 North American International Auto Show in Detroit,
not only did the Honda Motor Company show off its new Insight
model, it also announced expanded use of its new technology. Hiroyuki
Yoshino, president and chief executive officer, said that Honda’s integrated
motor assist (IMA) system would be expanded to other mass market
models.
The system basically fits a small electric motor directly
on a one-liter, three-cylinder internal combustion engine. The two
share the workload of powering the car, but the gasoline engine does
not start up until it is needed.
The electric motor is powered by a
nickel-metal hydride (Ni-MH) battery pack, with the IMA system automatically
recharging the energy pack during braking.
Tom Elliott, Honda’s executive vice-president, said the vehicle
was a continuation of the company’s philosophy of making the latest
environmental technology accessible to consumers.
The $18,000
Insight was a two-seat sporty car that used many innovations to reduce
its weight and improve its performance.
Fujio Cho, president of Toyota, also spoke at the Detroit show,
where his company showed off its new $20,000 hybrid Prius.
The Toyota
Prius relied more on the electric motor and had more energy storage
capacity than the Insight, but was a four-door, five-seat model.
The Toyota Hybrid System divided the power from its 1.5-liter gasoline
engine and directed it to drive the wheels and a generator.
The generator alternately powered the motor and recharged the batteries.
The electric motor was coupled with the gasoline engine to
power the wheels under normal driving. The gasoline engine supplied
average power needs, with the electric motor helping the
peaks; at low speeds, it was all electric. A variable transmission
seamlessly switched back and forth between the gasoline engine
and electric motor or applied both of them.
Variations on an Idea
Automobiles generally use gasoline or diesel engines for driving,
electric motors to start those engines, and a means of recharging
the batteries that power the starter motors and other devices. In
solely electric cars, gasoline engines are eliminated entirely, and the
batteries that power the vehicles are recharged from stationary
sources. In hybrid cars, the relationship between gasoline engines
and electric motors is changed so that electric motors handle some
or all of the driving. This is at the expense of an increased number of
batteries or other energy-storage devices.
Possible in many combinations, “hybrids” couple the low-end
torque and regenerative braking potential of electric motors with
the range and efficient packaging of gasoline, natural gas, or even
hydrogen fuel power plants. The return is greater energy efficiency
and reduced pollution.
With sufficient energy-storage capacity, an electric motor can
actually propel a car from a standing start to a moving speed. In
hybrid vehicles, the gasoline engines, which are more energy efficient
at higher speeds, then kick in. However, the gasoline engines
in these vehicles are smaller, lighter, and more efficient than
ordinary gas engines. Designed for average—not peak—driving
conditions, they reduce air pollution and considerably improve
fuel economy.
Batteries in hybrid vehicles are recharged partly by the gas engines
and partly by regenerative braking; a third of the energy from
slowing the car is turned into electricity. What has finally made hybrids
feasible at reasonable cost are the new developments in computer
technology, allowing sophisticated controls to coordinate electrical
and mechanical power.
One way to describe hybrids is to separate them into two types:
parallel, in which either of the two power plants can propel the vehicle,
and series, in which the auxiliary power plant is used to
charge the battery, rather than propel the vehicle.
Honda’s Insight is a simplified parallel hybrid that uses a small
but efficient gasoline engine. The electric motor assists the engine,
providing extra power for acceleration or hill climbing, helps provide
regenerative braking, and starts the engine. However, it cannot
run the car by itself.
Toyota’s Prius is a parallel hybrid whose power train allows
some series features. Its engine runs only at an efficient speed and
load and is combined with a unique power-splitting device, which allows
the car to operate as a parallel hybrid on the motor alone, the engine
alone, or both. It can act as a series hybrid with the engine charging
the batteries rather than powering the vehicle. It also provides a
continuously variable transmission using a planetary gear set that allows
interaction between the engine, the motor, and the differential
which drives the wheels.
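The parallel-versus-series distinction can be sketched as a toy control rule. This is an illustration only, not Honda's or Toyota's actual control logic, and all thresholds and power figures are assumed:

    def power_split(demand_kw, speed_kph, battery_soc):
        """Return (engine_kw, motor_kw) for a simplified parallel hybrid."""
        if demand_kw < 0:
            # braking: recover energy through the motor if the battery has room
            return 0.0, (demand_kw if battery_soc < 0.95 else 0.0)
        if speed_kph < 20 and battery_soc > 0.3:
            return 0.0, demand_kw             # low speed: run on the motor alone
        engine_kw = min(demand_kw, 50.0)      # engine sized for average, not peak, load
        return engine_kw, demand_kw - engine_kw   # motor fills in the peaks

    print(power_split(10, 15, 0.8))   # city creep: (0.0, 10) -> all electric
    print(power_split(70, 90, 0.6))   # hard acceleration: (50.0, 20) -> engine plus assist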
Impact
In 2001 Honda and Toyota marketed gas-electric hybrids that offered
better than 60-mile-per-gallon fuel economy and met California’s
stringent standards for “super ultra-low emissions” vehicles.
Both companies achieved these standards without the inconvenience
of fully electric cars, which could go only about a hundred
miles on a single battery charge and required such gimmicks as kerosene-
powered heaters. As a result, other manufacturers were beginning
to follow suit. Ford, for example, promised a hybrid sport
utility vehicle (SUV) by 2003. Other automakers, including General
Motors and DaimlerChrysler, had also announced the development of
alternative-fuel and low-emission vehicles. An example is the ESX3
concept car, which used a 1.5-liter, direct-injection diesel combined with an
electric motor and a lithium-ion battery.
While American automakers were planning to offer some “full
hybrids”—cars capable of running on battery power alone at low
speeds—they were focusing more enthusiastically on electrically
assisted gasoline engines called “mild hybrids.” Full hybrids typically increase gas mileage by up to 60 percent; mild hybrids by only
10 or 20 percent. The “mild hybrid” approach uses regenerative
braking with electrical systems of a much lower voltage and storage
capacity than for full hybrids, a much cheaper approach. But there
still is enough energy available to allow the gasoline engine to turn
off automatically when a vehicle stops and turn on instantly when
the accelerator is touched. Because the “mild hybrid” approach
adds only $1000 to $1500 to a vehicle’s price, it is likely to be used in
many models. Full hybrids cost much more, but achieve more benefits.
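The cost trade-off can be put in rough numbers. Everything below (base fuel economy, annual mileage, gasoline price, and the full-hybrid premium) is an assumed figure used only to illustrate why the cheaper mild-hybrid approach looked attractive at the time:

    base_mpg = 25.0                   # assumed conventional vehicle
    mild_mpg = base_mpg * 1.15        # mild hybrid: roughly 10-20 percent better
    full_mpg = base_mpg * 1.60        # full hybrid: up to about 60 percent better
    miles_per_year = 12000.0          # assumed annual mileage
    gas_price = 1.50                  # assumed dollars per gallon, circa 2001

    def annual_savings(new_mpg):
        return miles_per_year * gas_price * (1.0 / base_mpg - 1.0 / new_mpg)

    print(1250 / annual_savings(mild_mpg))   # years to recover a ~$1,000-$1,500 mild-hybrid premium
    print(4000 / annual_savings(full_mpg))   # years to recover an assumed ~$4,000 full-hybrid premium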
Victor Wouk
H. Piper, an American engineer, filed the first patent for a
hybrid gas-electric powered car in 1905, and from then until
1915 such cars were popular, although not common, because they
could accelerate faster than plain gas-powered cars. Then the
gas-only models became as swift, and their hybrid cousins fell by
the wayside.
Interest in hybrids revived with the unheard-of gasoline
prices during the 1973 oil crisis. The champion of their comeback—
the father of the modern hybrid electric vehicle (HEV)—
was Victor Wouk. Born in 1919 in New York City, Wouk earned
a math and physics degree from Columbia University in 1939
and a doctorate in electrical engineering from the California Institute
of Technology in 1942. In 1946 he founded Beta Electric
Corporation, which he led until 1959, when he founded and
was president of another company, Electronic Energy Conversion
Corporation. After 1970, he became an independent consultant,
hoping to build an HEV that people would prefer to
gas-guzzlers.
With his partner, Charles Rosen, Wouk gutted the engine
compartment of a Buick Skylark and installed batteries designed
for police cars, a 20-kilowatt direct-current electric motor,
and an RX-2 Mazda rotary engine. Only a test vehicle, it still got
better gas mileage (thirty miles per gallon) than the original
Skylark and met the requirements for emissions control set by
the Clean Air Act of 1970, unlike all American automobiles of
the era. Moreover, Wouk designed an HEV that would get fifty
miles per gallon and pollute one-eighth as much as gas-powered
automobiles. However, the oil crisis ended, gas prices went
down, and consumers and the government lost interest. Wouk
continued to publish, lecture, and design; still, it was not until
the 1990’s that high gas prices and concerns over pollution
made HEV’s attractive yet again.
Wouk holds twelve patents, mostly for speed and braking
controls in electric vehicles but also for air conditioning, high
voltage direct-current power sources, and life extenders for incandescent
lamps.
See also:
Airplane; Diesel locomotive; Hovercraft; Internal combustion
engine; Supersonic passenger plane.
Labels:
car,
Electric,
Fujio Cho,
Gas,
Gas-electric car,
Hiroyuki Yoshino,
info,
informations,
invention,
inventor,
Victor Wouk
Tuesday, June 23, 2009
Fuel cell
The invention: An electrochemical cell that directly converts energy
from reactions between oxidants and fuels, such as liquid
hydrogen, into electrical energy.
The people behind the invention:
Francis Thomas Bacon (1904-1992), an English engineer
Sir William Robert Grove (1811-1896), an English inventor
Georges Leclanché (1839-1882), a French engineer
Alessandro Volta (1745-1827), an Italian physicist
The Earth’s Resources
Because of the earth’s rapidly increasing population and the
dwindling of fossil fuels (natural gas, coal, and petroleum), there is
a need to design and develop new ways to obtain energy and to encourage
its intelligent use. The burning of fossil fuels to create energy
causes a slow buildup of carbon dioxide in the atmosphere,
creating pollution that poses many problems for all forms of life on
this planet. Chemical and electrical studies can be combined to create
electrochemical processes that yield clean energy.
Because of their very high rate of efficiency and their nonpolluting
nature, fuel cells may provide the solution to the problem of
finding sufficient energy sources for humans. The simple reaction of
hydrogen and oxygen to form water in such a cell can provide an
enormous amount of clean (nonpolluting) energy. Moreover, hydrogen
and oxygen are readily available.
Studies by Alessandro Volta, Georges Leclanché, and William
Grove preceded the work of Bacon in the development of the fuel
cell. Bacon became interested in the idea of a hydrogen-oxygen fuel
cell in about 1932. His original intent was to develop a fuel cell that
could be used in commercial applications.
The Fuel Cell Emerges
In 1800, the Italian physicist Alessandro Volta experimented
with solutions of chemicals and metals that were able to conduct electricity. He found that two pieces of metal and such a solution
could be arranged in such a way as to produce an electric current.
His creation was the first electrochemical battery, a device that produced
energy from a chemical reaction. Studies in this area were
continued by various people, and in the late nineteenth century,
Georges Leclanché invented the dry cell battery, which is now commonly
used.
The work of William Grove actually preceded that of Leclanché. His first
significant contribution was the Grove cell, an improved form of the
cells described above, which became very popular. Grove experimented
with various forms of batteries and eventually invented the
“gas battery,” which was actually the earliest fuel cell. It is worth
noting that his design incorporated separate test tubes of hydrogen
and oxygen, which he placed over strips of platinum.
After studying the design of Grove’s fuel cell, Bacon decided
that, for practical purposes, the use of platinum and other precious
metals should be avoided. By 1939, he had constructed a cell in
which nickel replaced the platinum used.
The theory behind the fuel cell can be described in the following
way. If a mixture of hydrogen and oxygen is ignited, energy is released
in the form of a violent explosion. In a fuel cell, however, the
reaction takes place in a controlled manner. Electrons lost by the hydrogen
gas flow out of the fuel cell and return to be taken up by the
oxygen in the cell. The electron flow provides electricity to any device
that is connected to the fuel cell, and the water that the fuel cell
produces can be purified and used for drinking.
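The controlled reaction can be written out explicitly. For an alkaline cell of the kind Bacon built, the standard textbook half-reactions (not equations taken from his paper) are:

    Anode (fuel electrode):     2 H2 + 4 OH-  ->  4 H2O + 4 e-
    Cathode (oxygen electrode): O2 + 2 H2O + 4 e-  ->  4 OH-
    Overall:                    2 H2 + O2  ->  2 H2O

The electrons released at the anode travel through the external circuit, doing useful work, before being consumed at the cathode; the only chemical product is water.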
Bacon’s studies were interrupted by World War II. After the war
was over, however, Bacon continued his work. Sir Eric Keightley
Rideal of Cambridge University in England supported Bacon’s
studies; later, others followed suit. In January, 1954, Bacon wrote an
article entitled “Research into the Properties of the Hydrogen/Oxygen
Fuel Cell” for a British journal. He was surprised at the speed
with which news of the article spread throughout the scientific
world, particularly in the United States.
After a series of setbacks, Bacon demonstrated a forty-cell unit
that had increased power. This advance showed that the fuel cell
was not merely an interesting toy; it had the capacity to do useful
work. At this point, the General Electric Company (GE), an American corporation, sent a representative to England to offer employment
in the United States to senior members of Bacon’s staff. Three scientists
accepted the offer.
A high point in Bacon’s career was the announcement that the
American Pratt and Whitney Aircraft company had obtained an order
to build fuel cells for the Apollo project, which ultimately put
two men on the Moon in 1969. Toward the end of his career in 1978,
Bacon hoped that commercial applications for his fuel cells would
be found.
Impact
Because they are lighter and more efficient than batteries, fuel
cells have proved to be useful in the space program. Beginning with
the Gemini 5 spacecraft, alkaline fuel cells (which use a water solution
of potassium hydroxide, a basic, or alkaline, chemical)
have been used for more than ten thousand hours in space. The fuel
cells used aboard the space shuttle deliver the same amount of power
as batteries weighing ten times as much. On a typical seven-day
mission, the shuttle’s fuel cells consume 680 kilograms (1,500 pounds)
of hydrogen and generate 719 liters (190 gallons) of water that can
be used for drinking.
Major technical and economic problems must be overcome in order
to design fuel cells for practical applications, but some important
advancements have been made. A few test vehicles that use fuel cells as a source of power have been constructed. Fuel cells using
hydrogen as a fuel and oxygen to burn the fuel have been used in a
van built by General Motors Corporation. Thirty-two fuel cells are
installed below the floorboards, and tanks of liquid oxygen are carried
in the back of the van. A power plant built in New York City
contains stacks of hydrogen-oxygen fuel cells, which can be put on
line quickly in response to power needs. The Sanyo Electric Company
has developed an electric car that is partially powered by a
fuel cell.
These tremendous technical advances are the result of the single-minded
dedication of Francis Thomas Bacon, who struggled all of
his life with an experiment he was convinced would be successful.
Freeze-drying
The invention:
Method for preserving foods and other organic
matter by freezing them and using a vacuum to remove their
water content without damaging their solid matter.
The people behind the invention:
Earl W. Flosdorf (1904- ), an American physician
Ronald I. N. Greaves (1908- ), an English pathologist
Jacques Arsène d’Arsonval (1851-1940), a French physicist
Freeze-Drying for Preservation
Drying, or desiccation, is known to preserve biomaterials, including
foods. In freeze-drying, water is evaporated in a frozen
state in a vacuum, by means of sublimation (the process of changing
a solid to a vapor without first changing it to a liquid).
In 1811, John Leslie had first caused freezing by means of the
evaporation and sublimation of ice. In 1813, William Wollaston
demonstrated this process to the Royal Society of London. It does
not seem to have occurred to either Leslie or Wollaston to use sublimation
for drying. That distinction goes to Richard Altmann, a
German histologist, who dried pieces of frozen tissue in 1890.
Later, in 1903, Vansteenberghe freeze-dried the rabies virus. In
1906, Jacques Arsène d’Arsonval removed water at a low temperature
for distillation.
Since water removal is the essence of drying, d’Arsonval is often
credited with the discovery of freeze-drying, but the first clearly recorded
use of sublimation for preservation was by Leon Shackell in
1909. His work was widely recognized, and he freeze-dried a variety
of biological materials. The first patent for freeze-drying was issued
to Henri Tival, a French inventor, in 1927. In 1934, William
Elser received patents for a modern freeze-drying apparatus that
supplied heat for sublimation.
In 1933, Earl W. Flosdorf had freeze-dried human blood serum
and plasma for clinical use. The subsequent efforts of Flosdorf led to
commercial freeze-drying applications in the United States.
Freeze-Drying of Foods
With the freeze-drying technique fairly well established for biological
products, it was a natural extension for Flosdorf to apply the
technique to the drying of foods. As early as 1935, Flosdorf experimented
with the freeze-drying of fruit juices and milk. An early British
patent was issued to Franklin Kidd, a British inventor, in 1941 for
the freeze-drying of foods. An experimental program on the freezedrying
of food was also initiated at the Low Temperature Research
Station at Cambridge University in England, but until World War II,
freeze-drying was only an occasionally used scientific tool.
It was the desiccation of blood plasma from the frozen state, performed
by the American Red Cross for the U.S. armed forces, that
provided the first spectacular, extensive use of freeze-drying. This
work demonstrated the vast potential of freeze-drying for commercial
applications. In 1949, Flosdorf published the first book on
freeze-drying, which laid the foundation for freeze-drying of foods
and remains one of the most important contributions to large-scale
operations in the field. In the book, Flosdorf described the freezedrying
of fruit juices, milk, meats, oysters, clams, fish fillets, coffee
and tea extracts, fruits, vegetables, and other products. Flosdorf also
devoted an entire chapter to describing the equipment used for both
batch and continuous processing, and he discussed cost analysis.
The holder of more than fifteen patents covering various aspects of
freeze-drying, Flosdorf dominated the move toward commercialization
in the United States.
Simultaneously, researchers in England were developing freeze-drying
applications under the leadership of Ronald I. N. Greaves.
The food crisis during World War II had led to the recognition that
dried foods cut the costs of transporting, storing, and packaging
foods in times of emergency. Thus, in 1951, the British Ministry of
Food Research was established at Aberdeen, Scotland. Scientists at
Aberdeen developed a vacuum contact plate freeze-dryer that improved
product quality and reduced the time required for rehydration
(replacement of the water removed in the freeze-drying
process so that the food can be used).
In 1954, trials of initial freeze-drying, followed by the ordinary
process of vacuum drying, were carried out. The abundance of
membranes within plant and animal tissues was a major obstacle to
the movement of water vapor, thus limiting the drying rate. In 1956,
two Canadian scientists developed a new method of improving the
freeze-drying rate for steaks by impaling the steaks on spiked heater
plates. This idea was adapted in 1957 by interposing sheets of expanded
metal, instead of spikes, between the drying surfaces of the
frozen food and the heating platens. Because of the substantially
higher freeze-drying rates that it achieved, the process was called
“accelerated freeze-drying.”
In 1960, Greaves described an ingenious method of freeze-drying
liquids. It involved continuously scraping the dry layer during its
formation. This led to a continuous process for freeze-drying liquids.
During the remainder of the 1960’s, freeze-drying applications
proliferated with the advent of several techniques for controlling
and improving the effectiveness of the freeze-drying process.
Impact
Flosdorf’s vision and ingenuity in applying freeze-drying to
foods have revolutionized food preservation. He was also responsible
for making a laboratory technique a tremendous commercial
success.
Freeze-drying is important because it stops the growth of microorganisms,
inhibits deleterious chemical reactions, and facilitates
distribution and storage. Freeze-dried foods are easily prepared for
consumption by adding water (rehydration). When freeze-dried
properly, most foods, either raw or cooked, can be rehydrated
quickly to yield products that are equal in quality to their frozen
counterparts. Freeze-dried products retain most of their nutritive
qualities and have a long storage life, even at room temperature.
Freeze-drying is not, however, without disadvantages. The major
disadvantage is the high cost of processing. Thus, to this day, the
great potential of freeze-drying has not been fully realized. The drying
of cell-free materials, such as coffee and tea extracts, has been extremely
successful, but the obstacles imposed by the cell membranes
in foods such as fruits, vegetables, and meats have limited
the application to expensive specialty items such as freeze-dried
soups and to foods for armies, campers, and astronauts. Future eco-
nomic changes may create a situation in which the high cost of
freeze-drying is more than offset by the cost of transportation and
storage.
See also: Electric refrigerator; Food freezing; Polystyrene.
FORTRAN programming language
The invention: The first major computer programming language,
FORTRAN supported programming in a mathematical language
that was natural to scientists and engineers and achieved unsurpassed
success in scientific computation.
The people behind the invention:
John Backus (1924- ), an American software engineer and
manager
John W. Mauchly (1907-1980), an American physicist and
engineer
Herman Heine Goldstine (1913- ), a mathematician and
computer scientist
John von Neumann (1903-1957), a Hungarian American
mathematician and physicist
Talking to Machines
Formula Translation, or FORTRAN—the first widely accepted
high-level computer language—was completed by John Backus
and his coworkers at the International Business Machines (IBM)
Corporation in April, 1957. Designed to support programming
in a mathematical language that was natural to scientists and engineers,
FORTRAN achieved unsurpassed success in scientific
computation.
Computer languages are means of specifying the instructions
that a computer should execute and the order of those instructions.
Computer languages can be divided into categories of progressively
higher degrees of abstraction. At the lowest level is binary
code, or machine code: Binary digits, or “bits,” specify in
complete detail every instruction that the machine will execute.
This was the only language available in the early days of computers,
when such machines as the ENIAC (Electronic Numerical Integrator
and Calculator) required hand-operated switches and
plugboard connections. All higher levels of language are implemented by having a program translate instructions written in the
higher language into binary machine language (also called “object
code”). High-level languages (also called “programming languages”)
are largely or entirely independent of the underlying
machine structure. FORTRAN was the first language of this type
to win widespread acceptance.
The emergence of machine-independent programming languages
was a gradual process that spanned the first decade of electronic
computation. One of the earliest developments was the invention of
“flowcharts,” or “flow diagrams,” by Herman Heine Goldstine and
John von Neumann in 1947. Flowcharting became the most influential
software methodology during the first twenty years of
computing.
Short Code was the first language to be implemented that contained
some high-level features, such as the ability to use mathematical
equations. The idea came from John W. Mauchly, and it was
implemented on the BINAC (Binary Automatic Computer) in 1949
with an “interpreter”; later, it was carried over to the UNIVAC (Universal
Automatic Computer) I. Interpreters are programs that do
not translate commands into a series of object-code instructions; instead,
they directly execute (interpret) those commands. Every time
the interpreter encounters a command, that command must be interpreted
again. “Compilers,” however, convert the entire command
into object code before it is executed.
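The distinction can be illustrated with a small Python sketch (Python stands in here only because it conveniently exposes both styles; it is not one of the historical systems). The first loop re-interprets the formula text on every pass, while the second translates it to object code once and then merely executes it.

    formula = "3*x**2 + 2*x + 1"

    # interpreter style: the text is parsed again on every evaluation
    for x in range(3):
        print(eval(formula, {"x": x}))

    # compiler style: translate once, execute the resulting code many times
    code = compile(formula, "<formula>", "eval")
    for x in range(3):
        print(eval(code, {"x": x}))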
Much early effort went into creating ways to handle commonly
encountered problems—particularly scientific mathematical
calculations. A number of interpretive languages arose to
support these features. As long as such complex operations had
to be performed by software (computer programs), however, scientific
computation would be relatively slow. Therefore, Backus
lobbied successfully for a direct hardware implementation of these
operations on IBM’s new scientific computer, the 704. Backus then
started the Programming Research Group at IBM in order to develop
a compiler that would allow programs to be written in a
mathematically oriented language rather than a machine-oriented
language. In November of 1954, the group defined an initial version
of FORTRAN.
A More Accessible Language
Before FORTRAN was developed, a computer had to perform a
whole series of tasks to make certain types of mathematical calculations.
FORTRAN made it possible for the same calculations to be
performed much more easily. In general, FORTRAN supported constructs
with which scientists were already acquainted, such as functions
and multidimensional arrays. In defining a powerful notation
that was accessible to scientists and engineers, FORTRAN opened
up programming to a much wider community.
Backus’s success in getting the IBM 704’s hardware to support
scientific computation directly, however, posed a major challenge:
Because such computation would be much faster, the object code
produced by FORTRAN would also have to be much faster. The
lower-level compilers preceding FORTRAN produced programs
that were usually five to ten times slower than their hand-coded
counterparts; therefore, efficiency became the primary design objective
for Backus. The highly publicized claims for FORTRAN met
with widespread skepticism among programmers. Much of the
team’s efforts, therefore, went into discovering ways to produce the
most efficient object code.
The efficiency of the compiler produced by Backus, combined
with its clarity and ease of use, guaranteed the system’s success. By
1959, many IBM 704 users programmed exclusively in FORTRAN.
By 1963, virtually every computer manufacturer either had delivered
or had promised a version of FORTRAN.
Incompatibilities among manufacturers were minimized by the
popularity of IBM’s version of FORTRAN; every company wanted
to be able to support IBM programs on its own equipment. Nevertheless,
there was sufficient interest in obtaining a standard for
FORTRAN that the American National Standards Institute adopted
a formal standard for it in 1966. A revised standard was adopted in
1978, yielding FORTRAN 77.
Consequences
In demonstrating the feasibility of efficient high-level languages,
FORTRAN inaugurated a period of great proliferation of programming languages. Most of these languages attempted to provide similar
or better high-level programming constructs oriented toward a
different, nonscientific programming environment. COBOL, for example,
stands for “Common Business Oriented Language.”
FORTRAN, while remaining the dominant language for scientific
programming, has not found general acceptance among nonscientists.
An IBM project established in 1963 to extend FORTRAN
found the task too unwieldy and instead ended up producing an entirely
different language, PL/I, which was delivered in 1966. In the
beginning, Backus and his coworkers believed that their revolutionary
language would virtually eliminate the burdens of coding and
debugging. Instead, FORTRAN launched software as a field of
study and an industry in its own right.
In addition to stimulating the introduction of new languages,
FORTRAN encouraged the development of operating systems. Programming
languages had already grown into simple operating systems
called “monitors.” Operating systems since then have been
greatly improved so that they support, for example, simultaneously
active programs (multiprogramming) and the networking (combining)
of multiple computers.
Labels:
FORTRAN,
FORTRAN programming language,
language,
programming
Sunday, June 21, 2009
Food freezing
The invention: It was long known that low temperatures helped to
protect food against spoiling; the invention that made frozen
food practical was a method of freezing items quickly. Clarence
Birdseye’s quick-freezing technique made possible a revolution
in food preparation, storage, and distribution.
The people behind the invention:
Clarence Birdseye (1886-1956), a scientist and inventor
Donald K. Tressler (1894-1981), a researcher at Cornell
University
Amanda Theodosia Jones (1835-1914), a food-preservation
pioneer
Feeding the Family
In 1917, Clarence Birdseye developed a means of quick-freezing
meat, fish, vegetables, and fruit without substantially changing
their original taste. His system of freezing was called by Fortune
magazine “one of the most exciting and revolutionary ideas in the
history of food.” Birdseye went on to refine and perfect his method
and to promote the frozen foods industry until it became a commercial
success nationwide.
It was during a trip to Labrador, where he worked as a fur trader,
that Birdseye was inspired by this idea. Birdseye’s new wife and
five-week-old baby had accompanied him there. In order to keep
his family well fed, he placed barrels of fresh cabbages in salt water
and then exposed the vegetables to freezing winds. Successful at
preserving vegetables, he went on to freeze a winter’s supply of
ducks, caribou, and rabbit meat.
In the following years, Birdseye experimented with many freezing
techniques. His equipment was crude: an electric fan, ice, and salt
water. His earliest experiments were on fish and rabbits, which he
froze and packed in old candy boxes. By 1924, he had borrowed
money against his life insurance and was lucky enough to find three
partners willing to invest in his new General Seafoods Company (later renamed General Foods), located in Gloucester, Massachusetts.
Although it was Birdseye’s genius that put the principles of
quick-freezing to work, he did not actually invent quick-freezing.
The scientific principles involved had been known for some time.
As early as 1842, a patent for freezing fish had been issued in England.
Nevertheless, the commercial exploitation of the freezing
process could not have happened until the end of the 1800’s, when
mechanical refrigeration was invented. Even then, Birdseye had to
overcome major obstacles.
Finding a Niche
By the 1920’s, there still were few mechanical refrigerators in
American homes. It would take years before adequate facilities for
food freezing and retail distribution would be established across the
United States. By the late 1930’s, frozen foods had, indeed, found their
role in commerce but still could not compete with canned or fresh
foods. Birdseye had to work tirelessly to promote the industry, writing
and delivering numerous lectures and articles to advance its
popularity. His efforts were helped by scientific research conducted
at Cornell University by Donald K. Tressler and by C. R. Fellers of
what was then Massachusetts State College. Also, during World
War II (1939-1945), more Americans began to accept the idea: Rationing,
combined with a shortage of canned foods, contributed to
the demand for frozen foods. The armed forces made large purchases
of these items as well.
General Foods was the first to use a system of extremely rapid
freezing of perishable foods in packages. Under the Birdseye system,
fresh foods, such as berries or lobster, were packaged snugly in convenient
square containers. Then, the packages were pressed between
refrigerated metal plates under pressure at 50 degrees below zero.
Two types of freezing machines were used. The “double belt” freezer
consisted of two metal belts that moved through a 15-meter freezing
tunnel, while a special salt solution was sprayed on the surfaces of
the belts. This double-belt freezer was used only in permanent installations
and was soon replaced by the “multiplate” freezer, which was
portable and required only 11.5 square meters of floor space compared
to the double belt’s 152 square meters.
The multiplate freezer also made it possible to apply the technique
of quick-freezing to seasonal crops. People were able to transport
these freezers easily from one harvesting field to another,
where they were used to freeze crops such as peas fresh off the vine.
The handy multiplate freezer consisted of an insulated cabinet
equipped with refrigerated metal plates. Stacked one above the
other, these plates were capable of being opened and closed to receive
food products and to compress them with evenly distributed
pressure. Each aluminum plate had internal passages through which
ammonia flowed and expanded at a temperature of -3.8 degrees
Celsius, thus causing the foods to freeze.
A major benefit of the new frozen foods was that their taste and vitamin content were not lost. Ordinarily, when food is frozen
slowly, ice crystals form, which slowly rupture food cells, thus altering
the taste of the food. With quick-freezing, however, the food
looks, tastes, and smells like fresh food. Quick-freezing also cuts
down on bacteria.
Impact
During the months between one food harvest and the next, humankind
requires trillions of pounds of food to survive. In many
parts of the world, an adequate supply of food is available; elsewhere,
much food goes to waste and many go hungry. Methods of
food preservation such as those developed by Birdseye have done
much to help those who cannot obtain proper fresh foods. Preserving
perishable foods also means that they will be available in
greater quantity and variety all year-round. In all parts of the world,
both tropical and arctic delicacies can be eaten in any season of the
year.
With the rise in popularity of frozen “fast” foods, nutritionists
began to study their effect on the human body. Research has shown
that fresh food is the most beneficial. In an industrial nation with many
people, the distribution of fresh commodities is, however, difficult.
It may be many decades before scientists know the long-term effects
on generations raised primarily on frozen foods.
FM radio
The invention: A method of broadcasting radio signals by modulating
the frequency, rather than the amplitude, of radio waves,
FM radio greatly improved the quality of sound transmission.
The people behind the invention:
Edwin H. Armstrong (1890-1954), the inventor of FM radio
broadcasting
David Sarnoff (1891-1971), the founder of RCA
An Entirely New System
Because early radio broadcasts used amplitude modulation (AM)
to transmit their sounds, they were subject to a sizable amount of interference
and static. Since good AM reception relies on the amount
of energy transmitted, energy sources in the atmosphere between
the station and the receiver can distort or weaken the original signal.
This is particularly irritating for the transmission of music.
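The difference can be stated with the standard textbook signal forms (these are general formulas, not equations from Armstrong's papers). For a message m(t) and a carrier frequency f_c:

    AM:  s(t) = A [1 + m(t)] cos(2π f_c t)
    FM:  s(t) = A cos(2π f_c t + 2π k_f ∫ m(τ) dτ)

In AM the information rides on the amplitude A[1 + m(t)], so any additive atmospheric disturbance changes the recovered sound directly; in FM the information is carried by the instantaneous frequency, and amplitude disturbances can be clipped away by a limiter before detection.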
Edwin H. Armstrong provided a solution to this technological
constraint. A graduate of Columbia University, Armstrong made a
significant contribution to the development of radio with his basic
inventions for circuits for AM receivers. (Indeed, the monies Armstrong
received from his earlier inventions financed the development
of the frequency modulation, or FM, system.) Armstrong was
one among many contributors to AM radio. For FM broadcasting,
however, Armstrong must be ranked as the most important inventor.
During the 1920’s, Armstrong established his own research laboratory
in Alpine, New Jersey, across the Hudson River from New
York City. With a small staff of dedicated assistants, he carried out
research on radio circuitry and systems for nearly three decades. At
that time, Armstrong also began to teach electrical engineering at
Columbia University.
From 1928 to 1933, Armstrong worked diligently at his private
laboratory at Columbia University to construct a working model of
an FM radio broadcasting system. With the primitive limitations
then imposed on the state of vacuum tube technology, a number of Armstrong’s experimental circuits required as many as one hundred
tubes. Between July, 1930, and January, 1933, Armstrong filed
four basic FM patent applications. All were granted simultaneously
on December 26, 1933.
Armstrong sought to perfect FM radio broadcasting, not to offer
radio listeners better musical reception but to create an entirely
new radio broadcasting system. On November 5, 1935, Armstrong
made his first public demonstration of FM broadcasting in New
York City to an audience of radio engineers. An amateur station
based in suburban Yonkers, New York, transmitted these first signals.
The scientific world began to consider the advantages and
disadvantages of Armstrong’s system; other laboratories began to
craft their own FM systems.
Corporate Conniving
Because Armstrong had no desire to become a manufacturer or
broadcaster, he approached David Sarnoff, head of the Radio Corporation
of America (RCA). As the owner of the top manufacturer
of radio sets and the top radio broadcasting network, Sarnoff was
interested in all advances of radio technology. Armstrong first demonstrated
FM radio broadcasting for Sarnoff in December, 1933.
This was followed by visits from RCA engineers, who were sufficiently
impressed to recommend to Sarnoff that the company conduct
field tests of the Armstrong system.
In 1934, Armstrong, with the cooperation of RCA, set up a test
transmitter at the top of the Empire State Building, sharing facilities
with the experimental RCA television transmitter. From 1934 through
1935, tests were conducted using the Empire State facility, to mixed
reactions of RCA’s best engineers. AM radio broadcasting already
had a performance record of nearly two decades. The engineers
wondered if this new technology could replace something that had
worked so well.
This less-than-enthusiastic evaluation fueled the skepticism of
RCA lawyers and salespeople. RCA had too much invested in the
AM system, both as a leading manufacturer and as the dominant
owner of the major radio network of the time, the National Broadcasting
Company (NBC). Sarnoff was in no rush to adopt FM. To change systems would risk the millions of dollars RCA was making
as America emerged from the Great Depression.
In 1935, Sarnoff advised Armstrong that RCA would cease any
further research and development activity in FM radio broadcasting.
(Still, engineers at RCA laboratories continued to work on FM
to protect the corporate patent position.) Sarnoff declared to the
press that his company would push the frontiers of broadcasting by
concentrating on research and development of radio with pictures,
that is, television. As a tangible sign, Sarnoff ordered that Armstrong’s
FM radio broadcasting tower be removed from the top of
the Empire State Building.
Armstrong was outraged. By the mid-1930’s, the development of
FM radio broadcasting had become a mission for Armstrong. For
the remainder of his life, Armstrong devoted his considerable talents
to the promotion of FM radio broadcasting.
Impact
After the break with Sarnoff, Armstrong proceeded with plans to
develop his own FM operation. Allied with two of RCA’s biggest
manufacturing competitors, Zenith and General Electric, Armstrong
pressed ahead. In June of 1936, at a Federal Communications Commission
(FCC) hearing, Armstrong proclaimed that FM broadcasting
was the only static-free, noise-free, and uniform system—both
day and night—available. He argued, correctly, that AM radio broadcasting
had none of these qualities.
During World War II (1939-1945), Armstrong gave the military
permission to use FM with no compensation. That patriotic gesture
cost Armstrong millions of dollars when the military soon became
all FM. It did, however, expand interest in FM radio broadcasting.
World War II had provided a field test of equipment and use.
By the 1970’s, FM radio broadcasting had grown tremendously.
By 1972, one in three radio listeners tuned into an FM station some
time during the day. Advertisers began to use FM radio stations to
reach the young and affluent audiences that were turning to FM stations
in greater numbers.
By the late 1970’s, FM radio stations were outnumbering AM stations.
By 1980, nearly half of radio listeners tuned into FM stations on a regular basis. A decade later, FM radio listening accounted for
more than two-thirds of audience time. Armstrong’s predictions
that listeners would prefer the clear, static-free sounds offered by
FM radio broadcasting had come to pass by the mid-1980’s, nearly
fifty years after Armstrong had commenced his struggle to make
FM radio broadcasting a part of commercial radio.
Fluorescent lighting
The invention: A form of electrical lighting that uses a phosphor-coated
glass tube in which an electrical discharge in mercury vapor emits
ultraviolet radiation, causing the phosphor to glow with a cool light.
The people behind the invention:
Vincenzo Cascariolo (1571-1624), an Italian alchemist and
shoemaker
Heinrich Geissler (1814-1879), a German glassblower
Peter Cooper Hewitt (1861-1921), an American electrical
engineer
Celebrating the “Twelve Greatest Inventors”
On the night of November 23, 1936, more than one thousand industrialists,
patent attorneys, and scientists assembled in the main
ballroom of the Mayflower Hotel in Washington, D.C., to celebrate
the one hundredth anniversary of the U.S. Patent Office. A transport
liner over the city radioed the names chosen by the Patent Office as
America’s “Twelve Greatest Inventors,” and, as the distinguished
group strained to hear those names, “the room was flooded for a
moment by the most brilliant light yet used to illuminate a space
that size.”
Thus did The New York Times summarize the commercial introduction
of the fluorescent lamp. The twelve inventors honored were Thomas Alva Edison, Robert Fulton, Charles Goodyear, Charles Hall, Elias Howe, Cyrus Hall McCormick, Ottmar Mergenthaler, Samuel F. B. Morse, George Westinghouse, Wilbur Wright, and Eli Whitney. No single name, however, bore the honor of inventing fluorescent lighting; that honor is shared by the many people who took part in a long series of discoveries.
The fluorescent lamp operates as a low-pressure, electric discharge
inside a glass tube that contains a droplet of mercury and a
gas, commonly argon. The inside of the glass tube is coated with
fine particles of phosphor. When electricity is applied to the gas, the
mercury gives off a bluish light and emits ultraviolet radiation. When bathed in the strong ultraviolet radiation emitted by the mercury,
the phosphor fluoresces (emits light).
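To give a sense of the energy conversion involved, the short sketch below (in Python) computes the energy of a single ultraviolet photon near 254 nanometers, the dominant emission of a low-pressure mercury discharge, and of a visible photon near 550 nanometers such as the phosphor might emit; the exact wavelengths are illustrative assumptions rather than figures from the text.

```python
# Rough sketch: energy carried by an ultraviolet photon from the mercury
# discharge versus a visible photon re-emitted by the phosphor coating.
# The 254 nm and 550 nm wavelengths are illustrative assumptions.

PLANCK = 6.626e-34      # Planck's constant, joule-seconds
LIGHT_SPEED = 2.998e8   # speed of light, meters per second
EV = 1.602e-19          # joules per electron volt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Return the energy of a single photon, in electron volts."""
    wavelength_m = wavelength_nm * 1e-9
    return PLANCK * LIGHT_SPEED / wavelength_m / EV

uv = photon_energy_ev(254.0)       # ultraviolet from the mercury vapor
visible = photon_energy_ev(550.0)  # green-yellow light from the phosphor

print(f"UV photon:      {uv:.2f} eV")
print(f"Visible photon: {visible:.2f} eV")
print(f"Energy lost in the phosphor: {uv - visible:.2f} eV per photon")
```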
The chain of discoveries behind the fluorescent lamp began early in the 1600’s, when Vincenzo Cascariolo, an Italian
shoemaker and alchemist, discovered a substance that gave off a
bluish glow in the dark after exposure to strong sunlight. The fluorescent
substance was apparently barium sulfide and was so unusual
for that time and so valuable that its formulation was kept secret
for a long time. Gradually, however, scholars became aware of
the preparation secrets of the substance and studied it and other luminescent
materials.
Further studies in fluorescent lighting were made by the German
physicist Johann Wilhelm Ritter. He observed the luminescence of
phosphors that were exposed to various “exciting” lights. In 1801,
he noted that some phosphors shone brightly when illuminated by
light that the eye could not see (ultraviolet light). Ritter thus discovered
the ultraviolet region of the light spectrum. The use of phosphors
to transform ultraviolet light into visible light was an important
step in the continuing development of the fluorescent lamp.
The British mathematician and physicist Sir George Gabriel Stokes
studied the phenomenon as well. It was he who, in 1852, termed the
afterglow “fluorescence.”
Geissler Tubes
While these advances were being made, other workers were trying
to produce a practical form of electric light. In 1706, the English
physicist Francis Hauksbee devised an electrostatic generator, which
is used to accelerate charged particles to very high levels of electrical
energy. He then connected the device to a glass “jar,” used a vacuum pump to evacuate the jar to a low pressure, and tested his
generator. In so doing, Hauksbee obtained the first human-made
electrical glow discharge by “capturing lightning” in a jar.
In 1854, Heinrich Geissler, a glassblower and apparatus maker,
opened his shop in Bonn, Germany, to make scientific instruments;
in 1855, he produced a vacuum pump that used liquid mercury as
an evacuation fluid. That same year, Geissler made the first gaseous
conduction lamps while working in collaboration with the German
scientist Julius Plücker. Plücker referred to these lamps as “Geissler
tubes.” Geissler was able to create red light with neon gas filling a
lamp and light of nearly all colors by using certain types of gas
within each of the lamps. Thus, both the neon sign business and the
science of spectroscopy were born.
Geissler tubes were studied extensively by a variety of workers.
At the beginning of the twentieth century, the practical American
engineer Peter Cooper Hewitt put these studies to use by marketing
the first low-pressure mercury vapor lamps. The lamps were quite
successful, although they required high voltage for operation, emitted an eerie blue-green light, and shone dimly by comparison with their
eventual successor, the fluorescent lamp. At about the same time,
systematic studies of phosphors had finally begun.
By the 1920’s, a number of investigators had discovered that the
low-pressure mercury vapor discharge marketed by Hewitt was an
extremely efficient method for producing ultraviolet light, if the
mercury and rare gas pressures were properly adjusted. With a
phosphor to convert the ultraviolet light back to visible light, the
Hewitt lamp made an excellent light source.
Impact
The introduction of fluorescent lighting in 1936 presented the
public with a completely new form of lighting that had enormous
advantages of high efficiency, long life, and relatively low cost.
By 1938, production of fluorescent lamps was well under way. By
April, 1938, four sizes of fluorescent lamps in various colors had
been offered to the public and more than two hundred thousand
lamps had been sold.
During 1939 and 1940, two great expositions—the New York World’s Fair and the San Francisco International Exposition—
helped popularize fluorescent lighting. Thousands of tubular fluorescent
lamps formed a great spiral in the “motor display salon,”
the car showroom of the General Motors exhibit at the New York
World’s Fair. Fluorescent lamps lit the Polish Restaurant and hung
in vertical clusters on the flagpoles along the Avenue of the Flags at
the fair, while two-meter-long, upright fluorescent tubes illuminated
buildings at the San Francisco International Exposition.
When the United States entered World War II (1939-1945), the
demand for efficient factory lighting soared. In 1941, more than
twenty-one million fluorescent lamps were sold. Technical advances
continued to improve the fluorescent lamp. By the 1990’s,
this type of lamp supplied most of the world’s artificial lighting.
Saturday, June 20, 2009
Floppy disk
The invention: Inexpensive magnetic medium for storing and
moving computer data.
The people behind the invention:
Andrew D. Booth (1918- ), an English inventor who
developed paper disks as a storage medium
Reynold B. Johnson (1906-1998), a design engineer at IBM’s
research facility who oversaw development of magnetic disk
storage devices
Alan Shugart (1930- ), an engineer at IBM’s research
laboratory who first developed the floppy disk as a means of
mass storage for mainframe computers
First Tries
When the International Business Machines (IBM) Corporation
decided to concentrate on the development of computers for business
use in the 1950’s, it faced a problem that had troubled the earliest
computer designers: how to store data reliably and inexpensively.
In the early days of computers (the early 1940’s), a number of
ideas were tried. The English inventor Andrew D. Booth produced
spinning paper disks on which he stored data by means of punched
holes, only to abandon the idea because of the insurmountable engineering
problems he foresaw.
The next step was “punched” cards, an idea first used when the
French inventor Joseph-Marie Jacquard invented an automatic weaving
loom for which patterns were stored in pasteboard cards. The
idea was refined by the English mathematician and inventor Charles
Babbage for use in his “analytical engine,” an attempt to build a kind
of computing machine. Although it was simple and reliable, it was
not fast enough, nor did it store enough data, to be truly practical.
The Ampex Corporation demonstrated its first magnetic audiotape
recorder after World War II (1939-1945). Shortly after that, the
Binary Automatic Computer (BINAC) was introduced with a storage
device that appeared to be a large tape recorder. A more advanced machine, the Universal Automatic Computer (UNIVAC),
used metal tape instead of plastic (plastic was easily stretched or
even broken). Unfortunately, metal tape was considerably heavier,
and its edges were razor-sharp and thus dangerous. Improvements
in plastic tape eventually produced sturdy media, and magnetic
tape became (and remains) a practical medium for storage of computer
data.
Still later designs combined Booth’s spinning paper disks with
magnetic technology to produce rapidly rotating “drums.” Whereas
a tape might have to be fast-forwarded nearly to its end to locate a
specific piece of data, a drum rotating at speeds up to 12,500 revolutions
per minute (rpm) could retrieve data very quickly and
could store more than 1 million bits (or approximately 125 kilobytes)
of data.
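A quick back-of-the-envelope sketch, using only the figures quoted above (12,500 rpm and roughly one million bits), shows why such a drum could find data far faster than a tape:

```python
# Back-of-the-envelope figures for the magnetic drum described above,
# using the numbers quoted in the text (12,500 rpm, about 1 million bits).

RPM = 12_500
BITS = 1_000_000

revolution_seconds = 60.0 / RPM                   # time for one full turn
avg_latency_ms = revolution_seconds / 2 * 1000    # on average, wait half a turn

print(f"One revolution:          {revolution_seconds * 1000:.2f} ms")
print(f"Average rotational wait: {avg_latency_ms:.2f} ms")
print(f"Capacity:                {BITS // 8:,} bytes (~{BITS // 8 // 1000} kilobytes)")
```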
In May, 1955, these drums evolved, under the direction of Reynold
B. Johnson, into IBM’s hard disk unit. The hard disk unit consisted
of fifty platters, each 2 feet in diameter, rotating at 1,200 rpm. Both
sides of the disk could be used to store information. When the operator
wished to access the disk, at his or her command a read/write
head was moved to the right disk and to the side of the disk that
held the desired data. The operator could then read data from or record
data onto the disk. To speed things even more, the next version
of the device, similar in design, employed one hundred read/write
heads—one for each of its fifty double-sided disks. The only remaining
disadvantage was its size, which earned IBM’s first commercial
unit the nickname “jukebox.”
The First Floppy
The floppy disk drive developed directly from hard disk technology.
It did not take shape until the late 1960’s under the direction of
Alan Shugart (it was announced by IBM as a ready product in 1970).
First created to help restart the operating systems of mainframe
computers that had gone dead, the floppy seemed in some ways to
be a step back, for it operated more slowly than a hard disk drive
and did not store as much data. Initially, it consisted of a single thin
plastic disk eight inches in diameter and was developed without the
protective envelope in which it is now universally encased. The addition of that jacket gave the floppy its single greatest advantage
over the hard disk: portability with reliability.
Another advantage soon became apparent: The floppy is resilient
to damage. In a hard disk drive, the read/write heads must
hover thousandths of a centimeter over the disk surface in order to
attain maximum performance. Should even a small particle of dust
get in the way, or should the drive unit be bumped too hard, the
head may “crash” into the surface of the disk and ruin its magnetic
coating; the result is a permanent loss of data. Because the floppy
operates with the read-write head in contact with the flexible plastic
disk surface, individual particles of dust or other contaminants are
not nearly as likely to cause disaster.
As a result of its advantages, the floppy disk was the logical
choice for mass storage in personal computers (PCs), which were
developed a few years after the floppy disk’s introduction. The
floppy is still an important storage device even though hard disk
drives for PCs have become less expensive. Moreover, manufacturers are continually developing new floppy formats and new floppy disks that can hold more data.

Consequences
Personal computing would have developed very differently were
it not for the availability of inexpensive floppy disk drives. When
IBM introduced its PC in 1981, the machine provided as standard
equipment a connection for a cassette tape recorder as a storage device;
a floppy disk was only an option (though one that few buyers declined). The awkwardness of tape drives—their slow speed and sequential
nature of storing data—presented clear obstacles to the acceptance
of the personal computer as a basic information tool. By
contrast, the floppy drive gives computer users relatively fast storage
at low cost.
Floppy disks provided more than merely economical data storage.
Since they are built to be removable (unlike hard drives), they
represented a basic means of transferring data between machines.
Indeed, prior to the popularization of local area networks (LANs), floppies formed what was jokingly called a “sneaker network”: one merely carried a disk by foot to another computer.
Floppy disks were long the primary means of distributing new
software to users. Even the very flexible floppy showed itself to be
quite resilient to the wear and tear of postal delivery. Later, the 3.5-
inch disk improved upon the design of the original 8-inch and 5.25-
inch floppies by protecting the disk medium within a hard plastic
shell and by using a sliding metal door to protect the area where the
read/write heads contact the disk.
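As a rough illustration of how a floppy’s capacity follows from its geometry, the sketch below uses the familiar high-density 3.5-inch layout (two sides, 80 tracks per side, 18 sectors per track, 512 bytes per sector); these figures describe one common later format and are not given in the text.

```python
# Sketch: how a floppy's formatted capacity follows from its geometry.
# The figures below are those of the common high-density 3.5-inch format;
# other formats simply plug in different track and sector counts.

sides = 2
tracks_per_side = 80
sectors_per_track = 18
bytes_per_sector = 512

capacity = sides * tracks_per_side * sectors_per_track * bytes_per_sector
print(f"Formatted capacity: {capacity:,} bytes "
      f"(marketed as {capacity / (1000 * 1024):.2f} 'MB')")
```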
By the late 1990’s, floppy disks were giving way to new data-storage
media, particularly CD-ROMs—durable laser-encoded disks
that hold more than 700 megabytes of data. As the price of blank
CDs dropped dramatically, floppy disks tended to be used mainly
for short-term storage of small amounts of data. Floppy disks were
also being used less and less for data distribution and transfer, as
computer users turned increasingly to sending files via e-mail on
the Internet, and software providers made their products available
for downloading on Web sites.
Friday, June 19, 2009
Field ion microscope
The invention: A microscope that uses ions formed in high-voltage
electric fields to view atoms on metal surfaces.
The people behind the invention:
Erwin Wilhelm Müller (1911-1977), a physicist, engineer, and
research professor
J. Robert Oppenheimer (1904-1967), an American physicist
To See Beneath the Surface
In the early twentieth century, developments in physics, especially
quantum mechanics, paved the way for the application of
new theoretical and experimental knowledge to the problem of
viewing the atomic structure of metal surfaces. Of primary importance
were American physicist George Gamow’s 1928 theoretical
explanation of the field emission of electrons by quantum mechanical
means and J. Robert Oppenheimer’s 1928 prediction of the
quantum mechanical ionization of hydrogen in a strong electric
field.
In 1936, Erwin Wilhelm Müller developed his field emission microscope,
the first in a series of instruments that would exploit
these developments. It was to be the first instrument to view
atomic structures—although not the individual atoms themselves—
directly. Müller’s subsequent field ion microscope utilized the
same basic concepts used in the field emission microscope yet
proved to be a much more powerful and versatile instrument. By
1956, Müller’s invention allowed him to view the crystal lattice
structure of metals in atomic detail; it actually showed the constituent
atoms.
The field emission and field ion microscopes make it possible to
view the atomic surface structures of metals on fluorescent screens.
The field ion microscope is the direct descendant of the field emission
microscope. In the case of the field emission microscope, the
images are projected by electrons emitted directly from the tip of a
metal needle, which constitutes the specimen under investigation. These electrons produce an image of the atomic lattice structure of
the needle’s surface. The needle serves as the electron-donating
electrode in a vacuum tube, also known as the “cathode.” A fluorescent
screen that serves as the electron-receiving electrode, or “anode,”
is placed opposite the needle. When sufficient electrical voltage
is applied across the cathode and anode, the needle tip emits
electrons, which strike the screen. The image produced on the
screen is a projection of the electron source—the needle surface’s
atomic lattice structure.
Müller studied the effect of needle shape on the performance of
the microscope throughout much of 1937. When the needles had
been properly shaped, Müller was able to realize magnifications of
up to 1 million times. This magnification allowed Müller to view
what he called “maps” of the atomic crystal structure of metals,
since the needles were so small that they were often composed of
only one simple crystal of the material. While the magnification
may have been great, however, the resolution of the instrument was
severely limited by the physics of emitted electrons, which caused
the images Müller obtained to be blurred.
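The enormous magnification comes from simple point projection: electrons leave the needle tip along nearly radial paths, so the image on the screen is enlarged roughly in the ratio of the tip-to-screen distance to the tip radius. The sketch below illustrates the estimate with assumed, representative dimensions; neither figure appears in the text.

```python
# Rough point-projection estimate of the magnification of a field
# emission / field ion microscope. The tip radius and screen distance
# are assumed, representative values, not figures from the text.

tip_radius_m = 100e-9       # ~100 nanometer needle tip
screen_distance_m = 0.10    # ~10 centimeters from tip to fluorescent screen

magnification = screen_distance_m / tip_radius_m
print(f"Approximate magnification: {magnification:,.0f}x")  # ~1,000,000x
```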
Improving the View
In 1943, while working in Berlin, Müller realized that the resolution
of the field emission microscope was limited by two factors.
The electron velocity, a particle property, was extremely high and
uncontrollably random, causing the micrographic images to be
blurred. In addition, the electrons had an unsatisfactorily high wavelength.
When Müller combined these two factors, he was able to determine
that the field emission microscope could never depict single
atoms; it was a physical impossibility for it to distinguish one
atom from another.
By 1951, this limitation led him to develop the technology behind
the field ion microscope. In 1952, Müller moved to the United States
and founded the Pennsylvania State University Field Emission Laboratory.
He perfected the field ion microscope between 1952 and
1956.
The field ion microscope utilized positive ions instead of electrons
to create the atomic surface images on the fluorescent screen. When an easily ionized gas—at first hydrogen, but usually helium,
neon, or argon—was introduced into the evacuated tube, the emitted
electrons ionized the gas atoms, creating a stream of positively
charged particles, much as Oppenheimer had predicted in 1928.
Müller’s use of positive ions circumvented one of the resolution
problems inherent in the use of imaging electrons. Like the electrons,
however, the positive ions traversed the tube with unpredictably random velocities. Müller eliminated this problem by cryogenically
cooling the needle tip with a supercooled liquefied gas such as
nitrogen or hydrogen.
By 1956, Müller had perfected the means of supplying imaging
positive ions by filling the vacuum tube with an extremely small
quantity of an inert gas such as helium, neon, or argon. By using
such a gas, Müller was assured that no chemical reaction would occur
between the needle tip and the gas; any such reaction would alter
the surface atomic structure of the needle and thus alter the resulting
microscopic image. The imaging ions allowed the field ion
microscope to image the emitter surface to a resolution of between
two and three angstroms, making it ten times more accurate than its
close relative, the field emission microscope.
Consequences
The immediate impact of the field ion microscope was its influence
on the study of metallic surfaces. It is a well-known fact of materials
science that the physical properties of metals are influenced
by the imperfections in their constituent lattice structures. It was not
possible to view the atomic structure of the lattice, and thus the finest
detail of any imperfection, until the field ion microscope was developed.
The field ion microscope is the only instrument powerful
enough to view the structural flaws of metal specimens in atomic
detail.
Although the instrument may be extremely powerful, the extremely
large electrical fields required in the imaging process preclude
the instrument’s application to all but the hardiest of metallic
specimens. The field strength of 500 million volts per centimeter
exerts an average stress on metal specimens in the range of almost
1 ton per square millimeter. Metals such as iron and platinum can
withstand this strain because of the shape of the needles into which
they are formed. Yet this limitation of the instrument makes it extremely
difficult to examine biological materials, which cannot withstand
the amount of stress that metals can. A practical by-product in
the study of field ionization—field evaporation—eventually permitted
scientists to view large biological molecules.
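The stress figure quoted above can be checked against the electrostatic (Maxwell) stress on a conductor surface, roughly one-half of the vacuum permittivity times the square of the field. A minimal sketch using the 500-million-volt-per-centimeter field mentioned in the text:

```python
# Quick check of the quoted field-induced stress using the electrostatic
# (Maxwell) stress on a conductor surface, roughly eps0 * E^2 / 2.

EPS0 = 8.854e-12                 # vacuum permittivity, farads per meter
field_v_per_m = 500e6 * 100      # 500 million volts per centimeter, in V/m

stress_pa = 0.5 * EPS0 * field_v_per_m ** 2
tonnes_per_mm2 = stress_pa * 1e-6 / 9.81 / 1000   # pascals -> tonnes-force per mm^2

print(f"Stress: {stress_pa:.2e} Pa (~{tonnes_per_mm2:.1f} tonne per square millimeter)")
```

The result, a little over one tonne per square millimeter, agrees with the figure given in the text.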
Field evaporation also allowed surface scientists to view the atomic structures of biological molecules. By embedding molecules
such as phthalocyanine within the metal needle, scientists have
been able to view the atomic structures of large biological molecules
by field evaporating much of the surrounding metal until the biological
material remains at the needle’s surface.
Thursday, June 18, 2009
Fiber-optics
The invention: The application of glass fibers to electronic communications
and other fields to carry large volumes of information
quickly, smoothly, and cheaply over great distances.
The people behind the invention:
Samuel F. B. Morse (1791-1872), the American artist and
inventor who developed the electromagnetic telegraph
system
Alexander Graham Bell (1847-1922), the Scottish American
inventor and educator who invented the telephone and the
photophone
Theodore H. Maiman (1927- ), the American physicist and
engineer who invented the solid-state laser
Charles K. Kao (1933- ), a Chinese-born electrical engineer
Zhores I. Alferov (1930- ), a Russian physicist and
mathematician
The Singing Sun
In 1844, Samuel F. B. Morse, inventor of the telegraph, sent his famous
message, “What hath God wrought?” by electrical impulses
traveling at the speed of light over a 66-kilometer telegraph wire
strung between Washington, D.C., and Baltimore. Ever since that
day, scientists have worked to find faster, less expensive, and more
efficient ways to convey information over great distances.
At first, the telegraph was used to report stock-market prices and
the results of political elections. The telegraph was quite important
in the American Civil War (1861-1865). The first transcontinental
telegraph message was sent by Stephen J. Field, chief justice of the
California Supreme Court, to U.S. president Abraham Lincoln on
October 24, 1861. The message declared that California would remain
loyal to the Union. By 1866, telegraph lines had reached all
across the North American continent and a telegraph cable had
been laid beneath the Atlantic Ocean to link the Old World with the New World.

Another American inventor made the leap from the telegraph to
the telephone. Alexander Graham Bell, a teacher of the deaf, was interested
in the physical way speech works. In 1875, he started experimenting
with ways to transmit sound vibrations electrically. He realized
that an electrical current could be adjusted to resemble the vibrations of speech. Bell patented his invention on March 7, 1876.
On July 9, 1877, he founded the Bell Telephone Company.
In 1880, Bell invented a device called the “photophone.” He used
it to demonstrate that speech could be transmitted on a beam of
light. Light is a form of electromagnetic energy. It travels in a vibrating
wave. When the amplitude (height) of the wave is adjusted, a
light beam can be made to carry messages. Bell’s invention included
a thin mirrored disk that converted sound waves directly into a
beam of light. At the receiving end, a selenium resistor connected to
a headphone converted the light back into sound. “I have heard a
ray of sun laugh and cough and sing,” Bell wrote of his invention.
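The principle behind the photophone is ordinary amplitude (intensity) modulation: the brightness of the beam is varied in step with the sound wave, and the receiver turns those variations back into sound. A minimal numerical sketch of the idea follows; the tone frequency, sample rate, and modulation depth are arbitrary illustrative choices.

```python
# Minimal sketch of intensity modulation, the principle behind the photophone:
# a steady light beam is dimmed and brightened in step with an audio signal.
# Frequencies and modulation depth are arbitrary illustrative choices.

import math

SAMPLE_RATE = 8_000   # samples per second
TONE_HZ = 440         # the "speech" being carried (a pure tone for simplicity)
MOD_DEPTH = 0.5       # how strongly the sound varies the light intensity

def modulated_intensity(t: float) -> float:
    """Light intensity at time t: a steady beam varied by the audio signal."""
    audio = math.sin(2 * math.pi * TONE_HZ * t)
    return 1.0 + MOD_DEPTH * audio   # stays positive: light can only dim or brighten

# A receiver (such as Bell's selenium cell) simply tracks intensity over time.
samples = [modulated_intensity(n / SAMPLE_RATE) for n in range(16)]
print([f"{s:.2f}" for s in samples])
```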
Although Bell proved that he could transmit speech over distances
of several hundred meters with the photophone, the device
was awkward and unreliable, and it never became popular as the
telephone did. Not until one hundred years later did researchers find
important practical uses for Bell’s idea of talking on a beam of light.
Two other major discoveries needed to be made first: development
of the laser and of high-purity glass. Theodore H. Maiman, an
American physicist and electrical engineer at Hughes Research Laboratories
in Malibu, California, built the first laser. The laser produces
an intense, narrowly focused beam of light that can be adjusted to
carry huge amounts of information. The word itself is an acronym for
light amplification by stimulated emission of radiation.
It soon became clear, though, that even bright laser light can be
broken up and absorbed by smog, fog, rain, and snow. So in 1966,
Charles K. Kao, an electrical engineer at the Standard Telecommunications
Laboratories in England, suggested that glass fibers could
be used to transmit message-carrying beams of laser light without
disruption from weather.
Fiber Optics Are Tested
Optical glass fiber is made from common materials, mostly silica,
soda, and lime. The inside of a delicate silica glass tube is coated
with a hundred or more layers of extremely thin glass. The tube is
then heated to 2,000 degrees Celsius and collapsed into a thin glass
rod, or preform. The preform is then pulled into thin strands of fiber.
The fibers are coated with plastic to protect them from being nicked
or scratched, and then they are covered in flexible cable.

The earliest glass fibers contained many impurities and defects, so they did not carry light well. Signal repeaters were needed every few meters to energize (amplify) the fading pulses of light. In 1970, however, researchers at the Corning Glass Works in New York developed a fiber pure enough to carry light at least one kilometer without amplification.
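Fiber loss is normally quoted in decibels per kilometer, and the fraction of light surviving a run of fiber falls off exponentially with distance. The sketch below contrasts a very lossy early fiber with a much purer one; both attenuation values are illustrative assumptions rather than figures from the text.

```python
# Sketch: how fiber attenuation (quoted in dB per kilometer) determines the
# fraction of light that survives a given length of fiber. The two loss
# values below are illustrative assumptions, not figures from the text.

def surviving_fraction(loss_db_per_km: float, length_km: float) -> float:
    """Fraction of launched optical power remaining after length_km of fiber."""
    return 10 ** (-loss_db_per_km * length_km / 10)

for loss in (1000.0, 20.0):   # a very impure early fiber vs. a much purer one
    frac = surviving_fraction(loss, 1.0)
    print(f"{loss:6.0f} dB/km over 1 km -> {frac:.3e} of the light survives")
```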
The telephone industry
quickly became involved in the new fiber-optics technology. Researchers
believed that a bundle of optical fibers as thin as a pencil
could carry several hundred telephone calls at the same time. Optical
fibers were first tested by telephone companies in big cities,
where the great volume of calls often overloaded standard underground
phone lines.
On May 11, 1977, American Telephone & Telegraph Company
(AT&T), along with Illinois Bell Telephone, Western Electric, and
Bell Telephone Laboratories, began the first commercial test of fiberoptics
telecommunications in downtown Chicago. The system consisted
of a 2.4-kilometer cable laid beneath city streets. The cable,
only 1.3 centimeters in diameter, linked an office building in the
downtown business district with two telephone exchange centers.
Voice and video signals were coded into pulses of laser light and
transmitted through the hair-thin glass fibers. The tests showed that
a single pair of fibers could carry nearly six hundred telephone conversations
at once very reliably and at a reasonable cost.
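For a sense of the data rate involved: if each conversation is carried as a standard 64-kilobit-per-second digital voice channel (an assumption about the encoding, not a detail given in the text), six hundred simultaneous calls amount to roughly 38 megabits per second, as the small sketch below works out.

```python
# Rough data-rate estimate for the Chicago trial: assuming each conversation
# is a standard 64 kbit/s digital voice channel (an assumption, not a figure
# given in the text), 600 simultaneous calls require:

calls = 600
kbps_per_call = 64

total_mbps = calls * kbps_per_call / 1000
print(f"{calls} calls x {kbps_per_call} kbit/s = {total_mbps:.1f} Mbit/s per fiber pair")
```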
Six years later, in October, 1983, Bell Laboratories succeeded in
transmitting the equivalent of six thousand telephone signals through
an optical fiber cable that was 161 kilometers long. Since that time,
countries all over the world, from England to Indonesia, have developed optical communications systems.

Consequences
Fiber optics has had a great impact on telecommunications. A single
fiber can now carry thousands of conversations with no electrical
interference. These fibers are less expensive, weigh less, and take up
much less space than copper wire. As a result, people can carry on
conversations over long distances without static and at a low cost.
One of the first uses of fiber optics and perhaps its best-known
application is the fiberscope, a medical instrument that permits internal
examination of the human body without surgery or X-ray
techniques. The fiberscope, or endoscope, consists of two fiber
bundles. One of the fiber bundles transmits bright light into the patient,
while the other conveys a color image back to the eye of the
physician. The fiberscope has been used to look for ulcers, cancer,
and polyps in the stomach, intestine, and esophagus of humans.
Medical instruments, such as forceps, can be attached to the fiberscope,
allowing the physician to perform a range of medical procedures,
such as clearing a blocked windpipe or cutting precancerous
polyps from the colon.