Synthetic DNA
The invention:
A method for replicating viral deoxyribonucleic
acid (DNA) in a test tube that paved the way for genetic engineering.
The people behind the invention:
Arthur Kornberg (1918- ), an American physician and
biochemist
Robert L. Sinsheimer (1920- ), an American biophysicist
Mehran Goulian (1929- ), a physician and biochemist
The Role of DNA
Until the mid-1940’s, it was believed that proteins were the
carriers of genetic information, the source of heredity. Proteins
appeared to be the only biological molecules that had the complexity
necessary to encode the enormous amount of genetic information
required to reproduce even the simplest organism.
Nevertheless, proteins could not be shown to have genetic properties,
and by 1944, it was demonstrated conclusively that deoxyribonucleic
acid (DNA) was the material that transmitted hereditary
information. It was discovered that DNA isolated from a
strain of infective bacteria that can cause pneumonia was able to
transform a strain of noninfective bacteria into an infective strain;
in addition, the infectivity trait was transmitted to future generations.
Subsequently, it was established that DNA is the genetic material
in virtually all forms of life.
Once DNA was known to be the transmitter of genetic information,
scientists sought to discover how it performs its role. DNA is a
polymeric molecule composed of four different units, called “deoxynucleotides.”
The units consist of a sugar, a phosphate group, and a
base; they differ only in the nature of the base, which is always one of
four related compounds: adenine, guanine, cytosine, or thymine. The
way in which such a polymer could transmit genetic information,
however, was difficult to discern. In 1953, biophysicists James D. Watson
and Francis Crick brilliantly determined the three-dimensional
structure of DNA by analyzing X-ray diffraction photographs of DNA
fibers. From their analysis of the structure of DNA, Watson and Crick
inferred DNA’s mechanism of replication. Their work led to an understanding
of gene function in molecular terms.
Watson and Crick showed that DNA has a very long double-stranded
(duplex) helical structure. DNA has a duplex structure because
each base forms a link to a specific base on the opposite
strand. The discovery of this complementary pairing of bases provided
a model to explain the two essential functions of a hereditary
molecule: It must preserve the genetic code from one generation to
the next, and it must direct the development of the cell.
Watson and Crick also proposed that DNA is able to serve as a
mold (or template) for its own reproduction because the two strands
of DNA polymer can separate. Upon separation, each strand acts as a
template for the formation of a new complementary strand. An adenine
base in the existing strand gives rise to a thymine in the new strand, a
guanine to a cytosine, and so on. In this manner, a new double-stranded
DNA is generated that is identical to the parent DNA.
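The copying rule is simple enough to state as a few lines of code. The sketch below is a modern illustration, not part of the historical work: it derives the complementary strand dictated by a template and confirms that copying the copy regenerates the parent.

```python
# Watson-Crick pairing rules: adenine-thymine, guanine-cytosine.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template: str) -> str:
    """Return the new strand dictated, base by base, by the template."""
    return "".join(PAIRING[base] for base in template)

parent = "ATGCGT"
daughter = complementary_strand(parent)
print(daughter)  # TACGCA
# Replicating the daughter regenerates the parent sequence, which is
# how complementary pairing preserves the code between generations.
assert complementary_strand(daughter) == parent
```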
DNA in a Test Tube
Watson and Crick’s theory provided a valuable model for the reproduction
of DNA, but it did not explain the biological mechanism
by which the process occurs. The biochemical pathway of DNA reproduction
and the role of the enzymes required for catalyzing the
reproduction process were discovered by Arthur Kornberg and his
coworkers. For his success in achieving DNA synthesis in a test tube
and for discovering and isolating an enzyme—DNA polymerase—
that catalyzed DNA synthesis, Kornberg won the 1959 Nobel Prize
in Physiology or Medicine.
To achieve DNA replication in a test tube, Kornberg found that a
small amount of preformed DNA must be present, in addition to
DNA polymerase enzyme and all four of the deoxynucleotides that
occur in DNA. Kornberg discovered that the base composition of
the newly made DNA was determined solely by the base composition
of the preformed DNA, which had been used as a template in
the test-tube synthesis. This result showed that DNA polymerase
obeys instructions dictated by the template DNA. It is thus said to
be “template-directed.” DNA polymerase was the first template-directed
enzyme to be discovered.
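The force of the composition result is easy to demonstrate. In the hypothetical sketch below, the supply of free deoxynucleotides is deliberately skewed toward G and C, yet the product's makeup still mirrors the template, because the toy polymerase takes its instructions from the template alone. The function and quantities are illustrative, not Kornberg's protocol.

```python
from collections import Counter

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def synthesize(template: str, pool: Counter) -> str:
    """Toy polymerase: draw each unit from the pool, choosing it
    solely by what the template dictates at that position."""
    product = []
    for base in template:
        needed = COMPLEMENT[base]
        assert pool[needed] > 0, "ran out of a required deoxynucleotide"
        pool[needed] -= 1
        product.append(needed)
    return "".join(product)

# A nucleotide pool skewed ten to one toward G and C...
pool = Counter({"A": 5, "T": 5, "G": 50, "C": 50})
# ...still yields a product whose base counts mirror the template's.
print(Counter(synthesize("ATTA", pool)))  # Counter({'T': 2, 'A': 2})
```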
Although test-tube synthesis was a most significant achievement,
important questions about the precise character of the newly
made DNA were still unanswered. Methods of analyzing the order,
or sequence, of the bases in DNA were not available, and hence it
could not be shown directly whether DNA made in the test tube was
an exact copy of the template of DNA or merely an approximate
copy. In addition, some DNAs prepared by DNA polymerase appeared
to be branched structures. Since chromosomes in living cells
contain long, linear, unbranched strands of DNA, this branching
might have indicated that DNA synthesized in a test tube was not
equivalent to DNA synthesized in the living cell.
Kornberg realized that the best way to demonstrate that newly
synthesized DNA is an exact copy of the original was to test the new
DNA for biological activity in a suitable system. Kornberg reasoned
that a demonstration of infectivity in viral DNA produced in a test
tube would prove that polymerase-catalyzed synthesis was virtually
error-free and equivalent to natural, biological synthesis. The
experiment, carried out by Kornberg, Mehran Goulian at Stanford
University, and Robert L. Sinsheimer at the California Institute of
Technology, was a complete success. The viral DNAs produced in a
test tube by the DNA polymerase enzyme, using a viral DNA template,
were fully infective. This synthesis showed that DNA polymerase
could copy not merely a single gene but also an entire chromosome
of a small virus without error.
Consequences
The purification of DNA polymerase and the preparation of biologically
active DNA were major achievements that influenced
biological research on DNA for decades. Kornberg’s methodology
proved to be invaluable in the discovery of other enzymes that synthesize
DNA. These enzymes have been isolated from Escherichia
coli bacteria and from other bacteria, viruses, and higher organisms.
The test-tube preparation of viral DNA also had significance in
the studies of genes and chromosomes. In the mid-1960’s, it had not
been established that a chromosome contains a continuous strand of
DNA. Kornberg and Sinsheimer’s synthesis of a viral chromosome
proved that it was, indeed, a very long strand of uninterrupted
DNA.
Kornberg and Sinsheimer’s work laid the foundation for subsequent
recombinant DNA research and for genetic engineering technology.
This technology promises to revolutionize both medicine
and agriculture. The enhancement of food production and the generation
of new drugs and therapies are only a few of the subsequent
benefits that may be expected.
See also: Artificial hormone; Cloning; Genetic “fingerprinting”;
Genetically engineered insulin; In vitro plant culture;
Synthetic amino acid; Artificial gene synthesis.
Synthetic amino acid
The invention:
A method for synthesizing amino acids by combining
water, hydrogen, methane, and ammonia and exposing the
mixture to an electric spark.
The people behind the invention:
Stanley Lloyd Miller (1930- ), an American professor of
chemistry
Harold Clayton Urey (1893-1981), an American chemist who
won the 1934 Nobel Prize in Chemistry
Aleksandr Ivanovich Oparin (1894-1980), a Russian biochemist
John Burdon Sanderson Haldane (1892-1964), a British scientist
Prebiological Evolution
The origin of life on Earth has long been a tough problem for scientists
to solve. While most scientists can envision the development
of life through geologic time from simple single-cell bacteria
to complex mammals by the processes of mutation and natural selection,
they have found it difficult to develop a theory to define
how organic materials were first formed and organized into life-forms.
This stage in the development of life before biologic systems
arose, which is called “chemical evolution,” occurred between
4.5 and 3.5 billion years ago. Although great advances in
genetics and biochemistry have shown the intricate workings of
the cell, relatively little light has been shed on the origins of this intricate
machinery of the cell. Some experiments, however, have
provided important data from which to build a scientific theory of
the origin of life. The first of these experiments was the classic
work of Stanley Lloyd Miller.
Miller worked with Harold Clayton Urey, a Nobel laureate, on the
environments of the early earth. John Burdon Sanderson Haldane, a
British biochemist, had suggested in 1929 that the earth’s early atmosphere
was a reducing one—that it contained no free oxygen. In
1952, Urey published a seminal work in planetology, The Planets, in
which he elaborated on Haldane’s suggestion, and he postulated
that the earth had formed from a cold stellar dust cloud. Urey
thought that the earth’s primordial atmosphere probably contained
elements in the approximate relative abundances found in the solar
system and the universe.
It had been discovered in 1929 that the Sun is approximately 87
percent hydrogen, and by 1935 it was known that hydrogen encompassed
the vast majority (92.8 percent) of atoms in the universe.
Urey reasoned that the earth’s early atmosphere contained mostly
hydrogen, with the oxygen, nitrogen, and carbon atoms chemically
bonded to hydrogen to form water, ammonia, and methane. Most
important, free oxygen could not exist in the presence of such an
abundance of hydrogen.
As early as the mid-1920’s, Aleksandr Ivanovich Oparin, a Russian
biochemist, had argued that the organic compounds necessary
for life had been built up on the early earth by chemical combinations
in a reducing atmosphere. The energy from the Sun would
have been sufficient to drive the reactions to produce life. Haldane
later proposed that the organic compounds would accumulate in
the oceans to produce a “dilute organic soup” and that life might
have arisen by some unknown process from that mixture of organic
compounds.
Primordial Soup in a Bottle
Miller combined the ideas of Oparin and Urey and designed a
simple, but elegant, experiment. He decided to mix the gases presumed
to exist in the early atmosphere (water vapor, hydrogen, ammonia,
and methane) and expose them to an electrical spark to determine
which, if any, organic compounds were formed. To do this,
he constructed a relatively simple system, essentially consisting of
two Pyrex flasks connected by tubing in a roughly circular pattern.
The water and gases in the smaller flask were boiled and the resulting
gas forced through the tubing into a larger flask that contained
tungsten electrodes. As the gases passed the electrodes, an electrical
spark was generated, and from this larger flask the gases and any
other compounds were condensed. The gases were recycled through
the system, whereas the organic compounds were trapped in the
bottom of the system.
Miller was trying to simulate conditions that had prevailed on
the early earth. During the one week of operation, Miller extracted
and analyzed the residue of compounds at the bottom of the system.
The results were truly astounding. He found that numerous organic
compounds had, indeed, been formed in only that one week. As
much as 15 percent of the carbon (originally in the gas methane) had
been combined into organic compounds, and at least 5 percent of
the carbon was incorporated into biochemically important compounds.
The most important compounds produced were some of
the twenty amino acids essential to life on Earth.
The formation of amino acids is significant because they are the
building blocks of proteins. Proteins consist of a specific sequence of
amino acids assembled into a well-defined pattern. Proteins are necessary
for life for two reasons. First, they are important structural
materials used to build the cells of the body. Second, the enzymes
that increase the rate of the multitude of biochemical reactions of life
are also proteins. Miller had not produced proteins themselves in the
laboratory, but he had shown clearly that the precursors of proteins,
the amino acids, were easily formed in a reducing environment
with the appropriate energy.
Perhaps the most important aspect of the experiment was the
ease with which the amino acids were formed. Of all the thousands
of organic compounds that are known to chemists, amino acids
were among those that were formed by this simple experiment. This
strongly implied that one of the first steps in chemical evolution was
not only possible but also highly probable. All that was necessary
for the synthesis of amino acids were the common gases of the solar
system, a reducing environment, and an appropriate energy source,
all of which were present on early Earth.
Consequences
Miller opened an entirely new field of research with his pioneering
experiments. His results showed that much about chemical
evolution could be learned by experimentation in the laboratory.
As a result, Miller and many others soon tried variations on
his original experiment by altering the combination of gases, using
other gases, and trying other types of energy sources. Almost all
the essential amino acids have been produced in these laboratory
experiments.
Miller’s work was based on the presumed composition of the
primordial atmosphere of Earth. The composition of this atmosphere
was calculated on the basis of the abundance of elements
in the universe. If this reasoning is correct, then it is highly likely
that there are many other bodies in the universe that have similar
atmospheres and are near energy sources similar to the Sun.
Moreover, Miller’s experiment strongly suggests that amino acids,
and perhaps life as well, should have formed on other planets.
See also: Artificial hormone; Artificial kidney.
Synchrocyclotron
The invention:
A powerful particle accelerator that performed
better than its predecessor, the cyclotron.
The people behind the invention:
Edwin Mattison McMillan (1907-1991), an American physicist
who won the Nobel Prize in Chemistry in 1951
Vladimir Iosifovich Veksler (1907-1966), a Soviet physicist
Ernest Orlando Lawrence (1901-1958), an American physicist
Hans Albrecht Bethe (1906- ), a German American physicist
The First Cyclotron
The synchrocyclotron is a large electromagnetic apparatus designed
to accelerate atomic and subatomic particles at high energies.
Therefore, it falls under the broad class of scientific devices
known as “particle accelerators.” By the early 1920’s, the experimental
work of physicists such as Ernest Rutherford and George
Gamow demanded that an artificial means be developed to generate
streams of atomic and subatomic particles at energies much
greater than those occurring naturally. This requirement led Ernest
Orlando Lawrence to develop the cyclotron, the prototype for most
modern accelerators. The synchrocyclotron was developed in response
to the limitations of the early cyclotron.
In September, 1930, Lawrence announced the basic principles behind
the cyclotron. Ionized—that is, electrically charged—particles
are admitted into the central section of a circular metal drum. Once
inside the drum, the particles are exposed to an electric field alternating
within a constant magnetic field. The alternating electric field accelerates
the particles each time they cross it, while the constant magnetic field
bends them into a circular path, or orbit. With each pass, the particles’
energy and orbital radii increase.
This process continues until the particles reach the desired energy
and velocity and are extracted from the machine for use in experiments
ranging from particle-to-particle collisions to the synthesis of
radioactive elements.
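The operating principle can be stated compactly. For a particle of charge q and mass m moving in a magnetic field B, the orbital frequency f and orbital radius r obey the standard textbook relations (supplied here for illustration; they are not quoted in the original article):

$$ f = \frac{qB}{2\pi m}, \qquad r = \frac{mv}{qB} $$

Because f does not depend on the particle's velocity v, a fixed-frequency alternating field stays in step with the particles as they spiral outward; only the radius grows. That independence holds, however, only while the mass stays constant, which is precisely where the design ran into trouble.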
Although Lawrence was interested in the practical applications
of his invention in medicine and biology, the cyclotron also was applied
to a variety of experiments in a subfield of physics called
“high-energy physics.” Among the earliest applications were studies
of the subatomic, or nuclear, structure of matter. The energetic
particles generated by the cyclotron made possible the very type of
experiment that Rutherford and Gamow had attempted earlier.
These experiments, which bombarded lithium targets with streams
of highly energetic accelerated protons, attempted to probe the inner
structure of matter.
Although funding for scientific research on a large scale was
scarce before World War II (1939-1945), Lawrence nevertheless conceived
of a 467-centimeter cyclotron that would generate particles
with energies approaching 100 million electronvolts. By the end of
the war, increases in the public and private funding of scientific research
and a demand for higher-energy particles created a situation
in which this plan looked as if it would become reality, were it not
for an inherent limit in the physics of cyclotron operation.
Overcoming the Problem of Mass
In 1937, Hans Albrecht Bethe discovered a severe theoretical limitation
to the energies that could be produced in a cyclotron. Physicist
Albert Einstein’s special theory of relativity had demonstrated
that as a particle’s velocity approaches the speed of light, its mass
increases. Bethe showed that this increase in mass gradually slows
each particle’s rotation. Because the frequency of the alternating
electric field stays constant while the rotation slows, the particles fall
out of step with the field and can be accelerated no further.
This factor set an upper limit on the energies that any cyclotron
could produce.
Edwin Mattison McMillan, a colleague of Lawrence at Berkeley,
proposed a solution to Bethe’s problem in 1945. Simultaneously and
independently, Vladimir Iosifovich Veksler of the Soviet Union proposed
the same solution. They suggested that the frequency of the
alternating electric field be slowed to meet the decreasing rotational
frequencies of the accelerating particles—in essence, “synchronizing”
the electric field with the moving particles. The result was the
synchrocyclotron.
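The size of the required correction is easy to estimate. The following Python sketch (standard physical constants; the field strength is an illustrative assumption) computes the relativistic orbital frequency of a proton at several kinetic energies, tracing the downward sweep the synchrocyclotron's oscillator must follow:

```python
import math

Q = 1.602176634e-19   # proton charge, coulombs
M0 = 1.67262192e-27   # proton rest mass, kg
C = 2.99792458e8      # speed of light, m/s
B = 1.5               # magnetic field in tesla (illustrative value)

def orbital_frequency(kinetic_mev: float) -> float:
    """Relativistic cyclotron frequency f = qB / (2*pi*gamma*m0)."""
    rest_mev = M0 * C**2 / (Q * 1e6)       # ~938 MeV for a proton
    gamma = 1.0 + kinetic_mev / rest_mev   # relativistic mass factor
    return Q * B / (2 * math.pi * gamma * M0)

for t in (0, 100, 350, 720):               # kinetic energies in MeV
    print(f"{t:4d} MeV -> {orbital_frequency(t) / 1e6:5.2f} MHz")
# The frequency falls by more than 40 percent between injection and
# 720 MeV, so a fixed radio frequency cannot stay in step with the beam.
```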
Prior to World War II, Lawrence and his colleagues had obtained
the massive electromagnet for the new 100-million-electronvolt cyclotron.
This 467-centimeter magnet would become the heart of the
new Berkeley synchrocyclotron. After initial tests proved successful,
the Berkeley team decided that it would be reasonable to convert
the cyclotron magnet for use in a new synchrocyclotron. The
apparatus was operational in November of 1946.
These high energies combined with economic factors to make the
synchrocyclotron a major achievement for the Berkeley Radiation
Laboratory. The synchrocyclotron required less voltage to produce
higher energies than the cyclotron because the obstacles cited by
Bethe were virtually nonexistent. In essence, the energies produced
by synchrocyclotrons are limited only by the economics of building
them. These factors led to the planning and construction of other
synchrocyclotrons in the United States and Europe. In 1957, the
Berkeley apparatus was redesigned in order to achieve energies of
720 million electronvolts, at that time the record for cyclotrons of
any kind.
Impact
Previously, scientists had had to rely on natural sources for highly
energetic subatomic and atomic particles with which to experiment.
In the mid-1920’s, the American physicist Robert Andrews Millikan
began his experimental work in cosmic rays, which are one natural
source of energetic particles called “mesons.” Mesons are charged
particles that have a mass more than two hundred times that of the
electron and are therefore of great benefit in high-energy physics experiments.
In February of 1949, McMillan announced the first synthetically
produced mesons using the synchrocyclotron.
McMillan’s theoretical development led not only to the development
of the synchrocyclotron but also to the development of the
electron synchrotron, the proton synchrotron, the microtron, and
the linear accelerator. Both proton and electron synchrotrons have
been used successfully to produce precise beams of muons and
pi-mesons, or pions (a type of meson).
The increased use of accelerator apparatus ushered in a new era
of physics research, which has become dominated increasingly by
large accelerators and, subsequently, larger teams of scientists and
engineers required to run individual experiments. More sophisticated
machines have generated energies in excess of 2 trillion
electronvolts at the United States’ Fermi National Accelerator Laboratory,
or Fermilab, in Illinois. Part of the huge Tevatron apparatus
at Fermilab, which generates these particles, is a proton synchrotron,
a direct descendant of McMillan and Lawrence’s early
efforts.
See also: Atomic bomb; Cyclotron; Electron microscope;
Field ion microscope; Geiger counter; Hydrogen bomb;
Mass spectrograph; Neutrino detector; Scanning tunneling microscope.
Supersonic passenger plane
The invention:
The first commercial airliner that flies passengers at
speeds in excess of the speed of sound.
The people behind the invention:
Sir Archibald Russell (1904- ), a designer with the British
Aircraft Corporation
Pierre Satre (1909- ), technical director at Sud-Aviation
Julian Amery (1919- ), British minister of aviation, 1962-1964
Geoffroy de Courcel (1912- ), French minister of aviation,
1962
William T. Coleman, Jr. (1920- ), U.S. secretary of
transportation, 1975-1977
Birth of Supersonic Transportation
On January 21, 1976, the Anglo-French Concorde became the
world’s first supersonic airliner to carry passengers on scheduled
commercial flights. British Airways flew a Concorde from London’s
Heathrow Airport to the Persian Gulf emirate of Bahrain in
three hours and thirty-eight minutes. At about the same time, Air
France flew a Concorde from Paris’s Charles de Gaulle Airport to
Rio de Janeiro, Brazil, in seven hours and twenty-five minutes.
The Concordes’ cruising speeds were about twice the speed of
sound, or 1,350 miles per hour. On May 24, 1976, the United States
and Europe became linked for the first time with commercial supersonic
air transportation. British Airways inaugurated flights
between Dulles International Airport in Washington, D.C., and
Heathrow Airport. Likewise, Air France inaugurated flights between
Dulles International Airport and Charles de Gaulle Airport.
The London-Washington, D.C., flight was flown in an unprecedented
time of three hours and forty minutes. The Paris-
Washington, D.C., flight was flown in a time of three hours and
fifty-five minutes.
The Decision to Build the SST
Events leading to the development and production of the Anglo-
French Concorde went back almost twenty years and included approximately
$3 billion in investment costs. Issues surrounding the
development and final production of the supersonic transport (SST)
were extremely complex and at times highly emotional. The concept
of developing an SST brought with it environmental concerns
and questions, safety issues both in the air and on the ground, political
intrigue of international proportions, and enormous economic
problems from costs of operations, research, and development.
In England, the decision to begin the SST project was made in October,
1956, at the Aviation Ministry headquarters in London, under the
promotion of Morien Morgan of the Royal Aircraft Establishment in
Farnborough. The decision was a response to the intense competition
from the American Boeing 707 and Douglas DC-8 subsonic
jets going into commercial service. There was little point in developing
another subsonic plane; the alternative was to go above the speed
of sound. In November, 1956, at Farnborough, the first meeting of the
Supersonic Transport Aircraft Committee, known as STAC, was held.
Members of the STAC proposed that development costs would be
in the range of $165 million to $260 million, depending on the range,
speed, and payload of the chosen SST. The committee also projected
that by 1970, there would be a world market for at least 150 to 500 supersonic
planes. Estimates were that the supersonic plane would recover
its entire research and development cost through thirty sales.
The British, in order to continue development of an SST, needed a European
partner as a way of sharing the costs and preempting objections
to proposed funding by England’s Treasury.
In 1960, the British government gave the newly organized British
Aircraft Corporation (BAC) $1 million for an SST feasibility study.
Sir Archibald Russell, BAC’s chief supersonic designer, visited Pierre
Satre, the technical director at the French firm of Sud-Aviation.
Satre’s suggestion was to evolve an SST from Sud-Aviation’s highly
successful subsonic Caravelle transport. By September, 1962, an
agreement was reached by Sud and BAC design teams on a new
SST, the Super Caravelle.
There was a bitter battle over the choice of engines with two British
engine firms, Bristol-Siddeley and Rolls-Royce, as contenders.
Sir Arnold Hall, the managing director of Bristol-Siddeley, in collaboration
with the French aero-engine company SNECMA, was eventually
awarded the contract for the engines. The engine chosen was
a “civilianized” version of the Olympus, which Bristol had been developing
for the multirole TSR-2 combat plane.
The Concorde Consortium
On November 29, 1962, the Concorde Consortium was created
by an agreement between England and the French Republic, signed
by Ministers of Aviation Julian Amery and Geoffroy de Courcel
(1912- ). The first Concorde, Model 001, rolled out from Sud-
Aviation’s St. Martin-du-Touch assembly plant on December 11,
1968. The second, Model 002, was completed at the British Aircraft
Corporation a few months later. Eight years later, on January 21,
1976, the Concorde became the world’s first supersonic airliner to
carry passengers on scheduled commercial flights.
Development of the SST did not come easily for the Anglo-
French consortium. The nature of supersonic flight created numerous
problems and uncertainties not present for subsonic flight. The
SST traveled faster than the speed of sound. Sound travels at 760
miles per hour at sea level at a temperature of 59 degrees Fahrenheit.
This speed drops to about 660 miles per hour at sixty-five thousand
feet, cruising altitude for the SST, where the air temperature
drops to 70 degrees below zero.
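Both figures follow from the standard dependence of the speed of sound on absolute temperature in air. A brief Python sketch (textbook constants; a check on the numbers above, not data from the original) reproduces them:

```python
import math

GAMMA = 1.4      # ratio of specific heats for air
R_AIR = 287.05   # specific gas constant for air, J/(kg*K)

def speed_of_sound_mph(temp_f: float) -> float:
    """Speed of sound in air at the given temperature (Fahrenheit)."""
    kelvin = (temp_f - 32) * 5 / 9 + 273.15
    return math.sqrt(GAMMA * R_AIR * kelvin) * 2.23694  # m/s -> mph

print(round(speed_of_sound_mph(59)))   # ~761 mph at sea level
print(round(speed_of_sound_mph(-70)))  # ~660 mph at SST cruise altitude
```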
The Concorde was designed to fly at a maximum of 1,450 miles
per hour. The European designers could use an aluminum alloy
construction and stay below the critical skin-friction temperatures
that required other airframe alloys, such as titanium. The Concorde
was designed with a slender curved wing surface. The design incorporated
widely separated engine nacelles, each housing two Olympus
593 jet engines. The Concorde was also designed with a “droop
snoot,” providing three positions: the supersonic configuration, a
heat-visor retracted position for subsonic flight, and a nose-lowered
position for landing patterns.
Impact
Early SST designers were faced with questions such as the intensity
and ionization effect of cosmic rays at flight altitudes of sixty to
seventy thousand feet. The “cascade effect” concerned the intensification
of cosmic radiation when particles from outer space struck a
metallic cover. Scientists looked for ways to shield passengers from
this hazard inside the aluminum or titanium shell of an SST flying
high above the protective blanket of the troposphere. Experts questioned
whether the risk of being struck by meteorites was any
greater for the SST than for subsonic jets and looked for evidence on
wind shear of jet streams in the stratosphere.
Other questions concerned the strength and frequency of clear air
turbulence above forty-five thousand feet, whether the higher ozone
content of the air at SST cruise altitude would affect the materials of
the aircraft, whether SST flights would upset or destroy the protective
nature of the earth’s ozone layer, the effect of aerodynamic heating
on material strength, and the tolerable strength of sonic booms
over populated areas. These and other questions consumed the designers
and researchers involved in developing the Concorde.
Through design research and flight tests, many of the questions
were resolved or realized to be less significant than anticipated. Several
issues did develop into environmental, economic, and international
issues. In late 1975, the British and French governments requested
permission to use the Concorde at New York’s John F.
Kennedy International Airport and at Dulles International Airport
for scheduled flights between the United States and Europe. In December,
1975, as a result of strong opposition from anti-Concorde
environmental groups, the U.S. House of Representatives approved
a six-month ban on SSTs coming into the United States so that the
impact of flights could be studied. Secretary of Transportation
William T. Coleman, Jr., held hearings to prepare for a decision by February
5, 1976, as to whether to allow the Concorde into U.S. airspace.
The British and French, if denied landing rights, threatened
to take the United States to an international court, claiming that
treaties had been violated.
The treaties in question were the Chicago Convention and Bermuda
agreements of February 11, 1946, and March 27, 1946. These
treaties prohibited the United States from banning aircraft that both
France and Great Britain had certified to be safe. The Environmental
Defense Fund contended that the United States had the right to ban
SST aircraft on environmental grounds.
Under pressure from both sides, Coleman decided to allow limited
Concorde service at Dulles and John F. Kennedy airports for a
sixteen-month trial period. Service into John F. Kennedy Airport,
however, was delayed by a ban imposed by the Port Authority of New
York and New Jersey until a suit brought by the airlines was resolved.
During the test period, detailed records were to be kept on the
Concorde’s noise levels, vibration, and engine emission levels. Other
provisions included that the plane would not fly at supersonic
speeds over the continental United States; that all flights could be
cancelled by the United States with four months’ notice, or immediately
if they proved harmful to the health and safety of Americans;
and that at the end of a year, four months of study would begin to
determine if the trial period should be extended.
The Concorde’s noise was one of the primary issues in determining
whether the plane should be allowed into U.S. airports. The Federal
Aviation Administration measured the effective perceived noise
in decibels. After three months of monitoring, the Concorde’s departure
noise at 3.5 nautical miles was found to vary from 105 to 130
decibels. The Concorde’s approach noise at one nautical mile from
threshold varied from 115 to 130 decibels. These readings were approximately
equal to noise levels of other four-engine jets, such as
the Boeing 747, on landing but were twice as loud on takeoff.
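The “twice as loud” comparison rests on the usual rule of thumb for perceived loudness: each increase of about 10 decibels doubles it. A minimal sketch, with the rule of thumb as the stated assumption and illustrative readings:

```python
def loudness_ratio(db_louder: float, db_quieter: float) -> float:
    """Rule of thumb: perceived loudness doubles for every 10 dB."""
    return 2 ** ((db_louder - db_quieter) / 10)

# A takeoff measured about 10 dB above a Boeing 747's reading would be
# perceived as roughly twice as loud.
print(loudness_ratio(120, 110))  # 2.0
```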
The Economics of Operation
Another issue of significance was the economics of Concorde’s
operation and its tremendous investment costs. In 1956, early predictions
of Great Britain’s STAC were for a world market of 150 to
500 supersonic planes. In November, 1976, Great Britain’s Gerald
Kaufman and France’s Marcel Cavaille said that production of the
Concorde would not continue beyond the sixteen vehicles then contracted
for with BAC and Sud-Aviation. There was no demand by
U.S. airline corporations for the plane. Given that the planes could
not fly at supersonic speeds over populated areas because of the
sonic boom phenomenon, markets for the SST had to be separated
by at least three thousand miles, with flight paths over mostly water
or desert. Studies indicated that there were only twelve to fifteen
routes in the world for which the Concorde was suitable. The planes
were expensive, at a price of approximately $74 million each and
had a limited seating capacity of one hundred passengers. The
plane’s range was about four thousand miles.
These statistics compared unfavorably with those of the Boeing 747,
which cost $35 million, seated 360 passengers, and had a range of six
thousand miles. In
addition, the International Air Transport Association negotiated
that the fares for the Concorde flights should be equivalent to current
first-class fares plus 20 percent. The market for the Anglo-French
Concorde was thus limited to elite business travelers who valued
speed over cost of transportation. Given
these factors, the recovery of research and development costs for
Great Britain and France would never occur.
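Reducing the figures above to purchase price per seat shows why recovery was out of reach (simple arithmetic on the numbers already quoted):

```python
concorde = {"price_usd": 74e6, "seats": 100}
boeing_747 = {"price_usd": 35e6, "seats": 360}

for name, plane in (("Concorde", concorde), ("Boeing 747", boeing_747)):
    per_seat = plane["price_usd"] / plane["seats"]
    print(f"{name}: ${per_seat:,.0f} of purchase price per seat")
# Concorde: $740,000 per seat; Boeing 747: about $97,222 per seat.
# Every Concorde seat therefore carried roughly 7.6 times the capital
# cost, on an aircraft confined to a dozen or so viable routes.
```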
See also: Airplane; Bullet train; Dirigible; Rocket; Stealth aircraft;
Supersonic transport.
Supercomputer
The invention:
A computer that had the greatest computational
power that then existed.
The person behind the invention:
Seymour R. Cray (1928-1996), American computer architect and
designer
The Need for Computing Power
Although modern computers have roots in concepts first proposed
in the early nineteenth century, it was only around 1950 that they became
practical. Early computers enabled their users to calculate equations
quickly and precisely, but it soon became clear that even more
powerful computers—machines capable of receiving, computing, and
sending out data with great precision and at the highest speeds—
would enable researchers to use computer “models,” which are programs
that simulate the conditions of complex experiments.
Few computer manufacturers gave much thought to building the
fastest machine possible, because such an undertaking is expensive
and because the business use of computers rarely demands the
greatest processing power. The first company to build computers
specifically to meet scientific and governmental research needs was
Control Data Corporation (CDC). The company had been founded
in 1957 by William Norris, and its young vice president for engineering
was the highly respected computer engineer Seymour R.
Cray. When CDC decided to limit high-performance computer design,
Cray struck out on his own, starting Cray Research in 1972. His
goal was to design the most powerful computer possible. To that
end, he needed to choose the principles by which his machine
would operate; that is, he needed to determine its architecture.
The Fastest Computer
All computers rely upon certain basic elements to process data.
Chief among these elements are the central processing unit, or CPU
(which handles data), memory (where data are stored temporarily
before and after processing), and the bus (the interconnection between
memory and the processor, and the means by which data are
transmitted to or from other devices, such as a disk drive or a monitor).
The structure of early computers was based on ideas developed
by the mathematician John von Neumann, who, in the 1940’s,
conceived a computer architecture in which the CPU controls all
events in a sequence: It fetches data from memory, performs calculations
on those data, and then stores the results in memory. Because it
functions in sequential fashion, the speed of this “scalar processor”
is limited by the rate at which the processor is able to complete each
cycle of tasks.
Before Cray produced his first supercomputer, other designers
tried different approaches. One alternative was to link a vector processor
to a scalar unit. A vector processor achieves its speed by performing
computations on a large series of numbers (called a vector)
at one time rather than in sequential fashion, though specialized
and complex programs were necessary to make use of this feature.
In fact, vector processing computers spent most of their time operating
as traditional scalar processors and were not always efficient at
switching back and forth between the two processing types.
Another option chosen by Cray’s competitors was the notion of
“pipelining” the processor’s tasks. A scalar processor often must
wait while data are retrieved or stored in memory. Pipelining techniques
allow the processor to make use of idle time for calculations
in other parts of the program being run, thus increasing the effective
speed. A variation on this technique is “parallel processing,” in
which multiple processors are linked. If each of, for example, eight
central processors is given a portion of a computing task to perform,
the task will be completed more quickly than the traditional von
Neumann architecture, with its single processor, would allow.
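The contrast between the two styles can be sketched in modern terms, using NumPy's array arithmetic as a stand-in for a hardware vector unit (a present-day analogy, not the Cray instruction set):

```python
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Scalar (von Neumann) style: fetch, add, and store one element per step.
scalar_sum = np.empty_like(a)
for i in range(len(a)):
    scalar_sum[i] = a[i] + b[i]

# Vector style: one operation issued over the whole series at once,
# the same idea a vector processor implements in hardware.
vector_sum = a + b

assert np.array_equal(scalar_sum, vector_sum)
```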
Ever the pragmatist, however, Cray decided to employ proven
technology rather than use advanced techniques in his first supercomputer,
the Cray 1, which was introduced in 1976. Although the
Cray 1 did incorporate vector processing, Cray used a simple form
of vector calculation that made the technique practical and easy to
use. Most striking about this computer was its shape, which was far
more modern than its internal design. The Cray 1 was shaped like a
cylinder with a small section missing and a hollow center, with
what appeared to be a bench surrounding it. The shape of the machine
was designed to minimize the length of the interconnecting
wires that ran between circuit boards to allow electricity to move the
shortest possible distance. The bench concealed an important part
of the cooling system that kept the system at an appropriate operating
temperature.
The measurements that describe the performance of supercomputers
are called MIPS (millions of instructions per second) for scalar
processors and megaflops (millions of floating-point operations per
second) for vector processors. (Floating-point numbers are numbers
expressed in scientific notation; for example, 10²⁷.) Whereas the fastest
computer before the Cray 1 was capable of some 35 MIPS, the
Cray 1 was capable of 80 MIPS. Moreover, the Cray 1 was theoretically
capable of vector processing at the rate of 160 megaflops, a rate
unheard of at the time.
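Those ratings translate directly into running time for a large job. A back-of-envelope comparison using the figures above (treating one instruction as one operation, a deliberate simplification):

```python
operations = 1e9  # a billion floating-point operations (illustrative job)

cray_1_vector_rate = 160e6  # Cray 1 theoretical peak: 160 megaflops
prior_scalar_rate = 35e6    # fastest earlier machine: ~35 MIPS

print(operations / cray_1_vector_rate)  # 6.25 seconds at peak vector rate
print(operations / prior_scalar_rate)   # ~28.6 seconds on the older machine
```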
Consequences
Seymour Cray first estimated that there would be few buyers for
a machine as advanced as the Cray 1, but his estimate turned out to
be incorrect. There were many scientists who wanted to perform
computer modeling (in which scientific ideas are expressed in such
a way that computer-based experiments can be conducted) and
who needed raw processing power.
When dealing with natural phenomena such as the weather or
geological structures, or in rocket design, researchers need to make
calculations involving large amounts of data. Before computers,
advanced experimental modeling was simply not possible, since
even the modest calculations for the development of atomic energy,
for example, consumed days and weeks of scientists’ time.
With the advent of supercomputers, however, large-scale computation
of vast amounts of information became possible. Weather
researchers can design a detailed program that allows them to analyze
complex and seemingly unpredictable weather events such
as hurricanes; geologists searching for oil fields can gather data
about successful finds to help identify new ones; and spacecraft
designers can “describe” in computer terms experimental ideas
that are too costly or too dangerous to carry out. As supercomputer
performance evolves, there is little doubt that scientists will
make ever greater use of its power.
Seymour R. Cray
Seymour R. Cray was born in 1928 in Chippewa Falls, Wisconsin.
The son of a civil engineer, he became interested in radio
and electronics as a boy. After graduating from high school in
1943, he joined the U.S. Army, was posted to Europe in an infantry
communications platoon, and fought in the Battle of the
Bulge. Back from the war, he pursued his interest in electronics
in college while majoring in mathematics at the University of
Minnesota. Upon graduation in 1950, he took a job at Engineering
Research Associates. It was there that he first learned
about computers. In fact, he helped design one of the earliest digital
computers, the ERA 1103, later marketed as the UNIVAC 1103.
Cray co-founded Control Data Corporation in 1957. Based
on his ideas, the company built large-scale, high-speed computers.
In 1972 he founded his own company, Cray Research Incorporated,
with the intention of employing new processing methods
and simplifying architecture and software to build the
world’s fastest computers. He succeeded, and the series of computers
that the company marketed made possible computer
modeling as a central part of scientific research in areas as diverse
as meteorology, oil exploration, and nuclear weapons design.
Through the 1970’s and 1980’s Cray Research was at the
forefront of supercomputer technology, which became one of
the symbols of American technological leadership.
In 1989 Cray left Cray Research to form still another company,
Cray Computer Corporation. He planned to build the
next-generation supercomputer, the Cray 3, but advances in microprocessor
technology undercut the demand for supercomputers.
Cray Computer entered bankruptcy in 1995. A year later
he died from injuries sustained in an automobile accident near
Colorado Springs, Colorado.
See also: Apple II computer; BINAC computer; Colossus computer;
ENIAC computer; IBM Model 1401 computer; Personal computer.
Steelmaking process
The invention:
Known as the basic oxygen, or L-D, process, a
method for producing steel that worked about twelve times
faster than earlier methods.
The people behind the invention:
Henry Bessemer (1813-1898), the English inventor of a process
for making steel from iron
Robert Durrer (1890-1978), a Swiss scientist who first proved
the workability of the oxygen process in a laboratory
F. A. Loosley (1891-1966), head of research and development at
Dofasco Steel in Canada
Theodor Suess (1894-1956), works manager at Voest
Ferrous Metal
The modern industrial world is built on ferrous metal. Until
1857, ferrous metal meant cast iron and wrought iron, though a few
specialty uses of steel, especially for cutlery and swords, had existed
for centuries. In 1857, Henry Bessemer developed the first large-scale
method of making steel, the Bessemer converter. By the 1880’s,
modification of his concepts (particularly the development of a “basic”
process that could handle ores high in phosphorus) had made
large-scale production of steel possible.
Bessemer’s invention depended on the use of ordinary air, infused
into the molten metal, to burn off excess carbon. Bessemer himself
had recognized that if it had been possible to use pure oxygen instead
of air, oxidation of the carbon would be far more efficient and rapid.
Pure oxygen was not available in Bessemer’s day, except at very high
prices, so steel producers settled for what was readily available, ordinary
air. In 1929, however, the Linde-Fränkl process for separating the
oxygen in air from the other elements was discovered, and for the
first time inexpensive oxygen became available.
Nearly twenty years elapsed before the ready availability of pure
oxygen was applied to refining the method of making steel. The first
experiments were carried out in Switzerland by Robert Durrer. In
1949, he succeeded in making steel expeditiously in a laboratory setting
through the use of a blast of pure oxygen. Switzerland, however,
had no large-scale metallurgical industry, so the Swiss turned
the idea over to the Austrians, who for centuries had exploited the
large deposits of iron ore in a mountain in central Austria. Theodor
Suess, the works manager of the state-owned Austrian steel complex,
Voest, instituted some pilot projects. The results were sufficiently
favorable to induce Voest to authorize construction of production
converters. In 1952, the first “heat” (as a batch of steel is
called) was “blown in” at the Voest works in Linz. The following
year, another converter was put into production at the works in
Donawitz. These two initial locations led to the basic oxygen process
sometimes being referred to as the L-D (Linz-Donawitz) process.
The L-D Process
The basic oxygen, or L-D, process makes use of a converter similar
to the Bessemer converter. Unlike the Bessemer, however, the L-D
converter blows pure oxygen into the molten metal from above
through a water-cooled injector known as a lance. The oxygen burns
off the excess carbon rapidly, and the molten metal can then be
poured off into ingots, which can later be reheated and formed into
the ultimately desired shape. The great advantage of the process is
the speed with which a “heat” reaches the desirable metallurgical
composition for steel, with a carbon content between 0.1 percent
and 2 percent. The basic oxygen process requires about forty minutes.
In contrast, the prevailing method of making steel, using an
open-hearth furnace (which transferred the technique from the
closed Bessemer converter to an open-burning furnace to which the
necessary additives could be introduced by hand) requires eight to
eleven hours for a “heat” or batch.
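The headline claim that the process worked about twelve times faster follows directly from these heat times, as a little arithmetic shows:

```python
HOURS_PER_DAY = 24

basic_oxygen_heat = 40 / 60      # about forty minutes per heat, in hours
open_hearth_heat = (8 + 11) / 2  # eight to eleven hours; use the midpoint

heats_bop = HOURS_PER_DAY / basic_oxygen_heat  # 36 heats per day
heats_oh = HOURS_PER_DAY / open_hearth_heat    # ~2.5 heats per day

print(round(heats_bop / heats_oh, 1))
# ~14.3 with the midpoint; against an 8-hour open-hearth heat the ratio
# is exactly 12, matching the "about twelve times faster" figure.
```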
The L-D process was not without its drawbacks, however. It was
adopted by the Austrians because, by carefully calibrating the timing
and amount of oxygen introduced, they could turn their moderately
phosphoric ore into steel without further intervention. The
process required ore of a standardized metallurgical, or chemical,
content, for which the lancing had been calculated. It produced a
large amount of iron-oxide dust that polluted the surrounding atmosphere,
and it required a lining in the converter of dolomitic
brick. The specific chemical content of the brick contributed to the
chemical mixture that produced the desired result.
The Austrians quickly realized that the process was an improvement.
In May, 1952, the patent specifications for the new process
were turned over to a new company, Brassert Oxygen Technik, or
BOT, which filed patent applications around the world. BOT embarked
on an aggressive marketing campaign, bringing potential
customers to Austria to observe the process in action. Despite BOT’s
efforts, the new process was slow to catch on, even though in 1953
BOT licensed a U.S. firm, Kaiser Engineers, to spread the process in
the United States.
Many factors serve to explain the reluctance of steel producers
around the world to adopt the new process. One of these was the
large investment most major steel producers had in their open-hearth
furnaces. Another was uncertainty about the pollution factor.
Later, special pollution-control equipment would be developed
to deal with this problem. A third concern was whether the necessary
refractory liners for the new converters would be available. A
fourth was the fact that the new process could handle a load that
contained no more than 30 percent scrap, preferably less. In practice,
therefore, it would only work where a blast furnace smelting
ore was already set up.
One of the earliest firms to show serious interest in the new technology
was Dofasco, a Canadian steel producer. Between 1952 and
1954, Dofasco, pushed by its head of research and development, F.
A. Loosley, built pilot operations to test the methodology. The results
were sufficiently promising that in 1954 Dofasco built the first
basic oxygen furnace outside Austria. Dofasco had recently built its
own blast furnace, so it had ore available on site. It was able to devise
ways of dealing with the pollution problem, and it found refractory
liners that would work. It became the first North American
producer of basic oxygen steel.
Having bought the licensing rights in 1953, Kaiser Engineers was
looking for a U.S. steel producer adventuresome enough to invest in
the new technology. It found that producer in McLouth Steel, a
small steel plant in Detroit, Michigan. Kaiser Engineers supplied
much of the technical advice that enabled McLouth to build the first
U.S. basic oxygen steel facility, though McLouth also sent one of its
engineers to Europe to observe the Austrian operations. McLouth,
which had backing from General Motors, also made use of technical
descriptions in the literature.
The Specifications Question
One factor that held back adoption of basic oxygen steelmaking
was the question of specifications. Many major engineering projects
came with precise specifications detailing the type of steel to be
used and even the method of its manufacture. Until basic oxygen
steel was recognized as an acceptable form by the engineering fra-
ternity, so that job specifications included it as appropriate in specific
applications, it could not find large-scale markets. It took a
number of years for engineers to modify their specifications so that
basic oxygen steel could be used.
The next major conversion to the new steelmaking process occurred
in Japan. The Japanese had learned of the process early,
while Japanese metallurgical engineers were touring Europe in
1951. Some of them stopped off at the Voest works to look at the pilot
projects there, and they talked with the Swiss inventor, Robert
Durrer. These engineers carried knowledge of the new technique
back to Japan. In 1957 and 1958, Yawata Steel and Nippon Kokan,
the largest and third-largest steel producers in Japan, decided to implement
the basic oxygen process. An important contributor to this
decision was the Ministry of International Trade and Industry, which
brokered a licensing arrangement through Nippon Kokan, which in
turn had signed a one-time payment arrangement with BOT. The
licensing arrangement allowed other producers besides Nippon
Kokan to use the technique in Japan.
The Japanese made two important technical improvements in
the basic oxygen technology. They developed a multiholed lance for
blowing in oxygen, thus dispersing it more effectively in the molten
metal and prolonging the life of the refractory lining of the converter
vessel. They also pioneered the OG process for recovering
some of the gases produced in the converter. This procedure reduced
the pollution generated by the basic oxygen converter.
The first large American steel producer to adopt the basic oxygen
process was Jones and Laughlin, which decided to implement the
new process for several reasons. It had some of the oldest equipment
in the American steel industry, ripe for replacement. It also
had experienced significant technical difficulties at its Aliquippa
plant, difficulties it was unable to solve by modifying its open-hearth
procedures. It therefore signed an agreement with Kaiser Engineers
to build some of the new converters for Aliquippa. These
converters were constructed on license from Kaiser Engineers by
Pennsylvania Engineering, with the exception of the lances, which
were imported from Voest in Austria. Subsequent lances, however,
were built in the United States. Some of Jones and Laughlin’s production
managers were sent to Dofasco for training, and technical
advisers were brought to the Aliquippa plant both from Kaiser Engineers
and from Austria.
Other European countries were somewhat slower to adopt the
new process. A major cause for the delay was the necessary modification
of the process to fit the high phosphoric ores available in Germany
and France. Europeans also experimented with modifications
of the basic oxygen technique by developing converters that revolved.
These converters, known as Kaldo in Sweden and Rotor in
Germany, proved in the end to have sufficient technical difficulties
that they were abandoned in favor of the standard basic oxygen
converter. The problems they had been designed to solve could be
better dealt with through modifications of the lance and through
adjustments in additives.
By the mid-1980’s, the basic oxygen process had spread throughout
the world. Neither Japan nor the European Community was
producing any steel by the older, open-hearth method. In conjunction
with the electric arc furnace, fed largely on scrap metal, the basic
oxygen process had transformed the steel industry of the world.
Impact
The basic oxygen process has significant advantages over older
procedures. It does not require additional heat, whereas the open-hearth
technique calls for the infusion of nine to twelve gallons of
fuel oil to raise the temperature of the metal to the level necessary to
burn off all the excess carbon. The investment cost of the converter
is about half that of an open-hearth furnace. Fewer refractories are
required, less than half those needed in an open-hearth furnace.
Most important of all, however, a “heat” requires less than an hour,
as compared with the eight or more hours needed for a “heat” in an
open-hearth furnace.
There were some disadvantages to the basic oxygen process. Perhaps
the most important was the limited amount of scrap that could
be included in a “heat,” a maximum of 30 percent. Because the process
required at least 70 percent new ore, it could be operated most
effectively only in conjunction with a blast furnace. Counterbalancing
this last factor was the rapid development of the electric arc
furnace, which could operate with 100 percent scrap. A firm with its
own blast furnace could, with both an oxygen converter and an electric
arc furnace, handle the available raw material.
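The constraint reduces to a one-line charge check (a hypothetical helper built on the 30 percent ceiling stated above):

```python
def charge_ok(scrap_tons: float, hot_metal_tons: float) -> bool:
    """Basic oxygen converter: scrap may be at most 30% of the charge;
    the balance must be molten iron fresh from a blast furnace."""
    return scrap_tons / (scrap_tons + hot_metal_tons) <= 0.30

print(charge_ok(30, 70))  # True  -- right at the ceiling
print(charge_ok(50, 50))  # False -- too much scrap for the process
```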
The advantages of the basic oxygen process overrode the disadvantages.
Some other new technologies combined to produce this
effect. The most important of these was continuous casting. Because
of the short time required for a “heat,” it was possible, if a plant had
two or three converters, to synchronize output with the fill needs of
a continuous caster, thus largely canceling out some of the economic
drawbacks of the batch process. Continuous production, always
more economical, was now possible in the basic steel industry, particularly
after development of computer-controlled rolling mills.
These new technologies forced major changes in the world’s steel
industry. Labor requirements for the basic oxygen converter were
about half those for the open-hearth furnace. The high speed of the
new technology required far less manual labor but much more technical
expertise. Labor requirements were significantly reduced, producing
major social dislocations in steel-producing regions. This effect
was magnified by the fact that demand for steel dropped
sharply in the 1970’s, further reducing the need for steelworkers.
The U.S. steel industry was slower than either the Japanese or the
European to convert to the basic oxygen technique. The U.S. industry
generally operated with larger quantities, and it took a number
of years before the basic oxygen technique was adapted to converters
with an output equivalent to that of the open-hearth furnace. By
the time that had happened, world steel demand had begun to
drop. U.S. companies were less profitable, failing to generate internally
the capital needed for the major investment involved in
abandoning open-hearth furnaces for oxygen converters. Although
union contracts enabled companies to change work assignments
when new technologies were introduced, there was stiff resistance
to reducing employment of steelworkers, most of whom had lived
all their lives in one-industry towns. Finally, engineers at the steel
firms were wedded to the old methods and reluctant to change, as
were the large bureaucracies of the big U.S. steel firms.
The basic oxygen technology in steel is part of a spate of new
technical developments that have revolutionized industrial production,
drastically reducing the role of manual labor and dramatically
increasing the need for highly skilled individuals with technical ex-
pertise. Because capital costs are significantly lower than for alternative
processes, it has allowed a number of developing countries
to enter a heavy industry and compete successfully with the old industrial
giants. It has thus changed the face of the steel industry.
Henry Bessemer
Henry Bessemer was born in the small village of Charlton,
England, in 1813. His father was an early example of a technician,
specializing in steam engines, and operated a business
making metal type for printing presses. The elder Bessemer
wanted his son to attend university, but Henry preferred to
study under his father. During his apprenticeship, he learned
the properties of alloys. At seventeen he moved to London to
open his own business, which fabricated specialty metals.
Three years later the Royal Academy held an exhibition of
Bessemer’s work. His career, well begun, moved from one invention
to another until at his death in 1898 he held 114 patents.
Among them were processes for casting type and producing
graphite for pencils; methods for manufacturing glass, sugar,
bronze powder, and ships; and his best known creation, the Bessemer
converter for making steel from iron. Bessemer built his
first converter in 1855; fifteen years later Great Britain was producing
half of the world’s steel.
Bessemer’s life and career were models of early Industrial
Age industry, prosperity, and longevity. A millionaire from patent
royalties, he retired at fifty-nine, lived another twenty-six
years, working on yet more inventions and cultivating astronomy
as a hobby, and was married for sixty-four years. Among
his many awards and honors was a knighthood, bestowed by
Queen Victoria.
See also: Assembly line; Buna rubber; Disposable razor; Laminated glass;
Memory metal; Neoprene; Oil-well drill bit; Steelmaking.
Sunday, November 4, 2012
Stealth aircraft
The invention:
The first generation of “radar-invisible” aircraft,
stealth planes were designed to elude enemy radar systems.
The people behind the invention:
Lockheed Corporation, an American research and development firm
Northrop Corporation, an American aerospace firm
Radar
During World War II, two weapons were developed that radically
altered the thinking of the U.S. military-industrial establishment
and the composition of U.S. military forces. These weapons
were the atomic bombs that were dropped on the Japanese cities of
Hiroshima and Nagasaki by U.S. forces and “radio detection and
ranging,” or radar. Radar saved the English during the Battle of Britain,
and it was radar that made it necessary to rethink aircraft design.
With radar, attacking aircraft can be detected hundreds of
miles from their intended targets, which makes it possible for those
aircraft to be intercepted before they can attack. During World
War II, radar, using microwaves, was able to relay the number, distance,
direction, and speed of German aircraft to British fighter interceptors.
This development allowed the fighter pilots of the Royal
Air Force, “the few” who were so highly praised by Winston Churchill,
to shoot down four times as many planes as they lost.
Because of the development of radar, American airplane design
strategy has been to reduce the planes’ cross sections, reduce or
eliminate the use of metal by replacing it with composite materials,
and eliminate the angles that are found on most aircraft control surfaces.
These actions help make aircraft less visible, and in some
cases almost invisible, to radar. The Lockheed F-117A Nighthawk
and the Northrop B-2 Stealth Bomber are the results of these efforts.
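The payoff of these measures follows from the standard radar range equation, in which maximum detection range grows only as the fourth root of the target's radar cross section. Put the other way around, shrinking the cross section by a factor of 10,000 shortens the detection range by a factor of only ten, and that tenfold reduction is what opens corridors through a radar net. A rough Python illustration (the baseline range is a hypothetical figure, not a published radar specification):

def relative_detection_range(rcs_ratio: float) -> float:
    # By the radar range equation, maximum range scales as
    # sigma ** (1/4), where sigma is the radar cross section.
    return rcs_ratio ** 0.25

baseline_km = 400.0  # hypothetical detection range for a conventional bomber
for ratio in (1.0, 1e-2, 1e-4):
    print(f"RCS x {ratio:g}: detected at about "
          f"{baseline_km * relative_detection_range(ratio):.0f} km")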
Airborne “Ninjas”
Hidden inside Lockheed Corporation is a research and development
organization that is unique in the corporate world.
This facility has provided the Air Force with the Sidewinder heat-seeking
missile; the SR-71, a titanium-skinned aircraft that can fly
at more than three times the speed of sound; and, most recently, the F-117A
Nighthawk. The Nighthawk eluded Iraqi radar so effectively during
the 1991 Persian Gulf War that the Iraqis nicknamed it Shaba,
which is an Arabic word that means ghost. In an unusual move
for military projects, the Nighthawk was delivered to the Air
Force in 1982, before the plane had been perfected. This was done
so that Air Force pilots could test fly the plane and provide input
that could be used to improve the aircraft before it went into full
production.
The Northrop B-2 Stealth Bomber was the result of a design philosophy
that was completely different from that of the F-117A
Nighthawk. The F-117A has a very angular appearance,
but the angles are all greater than 180 degrees. This configuration
spreads out radar waves rather than allowing them to be concentrated
and sent back to their point of origin. The B-2, however,
stays away from angles entirely, opting for a smooth surface that
also acts to spread out the radar energy. (The B-2 so closely resembles
the YB-49 Flying Wing, which was developed in the late 1940’s,
that it even has the same wingspan.) The surface of the aircraft is
covered with radar-absorbing material and carries its engines and
weapons inside to reduce the radar cross section. There are no vertical
control surfaces, which has the disadvantage of making the aircraft
unstable, so the stabilizing system uses computers to make
small adjustments in the control elements on the trailing edges of
the wings, thus increasing the craft’s stability.
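The stabilizing loop described here can be sketched, in very reduced form, as a proportional-derivative controller that deflects the trailing-edge surfaces against any yaw error many times per second. This is only a minimal illustration of computer-in-the-loop stabilization; the gains and the control law are hypothetical stand-ins, not Northrop's actual flight-control system:

def pd_stabilizer(yaw_error_deg: float, yaw_rate_deg_s: float,
                  kp: float = 0.8, kd: float = 0.3) -> float:
    # Return a small trailing-edge deflection (degrees) that opposes
    # the current yaw error. kp and kd are hypothetical gains; a real
    # flight-control law is far more elaborate and runs continuously.
    return -(kp * yaw_error_deg + kd * yaw_rate_deg_s)

# Aircraft yawed 2 degrees right and still drifting right:
print(pd_stabilizer(yaw_error_deg=2.0, yaw_rate_deg_s=0.5))  # small left correction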
The F-117A Nighthawk and the B-2 Stealth Bomber are the “ninjas”
of military aviation. Capable of striking powerfully, rapidly,
and invisibly, these aircraft added a dimension to the U.S. Air Force
that did not exist previously. Before the advent of these aircraft, missions
that required radar-avoidance tactics had to be flown beneath
ground-based radar coverage, at altitudes below about 30.5 meters
(100 feet). Such low-altitude flight is dangerous because of both the
increased difficulty of maneuvering and vulnerability to ground
fire. Additionally, such flying does not conceal the aircraft from the
airborne radar carried by such craft as the American E-3A AWACS
and the former Soviet Mainstay. In a major conflict, the only aircraft
that could effectively penetrate enemy airspace would be the Nighthawk
and the B-2.
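Why low flight hides an aircraft is a matter of line-of-sight geometry: a ground radar's horizon grows with the square roots of the antenna and target heights. A quick Python sketch using the standard 4/3-earth approximation (the 10-meter antenna height is a hypothetical example):

import math

def radar_horizon_km(antenna_m: float, target_m: float) -> float:
    # Standard 4/3-earth line-of-sight approximation:
    # d (km) ~ 4.12 * (sqrt(h_antenna) + sqrt(h_target)), heights in meters.
    return 4.12 * (math.sqrt(antenna_m) + math.sqrt(target_m))

# A 10-meter radar mast against a plane at 30.5 m versus 10,000 m:
print(radar_horizon_km(10.0, 30.5))     # ~36 km: very little warning
print(radar_horizon_km(10.0, 10000.0))  # ~425 km: detected far out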
The purpose of the B-2 was to carry nuclear weapons into hostile
airspace undetected. With the demise of the Soviet Union, mainland
China seemed the only remaining major nuclear threat. For this reason,
many defense experts believed that there was no longer a need
for two radar-invisible planes, and cuts in U.S. military expenditures
threatened the B-2 program during the early 1990’s.
Consequences
The development of the Nighthawk and the B-2 meant that the
former Soviet Union would have had to spend at least $60 billion to
upgrade its air defense forces to meet the challenge offered by these
aircraft. This fact, combined with the evolution of the Strategic Defense
Initiative, commonly called “Star Wars,” led to the United
States’ victory in the arms race. Additionally, stealth technology has
found its way onto the conventional battlefield.
As was shown in 1991 during the Desert Storm campaign in Iraq,
targets that have strategic importance are often surrounded by a
network of anti-air missiles and gun emplacements. During the
Desert Storm air war, the F-117A was the only Allied aircraft to be
assigned to targets in Baghdad. Nighthawks destroyed more than 47
percent of the strategic areas that were targeted, and every pilot and
plane returned to base unscathed.
Since the world appears to be moving away from superpower
conflicts and toward smaller regional conflicts, stealth aircraft may
come to be used more for surveillance than for air attacks. This is
particularly true because the SR-71, which previously played the
primary role in surveillance, has been retired from service.
See also: Airplane; Cruise missile; Hydrogen bomb; Radar; Rocket.
Saturday, October 27, 2012
Sonar
The invention:
A device that detects sound waves transmitted
through water, sonar was originally developed to detect enemy
submarines but is also used in navigation, fish location, and
ocean mapping.
The people behind the invention:
Jacques Curie (1855-1941), a French physicist
Pierre Curie (1859-1906), a French physicist
Paul Langévin (1872-1946), a French physicist
Active Sonar, Submarines, and Piezoelectricity
Sonar, which stands for sound navigation and ranging, is the
American name for a device that the British call “asdic.” There are
two types of sonar. Active sonar, the more widely used of the two
types, detects and locates underwater objects when those objects reflect
sound pulses sent out by the sonar. Passive sonar merely listens
for sounds made by underwater objects. Passive sonar is used
mostly when the loud signals produced by active sonar cannot be
used (for example, in submarines).
The invention of active sonar was the result of American, British,
and French efforts, although it is often credited to Paul Langévin,
who built the first working active sonar system by 1917. Langévin’s
original reason for developing sonar was to locate icebergs, but the
horrors of German submarine warfare in World War I led to the new
goal of submarine detection. Both Langévin’s short-range system
and long-range modern sonar depend on the phenomenon of “piezoelectricity,”
which was discovered by Pierre and Jacques Curie in
1880. (Piezoelectricity is electricity that is produced by certain materials,
such as certain crystals, when they are subjected to pressure.)
Since its invention, active sonar has been improved and its capabilities
have been increased. Active sonar systems are used to detect
submarines, to navigate safely, to locate schools of fish, and to map
the oceans.
Sonar Theory, Development, and Use
Although active sonar had been developed by 1917, it was not
available for military use until World War II. A notable earlier
use of sonar was measuring the depth of the ocean.
That use began when the 1922 German Meteor Oceanographic Expedition
was equipped with an active sonar system. The system
was to be used to help pay German World War I debts by aiding in
the recovery of gold from wrecked vessels. It was not used successfully
to recover treasure, but the expedition’s use of sonar to determine
ocean depth led to the discovery of the Mid-Atlantic Ridge.
This development revolutionized underwater geology.
Active sonar operates by sending out sound pulses, often called
“pings,” that travel through water and are reflected as echoes when
they strike large objects. Echoes from these targets are received by
the system, amplified, and interpreted. Sound is used instead of
light or radar because its absorption by water is much lower. The
time that passes between ping transmission and the return of an
echo is used to identify the distance of a target from the system by
means of a method called “echo ranging.” The basis for echo ranging
is the normal speed of sound in seawater (5,000 feet per second).
The distance of the target from the sonar system is calculated by
means of a simple equation: range = speed of sound × 0.5 × elapsed
time. The time is halved because it is made up of the time
taken to reach the target and the time taken to return.
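The echo-ranging equation is easy to put in executable form. A minimal Python sketch using the nominal 5,000 feet per second figure (the four-second echo time is an invented example):

SOUND_SPEED_FT_S = 5000.0  # nominal speed of sound in seawater

def echo_range_ft(elapsed_s: float) -> float:
    # range = speed of sound * 0.5 * elapsed time; the factor of 0.5
    # accounts for the ping traveling out and the echo traveling back.
    return SOUND_SPEED_FT_S * 0.5 * elapsed_s

# An echo returning 4 seconds after the ping puts the target
# 5000 * 0.5 * 4 = 10,000 feet away.
print(echo_range_ft(4.0))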
The ability of active sonar to resolve detail increases as the
wavelength of the transmitted sound pulses is decreased, that is,
as their frequency is raised. Interpreting active sonar data is complicated
by many factors. These include the roughness of the ocean, which
scatters sound and causes the strength of echoes to vary, making
it hard to estimate the size and identity of a target; the speed of
the sound wave, which changes in accordance with variations in
water temperature, pressure, and saltiness; and noise caused by
waves, sea animals, and ships, which limits the range of active sonar
systems.
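The variation of sound speed with temperature, salinity, and depth is well enough characterized that standard empirical formulas exist for it. The sketch below uses Medwin's widely quoted approximation, valid for roughly 0-35 degrees Celsius, salinities of 0-45 parts per thousand, and depths to about 1,000 meters:

def sound_speed_m_s(temp_c: float, salinity_ppt: float, depth_m: float) -> float:
    # Medwin's empirical approximation for sound speed in seawater (m/s).
    t, s, z = temp_c, salinity_ppt, depth_m
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (s - 35.0) + 0.016 * z)

# Warm surface water versus cold water at depth:
print(sound_speed_m_s(20.0, 35.0, 10.0))   # ~1522 m/s
print(sound_speed_m_s(4.0, 35.0, 800.0))   # ~1480 m/s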
A simple active pulse sonar system generates an electrical signal
of a given frequency and duration. The signal is amplified and
converted by a piezoelectric transducer into sound, which enters the
water. Any echo that returns to the system is amplified and used to
determine the identity and distance of the target.
Most active sonar systems are mounted near surface vessel keels
or on submarine hulls in one of three ways. The first and most popular
mounting method permits vertical rotation and scanning of a
section of the ocean whose center is the system’s location. The second
method, which is most often used in depth sounders, directs
the beam downward in order to measure ocean depth. The third
method, called wide scanning, involves the use of two sonar systems,
one mounted on each side of the vessel, in such a way that the
two beams that are produced scan the whole ocean at right angles to
the direction of the vessel’s movement.
Active single-beam sonar operation applies an alternating voltage
to a piezoelectric crystal, making it part of an underwater loudspeaker
(transducer) that creates a sound beam of a particular frequency.
When an echo returns, the system becomes an underwater
microphone (receiver) that identifies the target and determines its
range. The sound frequency that is used is determined by the sonar’s
purpose and the fact that the absorption of sound by water increases
with frequency. For example, long-range submarine-seeking sonar
systems (whose detection range is about ten miles) operate at 3 to 40
kilohertz. In contrast, short-range systems that work at about 500 feet
(in mine sweepers, for example) use 150 kilohertz to 2 megahertz.
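The trade-off behind these numbers, finer detail at higher frequency but shorter reach because absorption rises with frequency, can be seen by comparing wavelengths (wavelength = speed of sound / frequency) at the quoted operating points:

SOUND_SPEED_M_S = 1500.0  # nominal speed of sound in seawater

def wavelength_cm(freq_hz: float) -> float:
    # wavelength = c / f; shorter wavelengths resolve finer detail.
    return SOUND_SPEED_M_S / freq_hz * 100.0

for label, f in [("long-range search, 3 kHz", 3e3),
                 ("long-range search, 40 kHz", 40e3),
                 ("mine sweeper, 150 kHz", 150e3),
                 ("mine sweeper, 2 MHz", 2e6)]:
    print(f"{label}: wavelength {wavelength_cm(f):.3f} cm")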
Impact
Modern active sonar has affected military and nonmilitary activities
ranging from submarine location to undersea mapping and
fish location. In all these uses, two very important goals have been
to increase the ability of sonar to identify a target and to increase the
effective range of sonar. Much work related to these two goals has
involved the development of new piezoelectric materials and the replacement
of natural minerals (such as quartz) with synthetic piezoelectric
ceramics.
Efforts have also been made to redesign the organization of sonar
systems. One very useful development has been changing beammaking
transducers from one-beam units to multibeam modules
made of many small piezoelectric elements. Systems that incorporate
these developments have many advantages, particularly the ability
to search simultaneously in many directions. In addition, systems
have been redesigned to be able to scan many echo beams simultaneously
with electronic scanners that feed into a central receiver.
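One standard way such multielement modules form and steer beams electronically is delay-and-sum beamforming: the element outputs are phase-shifted so that sound arriving from a chosen direction adds coherently, and forming several beams at once just means applying several weight vectors to the same element data. A minimal narrowband Python sketch (the array geometry and frequency are hypothetical):

import numpy as np

def steer_weights(n_elements: int, spacing_m: float, freq_hz: float,
                  angle_deg: float, c: float = 1500.0) -> np.ndarray:
    # Phase weights that steer a uniform linear array toward angle_deg
    # (0 degrees is broadside, perpendicular to the array).
    k = 2.0 * np.pi * freq_hz / c                  # acoustic wavenumber
    positions = spacing_m * np.arange(n_elements)  # element positions (m)
    phases = k * positions * np.sin(np.radians(angle_deg))
    return np.exp(-1j * phases) / n_elements

def beam_output(weights: np.ndarray, element_signals: np.ndarray) -> np.ndarray:
    # Delay-and-sum: weight each element's complex signal and add.
    return weights @ element_signals

# 32 elements at half-wavelength spacing for 40 kHz, steered 20 degrees:
w = steer_weights(32, spacing_m=1500.0 / 40e3 / 2.0, freq_hz=40e3, angle_deg=20.0)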
These changes, along with computer-aided tracking and target
classification, have led to the development of greatly improved active
sonar systems. It is expected that sonar systems will become
even more powerful in the future, finding uses that have not yet
been imagined.
Paul Langévin
If he had not published the Special Theory of Relativity in
1905, Albert Einstein once said, Paul Langévin would have
done so not long afterward. Born in Paris in 1872, Langévin was
among the foremost physicists of his generation. He studied in
the best French schools of science—and with such teachers as
Pierre Curie and Jean Perrin—and became a professor of physics
at the College de France in 1904. He moved to the Sorbonne
in 1909.
Langévin’s research was always widely influential. In addition
to his invention of active sonar, he was especially noted for
his studies of the molecular structure of gases, analysis of secondary
X rays from irradiated metals, his theory of magnetism,
and work on piezoelectricity and piezoceramics. His suggestion
that magnetic properties are linked to the valence electrons of atoms
inspired Niels Bohr’s classic model of the atom. In his later
career, a champion of Einstein’s theories of relativity, Langévin
worked on the implications of the space-time continuum.
During World War II, Langévin, a pacifist, publicly denounced
the Nazis and their occupation of France. They jailed him for it.
He escaped to Switzerland in 1944, returning as soon as France
was liberated. He died in late 1946.
See also: Aqualung; Bathyscaphe; Bathysphere; Geiger counter;
Gyrocompass; Radar; Richter scale.