Wednesday, November 21, 2012
Supersonic passenger plane
The invention:
The first commercial airliner that flies passengers at
speeds in excess of the speed of sound.
The people behind the invention:
Sir Archibald Russell (1904- ), a designer with the British
Aircraft Corporation
Pierre Satre (1909- ), technical director at Sud-Aviation
Julian Amery (1919- ), British minister of aviation, 1962-1964
Geoffroy de Courcel (1912- ), French minister of aviation,
1962
William T. Coleman, Jr. (1920- ), U.S. secretary of
transportation, 1975-1977
Birth of Supersonic Transportation
On January 21, 1976, the Anglo-French Concorde became the
world’s first supersonic airliner to carry passengers on scheduled
commercial flights. British Airways flew a Concorde from London’s
Heathrow Airport to the Persian Gulf emirate of Bahrain in
three hours and thirty-eight minutes. At about the same time, Air
France flew a Concorde from Paris’s Charles de Gaulle Airport to
Rio de Janeiro, Brazil, in seven hours and twenty-five minutes.
The Concordes’ cruising speeds were about twice the speed of
sound, or 1,350 miles per hour. On May 24, 1976, the United States
and Europe became linked for the first time with commercial supersonic
air transportation. British Airways inaugurated flights
between Dulles International Airport in Washington, D.C., and
Heathrow Airport. Likewise, Air France inaugurated flights between
Dulles International Airport and Charles de Gaulle Airport.
The London-Washington, D.C., flight was flown in an unprecedented
time of three hours and forty minutes. The Paris-
Washington, D.C., flight was flown in a time of three hours and
fifty-five minutes.
The Decision to Build the SST
Events leading to the development and production of the Anglo-
French Concorde went back almost twenty years and included approximately
$3 billion in investment costs. Issues surrounding the
development and final production of the supersonic transport (SST)
were extremely complex and at times highly emotional. The concept
of developing an SST brought with it environmental concerns
and questions, safety issues both in the air and on the ground, political
intrigue of international proportions, and enormous economic
problems from costs of operations, research, and development.
In England, the decision to begin the SST project was made in October,
1956. Under the promotion of Morien Morgan of the Royal Aircraft
Establishment in Farnborough, the Aviation Ministry in London decided
to begin development of a supersonic aircraft. This decision was based on the intense competition
from the American Boeing 707 and Douglas DC-8 subsonic
jets going into commercial service. There was little point in developing
another subsonic plane; the alternative was to go above the speed
of sound. In November, 1956, at Farnborough, the first meeting of the
Supersonic Transport Aircraft Committee, known as STAC, was held.
Members of the STAC proposed that development costs would be
in the range of $165 million to $260 million, depending on the range,
speed, and payload of the chosen SST. The committee also projected
that by 1970, there would be a world market for 150 to 500 supersonic
planes. Estimates were that the supersonic plane would recover
its entire research and development cost through thirty sales.
The British, in order to continue development of an SST, needed a European
partner as a way of sharing the costs and preempting objections
to proposed funding by England’s Treasury.
In 1960, the British government gave the newly organized British
Aircraft Corporation (BAC) $1 million for an SST feasibility study.
Sir Archibald Russell, BAC’s chief supersonic designer, visited Pierre
Satre, the technical director at the French firm of Sud-Aviation.
Satre’s suggestion was to evolve an SST from Sud-Aviation’s highly
successful subsonic Caravelle transport. By September, 1962, an
agreement was reached by Sud and BAC design teams on a new
SST, the Super Caravelle.
There was a bitter battle over the choice of engines with two British
engine firms, Bristol-Siddeley and Rolls-Royce, as contenders.
Sir Arnold Hall, the managing director of Bristol-Siddeley, in collaboration
with the French aero-engine company SNECMA, was eventually
awarded the contract for the engines. The engine chosen was
a “civilianized” version of the Olympus, which Bristol had been developing
for the multirole TSR-2 combat plane.
The Concorde Consortium
On November 29, 1962, the Concorde Consortium was created
by an agreement between England and the French Republic, signed
by Ministers of Aviation Julian Amery and Geoffroy de Courcel.
The first Concorde, Model 001, rolled out from Sud-
Aviation’s St. Martin-du-Touch assembly plant on December 11,
1968. The second, Model 002, was completed at the British Aircraft
Corporation a few months later. Eight years later, on January 21,
1976, the Concorde became the world’s first supersonic airliner to
carry passengers on scheduled commercial flights.
Development of the SST did not come easily for the Anglo-
French consortium. The nature of supersonic flight created numerous
problems and uncertainties not present for subsonic flight. The
SST traveled faster than the speed of sound. Sound travels at 760
miles per hour at sea level at a temperature of 59 degrees Fahrenheit.
This speed drops to about 660 miles per hour at sixty-five thousand
feet, cruising altitude for the SST, where the air temperature
drops to 70 degrees below zero.
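The quoted figures follow from the physics of sound in air: the speed of sound depends only on temperature, so the same Mach number corresponds to a lower ground speed at altitude. The short Python sketch below is an illustration added to this article, not material from the original; it uses the conventional ideal-gas formula and standard conversion factors, and assumes a cruise Mach number of about 2 for the Concorde.

```python
# Illustration: speed of sound from temperature, a = sqrt(gamma * R * T).
# Reproduces the figures quoted above (about 760 mph at 59 F, about 660 mph
# at the -70 F found near 65,000 feet).
import math

GAMMA = 1.4            # ratio of specific heats for air
R_AIR = 287.05         # specific gas constant for air, J/(kg*K)
MPH_PER_MS = 2.23694   # metres per second -> miles per hour

def speed_of_sound_mph(temp_fahrenheit: float) -> float:
    """Speed of sound in mph at a given air temperature."""
    temp_kelvin = (temp_fahrenheit - 32.0) * 5.0 / 9.0 + 273.15
    return math.sqrt(GAMMA * R_AIR * temp_kelvin) * MPH_PER_MS

sea_level = speed_of_sound_mph(59)    # ~760 mph
cruise = speed_of_sound_mph(-70)      # ~660 mph at cruise altitude

print(f"Sea level:        {sea_level:.0f} mph")
print(f"Cruise altitude:  {cruise:.0f} mph")
print(f"Mach 2 at cruise: {2.04 * cruise:.0f} mph")  # roughly the 1,350 mph cited
```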
The Concorde was designed to fly at a maximum of 1,450 miles
per hour. The European designers could use an aluminum alloy
construction and stay below the critical skin-friction temperatures
that required other airframe alloys, such as titanium. The Concorde
was designed with a slender curved wing surface. The design incorporated
widely separated engine nacelles, each housing two Olympus
593 jet engines. The Concorde was also designed with a “droop
snoot,” providing three positions: the supersonic configuration, a
heat-visor retracted position for subsonic flight, and a nose-lowered
position for landing patterns.
Impact
Early SST designers were faced with questions such as the intensity
and ionization effect of cosmic rays at flight altitudes of sixty to
seventy thousand feet. The “cascade effect” concerned the intensification
of cosmic radiation when particles from outer space struck a
metallic cover. Scientists looked for ways to shield passengers from
this hazard inside the aluminum or titanium shell of an SST flying
high above the protective blanket of the troposphere. Experts questioned
whether the risk of being struck by meteorites was any
greater for the SST than for subsonic jets and looked for evidence on
wind shear of jet streams in the stratosphere.
Other questions concerned the strength and frequency of clear air
turbulence above forty-five thousand feet, whether the higher ozone
content of the air at SST cruise altitude would affect the materials of
the aircraft, whether SST flights would upset or destroy the protective
nature of the earth’s ozone layer, the effect of aerodynamic heating
on material strength, and the tolerable strength of sonic booms
over populated areas. These and other questions consumed the designers
and researchers involved in developing the Concorde.
Through design research and flight tests, many of the questions
were resolved or realized to be less significant than anticipated. Several
issues did develop into environmental, economic, and international
issues. In late 1975, the British and French governments requested
permission to use the Concorde at New York’s John F.
Kennedy International Airport and at Dulles International Airport
for scheduled flights between the United States and Europe. In December,
1975, as a result of strong opposition from anti-Concorde
environmental groups, the U.S. House of Representatives approved
a six-month ban on SSTs coming into the United States so that the
impact of flights could be studied. Secretary of Transportation William
T. Coleman, Jr., held hearings to prepare for a decision by February
5, 1976, as to whether to allow the Concorde into U.S. airspace.
The British and French, if denied landing rights, threatened
to take the United States to an international court, claiming that
treaties had been violated.
The treaties in question were the Chicago Convention and Bermuda
agreements of February 11, 1946, and March 27, 1946. These
treaties prohibited the United States from banning aircraft that both
France and Great Britain had certified to be safe. The Environmental
Defense Fund contended that the United States had the right to ban
SST aircraft on environmental grounds.
Under pressure from both sides, Coleman decided to allow limited
Concorde service at Dulles and John F. Kennedy airports for a
sixteen-month trial period. Service into John F. Kennedy Airport,
however, was delayed by a ban imposed by the Port Authority of New York
and New Jersey until the airlines successfully challenged the ban in court.
During the test period, detailed records were to be kept on the
Concorde’s noise levels, vibration, and engine emission levels. Other
provisions included that the plane would not fly at supersonic
speeds over the continental United States; that all flights could be
cancelled by the United States with four months’ notice, or immediately
if they proved harmful to the health and safety of Americans;
and that at the end of a year, four months of study would begin to
determine if the trial period should be extended.
The Concorde’s noise was one of the primary issues in determining
whether the plane should be allowed into U.S. airports. The Federal
Aviation Administration measured the effective perceived noise
in decibels. After three months of monitoring, the Concorde’s departure
noise at 3.5 nautical miles was found to vary from 105 to 130
decibels. The Concorde’s approach noise at one nautical mile from
threshold varied from 115 to 130 decibels. These readings were approximately
equal to noise levels of other four-engine jets, such as
the Boeing 747, on landing but were twice as loud on takeoff.
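The phrase “twice as loud” rests on the usual rule of thumb that an increase of roughly 10 decibels is perceived as a doubling of loudness. The fragment below is a minimal added illustration of that conversion; the example levels are assumed, not taken from the monitoring data.

```python
# Illustration: convert a difference in noise level (dB) into an approximate
# perceived-loudness ratio, using the rule of thumb that +10 dB ~ twice as loud.

def loudness_ratio(level_a_db: float, level_b_db: float) -> float:
    """How many times louder level A seems relative to level B."""
    return 2 ** ((level_a_db - level_b_db) / 10.0)

# Assumed example: a takeoff measured 10 dB above a subsonic four-engine jet
# would be perceived as roughly twice as loud.
print(loudness_ratio(119, 109))  # -> 2.0
```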
The Economics of Operation
Another issue of significance was the economics of Concorde’s
operation and its tremendous investment costs. In 1956, early predictions
of Great Britain’s STAC were for a world market of 150 to
500 supersonic planes. In November, 1976, Great Britain’s Gerald
Kaufman and France’s Marcel Cavaille said that production of the
Concorde would not continue beyond the sixteen vehicles then contracted
for with BAC and Sud-Aviation. There was no demand by
U.S. airline corporations for the plane. Given that the planes could
not fly at supersonic speeds over populated areas because of the
sonic boom phenomenon, markets for the SST had to be separated
by at least three thousand miles, with flight paths over mostly water
or desert. Studies indicated that there were only twelve to fifteen
routes in the world for which the Concorde was suitable. The planes
were expensive, at approximately $74 million each, and
had a limited seating capacity of one hundred passengers. The
plane’s range was about four thousand miles.
These statistics compared unfavorably with those of the Boeing 747, which cost
about $35 million, seated 360, and had a range of six thousand miles. In
addition, the International Air Transport Association negotiated
that the fares for the Concorde flights should be equivalent to current
first-class fares plus 20 percent. The marketing promotion for
the Anglo-French Concorde was thus limited to the elite business
traveler who valued speed over cost of transportation. Given
these factors, the recovery of research and development costs for
Great Britain and France would never occur.
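The gap can be made concrete with a rough comparison built only on the purchase prices, seat counts, and ranges quoted above. The calculation below is an added illustration, not an analysis from the original article.

```python
# Rough capital-cost-per-seat comparison using the figures quoted in the text.
aircraft = {
    # name: (purchase price in dollars, seats, range in miles)
    "Concorde":   (74_000_000, 100, 4_000),
    "Boeing 747": (35_000_000, 360, 6_000),
}

for name, (price, seats, rng) in aircraft.items():
    print(f"{name}: ${price / seats:,.0f} of purchase price per seat, "
          f"{rng:,}-mile range")
# Concorde:   $740,000 per seat, 4,000-mile range
# Boeing 747: roughly $97,000 per seat, 6,000-mile range
```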
See also: Airplane; Bullet train; Dirigible; Rocket; Stealth aircraft;
Supersonic transport.
Sunday, November 18, 2012
Supercomputer
The invention:
A computer with the greatest computational power of its time.
The person behind the invention:
Seymour R. Cray (1925-1996), American computer architect and
designer
The Need for Computing Power
Although modern computers have roots in concepts first proposed
in the early nineteenth century, it was only around 1950 that they became
practical. Early computers enabled their users to calculate equations
quickly and precisely, but it soon became clear that even more
powerful computers—machines capable of receiving, computing, and
sending out data with great precision and at the highest speeds—
would enable researchers to use computer “models,” which are programs
that simulate the conditions of complex experiments.
Few computer manufacturers gave much thought to building the
fastest machine possible, because such an undertaking is expensive
and because the business use of computers rarely demands the
greatest processing power. The first company to build computers
specifically to meet scientific and governmental research needs was
Control Data Corporation (CDC). The company had been founded
in 1957 by William Norris, and its young vice president for engineering
was the highly respected computer engineer Seymour R.
Cray. When CDC decided to limit high-performance computer design,
Cray struck out on his own, starting Cray Research in 1972. His
goal was to design the most powerful computer possible. To that
end, he needed to choose the principles by which his machine
would operate; that is, he needed to determine its architecture.
The Fastest Computer
All computers rely upon certain basic elements to process data.
Chief among these elements are the central processing unit, or CPU
(which handles data), memory (where data are stored temporarily
before and after processing), and the bus (the interconnection between
memory and the processor, and the means by which data are
transmitted to or from other devices, such as a disk drive or a monitor).
The structure of early computers was based on ideas developed
by the mathematician John von Neumann, who, in the 1940’s,
conceived a computer architecture in which the CPU controls all
events in a sequence: It fetches data from memory, performs calculations
on those data, and then stores the results in memory. Because it
functions in sequential fashion, the speed of this “scalar processor”
is limited by the rate at which the processor is able to complete each
cycle of tasks.
Before Cray produced his first supercomputer, other designers
tried different approaches. One alternative was to link a vector processor
to a scalar unit. A vector processor achieves its speed by performing
computations on a large series of numbers (called a vector)
at one time rather than in sequential fashion, though specialized
and complex programs were necessary to make use of this feature.
In fact, vector processing computers spent most of their time operating
as traditional scalar processors and were not always efficient at
switching back and forth between the two processing types.
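The scalar-versus-vector distinction is easiest to see in modern array-programming terms. The sketch below uses NumPy purely as an analogy, not as Cray software: the explicit loop handles one element per step, in the spirit of a scalar processor, while the single array expression corresponds to one vector operation applied to the whole series of numbers.

```python
# Analogy only: scalar-style (element-by-element) versus vector-style
# (whole-array) computation of the same result.
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Scalar-style processing: one multiply-add per loop iteration.
result_scalar = np.empty_like(a)
for i in range(len(a)):
    result_scalar[i] = a[i] * b[i] + 1.0

# Vector-style processing: the same work expressed as a single operation
# over the entire array.
result_vector = a * b + 1.0

assert np.allclose(result_scalar, result_vector)
```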
Another option chosen by Cray’s competitors was the notion of
“pipelining” the processor’s tasks. A scalar processor often must
wait while data are retrieved or stored in memory. Pipelining techniques
allow the processor to make use of idle time for calculations
in other parts of the program being run, thus increasing the effective
speed. A variation on this technique is “parallel processing,” in
which multiple processors are linked. If each of, for example, eight
central processors is given a portion of a computing task to perform,
the task will be completed more quickly than the traditional von
Neumann architecture, with its single processor, would allow.
Ever the pragmatist, however, Cray decided to employ proven
technology rather than use advanced techniques in his first supercomputer,
the Cray 1, which was introduced in 1976. Although the
Cray 1 did incorporate vector processing, Cray used a simple form
of vector calculation that made the technique practical and easy to
use. Most striking about this computer was its shape, which was far
more modern than its internal design. The Cray 1 was shaped like a
cylinder with a small section missing and a hollow center, with
what appeared to be a bench surrounding it. The shape of the machine
was designed to minimize the length of the interconnecting
wires that ran between circuit boards to allow electricity to move the
shortest possible distance. The bench concealed an important part
of the cooling system that kept the system at an appropriate operating
temperature.
The measurements that describe the performance of supercomputers
are called MIPS (millions of instructions per second) for scalar
processors and megaflops (millions of floating-point operations per
second) for vector processors. (Floating-point numbers are numbers
expressed in scientific notation; for example, 10²⁷.) Whereas the fastest
computer before the Cray 1 was capable of some 35 MIPS, the
Cray 1 was capable of 80 MIPS. Moreover, the Cray 1 was theoretically
capable of vector processing at the rate of 160 megaflops, a rate
unheard of at the time.
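A megaflops rating is simply a count of floating-point operations completed per second. The added fragment below times a vector multiply-add and reports that rate; the result depends entirely on the machine it runs on, but it makes the unit behind the Cray 1’s quoted 160-megaflops peak concrete.

```python
# Illustration: measure floating-point throughput in megaflops.
import time
import numpy as np

n = 10_000_000
a = np.random.rand(n)
b = np.random.rand(n)

start = time.perf_counter()
c = a * b + 1.0                      # about 2 floating-point operations per element
elapsed = time.perf_counter() - start

megaflops = (2 * n / elapsed) / 1e6
print(f"{megaflops:,.0f} megaflops")  # compare with the Cray 1's peak of about 160
```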
Consequences
Seymour Cray first estimated that there would be few buyers for
a machine as advanced as the Cray 1, but his estimate turned out to
be incorrect. There were many scientists who wanted to perform
computer modeling (in which scientific ideas are expressed in such
a way that computer-based experiments can be conducted) and
who needed raw processing power.
When dealing with natural phenomena such as the weather or
geological structures, or in rocket design, researchers need to make
calculations involving large amounts of data. Before computers,
advanced experimental modeling was simply not possible, since
even the modest calculations for the development of atomic energy,
for example, consumed days and weeks of scientists’ time.
With the advent of supercomputers, however, large-scale computation
of vast amounts of information became possible. Weather
researchers can design a detailed program that allows them to analyze
complex and seemingly unpredictable weather events such
as hurricanes; geologists searching for oil fields can gather data
about successful finds to help identify new ones; and spacecraft
designers can “describe” in computer terms experimental ideas
that are too costly or too dangerous to carry out. As supercomputer
performance evolves, there is little doubt that scientists will
make ever greater use of its power.
Seymour R. Cray
Seymour R. Cray was born in 1925 in Chippewa Falls, Wisconsin.
The son of a civil engineer, he became interested in radio
and electronics as a boy. After graduating from high school in
1943, he joined the U.S. Army, was posted to Europe in an infantry
communications platoon, and fought in the Battle of the
Bulge. Back from the war, he pursued his interest in electronics
in college while majoring in mathematics at the University of
Minnesota. Upon graduation in 1950, he took a job at Engineering
Research Associates. It was there that he first learned
about computers. In fact, he helped design one of the earliest digital
computers, the ERA 1103, later sold under the UNIVAC name.
Cray co-founded Control Data Corporation in 1957. Based
on his ideas, the company built large-scale, high-speed computers.
In 1972 he founded his own company, Cray Research Incorporated,
with the intention of employing new processing methods
and simplifying architecture and software to build the
world’s fastest computers. He succeeded, and the series of computers
that the company marketed made possible computer
modeling as a central part of scientific research in areas as diverse
as meteorology, oil exploration, and nuclear weapons design.
Through the 1970’s and 1980’s Cray Research was at the
forefront of supercomputer technology, which became one of
the symbols of American technological leadership.
In 1989 Cray left Cray Research to form still another company,
Cray Computer Corporation. He planned to build the
next-generation supercomputer, the Cray-3, but advances in microprocessor
technology undercut the demand for supercomputers.
Cray Computer entered bankruptcy in 1995. A year later
he died from injuries sustained in an automobile accident near
Colorado Springs, Colorado.
See also: Apple II computer; BINAC computer; Colossus computer;
ENIAC computer; IBM Model 1401 computer; Personal computer; Seymour R. Cray
Sunday, November 11, 2012
Steelmaking process
The invention:
Known as the basic oxygen, or L-D, process, a
method for producing steel that worked about twelve times
faster than earlier methods.
The people behind the invention:
Henry Bessemer (1813-1898), the English inventor of a process
for making steel from iron
Robert Durrer (1890-1978), a Swiss scientist who first proved
the workability of the oxygen process in a laboratory
F. A. Loosley (1891-1966), head of research and development at
Dofasco Steel in Canada
Theodor Suess (1894-1956), works manager at Voest
Ferrous Metal
The modern industrial world is built on ferrous metal. Until
1857, ferrous metal meant cast iron and wrought iron, though a few
specialty uses of steel, especially for cutlery and swords, had existed
for centuries. In 1857, Henry Bessemer developed the first large-scale
method of making steel, the Bessemer converter. By the 1880’s,
modification of his concepts (particularly the development of a “basic”
process that could handle ores high in phosphorus) had made
large-scale production of steel possible.
Bessemer’s invention depended on the use of ordinary air, infused
into the molten metal, to burn off excess carbon. Bessemer himself
had recognized that if it had been possible to use pure oxygen instead
of air, oxidation of the carbon would be far more efficient and rapid.
Pure oxygen was not available in Bessemer’s day, except at very high
prices, so steel producers settled for what was readily available, ordinary
air. In 1929, however, the Linde-Fränkl process for separating the
oxygen in air from the other elements was discovered, and for the
first time inexpensive oxygen became available.
Nearly twenty years elapsed before the ready availability of pure
oxygen was applied to refining the method of making steel. The first
experiments were carried out in Switzerland by Robert Durrer. In
1949, he succeeded in making steel expeditiously in a laboratory setting
through the use of a blast of pure oxygen. Switzerland, however,
had no large-scale metallurgical industry, so the Swiss turned
the idea over to the Austrians, who for centuries had exploited the
large deposits of iron ore in a mountain in central Austria. Theodor
Suess, the works manager of the state-owned Austrian steel complex,
Voest, instituted some pilot projects. The results were sufficiently
favorable to induce Voest to authorize construction of production
converters. In 1952, the first “heat” (as a batch of steel is
called) was “blown in,” at the Voest works in Linz. The following
year, another converter was put into production at the works in
Donawitz. These two initial locations led to the basic oxygen process
sometimes being referred to as the L-D process.
The L-D Process
The basic oxygen, or L-D, process makes use of a converter similar
to the Bessemer converter. Unlike the Bessemer, however, the L-D
converter blows pure oxygen into the molten metal from above
through a water-cooled injector known as a lance. The oxygen burns
off the excess carbon rapidly, and the molten metal can then be
poured off into ingots, which can later be reheated and formed into
the ultimately desired shape. The great advantage of the process is
the speed with which a “heat” reaches the desirable metallurgical
composition for steel, with a carbon content between 0.1 percent
and 2 percent. The basic oxygen process requires about forty minutes.
In contrast, the prevailing method of making steel, using an
open-hearth furnace (which transferred the technique from the
closed Bessemer converter to an open-burning furnace to which the
necessary additives could be introduced by hand) requires eight to
eleven hours for a “heat” or batch.
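Those heat times translate directly into converter throughput. As a rough added illustration, using the forty-minute figure and the low end (eight hours) of the open-hearth range quoted above:

```python
# Back-of-the-envelope throughput comparison from the heat times in the text.
HOURS_PER_DAY = 24

def heats_per_day(heat_hours: float) -> float:
    return HOURS_PER_DAY / heat_hours

basic_oxygen = heats_per_day(40 / 60)   # 36 heats per day
open_hearth = heats_per_day(8)          # 3 heats per day

print(f"Basic oxygen: {basic_oxygen:.0f} heats/day")
print(f"Open hearth:  {open_hearth:.0f} heats/day")
print(f"Speed-up:     about {basic_oxygen / open_hearth:.0f}x")  # ~12, as cited
```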
The L-D process was not without its drawbacks, however. It was
adopted by the Austrians because, by carefully calibrating the timing
and amount of oxygen introduced, they could turn their moderately
phosphoric ore into steel without further intervention. The
process required ore of a standardized metallurgical, or chemical,
content, for which the lancing had been calculated. It produced a
large amount of iron-oxide dust that polluted the surrounding
atmosphere, and it required a lining in the converter of dolomitic
brick. The specific chemical content of the brick contributed to the
chemical mixture that produced the desired result.
The Austrians quickly realized that the process was an improvement.
In May, 1952, the patent specifications for the new process
were turned over to a new company, Brassert Oxygen Technik, or
BOT, which filed patent applications around the world. BOT embarked
on an aggressive marketing campaign, bringing potential
customers to Austria to observe the process in action. Despite BOT’s
efforts, the new process was slow to catch on, even though in 1953
BOT licensed a U.S. firm, Kaiser Engineers, to spread the process in
the United States.
Many factors serve to explain the reluctance of steel producers
around the world to adopt the new process. One of these was the
large investment most major steel producers had in their open-hearth
furnaces. Another was uncertainty about the pollution factor.
Later, special pollution-control equipment would be developed
to deal with this problem. A third concern was whether the necessary
refractory liners for the new converters would be available. A
fourth was the fact that the new process could handle a load that
contained no more than 30 percent scrap, preferably less. In practice,
therefore, it would only work where a blast furnace smelting
ore was already set up.
One of the earliest firms to show serious interest in the new technology
was Dofasco, a Canadian steel producer. Between 1952 and
1954, Dofasco, pushed by its head of research and development, F.
A. Loosley, built pilot operations to test the methodology. The results
were sufficiently promising that in 1954 Dofasco built the first
basic oxygen furnace outside Austria. Dofasco had recently built its
own blast furnace, so it had ore available on site. It was able to devise
ways of dealing with the pollution problem, and it found refractory
liners that would work. It became the first North American
producer of basic oxygen steel.
Having bought the licensing rights in 1953, Kaiser Engineers was
looking for a U.S. steel producer adventuresome enough to invest in
the new technology. It found that producer in McLouth Steel, a
small steel plant in Detroit, Michigan. Kaiser Engineers supplied
much of the technical advice that enabled McLouth to build the first
U.S. basic oxygen steel facility, though McLouth also sent one of its
engineers to Europe to observe the Austrian operations. McLouth,
which had backing from General Motors, also made use of technical
descriptions in the literature.
The Specifications Question
One factor that held back adoption of basic oxygen steelmaking
was the question of specifications. Many major engineering projects
came with precise specifications detailing the type of steel to be
used and even the method of its manufacture. Until basic oxygen
steel was recognized as an acceptable form by the engineering
fraternity, so that job specifications included it as appropriate in specific
applications, it could not find large-scale markets. It took a
number of years for engineers to modify their specifications so that
basic oxygen steel could be used.
The next major conversion to the new steelmaking process occurred
in Japan. The Japanese had learned of the process early,
while Japanese metallurgical engineers were touring Europe in
1951. Some of them stopped off at the Voest works to look at the pilot
projects there, and they talked with the Swiss inventor, Robert
Durrer. These engineers carried knowledge of the new technique
back to Japan. In 1957 and 1958, Yawata Steel and Nippon Kokan,
the largest and third-largest steel producers in Japan, decided to implement
the basic oxygen process. An important contributor to this
decision was the Ministry of International Trade and Industry, which
brokered a licensing arrangement through Nippon Kokan, which in
turn had signed a one-time payment arrangement with BOT. The
licensing arrangement allowed other producers besides Nippon
Kokan to use the technique in Japan.
The Japanese made two important technical improvements in
the basic oxygen technology. They developed a multiholed lance for
blowing in oxygen, thus dispersing it more effectively in the molten
metal and prolonging the life of the refractory lining of the converter
vessel. They also pioneered the OG process for recovering
some of the gases produced in the converter. This procedure reduced
the pollution generated by the basic oxygen converter.
The first large American steel producer to adopt the basic oxygen
process was Jones and Laughlin, which decided to implement the
new process for several reasons. It had some of the oldest equipment
in the American steel industry, ripe for replacement. It also
had experienced significant technical difficulties at its Aliquippa
plant, difficulties it was unable to solve by modifying its open-hearth
procedures. It therefore signed an agreement with Kaiser Engineers
to build some of the new converters for Aliquippa. These
converters were constructed on license from Kaiser Engineers by
Pennsylvania Engineering, with the exception of the lances, which
were imported from Voest in Austria. Subsequent lances, however,
were built in the United States. Some of Jones and Laughlin’s production
managers were sent to Dofasco for training, and technical
advisers were brought to the Aliquippa plant both from Kaiser Engineers
and from Austria.
Other European countries were somewhat slower to adopt the
new process. A major cause for the delay was the necessary modification
of the process to fit the high phosphoric ores available in Germany
and France. Europeans also experimented with modifications
of the basic oxygen technique by developing converters that revolved.
These converters, known as Kaldo in Sweden and Rotor in
Germany, proved in the end to have sufficient technical difficulties
that they were abandoned in favor of the standard basic oxygen
converter. The problems they had been designed to solve could be
better dealt with through modifications of the lance and through
adjustments in additives.
By the mid-1980’s, the basic oxygen process had spread throughout
the world. Neither Japan nor the European Community was
producing any steel by the older, open-hearth method. In conjunction
with the electric arc furnace, fed largely on scrap metal, the basic
oxygen process had transformed the steel industry of the world.
Impact
The basic oxygen process has significant advantages over older
procedures. It does not require additional heat, whereas the open-hearth
technique calls for the infusion of nine to twelve gallons of
fuel oil to raise the temperature of the metal to the level necessary to
burn off all the excess carbon. The investment cost of the converter
is about half that of an open-hearth furnace. Fewer refractories are
required, less than half those needed in an open-hearth furnace.
Most important of all, however, a “heat” requires less than an hour,
as compared with the eight or more hours needed for a “heat” in an
open-hearth furnace.
There were some disadvantages to the basic oxygen process. Perhaps
the most important was the limited amount of scrap that could
be included in a “heat,” a maximum of 30 percent. Because the process
required at least 70 percent new ore, it could be operated most
effectively only in conjunction with a blast furnace. Counterbalancing
this last factor was the rapid development of the electric arc
furnace, which could operate with 100 percent scrap. A firm with its
own blast furnace could, with both an oxygen converter and an electric
arc furnace, handle the available raw material.
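The 30 percent scrap ceiling has a simple arithmetic consequence, sketched below for an assumed heat size (the 300-ton figure is an illustration, not from the article): most of every charge must arrive as molten iron, which is why the converter pairs naturally with a blast furnace.

```python
# Illustration: charge balance for a basic oxygen converter heat.
HEAT_SIZE_TONS = 300        # assumed heat size, for illustration only
MAX_SCRAP_FRACTION = 0.30   # ceiling quoted in the text

max_scrap = HEAT_SIZE_TONS * MAX_SCRAP_FRACTION
min_hot_metal = HEAT_SIZE_TONS - max_scrap
print(f"Scrap: at most {max_scrap:.0f} tons; "
      f"hot metal from the blast furnace: at least {min_hot_metal:.0f} tons")
```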
The advantages of the basic oxygen process overrode the disadvantages.
Some other new technologies combined to produce this
effect. The most important of these was continuous casting. Because
of the short time required for a “heat,” it was possible, if a plant had
two or three converters, to synchronize output with the fill needs of
a continuous caster, thus largely canceling out some of the economic
drawbacks of the batch process. Continuous production, always
more economical, was now possible in the basic steel industry, particularly
after development of computer-controlled rolling mills.
These new technologies forced major changes in the world’s steel
industry. Labor requirements for the basic oxygen converter were
about half those for the open-hearth furnace. The high speed of the
new technology required far less manual labor but much more technical
expertise. Labor requirements were significantly reduced, producing
major social dislocations in steel-producing regions. This effect
was magnified by the fact that demand for steel dropped
sharply in the 1970’s, further reducing the need for steelworkers.
The U.S. steel industry was slower than either the Japanese or the
European to convert to the basic oxygen technique. The U.S. industry
generally operated with larger quantities, and it took a number
of years before the basic oxygen technique was adapted to converters
with an output equivalent to that of the open-hearth furnace. By
the time that had happened, world steel demand had begun to
drop. U.S. companies were less profitable, failing to generate internally
the capital needed for the major investment involved in
abandoning open-hearth furnaces for oxygen converters. Although
union contracts enabled companies to change work assignments
when new technologies were introduced, there was stiff resistance
to reducing employment of steelworkers, most of whom had lived
all their lives in one-industry towns. Finally, engineers at the steel
firms were wedded to the old methods and reluctant to change, as
were the large bureaucracies of the big U.S. steel firms.
The basic oxygen technology in steel is part of a spate of new
technical developments that have revolutionized industrial production,
drastically reducing the role of manual labor and dramatically
increasing the need for highly skilled individuals with technical
expertise. Because capital costs are significantly lower than for alternative
processes, it has allowed a number of developing countries
to enter a heavy industry and compete successfully with the old industrial
giants. It has thus changed the face of the steel industry.
Henry Bessemer
Henry Bessemer was born in the small village of Charlton,
England, in 1813. His father was an early example of a technician,
specializing in steam engines, and operated a business
making metal type for printing presses. The elder Bessemer
wanted his son to attend university, but Henry preferred to
study under his father. During his apprenticeship, he learned
the properties of alloys. At seventeen he moved to London to
open his own business, which fabricated specialty metals.
Three years later the Royal Academy held an exhibition of
Bessemer’s work. His career, well begun, moved from one invention
to another until at his death in 1898 he held 114 patents.
Among them were processes for casting type and producing
graphite for pencils; methods for manufacturing glass, sugar,
bronze powder, and ships; and his best known creation, the Bessemer
converter for making steel from iron. Bessemer built his
first converter in 1855; fifteen years later Great Britain was producing
half of the world’s steel.
Bessemer’s life and career were models of early Industrial
Age industry, prosperity, and longevity. A millionaire from patent
royalties, he retired at fifty-nine, lived another twenty-six
years, working on yet more inventions and cultivating astronomy
as a hobby, and was married for sixty-four years. Among
his many awards and honors was a knighthood, bestowed by
Queen Victoria.
See also: Assembly line; Buna rubber; Disposable razor; Laminated glass;
Memory metal; Neoprene; Oil-well drill bit; Steelmaking.
Sunday, November 4, 2012
Stealth aircraft
The invention:
The first generation of “radar-invisible” aircraft,
stealth planes were designed to elude enemy radar systems.
The people behind the invention:
Lockheed Corporation, an American research and development firm
Northrop Corporation, an American aerospace firm
Radar
During World War II, two weapons were developed that radically
altered the thinking of the U.S. military-industrial establishment
and the composition of U.S. military forces. These weapons
were the atomic bombs that were dropped on the Japanese cities of
Hiroshima and Nagasaki by U.S. forces and “radio detection and
ranging,” or radar. Radar saved the English during the Battle of Britain,
and it was radar that made it necessary to rethink aircraft design.
With radar, attacking aircraft can be detected hundreds of
miles from their intended targets, which makes it possible for those
aircraft to be intercepted before they can attack. During World
War II, radar, using microwaves, was able to relay the number, distance,
direction, and speed of German aircraft to British fighter interceptors.
This development allowed the fighter pilots of the Royal
Air Force, “the few” who were so highly praised by Winston Churchill,
to shoot down four times as many planes as they lost.
Because of the development of radar, American airplane design
strategy has been to reduce the planes’ cross sections, reduce or
eliminate the use of metal by replacing it with composite materials,
and eliminate the angles that are found on most aircraft control surfaces.
These actions help make aircraft less visible—and in some
cases, almost invisible—to radar. The Lockheed F-117A Nighthawk
and the Northrop B-2 Stealth Bomber are the results of these efforts.
Airborne “Ninjas”
Hidden inside Lockheed Corporation is a research and development
organization that is unique in the corporate world.
This facility has provided the Air Force with the U-2 spy plane;
the SR-71, a titanium-skinned aircraft that can fly
at more than three times the speed of sound; and, most recently, the F-117A
Nighthawk. The Nighthawk eluded Iraqi radar so effectively during
the 1991 Persian Gulf War that the Iraqis nicknamed it Shaba,
which is an Arabic word that means ghost. In an unusual move
for military projects, the Nighthawk was delivered to the Air
Force in 1982, before the plane had been perfected. This was done
so that Air Force pilots could test fly the plane and provide input
that could be used to improve the aircraft before it went into full
production.
The Northrop B-2 Stealth Bomber was the result of a design philosophy
that was completely different from that of the F-117A
Nighthawk. The F-117A, for example, has a very angular appearance,
but the angles are all greater than 180 degrees. This configuration
spreads out radar waves rather than allowing them to be concentrated
and sent back to their point of origin. The B-2, however,
stays away from angles entirely, opting for a smooth surface that
also acts to spread out the radar energy. (The B-2 so closely resembles
the YB-49 Flying Wing, which was developed in the late 1940’s,
that it even has the same wingspan.) The surface of the aircraft is
covered with radar-absorbing material and carries its engines and
weapons inside to reduce the radar cross section. There are no vertical
control surfaces, which has the disadvantage of making the aircraft
unstable, so the stabilizing system uses computers to make
small adjustments in the control elements on the trailing edges of
the wings, thus increasing the craft’s stability.
The F-117A Nighthawk and the B-2 Stealth Bomber are the “ninjas”
of military aviation. Capable of striking powerfully, rapidly,
and invisibly, these aircraft added a dimension to the U.S. Air Force
that did not exist previously. Before the advent of these aircraft, missions
that required radar-avoidance tactics had to be flown beneath the
coverage of ground-based radar, at altitudes of roughly 30.5 meters (100 feet)
above the ground. Such low-altitude flight is dangerous because of both the
increased difficulty of maneuvering and vulnerability to ground
fire. Additionally, such flying does not conceal the aircraft from the
airborne radar carried by such craft as the American E-3A AWACS
and the former Soviet Mainstay. In a major conflict, the only aircraft
that could effectively penetrate enemy airspace would be the Nighthawk
and the B-2.
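The limits of low-level penetration can be quantified with the common radio-horizon approximation (a 4/3-earth model that allows for typical refraction). The sketch below is an added illustration; the antenna and aircraft heights are assumed values, with the 30.5-meter figure taken from the altitude mentioned above.

```python
# Illustration: radio horizon d [km] ~= 4.12 * (sqrt(h_radar) + sqrt(h_target)),
# with both heights in metres (4/3-earth approximation).
import math

def radar_horizon_km(h_radar_m: float, h_target_m: float) -> float:
    return 4.12 * (math.sqrt(h_radar_m) + math.sqrt(h_target_m))

antenna = 10.0        # assumed ground-radar antenna height, metres
low_level = 30.5      # the roughly 100-foot penetration altitude noted above
high_level = 10_000   # assumed conventional cruise altitude, metres

print(f"Low-level aircraft first visible at about "
      f"{radar_horizon_km(antenna, low_level):.0f} km")
print(f"High-altitude aircraft visible from about "
      f"{radar_horizon_km(antenna, high_level):.0f} km")
```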
The purpose of the B-2 was to carry nuclear weapons into hostile
airspace undetected. With the demise of the Soviet Union, mainland
China seemed the only remaining major nuclear threat. For this reason,
many defense experts believed that there was no longer a need
for two radar-invisible planes, and cuts in U.S. military expenditures
threatened the B-2 program during the early 1990’s.
Consequences
The development of the Nighthawk and the B-2 meant that the
former Soviet Union would have had to spend at least $60 billion to
upgrade its air defense forces to meet the challenge offered by these
aircraft. This fact, combined with the evolution of the Strategic Defense
Initiative, commonly called “Star Wars,” led to the United
States’ victory in the arms race. Additionally, stealth technology has
found its way onto the conventional battlefield.
As was shown in 1991 during the Desert Storm campaign in Iraq,
targets that have strategic importance are often surrounded by a
network of anti-air missiles and gun emplacements. During the
Desert Storm air war, the F-117A was the only Allied aircraft to be
assigned to targets in Baghdad. Nighthawks destroyed more than 47
percent of the strategic areas that were targeted, and every pilot and
plane returned to base unscathed.
Since the world appears to be moving away from superpower
conflicts and toward smaller regional conflicts, stealth aircraft may
come to be used more for surveillance than for air attacks. This is
particularly true because the SR-71, which previously played the
primary role in surveillance, has been retired from service.
See also: Airplane; Cruise missile; Hydrogen bomb; Radar; Rocket.