Wednesday, October 28, 2009
Polio vaccine (Salk)
The invention: Jonas Salk’s vaccine was the first to prevent polio, resulting in the virtual eradication of crippling polio epidemics.
The people behind the invention:
Jonas Edward Salk (1914-1995), an American physician,
immunologist, and virologist
Thomas Francis, Jr. (1900-1969), an
American microbiologist
Cause for Celebration
Poliomyelitis (polio) is an infectious disease that can adversely
affect the central nervous system, causing paralysis and great muscle
wasting due to the destruction of motor neurons (nerve cells) in
the spinal cord. Epidemiologists believe that polio has existed since
ancient times, and evidence of its presence in Egypt, circa 1400 B.C.E.,
has been presented. Fortunately, the Salk vaccine and the later vaccine
developed by the American virologist Albert Bruce Sabin can
prevent the disease. Consequently, except in underdeveloped nations,
polio is now rare. Moreover, although there is still no cure once a person
develops polio, a large number of cases end without paralysis or any
lasting effect.
Polio is often called “infantile paralysis” because it is seen most often
in children. It is caused by a virus and
begins with body aches, a stiff neck, and other symptoms that are
very similar to those of a severe case of influenza. In some cases,
within two weeks after its onset, the course of polio begins to lead to
muscle wasting and paralysis.
On April 12, 1955, the world was thrilled with the announcement
that Jonas Edward Salk’s poliomyelitis vaccine could prevent the
disease. It was reported that schools were closed in celebration of
this event. Salk, the son of a New York City garment worker, has
since become one of the best-known and most publicly venerated
medical scientists in the world.
Vaccination is a method of disease prevention by immunization,
whereby a small amount of virus is injected into the body to prevent
a viral disease. The process depends on the production of antibodies
(body proteins that bind specifically to the virus and prevent the disease
it causes) in response to the vaccination. Vaccines are
made of weakened or killed virus preparations.
Electrifying Results
The Salk vaccine was produced in two steps. First, polio viruses
were grown in monkey kidney tissue cultures. These polio viruses
were then killed by treatment with a carefully controlled amount of formaldehyde
to produce an effective vaccine. The killed-virus polio vaccine
was found to be safe and to cause the production of antibodies
against the disease, a sign that it should prevent polio.
In early 1952, Salk tested a prototype vaccine against Type I polio virus
on children who had already had the disease and were thus
deemed safe from reinfection. This test showed that the vaccination greatly elevated the concentration of polio antibodies in these children.
On July 2, 1952, encouraged by these results, Salk vaccinated forty-three
children who had never had polio with vaccines against each of
the three virus types (Type I, Type II, and Type III). All inoculated children
produced high levels of polio antibodies, and none of them developed
the disease. Consequently, the vaccine appeared to be both safe in
humans and likely to become an effective public health tool.
In 1953, Salk reported these findings in the Journal of the American
Medical Association. In April, 1954, nationwide testing of the Salk
vaccine began, via the mass vaccination of American schoolchildren.
The results of the trial were electrifying. The vaccine was safe,
and it greatly reduced the incidence of the disease. In fact, it was estimated
that Salk’s vaccine gave schoolchildren 60 to 90 percent protection
against polio.
Salk was instantly praised. Then, however, several cases of polio
occurred as a consequence of the vaccine. Its use was immediately
suspended by the U.S. surgeon general, pending a complete examination.
Soon, it was evident that all the cases of vaccine-derived polio
were attributable to faulty batches of vaccine made by one
pharmaceutical company. Salk and his associates were in no way responsible
for the problem. Appropriate steps were taken to ensure
that such an error would not be repeated, and the Salk vaccine was
again released for use by the public.
Consequences
The first reports on the polio epidemic in the United States had
occurred on June 27, 1916, when one hundred residents of Brooklyn,
New York, were afflicted. Soon, the disease had spread. By August,
twenty-seven thousand people had developed polio. Nearly seven
thousand afflicted people died, and many survivors of the epidemic
were permanently paralyzed to varying extents. In New York City
alone, nine thousand people developed polio and two thousand
died. Chaos reigned as large numbers of terrified people attempted
to leave and were turned back by police. Smaller polio epidemics
occurred throughout the nation in the years that followed (for example,
the Catawba County, North Carolina, epidemic of 1944). A
particularly horrible aspect of polio was the fact that more than 70 percent of polio victims were small children. Adults caught it too;
the most famous of these adult polio victims was U.S. President
Franklin D. Roosevelt. There was no cure for the disease. The best
available treatment was physical therapy.
As of August, 1955, more than four million doses of polio vaccine had
been given. The Salk vaccine appeared to work very well. There were
only half as many reported cases of polio in 1956 as there had been in
1955. It appeared that polio was being conquered. By 1957, the number
of cases reported nationwide had fallen below six thousand.
Thus, in two years, its incidence had dropped by about 80 percent.
This was very exciting, and soon other countries clamored for the
vaccine. By 1959, ninety other countries had been supplied with the
Salk vaccine. Worldwide, the disease was being eradicated. The introduction
of an oral polio vaccine by Albert Bruce Sabin supported
this progress.
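A quick check of the arithmetic implied by the figures above (the 1955 case total is not stated in the text and is inferred here from the reported 80 percent drop):

\[
\text{1955 cases} \approx \frac{6{,}000}{1 - 0.80} \approx 30{,}000,
\qquad
\text{1956 cases} \approx \tfrac{1}{2} \times 30{,}000 \approx 15{,}000 .
\]

These inferred totals are consistent with the statements that 1956 saw only half as many cases as 1955 and that fewer than six thousand cases were reported in 1957.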
Salk received many honors, including honorary degrees from
American and foreign universities, the Lasker Award, a Congressional
Medal for Distinguished Civilian Service, and membership in
the French Legion of Honor, yet he received neither the Nobel Prize
nor membership in the American National Academy of Sciences. It
is believed by many that this neglect was a result of the personal antagonism
of some of the members of the scientific community who
strongly disagreed with his theories of viral inactivation.
Polio vaccine (Sabin)
The invention: Albert Bruce Sabin’s vaccine was the first to stimulate
long-lasting immunity against polio without the risk of causing
paralytic disease.
The people behind the invention:
Albert Bruce Sabin (1906-1993), a Russian-born American
virologist
Jonas Edward Salk (1914-1995), an American physician,
immunologist, and virologist
Renato Dulbecco (1914- ), an Italian-born American
virologist who shared the 1975 Nobel Prize in Physiology or
Medicine
The Search for a Living Vaccine
Almost a century ago, the first major poliomyelitis (polio) epidemic
was recorded. Thereafter, epidemics of increasing frequency and severity
struck the industrialized world. By the 1950’s, as many
as sixteen thousand individuals, most of them children, were being
paralyzed by the disease each year.
Poliovirus enters the body through ingestion by the mouth. It
replicates in the throat and the intestines and establishes an infection
that normally is harmless. From there, the virus can enter the
bloodstream. In some individuals it makes its way to the nervous
system, where it attacks and destroys nerve cells crucial for muscle
movement. The presence of antibodies in the bloodstream will prevent
the virus from reaching the nervous system and causing paralysis.
Thus, the goal of vaccination is to administer poliovirus that
has been altered so that it cannot cause disease but nevertheless will
stimulate the production of antibodies to fight the disease.
Albert Bruce Sabin received his medical degree from New York
University College of Medicine in 1931. Polio was epidemic in 1931,
and for Sabin polio research became a lifelong interest. In 1936,
while working at the Rockefeller Institute, Sabin and Peter Olitsky
successfully grew poliovirus using tissues cultured in vitro. Tissue
culture proved to be an excellent source of virus. Jonas Edward Salk
soon developed an inactive polio vaccine consisting of virus grown
from tissue culture that had been inactivated (killed) by chemical
treatment. This vaccine became available for general use in 1955, almost
fifty years after poliovirus had first been identified.
Sabin, however, was not convinced that an inactivated virus vaccine
was adequate. He believed that it would provide only temporary
protection and that individuals would have to be vaccinated
repeatedly in order to maintain protective levels of antibodies.
Knowing that natural infection with poliovirus induced lifelong immunity,
Sabin believed that a vaccine consisting of a living virus
was necessary to produce long-lasting immunity. Also, unlike the
inactive vaccine, which is injected, a living virus (weakened so that
it would not cause disease) could be taken orally and would invade
the body and replicate of its own accord.
Sabin was not alone in his beliefs. Hilary Koprowski and Harold
Cox also favored a living virus vaccine and had, in fact, begun
searching for weakened strains of poliovirus as early as 1946 by repeatedly
growing the virus in rodents. When Sabin began his search
for weakened virus strains in 1953, a fiercely competitive contest ensued
to achieve an acceptable live virus vaccine.
Rare, Mutant Polioviruses
Sabin’s approach was based on the principle that, as viruses acquire
the ability to replicate in a foreign species or tissue (for example,
in mice), they become less able to replicate in humans and thus
less able to cause disease. Sabin used tissue culture techniques to
isolate those polioviruses that grew most rapidly in monkey kidney
cells. He then employed a technique developed by Renato Dulbecco
that allowed him to recover individual virus particles. The recovered
viruses were injected directly into the brains or spinal cords of
monkeys in order to identify those viruses that did not damage the
nervous system. These meticulously performed experiments, which
involved approximately nine thousand monkeys and more than
one hundred chimpanzees, finally enabled Sabin to isolate rare mutant
polioviruses that would replicate in the intestinal tract but not
in the nervous systems of chimpanzees or, it was hoped, of humans.
In addition, the weakened virus strains were shown to stimulate antibodies when they were fed to chimpanzees; this was a critical attribute
for a vaccine strain.
By 1957, Sabin had identified three strains of attenuated viruses that
were ready for small experimental trials in humans. A small group of
volunteers, including Sabin’s own wife and children, were fed the vaccine
with promising results. Sabin then gave his vaccine to virologists
in the Soviet Union, Eastern Europe, Mexico, and Holland for further
testing. Combined with smaller studies in the United States, these trials
established the effectiveness and safety of his oral vaccine.
During this period, the strains developed by Cox and by Koprowski
were also being tested in millions of persons in field trials
around the world. In 1958, two laboratories independently compared
the vaccine strains and concluded that the Sabin strains were
superior. In 1962, after four years of deliberation by the U.S. Public
Health Service, all three of Sabin’s vaccine strains were licensed for
general use.
Consequences
The development of polio vaccines ranks as one of the triumphs of
modern medicine. In the early 1950’s, paralytic polio struck 13,500
out of every 100 million Americans. The use of the Salk vaccine
greatly reduced the incidence of polio, but outbreaks of paralytic disease
continued to occur: Fifty-seven hundred cases were reported in
1959 and twenty-five hundred cases in 1960. In 1962, the oral Sabin
vaccine became the vaccine of choice in the United States. Since its
widespread use, the number of paralytic cases in the United States
has dropped precipitously, eventually averaging fewer than ten per
year. Worldwide, the oral vaccine prevented an estimated 5 million
cases of paralytic poliomyelitis between 1970 and 1990.
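For scale, the attack rate quoted above converts to the more familiar per-100,000 form (the conversion is added here; it is not in the original text):

\[
\frac{13{,}500}{100{,}000{,}000} = 13.5 \text{ per } 100{,}000 \approx 1 \text{ case per } 7{,}400 \text{ people per year.}
\]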
The oral vaccine is not without problems. Occasionally, the living
virus mutates to a disease-causing (virulent) form as it multiplies in
the vaccinated person. When this occurs, the person may develop
paralytic poliomyelitis. The inactive vaccine, in contrast, cannot
mutate to a virulent form. Ironically, nearly every case of polio that now
occurs in the United States is caused by the vaccine itself.
In the developing countries of the world, the issue of vaccination is
more pressing. Millions receive neither form of polio vaccine; as a result,
at least 250,000 individuals are paralyzed or die each year. The World
Health Organization and other health providers continue to work toward
the very practical goal of completely eradicating this disease.
Wednesday, October 21, 2009
Pocket calculator
The invention: The first portable and reliable hand-held calculator
capable of performing a wide range of mathematical computations.
The people behind the invention:
Jack St. Clair Kilby (1923- ), the inventor of the
semiconductor microchip
Jerry D. Merryman (1932- ), the first project manager of the
team that invented the first portable calculator
James Van Tassel (1929- ), an inventor and expert on
semiconductor components
An Ancient Dream
In the earliest accounts of civilizations that developed number
systems to perform mathematical calculations, evidence has been
found of efforts to fashion a device that would permit people to perform
these calculations with reduced effort and increased accuracy.
The ancient Babylonians are regarded as the inventors of the first
abacus (or counting board, from the Greek abakos, meaning “board”
or “tablet”). It was originally little more than a row of shallow
grooves with pebbles or bone fragments as counters.
The next step in mechanical calculation did not occur until the
early seventeenth century. John Napier, a Scottish baron and mathematician,
originated the concept of “logarithms” as a mathematical
device to make calculating easier. This concept led to the first slide
rule, created by the English mathematician William Oughtred of
Cambridge. Oughtred’s invention consisted of two identical, circular
logarithmic scales held together and adjusted by hand. The slide
rule made it possible to perform rough but rapid multiplication and
division. Oughtred’s invention in 1623 was paralleled by the work
of a German professor, Wilhelm Schickard, who built a “calculating
clock” the same year. Because the record of Schickard’s work was
lost until 1935, however, the French mathematician Blaise Pascal
was generally thought to have built the first mechanical calculator,
the “Pascaline,” in 1645.
Other versions of mechanical calculators were built in later centuries,
but none was rapid or compact enough to be useful beyond specific
laboratory or mercantile situations. Meanwhile, the dream of
such a machine continued to fascinate scientists and mathematicians.
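The principle that made Napier’s logarithms, and hence the slide rule, so useful is that multiplication can be replaced by the addition of logarithms. A minimal sketch of that idea, added here for illustration (the function name is arbitrary and not from the original text):

    import math

    def slide_rule_multiply(a: float, b: float) -> float:
        """Multiply two positive numbers the way a slide rule does:
        add their logarithms, then read the result back off the log scale."""
        log_sum = math.log10(a) + math.log10(b)  # sliding one log scale along the other
        return 10 ** log_sum

    # Example: 2.3 x 4.7 -> about 10.81, matching direct multiplication
    print(slide_rule_multiply(2.3, 4.7))

A real slide rule performs the same addition mechanically, which is why its answers are rapid but only as precise as the scales can be read.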
The development that made a fast, small calculator possible did
not occur until the middle of the twentieth century, when Jack St.
Clair Kilby of Texas Instruments invented the silicon microchip (or
integrated circuit) in 1958. An integrated circuit is a tiny complex of
electronic components and their connections that is produced in or
on a small slice of semiconductor material such as silicon. Patrick
Haggerty, then president of Texas Instruments, wrote in 1964 that
“integrated electronics” would “remove limitations” that determined
the size of instruments, and he recognized that Kilby’s invention
of the microchip made possible the creation of a portable,
hand-held calculator. He challenged Kilby to put together a team to
design a calculator that would be as powerful as the large, electromechanical
models in use at the time but small enough to fit into a
coat pocket. Working with Jerry D. Merryman and James Van Tassel,
Kilby began to work on the project in October, 1965.
An Amazing Reality
At the outset, there were basically five elements that had to be designed.
These were the logic designs that enabled the machine to
perform the actual calculations, the keyboard or keypad, the power
supply, the readout display, and the outer case. Kilby recalls that
once a particular size for the unit had been determined (something
that could be easily held in the hand), project manager Merryman
was able to develop the initial logic designs in three days. Van Tassel
contributed his experience with semiconductor components to solve
the problems of packaging the integrated circuit. The display required
a thermal printer that would work on a low power source.
The machine also had to include a microencapsulated ink source so
that the paper readouts could be imprinted clearly. Then the paper
had to be advanced for the next calculation. Kilby, Merryman, and
Van Tassel filed for a patent on their work in 1967.
Although this relatively small, working prototype of the minicalculator
made obsolete the transistor-operated design of the much larger desk calculators, the cost of setting up new production lines
and the need to develop a market made it impractical to begin production
immediately. Instead, Texas Instruments and Canon of Tokyo
formed a joint venture, which led to the introduction of the
Canon Pocketronic Printing Calculator in Japan in April, 1970, and
in the United States that fall. Built entirely of Texas Instruments
parts, this four-function machine with three metal oxide semiconductor (MOS) circuits was similar to the prototype designed in 1967.
The calculator was priced at $400, weighed 740 grams, and measured
101 millimeters wide by 208 millimeters long by 49 millimeters
high. It could perform twelve-digit calculations and worked up
to four decimal places.
In September, 1972, Texas Instruments put the Datamath, its first
commercial hand-held calculator using a single MOS chip, on the
retail market. It weighed 340 grams and measured 75 millimeters
wide by 137 millimeters long by 42 millimeters high. The Datamath
was priced at $120 and included a full-floating decimal point that
could appear anywhere among the numbers on its eight-digit, light-emitting
diode (LED) display. It came with a rechargeable battery
that could also be connected to a standard alternating current (AC)
outlet. The Datamath also had the ability to conserve power while
awaiting the next keyboard entry. Finally, the machine had a limited
amount of built-in memory storage.
Consequences
Prior to 1970, most calculating machines were of such dimensions
that professional mathematicians and engineers were either tied to
their desks or else carried slide rules whenever they had to be away
from their offices. By 1975, Keuffel&Esser, the largest slide rule manufacturer
in the world, was producing its last model, and mechanical
engineers found that problems that had previously taken a week
could now be solved in an hour using the new machines.
That year, the Smithsonian Institution accepted the world’s first
miniature electronic calculator for its permanent collection, noting
that it was the forerunner of more than one hundred million pocket
calculators then in use. By the 1990’s, more than fifty million portable
units were being sold each year in the United States. In general,
the electronic pocket calculator revolutionized the way in which
people related to the world of numbers.
Moreover, the portability of the hand-held calculator made it
ideal for use in remote locations, such as those a petroleum engineer
might have to explore. Its rapidity and reliability made it an indispensable
instrument for construction engineers, architects, and real
estate agents, who could figure the volume of a room and other
building dimensions almost instantly and then produce cost estimates
almost on the spot.
Wednesday, October 14, 2009
Plastic
The invention: The first totally synthetic thermosetting plastic,
which paved the way for modern materials science.
The people behind the invention:
John Wesley Hyatt (1837-1920), an American inventor
Leo Hendrik Baekeland (1863-1944), a Belgian-born chemist,
consultant, and inventor
Christian Friedrich Schönbein (1799-1868), a German chemist
who produced guncotton, the first artificial polymer
Adolf von Baeyer (1835-1917), a German chemist
Exploding Billiard Balls
In the 1860’s, the firm of Phelan and Collender offered a prize of
ten thousand dollars to anyone producing a substance that could
serve as an inexpensive substitute for ivory, which was somewhat
difficult to obtain in large quantities at reasonable prices. Earlier,
Christian Friedrich Schönbein had laid the groundwork for a breakthrough
in the quest for a new material in 1846 by the serendipitous
discovery of nitrocellulose, more commonly known as “guncotton,”
which was produced by the reaction of nitric acid with cotton.
An American inventor, John Wesley Hyatt, while looking for a
substitute for ivory as a material for making billiard balls, discovered
that the addition of camphor to nitrocellulose under certain
conditions led to the formation of a white material that could be
molded and machined. He dubbed this substance “celluloid,” and
this product is now acknowledged as the first synthetic plastic. Celluloid
won the prize for Hyatt, and he promptly set out to exploit his
product. Celluloid was used to make baby rattles, collars, dentures,
and other manufactured goods.
As a billiard ball substitute, however, it was not really adequate,
for various reasons. First, it is thermoplastic—in other words, a material
that softens when heated and can then be easily deformed or
molded. It was thus too soft for billiard ball use. Second, it was
highly flammable, hardly a desirable characteristic. A widely circulated, perhaps apocryphal, story claimed that celluloid billiard balls
detonated when they collided.
Truly Artificial
Since celluloid can be viewed as a derivative of a natural product,
it is not a completely synthetic substance. Leo Hendrik Baekeland
has the distinction of being the first to produce a completely artificial
plastic. Born in Ghent, Belgium, Baekeland emigrated to the
United States in 1889 to pursue applied research, a pursuit not encouraged
in Europe at the time. One area in which Baekeland hoped
to make an inroad was in the development of an artificial shellac.
Shellac at the time was a natural and therefore expensive product,
and there would be a wide market for any reasonably priced substitute.
Baekeland’s research scheme, begun in 1905, focused on finding
a solvent that could dissolve the resinous products from a certain
class of organic chemical reaction.
The particular resins he used had been reported in the mid-
1800’s by the German chemist Adolf von Baeyer. These resins were
produced by the condensation reaction of formaldehyde with a
class of chemicals called “phenols.” Baeyer found that frequently
the major product of such a reaction was a gummy residue that was
virtually impossible to remove from glassware. Baekeland focused
on finding a material that could dissolve these resinous products.
Such a substance would prove to be the shellac substitute he sought.
These efforts proved frustrating, as an adequate solvent for these
resins could not be found. After repeated attempts to dissolve these
residues, Baekeland shifted the orientation of his work. Abandoning
the quest to dissolve the resin, he set about trying to develop a resin
that would be impervious to any solvent, reasoning that such a material
would have useful applications.
Baekeland’s experiments involved the manipulation of phenolformaldehyde
reactions through precise control of the temperature
and pressure at which the reactions were performed. Many of these
experiments were performed in a 1.5-meter-tall reactor vessel, which
he called a “Bakelizer.” In 1907, these meticulous experiments paid
off when Baekeland opened the reactor to reveal a clear solid that
was heat resistant, nonconducting, and machinable. Experimentation proved that the material could be dyed practically any color in
the manufacturing process, with no effect on the physical properties
of the solid.
Baekeland filed a patent for this new material in 1907. (This patent
was filed one day before that filed by James Swinburne, a British electrical engineer who had developed a similar material in his
quest to produce an insulating material.) Baekeland dubbed his new
creation “Bakelite” and announced its existence to the scientific
community on February 15, 1909, at the annual meeting of the American
Chemical Society. Among its first uses was in the manufacture
of ignition parts for the rapidly growing automobile industry.
Impact
Bakelite proved to be the first of a class of compounds called
“synthetic polymers.” Polymers are long chains of smaller molecules
(monomers) chemically linked together. There are many natural polymers, such as cotton.
The discovery of synthetic polymers led to vigorous research
into the field and attempts to produce other useful artificial materials.
These efforts met with a fair amount of success; by 1940, a multitude
of new products unlike anything found in nature had been discovered.
These included such items as polystyrene and low-density
polyethylene. In addition, artificial substitutes for natural polymers,
such as rubber, were a goal of polymer chemists. One of the results
of this research was the development of neoprene.
Industries also were interested in developing synthetic polymers
to produce materials that could be used in place of natural fibers
such as cotton. The most dramatic success in this area was achieved
by Du Pont chemist Wallace Carothers, who had also developed
neoprene. Carothers focused his energies on forming a synthetic fiber
similar to silk, resulting in the synthesis of nylon.
Synthetic polymers constitute one branch of a broad area known
as “materials science.” Novel, useful materials produced synthetically
from a variety of natural materials have allowed for tremendous
progress in many areas. Examples of these new materials include
high-temperature superconductors, composites, ceramics, and
plastics. These materials are used to make the structural components
of aircraft, artificial limbs and implants, tennis rackets, garbage
bags, and many other common objects.
Tuesday, October 13, 2009
Photovoltaic cell
The invention: Drawing their energy directly from the Sun, the
first photovoltaic cells powered instruments on early space vehicles
and held out hope for future uses of solar energy.
The people behind the invention:
Daryl M. Chapin (1906-1995), an American physicist
Calvin S. Fuller (1902-1994), an American chemist
Gerald L. Pearson (1905- ), an American physicist
Unlimited Energy Source
All the energy that the world has at its disposal ultimately comes
from the Sun. Some of this solar energy was trapped millions of years
ago in the form of vegetable and animal matter that became the coal,
oil, and natural gas that the world relies upon for energy. Some of this
fuel is used directly to heat homes and to power factories and gasoline
vehicles. Much of this fossil fuel, however, is burned to produce
the electricity on which modern society depends.
The amount of energy available from the Sun is difficult to imagine,
but some comparisons may be helpful. During each forty-hour
period, the Sun provides the earth with as much energy as the
earth’s total reserves of coal, oil, and natural gas. It has been estimated
that the amount of energy provided by the sun’s radiation
matches the earth’s reserves of nuclear fuel every forty days. The
annual solar radiation that falls on about twelve hundred square
miles of land in Arizona matched the world’s estimated total annual
energy requirement for 1960. Scientists have been searching for
many decades for inexpensive, efficient means of converting this
vast supply of solar radiation directly into electricity.
The Bell Solar Cell
Throughout its history, the Bell System has needed to be able to
transmit, modulate, and amplify electrical signals. Until the 1930’s,
these tasks were accomplished by using insulators and metallic conductors. At that time, semiconductors, which have electrical properties
that are between those of insulators and those of conductors,
were developed. One of the most important semiconductor materials
is silicon, which is one of the most common elements on the
earth. Unfortunately, silicon is usually found in the form of compounds
such as sand or quartz, and it must be refined and purified
before it can be used in electrical circuits. This process required
much initial research, and very pure silicon was not available until
the early 1950’s.
Electric conduction in silicon is the result of the movement of
negative charges (electrons) or positive charges (holes). One way of
accomplishing this is by deliberately adding phosphorus or arsenic
atoms, which have five outer electrons, to the silicon. This addition
creates a type of semiconductor that has excess negative charges (an
n-type semiconductor). Adding boron atoms, which have three
outer electrons, creates a semiconductor that has excess positive
charges (a p-type semiconductor). Calvin Fuller made an important
study of the formation of p-n junctions, which are the points at
which p-type and n-type semiconductors meet, by using the process
of diffusing impurity atoms—that is, adding atoms of materials that
would increase the level of positive or negative charges, as described
above. Fuller’s work stimulated interest in using the process
of impurity diffusion to create cells that would turn solar energy
into electricity. Fuller and Gerald Pearson made the first largearea
p-n junction by using the diffusion process. Daryl Chapin,
Fuller, and Pearson made a similar p-n junction very close to the
surface of a silicon crystal, which was then exposed to sunlight.
The cell was constructed by first making an ingot of arsenic-doped
silicon that was then cut into very thin slices. Then a very
thin layer of p-type silicon was formed over the surface of the n-type
wafer, providing a p-n junction close to the surface of the cell. Once
the cell cooled, the p-type layer was removed from the back of the
cell and lead wires were attached to the two surfaces. When light
was absorbed at the p-n junction, electron-hole pairs were produced,
and the electric field that was present at the junction forced
the electrons to the n side and the holes to the p side.
The recombination of the electrons and holes takes place after the
electrons have traveled through the external wires, where they do useful work. Chapin, Fuller, and Pearson announced in 1954 that
the resulting photovoltaic cell was the most efficient (6 percent)
means then available for converting sunlight into electricity.
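The 6 percent figure is a conversion efficiency in the usual sense; the definition below is standard background and is not spelled out in the original text:

\[
\eta = \frac{P_{\text{electrical output}}}{P_{\text{solar input}}},
\qquad \eta \approx 0.06 \text{ for the 1954 Bell cell.}
\]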
The first experimental use of the silicon solar battery was in amplifiers
for electrical telephone signals in rural areas. An array of 432
silicon cells, capable of supplying 9 watts of power in bright sunlight,
was used to charge a nickel-cadmium storage battery. This, in
turn, powered the amplifier for the telephone signal. The electrical
energy derived from sunlight during the day was sufficient to keep
the storage battery charged for continuous operation. The system
was successfully tested for six months of continuous use in Americus,
Georgia, in 1956. Although it was a technical success, the silicon solar
cell was not ready to compete economically with conventional
means of producing electrical power.
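A back-of-the-envelope reading of the Americus array, computed only from the numbers quoted above (the six hours of bright sun assumed at the end is an illustrative guess, not a figure from the text):

    # Array described above: 432 silicon cells supplying about 9 watts in bright sunlight.
    cells = 432
    array_power_w = 9.0

    per_cell_w = array_power_w / cells
    print(f"average output per cell: {per_cell_w * 1000:.1f} mW")  # roughly 21 mW

    # Energy banked in the nickel-cadmium battery over an assumed 6 hours of bright sun.
    assumed_sun_hours = 6
    print(f"energy stored per sunny day: {array_power_w * assumed_sun_hours:.0f} Wh")  # about 54 Wh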
Consequences
One of the immediate applications of the solar cell was to supply
electrical energy for Telstar satellites. These cells are used extensively
on all satellites to generate power. The success of the U.S. satellite program prompted serious suggestions in 1965 for the use of
an orbiting power satellite. A large satellite could be placed into a
synchronous orbit of the earth. It would collect sunlight, convert it
to microwave radiation, and beam the energy to an Earth-based receiving
station. Many technical problems must be solved, however,
before this dream can become a reality.
Solar cells are used in small-scale applications such as power
sources for calculators. Large-scale applications are still not economically
competitive with more traditional means of generating
electric power. The development of the Third World countries, however,
may provide the incentive to search for less-expensive solar
cells that can be used, for example, to provide energy in remote villages.
As the standards of living in such areas improve, the need for
electric power will grow. Solar cells may be able to provide the necessary
energy while safeguarding the environment for future generations.
Monday, October 12, 2009
Photoelectric cell
The invention: The first devices to make practical use of the photoelectric
effect, photoelectric cells were of decisive importance in
the electron theory of metals.
The people behind the invention:
Julius Elster (1854-1920), a German experimental physicist
Hans Friedrich Geitel (1855-1923), a German physicist
Wilhelm Hallwachs (1859-1922), a German physicist
Early Photoelectric Cells
The photoelectric effect was known to science in the early
nineteenth century when the French physicist Alexandre-Edmond
Becquerel wrote of it in connection with his work on glass-enclosed
primary batteries. He discovered that the voltage of his batteries increased
with intensified illumination and that green light produced
the highest voltage. Since Becquerel researched batteries exclusively,
however, the liquid-type photocell was not discovered until
1929, when the Wein and Arcturus cells were introduced commercially.
These cells were miniature voltaic cells arranged so that light
falling on one side of the front plate generated a considerable
amount of electrical energy. The cells had short lives, unfortunately;
when subjected to cold, the electrolyte froze, and when subjected to
heat, the gas generated would expand and explode the cells.
What came to be known as the photoelectric cell, a device connecting
light and electricity, had its beginnings in the 1880’s. At
that time, scientists noticed that a negatively charged metal plate
lost its charge much more quickly in the light (especially ultraviolet
light) than in the dark. Several years later, researchers demonstrated
that this phenomenon was not an “ionization” effect caused by
increased conductivity of the air: the phenomenon
took place in a vacuum, and it did not take place if the plate was positively
charged. Instead, the phenomenon had to be attributed to
the light that excited the electrons of the metal and caused them to
fly off: A neutral plate even acquired a slight positive charge under the influence of strong light. Study of this effect not only contributed
evidence to an electronic theory of matter—and, as a result of
some brilliant mathematical work by the physicist Albert Einstein,
later increased knowledge of the nature of radiant energy—but
also further linked the studies of light and electricity. It even explained
certain chemical phenomena, such as the process of photography.
It is important to note that all the experimental work on
photoelectricity accomplished prior to the work of Julius Elster
and Hans Friedrich Geitel was carried out before the existence of
the electron was known.
Explaining Photoelectric Emission
After the English physicist Sir Joseph John Thomson’s discovery
of the electron in 1897, investigators soon realized that the photoelectric
effect was caused by the emission of electrons under the influence
of radiation. The fundamental theory of photoelectric emission
was put forward by Einstein in 1905 on the basis of the German
physicist Max Planck’s quantum theory (1900). Thus, it was not surprising
that light was found to have an electronic effect. Since it was
known that the longer radio waves could shake electrons into resonant
oscillations and the shorter X rays could detach electrons from
the atoms of gases, the intermediate waves of visible light would
have been expected to have some effect upon electrons—such as detaching
them from metal plates and therefore setting up a difference
of potential. The photoelectric cell, developed by Elster and Geitel
in 1904, was a practical device that made use of this effect.
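For reference, Einstein’s 1905 relation mentioned above can be stated compactly (standard physics, not quoted in the original): the maximum kinetic energy of an emitted electron equals the photon energy minus the work function W of the metal,

\[
E_{k,\max} = h\nu - W ,
\]

which is why light below a threshold frequency releases no electrons, however intense it is.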
In 1888, Wilhelm Hallwachs observed that an electrically charged
zinc electrode loses its charge when exposed to ultraviolet radiation
if the charge is negative, but is able to retain a positive charge under
the same conditions. The following year, Elster and Geitel discovered
a photoelectric effect caused by visible light; however, they
used the alkali metals potassium and sodium for their experiments
instead of zinc.
The Elster-Geitel photocell (a vacuum emission cell, as opposed to
a gas-filled cell) consisted of an evacuated glass bulb containing two
electrodes. The cathode consisted of a thin film of a rare, chemically
active metal (such as potassium) that lost its electrons fairly readily; the anode was simply a wire sealed in to complete the circuit. This anode
was maintained at a positive potential in order to collect the negative
charges released by light from the cathode. The Elster-Geitel
photocell resembled two other types of vacuum tubes in existence at
the time: the cathode-ray tube, in which the cathode emitted electrons
under the influence of a high potential, and the thermionic
valve (a valve that permits the passage of current in one direction only), in which it emitted electrons under the influence of heat. Like
both of these vacuum tubes, the photoelectric cell could be classified
as an “electronic” device.
The new cell, then, emitted electrons when stimulated by light, and
at a rate proportional to the intensity of the light. Hence, a current
could be obtained from the cell. Yet Elster and Geitel found that their
photoelectric currents fell off gradually; they therefore spoke of “fatigue”
(instability). It was discovered later that most of this change was
not a direct effect of a photoelectric current’s passage; it was not even
an indirect effect but was caused by oxidation of the cathode by the air.
Since all modern cathodes are enclosed in sealed vessels, that source of
change has been completely abolished. Nevertheless, the changes that
persist in modern cathodes often are indirect effects of light that can be
produced independently of any photoelectric current.
Impact
The Elster-Geitel photocell was, for some twenty years, the model for
all emission cells adapted for the visible spectrum, and throughout
the twentieth century, the photoelectric cell has had a wide variety
of applications in numerous fields. For example, if products leaving
a factory on a conveyor belt were passed between a light and a cell,
they could be counted as they interrupted the beam. Persons entering
a building could be counted also, and if invisible ultraviolet rays
were used, those persons could be detected without their knowledge.
Simple relay circuits could be arranged that would automatically
switch on street lamps when it grew dark. The sensitivity of
the cell with an amplifying circuit enabled it to “see” objects too
faint for the human eye, such as minor stars or certain lines in the
spectra of elements excited by a flame or discharge. The fact that the
current depended on the intensity of the light made it possible to
construct photoelectric meters that could judge the strength of illumination
without risking human error—for example, to determine
the right exposure for a photograph.
A further use for the cell was to make talking films possible. The
early “talkies” had depended on gramophone records, but it was very
difficult to keep the records in time with the film. Now, the waves of
speech and music could be recorded in a “sound track” by turning the sound first into current through a microphone and then into light with
a neon tube or magnetic shutter; next, the variations in the intensity of
this light on the side of the film were photographed. By reversing the
process and running the film between a light and a photoelectric cell,
the visual signals could be converted back to sound.
Personal computer
The invention: Originally a tradename of the IBM Corporation,
“personal computer” has become a generic term for increasingly
powerful desktop computing systems using microprocessors.
The people behind the invention:
Tom J. Watson (1874-1956), the founder of IBM, who set
corporate philosophy and marketing principles
Frank Cary (1920- ), the chief executive officer of IBM at the
time of the decision to market a personal computer
John Opel (1925- ), a member of the Corporate Management
Committee
George Belzel, a member of the Corporate Management
Committee
Paul Rizzo, a member of the Corporate Management Committee
Dean McKay (1921- ), a member of the Corporate
Management Committee
William L. Sydnes, the leader of the original twelve-member
design team
Shaking up the System
For many years, the International Business Machines (IBM) Corporation
had been set in its ways, sticking to traditions established
by its founder, Tom Watson, Sr. If it hoped to enter the new microcomputer
market, however, it was clear that only nontraditional
methods would be useful. Apple Computer was already beginning
to make inroads into large IBM accounts, and IBM stock was starting
to stagnate on Wall Street. A 1979 BusinessWeek article asked: “Is
IBM just another stodgy, mature company?” The microcomputer
market was expected to grow more than 40 percent in the early
1980’s, but IBM would have to make some changes in order to bring
a competitive personal computer (PC) to the market.
The decision to build and market the PC was made by the company’s
Corporate Management Committee (CMC). CMC members
included chief executive officer Frank Cary, John Opel, George Belzel, Paul Rizzo, Dean McKay, and three senior vice presidents. In
July of 1980, Cary gave the order to proceed. He wanted the PC to be
designed and built within a year. The CMC approved the initial design
of the PC one month later. Twelve engineers, with William L.
Sydnes as their leader, were appointed as the design team. At the
end of 1980, the team had grown to 150.
Most parts of the PC had to be produced outside IBM. Microsoft
Corporation won the contract to produce the PC’s disk operating system
(DOS) and the BASIC (Beginner’s All-purpose Symbolic Instruction
Code) language that is built into the PC’s read-only memory
(ROM). Intel Corporation was chosen to make the PC’s central processing
unit (CPU) chip, the “brains” of the machine. Outside programmers
wrote software for the PC. Ten years earlier, this strategy
would have been unheard of within IBM since all aspects of manufacturing,
service, and repair were traditionally taken care of in-house.
Marketing the System
IBM hired a New York firm to design a media campaign for the
new PC. Readers of magazines and newspapers saw the character
of Charlie Chaplin advertising the new PC. The machine was delivered
on schedule on August 12, 1981. The price of the basic “system
unit” was $1,565. A system with 64 kilobytes of random access
memory (RAM), a 13-centimeter single-sided disk drive holding
160 kilobytes, and a monitor was priced at about $3,000. A system
with color graphics, a second disk drive, and a dot matrix printer
cost about $4,500.
Many useful computer programs had been adapted to the PC
and were available when it was introduced. VisiCalc from Personal
Software—the program that is credited with “making” the microcomputer
revolution—was one of the first available. Other packages
included a comprehensive accounting system by Peachtree
Software and a word processing package called EasyWriter by Information
Unlimited Software.
As the selection of software grew, so did sales. In the first year after
its introduction, the IBM PC went from a zero market share to 28
percent of the market. Yet the credit for the success of the PC does
not go to IBM alone. Many hundreds of companies were able to produce software and hardware for the PC. Within two years, powerful
products such as Lotus Corporation’s 1-2-3 business spreadsheet
had come to the market. Many believed that Lotus 1-2-3 was the
program that caused the PC to become so phenomenally successful.
Other companies produced hardware features (expansion boards)
that increased the PC’s memory storage or enabled the machine to
“drive” audiovisual presentations such as slide shows. Business especially
found the PC to be a powerful tool. The PC has survived because
of its expansion capability.
IBM has continued to upgrade the PC. In 1983, the PC/XT was
introduced. It had more expansion slots and a fixed disk offering 10
million bytes of storage for programs and data. Many of the companies
that made expansion boards found themselves able to make
whole PCs. An entire range of PC-compatible systems was introduced
to the market, many offering features that IBM did not include
in the original PC. The original PC has become a whole family
of computers, sold by both IBM and other companies. The hardware
and software continue to evolve; each generation offers more computing
power and storage with a lower price tag.
Consequences
IBM’s entry into the microcomputer market gave microcomputers
credibility. Apple Computer’s earlier introduction of its computer
did not win wide acceptance with the corporate world. Apple
did, however, thrive within the educational marketplace. IBM’s
name already carried with it much clout, because IBM was a successful
company. Apple Computer represented all that was great
about the “new” microcomputer, but the IBM PC benefited from
IBM’s image of stability and success.
IBM coined the term personal computer and its acronym PC. The
acronym PC is now used almost universally to refer to the microcomputer.
It also had great significance for users who had previously
used a large mainframe computer that had to be shared with
the whole company. This was their personal computer. That was important
to many PC buyers, since the company mainframe was perceived
as being complicated and slow. The PC owner now had complete
control.
Thursday, October 1, 2009
Penicillin
The invention: The first successful and widely used antibiotic
drug, penicillin has been called the twentieth century’s greatest
“wonder drug.”
The people behind the invention:
Sir Alexander Fleming (1881-1955), a Scottish bacteriologist,
cowinner of the 1945 Nobel Prize in Physiology or Medicine
Baron Florey (1898-1968), an Australian pathologist, cowinner
of the 1945 Nobel Prize in Physiology or Medicine
Ernst Boris Chain (1906-1979), an émigré German biochemist,
cowinner of the 1945 Nobel Prize in Physiology or Medicine
The Search for the Perfect Antibiotic
During the early twentieth century, scientists were aware of antibacterial
substances but did not know how to make full use of them
in the treatment of diseases. Sir Alexander Fleming discovered penicillin
in 1928, but he was unable to reproduce its laboratory antibiotic
effects in clinical tests; as a result, he did not recognize
the medical potential of penicillin. Between 1935 and 1940,
penicillin was purified, concentrated, and clinically tested by pathologist
Baron Florey, biochemist Ernst Boris Chain, and members
of their Oxford research group. Their achievement has since been regarded
as one of the greatest medical discoveries of the twentieth
century.
Florey was a professor at Oxford University in charge of the Sir
William Dunn School of Pathology. Chain had worked for two years
at Cambridge University in the laboratory of Frederick Gowland
Hopkins, an eminent chemist and discoverer of vitamins. Hopkins
recommended Chain to Florey, who was searching for a candidate
to lead a new biochemical unit in the Dunn School of Pathology.
In 1938, Florey and Chain formed a research group to investigate
the phenomenon of antibiosis, or the antagonistic association between
different forms of life. The union of Florey’s medical knowledge
and Chain’s biochemical expertise proved to be an ideal combination for exploring the antibiosis potential of penicillin. Florey
and Chain began their investigation with a literature search in
which Chain came across Fleming’s work and added penicillin to
their list of potential antibiotics.
Their first task was to isolate pure penicillin from a crude liquid
extract. A culture of Fleming’s original Penicillium notatum was
maintained at Oxford and was used by the Oxford group for penicillin
production. Extracting large quantities of penicillin from the
medium was a painstaking task, as the solution contained only one
part of the antibiotic in ten million. When enough of the raw juice
was collected, the Oxford group focused on eliminating impurities
and concentrating the penicillin. The concentrated liquid was then
freeze-dried, leaving a soluble brown powder.
Spectacular Results
In May, 1940, Florey’s animal tests of the crude penicillin proved
its value as an antibiotic. Following extensive controlled experiments
with mice, the Oxford group concluded that they had discovered
an antibiotic that was nontoxic and far more effective against
pathogenic bacteria than any of the known sulfa drugs. Furthermore,
penicillin was not inactivated after injection into the bloodstream
but was excreted unchanged in the urine. Continued tests
showed that penicillin did not interfere with white blood cells and
had no adverse effect on living cells. Bacteria susceptible to the antibiotic
included those responsible for gas gangrene, pneumonia,
meningitis, diphtheria, and gonorrhea. American researchers later
proved that penicillin was also effective against syphilis.
In January, 1941, Florey injected a volunteer with penicillin
and found that there were no side effects to treatment with the
antibiotic. In February, the group began treatment of Albert Alexander,
a forty-three-year-old policeman with a serious staphylococci
and streptococci infection that was resisting massive doses of
sulfa drugs. Alexander had been hospitalized for two months after
an infection in the corner of his mouth had spread to his face,
shoulder, and lungs. After receiving an injection of 200 milligrams
of penicillin, Alexander showed remarkable progress, and for the
next ten days his condition improved. Unfortunately, the Oxford production facility was unable to generate enough penicillin to
overcome Alexander’s advanced infection completely, and he died
on March 15. A later case involving a fourteen-year-old boy with
staphylococcal septicemia and osteomyelitis had a more spectacular
result: The patient made a complete recovery in two months. In
all the early clinical treatments, patients showed vast improvement,
and most recovered completely from infections that resisted
all other treatment.
Impact
Penicillin is among the greatest medical discoveries of the twentieth
century. Florey and Chain’s chemical and clinical research
brought about a revolution in the treatment of infectious disease.
Almost every organ in the body is vulnerable to bacteria. Before
penicillin, the only antimicrobial drugs available were quinine, arsenic,
and sulfa drugs. Of these, only the sulfa drugs were useful for
treatment of bacterial infection, but their high toxicity often limited
their use. With this small arsenal, doctors were helpless to treat
thousands of patients with bacterial infections.
The work of Florey and Chain attracted particular attention because
of World War II and the need for treatments of such scourges
as gas gangrene, which had infected the wounds of numerous
World War I soldiers. With the help of Florey and Chain’s Oxford
group, scientists at the U.S. Department of Agriculture’s Northern
Regional Research Laboratory developed a highly efficient method
for producing penicillin using fermentation. After an extended search,
scientists were also able to isolate a more productive penicillin
strain, Penicillium chrysogenum. By 1945, a strain was developed that
produced five hundred times more penicillin than Fleming’s original
mold had.
Penicillin, the first of the “wonder drugs,” remains one of the
most powerful antibiotics in existence. Diseases such as pneumonia,
meningitis, and syphilis are still treated with penicillin. Penicillin
and other antibiotics also had a broad impact on other fields of medicine,
as major operations such as heart surgery, organ transplants,
and management of severe burns became possible once the threat of
bacterial infection was minimized.
Florey and Chain received numerous awards for their achievement,
the greatest of which was the 1945 Nobel Prize in Physiology
or Medicine, which they shared with Fleming for his original discovery.
Florey was among the most effective medical scientists of
his generation, and Chain earned similar accolades in the science of
biochemistry. This combination of outstanding medical and chemical
expertise made possible one of the greatest discoveries in human
history.