Monday, December 29, 2008
Artificial insemination
The invention:
Practical techniques for the artificial insemination of farm animals that have revolutionized livestock breeding practices throughout the world.
The people behind the invention:
Lazzaro Spallanzani (1729-1799), an Italian physiologist
Ilya Ivanovich Ivanov (1870-1932), a Soviet biologist
R. W. Kunitsky, a Soviet veterinarian
Reproduction Without Sex
The tale is told of a fourteenth-century Arabian chieftain who sought to improve his mediocre breed of horses. Sneaking into the territory of a neighboring hostile tribe, he stimulated a prize stallion to ejaculate into a piece of cotton. Quickly returning home, he inserted this cotton into the vagina of his own mare, who subsequently gave birth to a high-quality horse. This may have been the first case of “artificial insemination,” the technique by which semen is introduced into the female reproductive tract without sexual contact.
The first scientific record of artificial insemination comes from Italy in the 1770’s.
Lazzaro Spallanzani was one of the foremost physiologists of his time, well known for having disproved the theory of spontaneous generation, which states that living organisms can spring “spontaneously” from lifeless matter. There was some disagreement at that time about the basic requirements for reproduction in animals. It was unclear if the sex act was necessary for an embryo to develop, or if it was sufficient that the sperm and eggs come into contact. Spallanzani began by studying animals in which union of the sperm and egg normally takes place outside the body of the female. He stimulated males and females to release their sperm and eggs, then mixed these sex cells in a glass dish. In this way, he produced young frogs, toads, salamanders, and silkworms.
Next, Spallanzani asked whether the sex act was also unnecessary for reproduction in those species in which fertilization normally takes place inside the body of the female. He collected semen that had been ejaculated by a male spaniel and, using a syringe, injected the semen into the vagina of a female spaniel in heat. Two
months later, she delivered a litter of three pups, which bore some resemblance to both the mother and the male that had provided the sperm.
It was in animal breeding that Spallanzani’s techniques were to have their most dramatic application. In the 1880’s, an English dog breeder, Sir Everett Millais, conducted several experiments on artificial insemination. He was interested mainly in obtaining offspring from dogs that would not normally mate with one another because of differences in size. He followed Spallanzani’s methods to produce a cross between the short-legged basset hound and the much larger bloodhound.
Long-Distance Reproduction
Ilya Ivanovich Ivanov was a Soviet biologist who was commissioned by his government to investigate the use of artificial insemination on horses. Unlike previous workers who had used artificial insemination to get around certain anatomical barriers to fertilization, Ivanov began the use of artificial insemination to reproduce
thoroughbred horses more effectively. His assistant in this work was the veterinarian R. W. Kunitsky.
In 1901, Ivanov founded the Experimental Station for the Artificial Insemination of Horses. As its director, he embarked on a series of experiments to devise the most efficient techniques for breeding these animals. Not content with the demonstration that the technique was scientifically feasible, he wished to ensure further that it could be practiced by Soviet farmers.
If sperm from a male were to be used to impregnate females in another location, potency would have to be maintained for a long time. Ivanov first showed that the secretions from the sex glands were not required for successful insemination; only the sperm itself was necessary. He demonstrated further that if a testicle were removed from a bull and kept cold, the sperm would remain alive.
More useful than preservation of testicles would be preservation
of the ejaculated sperm. By adding certain salts to the sperm-containing fluids, and by keeping these at cold temperatures, Ivanov was able to preserve sperm for long periods.
Ivanov also developed instruments to inject the sperm, to hold the vagina open during insemination, and to hold the horse in place during the procedure. In 1910, Ivanov wrote a practical textbook with technical instructions for the artificial insemination of horses.
He also trained some three hundred veterinary technicians in the use of artificial insemination, and the knowledge he developed quickly spread throughout the Soviet Union. Artificial insemination became the major means of breeding horses.
Until his death in 1932, Ivanov was active in researching many aspects of the reproductive biology of animals. He developed methods to treat reproductive diseases of farm animals and refined methods of obtaining, evaluating, diluting, preserving, and disinfecting sperm. He also began to produce hybrids between wild and domestic animals in the hope of producing new breeds that would be able to withstand extreme weather conditions better and that would be more resistant to disease.
His crosses included hybrids of ordinary cows with aurochs, bison, and yaks, as well as some more exotic crosses of zebras with horses.
Ivanov also hoped to use artificial insemination to help preserve species that were in danger of becoming extinct. In 1926, he led an expedition to West Africa to experiment with the hybridization of different species of anthropoid apes.
Impact
The greatest beneficiaries of artificial insemination have been dairy farmers. Some bulls are able to sire genetically superior cows that produce exceptionally large volumes of milk. Under natural conditions, such a bull could father at most a few hundred offspring in its lifetime. Using artificial insemination, a prize bull can inseminate ten to fifteen thousand cows each year. Since frozen sperm may be purchased through the mail, this also means that dairy farmers no longer need to keep dangerous bulls on the farm. Artificial insemination has become the main method of reproduction of dairy cows, with about 150 million cows (as of 1992) produced this way throughout the world.
In the 1980’s, artificial insemination gained added importance as a method of breeding rare animals. Animals kept in zoo cages, animals that are unable to take part in normal mating, may still produce sperm that can be used to inseminate a female artificially.
Some species require specific conditions of housing or diet for normal breeding to occur, conditions not available in all zoos. Such animals can still reproduce using artificial insemination.
Wednesday, December 17, 2008
Artificial hormone
The invention:
Synthesized oxytocin, a small polypeptide hormone
from the pituitary gland that has shown how complex polypeptides
and proteins may be synthesized and used in medicine.
The people behind the invention:
Vincent du Vigneaud (1901-1978), an American biochemist and
winner of the 1955 Nobel Prize in Chemistry
Oliver Kamm (1888-1965), an American biochemist
Sir Edward Albert Sharpey-Schafer (1850-1935), an English
physiologist
Sir Henry Hallett Dale (1875-1968), an English physiologist and
winner of the 1936 Nobel Prize in Physiology or Medicine
John Jacob Abel (1857-1938), an American pharmacologist and
biochemist
Body-Function Special Effects
In England in 1895, physician George Oliver and physiologist
Edward Albert Sharpey-Schafer reported that a hormonal extract
from the pituitary gland of a cow produced a rise in blood pressure
(a pressor effect) when it was injected into animals. In 1901, Rudolph
Magnus and Sharpey-Schafer discovered that extracts from
the pituitary also could restrict the flow of urine (an antidiuretic effect).
This observation was related to the fact that when a certain
section of the pituitary was removed surgically from an animal, the
animal excreted an abnormally large amount of urine.
In addition to the pressor and antidiuretic activities in the pituitary,
two other effects were found in 1909. Sir Henry Hallett Dale,
an English physiologist, was able to show that the extracts could
cause the uterine muscle to contract (an oxytocic effect), and Isaac
Ott and John C. Scott found that when lactating (milk-producing)
animals were injected with the extracts, milk was released from the
mammary gland.
Following the discovery of these various effects, attempts were
made to concentrate and isolate the substance or substances that
were responsible. John Jacob Abel was able to concentrate the pressor
activity at The Johns Hopkins University using heavy metal salts
and extraction with organic solvents. The results of the early work,
however, were varied. Some investigators came to the conclusion
that only one substance was responsible for all the activities, while
others concluded that two or more substances were likely to be involved.
In 1928, Oliver Kamm and his coworkers at the drug firm of
Parke, Davis and Company in Detroit reported a method for the
separation of the four activities into two chemical fractions with
high potency. One portion contained most of the pressor and antidiuretic
activities, while the other contained the uterine-contracting
and milk-releasing activities. Over the years, several names have
been used for the two substances responsible for the effects. The generic
name “vasopressin” generally has become the accepted term
for the substance causing the pressor and antidiuretic effects, while
the name “oxytocin” has been used for the other two effects. The
two fractions that Kamm and his group had prepared were pure
enough for the pharmaceutical firm to make them available for
medical research related to obstetrics, surgical shock, and diabetes
insipidus.
A Complicated Synthesis
The problem of these hormones and their nature interested Vincent
du Vigneaud at the George Washington University School of
Medicine. Working with Kamm, he was able to show that the sulfur
content of both the oxytocin and the vasopressin fractions was a result
of the amino acid cystine. This helped to strengthen the concept
that these hormones were polypeptide, or proteinlike, substances.
Du Vigneaud and his coworkers next tried to find a way of purifying
oxytocin and vasopressin. This required not only the separation
of the hormones themselves but also the separation from other impurities
present in the preparations.
During World War II (1939-1945) and shortly thereafter, other
techniques were developed that would give du Vigneaud the tools
he needed to complete the job of purifying and characterizing
the two hormonal factors. One of the most important was the
countercurrent distribution method of chemist Lyman C. Craig at
the Rockefeller Institute. Craig had developed an apparatus that
could do multiple extractions, making possible separations of substances
with similar properties. Du Vigneaud had used this technique
in purifying his synthetic penicillin, and when he returned to
the study of oxytocin and vasopressin in 1946, he used it on his purest
preparations. The procedure worked well, and milligram quantities
of pure oxytocin were available in 1949 for chemical characterization.
Using the available techniques, du Vigneaud and his coworkers
were able to determine the structure of oxytocin. It was du Vigneaud’s
goal to make synthetic oxytocin by duplicating the structure
his group had worked out. Eventually, du Vigneaud’s synthetic
oxytocin was obtained and the method published in the Journal of
the American Chemical Society in 1953.
Du Vigneaud’s oxytocin was next tested against naturally occurring
oxytocin, and the two forms were found to act identically in every
respect. In the final test, the synthetic form was found to induce
labor when given intravenously to women about to give birth. Also,
when microgram quantities of oxytocin were given intravenously
to women who had recently given birth, milk was released from the
mammary gland in less than a minute.
Consequences
The work of du Vigneaud and his associates demonstrated for
the first time that it was possible to synthesize peptides that have
properties identical to the natural ones and that these can be useful
in certain medical conditions. Oxytocin has been used in the last
stages of labor during childbirth. Vasopressin has been used in the
treatment of diabetes insipidus, when an individual has an insufficiency
in the natural hormone, much as insulin is used by persons
having diabetes mellitus.
After receiving the Nobel Prize in Chemistry in 1955, du Vigneaud
continued his work on synthesizing chemical variations of the two
hormones. By making peptides that differed from oxytocin and
vasopressin by one or more amino acids, it was possible to study how
the structure of the peptide was related to its physiological activity.
After the structure of insulin and some of the smaller proteins
were determined, they, too, were synthesized, although with greater
difficulty. Other methods of carrying out the synthesis of peptides
and proteins have been developed and are used today. The production
of biologically active proteins, such as insulin and growth hormone,
has been made possible by efficient methods of biotechnology.
The genes for these proteins can be put inside microorganisms,
which then make them in addition to their own proteins. The microorganisms
are then harvested and the useful protein hormones isolated
and purified.
See also: Abortion pill; Artificial blood; Birth control pill;
Genetically engineered insulin; Pap test.
Friday, December 12, 2008
Artificial heart
The invention:
The first successful artificial heart, the Jarvik-7, has
helped to keep patients suffering from otherwise terminal heart
disease alive while they await human heart transplants.
The people behind the invention:
Robert Jarvik (1946- ), the main inventor of the Jarvik-7
William Castle DeVries (1943- ), a surgeon at the University
of Utah in Salt Lake City
Barney Clark (1921-1983), a Seattle dentist, the first recipient of
the Jarvik-7
Early Success
The Jarvik-7 artificial heart was designed and produced by researchers
at the University of Utah in Salt Lake City; it is named for
the leader of the research team, Robert Jarvik. An air-driven pump
made of plastic and titanium, it is the size of a human heart. It is made
up of two hollow chambers of polyurethane and aluminum, each
containing a flexible plastic membrane. The heart is implanted in a
human being but must remain connected to an external air pump by
means of two plastic hoses. The hoses carry compressed air to the
heart, which then pumps the oxygenated blood through the pulmonary
artery to the lungs and through the aorta to the rest of the body.
The device is expensive, and initially the large, clumsy air compressor
had to be wheeled from room to room along with the patient.
The device was new in 1982, and that same year Barney Clark, a
dentist from Seattle, was diagnosed as having only hours to live.
His doctor, cardiac specialist William Castle DeVries, proposed surgically
implanting the Jarvik-7 heart, and Clark and his wife agreed.
The Food and Drug Administration (FDA), which regulates the use
of medical devices, had already given DeVries and his coworkers
permission to implant up to seven Jarvik-7 hearts for permanent use.
The operation was performed on Clark, and at first it seemed quite
successful. Newspapers, radio, and television reported this medical
breakthrough: the first time a severely damaged heart had been replaced by a totally artificial heart. It seemed DeVries had proved that an artificial heart could be almost as good as a human heart.
Soon after Clark’s surgery, DeVries went on to implant the device in several other patients with serious heart disease. For a time, all of them survived the surgery. As a result, DeVries was offered a position at Humana Hospital in Louisville, Kentucky. Humana offered to pay for the first one hundred implant operations.
The Controversy Begins
In the three years after DeVries’s operation on Barney Clark,
however, doubts and criticism arose. Of the people who by then had
received the plastic and metal device as a permanent replacement
for their own diseased hearts, three had died (including Clark) and
four had suffered serious strokes. The FDA asked Humana Hospital
and Symbion (the company that manufactured the Jarvik-7) for
complete, detailed histories of the artificial-heart recipients.
It was determined that each of the patients who had died or been
disabled had suffered from infection. Life-threatening infection, or
“foreign-body response,” is a danger with the use of any artificial
organ. The Jarvik-7, with its metal valves, plastic body, and Velcro
attachments, seemed to draw bacteria like a magnet—and these
bacteria proved resistant to even the most powerful antibiotics.
By 1988, researchers had come to realize that severe infection was
almost inevitable if a patient used the Jarvik-7 for a long period of
time. As a result, experts recommended that the device be used for
no longer than thirty days.
Questions of values and morality also became part of the controversy
surrounding the artificial heart. Some people thought that it
was wrong to offer patients a device that would extend their lives
but leave them burdened with hardship and pain. At times DeVries
claimed that it was worth the price for patients to be able to live another
year; at other times, he admitted that if he thought a patient
would have to spend the rest of his or her life in a hospital, he would
think twice before performing the implant.
There were also questions about “informed consent”—the patient’s
understanding that a medical procedure has a high risk of
failure and may leave the patient in misery even if it succeeds.
Getting truly informed consent from a dying patient is tricky, because,
understandably, the patient is probably willing to try anything.
The Jarvik-7 raised several questions in this regard: Was the ordeal worth the risk? Was the patient’s suffering justifiable? Who should make the decision for or against the surgery: the patient, the researchers, or a government agency?
Also there was the issue of cost. Should money be poured into expensive,
high-technology devices such as the Jarvik heart, or should
it be reserved for programs to help prevent heart disease in the first
place? Expenses for each of DeVries’s patients had amounted to
about one million dollars.
Humana’s and DeVries’s earnings were criticized in particular.
Once the first one hundred free Jarvik-7 implantations had been
performed, Humana Hospital could expect to make large amounts
of money on the surgery. By that time, Humana would have so
much expertise in the field that, though the surgical techniques
could not be patented, it was expected to have a practical monopoly.
DeVries himself owned thousands of shares of stock in Symbion.
Many people wondered whether this was ethical.
Consequences
Given all the controversies, in December of 1985 a panel of experts
recommended that the FDA allow the experiment to continue, but only with careful monitoring. Meanwhile, cardiac transplantation was becoming easier and more common. By the end of 1985, almost twenty-six hundred patients in various countries had received human heart transplants, and 76 percent of these patients had survived
for at least four years. When the demand for donor hearts exceeded the supply, physicians turned to the Jarvik device and other artificial hearts to help see patients through the waiting period.
Experience with the Jarvik-7 made the world keenly aware of
how far medical science still is from making the implantable permanent
mechanical heart a reality. Nevertheless, the device was a
breakthrough in the relatively new field of artificial organs. Since
then, other artificial body parts have included heart valves, blood
vessels, and inner ears that help restore hearing to the deaf.
William C. DeVries
William Castle DeVries did not invent the artificial heart
himself; however, he did develop the procedure to implant it.
The first attempt took him seven and a half hours, and he
needed fourteen assistants. A success, the surgery made DeVries
one of the most talked-about doctors in the world.
DeVries was born in Brooklyn, New York, in 1943. His father,
a Navy physician, was killed in action a few months later, and
his mother, a nurse, moved with her son to Utah. As a child
DeVries showed both considerable mechanical aptitude and
athletic prowess. He won an athletic scholarship to the University
of Utah, graduating with honors in 1966. He entered the
state medical school and there met Willem Kolff, a pioneer in
designing and testing artificial organs. Under Kolff’s guidance,
DeVries began performing experimental surgeries on animals
to test prototype mechanical hearts. He finished medical school
in 1970 and from 1971 until 1979 was an intern and then a resident
in surgery at the Duke University Medical Center in North
Carolina.
DeVries returned to the University of Utah as an assistant
professor of cardiovascular and thoracic surgery. In the meantime,
Robert K. Jarvik had devised the Jarvik-7 artificial heart.
DeVries experimented, implanting it in animals and cadavers
until, following approval from the Food and Drug Administration,
Barney Clark agreed to be the first test patient. He died 112
days after the surgery, having never left the hospital. Although
controversy arose over the ethics and cost of the procedure,
more artificial heart implantations followed, many by DeVries.
Long administrative delays getting patients approved for
surgery at Utah frustrated DeVries, so he moved to Humana
Hospital-Audubon in Louisville, Kentucky, in 1984 and then
took a professorship at the University of Louisville. In 1988 he
left experimentation for a traditional clinical practice. The FDA
withdrew its approval for the Jarvik-7 in 1990.
In 1999 DeVries retired from practice, but not from medicine.
The next year he joined the Army Reserve and began teaching
surgery at the Walter Reed Army Medical Center.
Saturday, December 6, 2008
Artificial blood
The invention:
A perfluorocarbon emulsion that serves as a blood
plasma substitute in the treatment of human patients.
The person behind the invention:
Ryoichi Naito (1906-1982), a Japanese physician.
Blood Substitutes
The use of blood and blood products in humans is a very complicated issue. Some substances present in blood serve no specific purpose and can be dangerous or deadly, especially when blood or blood products are taken from one person and given to another. This fact, combined with the necessity for long-term blood storage, a shortage of donors, and some patients’ refusal to use blood for religious reasons, brought about an intense search for a universal bloodlike substance. It was hoped that the life-sustaining properties of blood (for example, oxygen transport) could be replaced by a synthetic mixture of known chemicals.
Fluorocarbons are compounds that consist of molecules containing
only fluorine and carbon atoms. These compounds are interesting
to physiologists because they are chemically and pharmacologically
inert and because they dissolve oxygen and other gases.
Studies of fluorocarbons as blood substitutes began in 1966,
when it was shown that a mouse breathing a fluorocarbon liquid
treated with oxygen could survive. Subsequent research involved
the use of fluorocarbons to play the role of red blood cells in transporting
oxygen. Encouraging results led to the total replacement of
blood in a rat, and the success of this experiment led in turn to trials
in other mammals, culminating in 1979 with the use of fluorocarbons
in humans.
Clinical Studies
The chemical selected for the clinical studies was Fluosol-DA,
produced by the Japanese Green Cross Corporation. Fluosol-DA
consists of a 20 percent emulsion of two perfluorocarbons (perfluorodecalin
and perfluorotripropylamine), emulsifiers, and salts
that are included to give the chemical some of the properties of
blood plasma. Fluosol-DA had been tested in monkeys, and it had
shown a rapid reversible uptake and release of oxygen, a reasonably
rapid excretion, no carcinogenicity or irreversible changes in the animals’
systems, and the recovery of blood components to normal
ranges within three weeks of administration.
The clinical studies were divided into three phases. The first
phase consisted of the administration of Fluosol-DA to normal human
volunteers. Twelve healthy volunteers were administered the
chemical, and the emulsion’s effects on blood pressure and composition
and on heart, liver, and kidney functions were monitored. No
adverse effects were found in any case. The first phase ended in
March, 1979, and based on its positive results, the second and third
phases were begun in April, 1979.
Twenty-four Japanese medical institutions were involved in the
next two phases. The reasons for the use of Fluosol-DA instead of
blood in the patients involved were various, and they included refusal
of transfusion for religious reasons, lack of compatible blood,
“bloodless” surgery for protection from risk of hepatitis, and treatment
of carbon monoxide intoxication.
Among the effects noticed by the patients were the following: a
small increase in blood pressure, with no corresponding effects on
respiration and body temperature; an increase in blood oxygen content;
bodily elimination of half the chemical within six to nineteen
hours, depending on the initial dose administered; no change in
red-cell count or hemoglobin content of blood; no change in whole-blood
coagulation time; and no significant blood-chemistry changes.
These results made the clinical trials a success and opened the door
for other, more extensive ones.
Impact
Perfluorocarbon emulsions were initially proposed as oxygen-carrying
resuscitation fluids, or blood substitutes, and the results of
the pioneering studies show their success as such. Their success in
this area, however, led to advanced studies and expanded use of these compounds in many areas of clinical medicine and biomedical
research.
Perfluorocarbon emulsions are useful in cancer therapy, because
they increase the oxygenation of tumor cells and therefore sensitize
them to the effects of radiation or chemotherapy. Perfluorocarbons
can also be used as “contrasting agents” to facilitate magnetic resonance
imaging studies of various tissues; for example, the uptake of
particles of the emulsion by the cells of malignant tissues makes it
possible to locate tumors. Perfluorocarbons also have a high nitrogen
solubility and therefore can be used to alleviate the potentially
fatal effects of decompression sickness by “mopping up” nitrogen
gas bubbles from the circulation system. They can also be used to
preserve isolated organs and amputated extremities until they can
be reimplanted or reattached. In addition, the emulsions are used in
cell cultures to regulate gas supply and to improve cell growth and
productivity.
The biomedical applications of perfluorocarbon emulsions are
multidisciplinary, involving areas as diverse as tissue imaging, organ
preservation, cancer therapy, and cell culture. The successful
clinical trials opened the door for new applications of these
compounds, which rank among the most versatile compounds exploited
by humankind.
Wednesday, December 3, 2008
Aqualung
The invention:
A device that allows divers to descend hundreds of
meters below the surface of the ocean by enabling them to carry
the oxygen they breathe with them.
The people behind the invention:
Jacques-Yves Cousteau (1910-1997), a French navy officer,
undersea explorer, inventor, and author.
Émile Gagnan, a French engineer who invented an automatic
air-regulating device.
The Limitations of Early Diving
Undersea dives have been made since ancient times for the purposes
of spying, recovering lost treasures from wrecks, and obtaining
natural treasures (such as pearls). Many attempts have been made
since then to prolong the amount of time divers could remain underwater.
The first device, described by the Greek philosopher Aristotle
in 335 B.C.E., was probably the ancestor of the modern snorkel. It was
a bent reed placed in the mouth, with one end above the water.
In addition to depth limitations set by the length of the reed,
pressure considerations also presented a problem. The pressure on
a diver’s body increases by about one-half pound per square centimeter
for every meter ventured below the surface. After descending
about 0.9 meter, inhaling surface air through a snorkel becomes difficult
because the human chest muscles are no longer strong enough
to inflate the chest. In order to breathe at or below this depth, a diver
must breathe air that has been pressurized; moreover, that pressure
must be able to vary as the diver descends or ascends.
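To put these figures in perspective, the ambient pressure at any depth follows the standard hydrostatic relation P = P0 + ρgh. The short sketch below is only an illustration: the seawater density is an assumed round value, 0.9 meter is the snorkel limit given above, and the deeper example depths are ones that appear later in this entry.

```python
# A minimal sketch (assumed values): ambient pressure on a diver from the
# hydrostatic relation P = P0 + rho * g * depth.

P0 = 101_325.0         # atmospheric pressure at the surface, in pascals
RHO_SEAWATER = 1025.0  # approximate density of seawater, kg/m^3 (assumed)
G = 9.81               # gravitational acceleration, m/s^2

def ambient_pressure_pa(depth_m: float) -> float:
    """Absolute pressure, in pascals, acting on a diver at the given depth."""
    return P0 + RHO_SEAWATER * G * depth_m

# 0.9 m is the snorkel limit mentioned above; the deeper figures correspond
# to depths discussed later in this entry.
for depth in (0.9, 10.0, 42.6, 68.5):
    excess = ambient_pressure_pa(depth) - P0
    print(f"{depth:5.1f} m: {excess / 1000:6.1f} kPa above surface pressure "
          f"({excess / P0:.2f} atm extra)")
```

Even at about one meter, the extra load on the chest approaches a tenth of an atmosphere, more than the breathing muscles can comfortably work against when drawing surface-pressure air through a tube.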
Few changes were possible in the technology of diving until air
compressors were invented during the early nineteenth century.
Fresh, pressurized air could then be supplied to divers. At first, the
divers who used this method had to wear diving suits, complete
with fishbowl-like helmets. This “tethered” diving made divers relatively
immobile but allowed them to search for sunken treasure or
do other complex jobs at great depths.
The Development of Scuba Diving
The invention of scuba gear gave divers more freedom to
move about and made them less dependent on heavy equipment.
(“Scuba” stands for self-contained underwater breathing apparatus.)
Its development occurred in several stages. In 1880, Henry
Fleuss of England developed an outfit that used a belt containing
pure oxygen. Belt and diver were connected, and the diver breathed
the oxygen over and over. A version of this system was used by the
U.S. Navy in World War II spying efforts. Nevertheless, it had serious
drawbacks: Pure oxygen was toxic to divers at depths greater
than 9 meters, and divers could carry only enough oxygen for relatively
short dives. It did have an advantage for spies, namely, that
the oxygen—breathed over and over in a closed system—did not
reach the surface in the form of telltale bubbles.
The next stage of scuba development occurred with the design
of metal tanks that were able to hold highly compressed air.
This enabled divers to use air rather than the potentially toxic
pure oxygen. More important, being hooked up to a greater supply
of air meant that divers could stay under water longer. Initially,
the main problem with the system was that the air flowed continuously
through a mask that covered the diver’s entire face. This process
wasted air, and the scuba divers expelled a continual stream
of air bubbles that made spying difficult. The solution, according to
Axel Madsen’s Cousteau (1986), was “a valve that would allow inhaling
and exhaling through the same mouthpiece.”
Jacques-Yves Cousteau’s father was an executive for Air Liquide—
France’s main producer of industrial gases. He was able to direct
Cousteau to Émile Gagnan, an engineer at the company’s Paris laboratory
who had been developing an automatic gas shutoff valve for Air
Liquide. This valve became the Cousteau-Gagnan regulator, a breathing
device that fed air to the diver at just the right pressure whenever
he or she inhaled. With this valve—and funding from Air Liquide—Cousteau and
Gagnan set out to design what would become the Aqualung. The
first Aqualungs could be used at depths of up to 68.5 meters. During
testing, however, the dangers of Aqualung diving became apparent.
For example, unless divers ascended and descended in slow stages,
it was likely that they would get “the bends” (decompression sickness),
the feared disease of earlier, tethered deep-sea divers. Another
problem was that, below 42.6 meters, divers encountered nitrogen
narcosis. (This can lead to impaired judgment that may cause
fatal actions, including removing a mouthpiece or developing an
overpowering desire to continue diving downward, to dangerous
depths.) Cousteau believed that the Aqualung had tremendous military potential. During World War II, he traveled to London soon after the
Normandy invasion, hoping to persuade the Allied Powers of its
usefulness. He was not successful. So Cousteau returned to Paris
and convinced France’s new government to use Aqualungs to locate
and neutralize underwater mines laid along the French coast by
the German navy. Cousteau was commissioned to combine minesweeping
with the study of the physiology of scuba diving. Further
research revealed that the use of helium-oxygen mixtures increased
to 76 meters the depth to which a scuba diver could go without suffering
nitrogen narcosis.
Impact
One way to describe the effects of the development of the Aqualung
is to summarize Cousteau’s continued efforts to the present. In
1946, he and Philippe Tailliez established the Undersea Research
Group of Toulon to study diving techniques and various aspects of
life in the oceans. They studied marine life in the Red Sea from 1951
to 1952. From 1952 to 1956, they engaged in an expedition supported
by the National Geographic Society. By that time, the Research
Group had developed many techniques that enabled them to
identify life-forms and conditions at great depths.
Throughout their undersea studies, Cousteau and his coworkers
continued to develop better techniques for scuba diving, for recording
observations by means of still and television photography, and
for collecting plant and animal specimens. In addition, Cousteau
participated (with Swiss physicist Auguste Piccard) in the construction
of the deep-submergence research vehicle, or bathyscaphe. In
the 1960’s, he directed a program called Conshelf, which tested a
human’s ability to live in a specially built underwater habitat. He
also wrote and produced films on underwater exploration that attracted,
entertained, and educated millions of people.
Cousteau won numerous medals and scientific distinctions.
These include the Gold Medal of the National Geographic Society
(1963), the United Nations International Environment Prize (1977),
membership in the American and Indian academies of science (1968
and 1978, respectively), and honorary doctor of science degrees
from the University of California, Berkeley (1970), Harvard University
(1979), and Rensselaer Polytechnic Institute (1979).
Labels: Aqualung, Cousteau, Development, Diving, Émile, Gagnan, impact, Jacques-Yves, Scuba
Sunday, November 30, 2008
Apple II computer
The invention:
The first commercially available, preassembled
personal computer, the Apple II helped move computers out of
the workplace and into the home.
The people behind the invention:
Stephen Wozniak (1950- ), cofounder of Apple and designer
of the Apple II computer
Steven Jobs (1955-2011), cofounder of Apple
Regis McKenna (1939- ), owner of the Silicon Valley public
relations and advertising company that handled the Apple
account
Chris Espinosa (1961- ), the high school student who wrote
the BASIC program shipped with the Apple II
Randy Wigginton (1960- ), a high school student and Apple
software programmer
Inventing the Apple
As late as the 1960’s, not many people in the computer industry
believed that a small computer could be useful to the average person.
It was through the effort of two friends from the Silicon Valley—
the high-technology area between San Francisco and San Jose—
that the personal computer revolution was started.
Both Steven Jobs and Stephen Wozniak had attended Homestead
High School in Los Altos, California, and both developed early interests
in technology, especially computers. In 1971, Wozniak built
his first computer from spare parts. Shortly after this, he was introduced
to Jobs. Jobs had already developed an interest in electronics
(he once telephoned William Hewlett, cofounder of Hewlett-
Packard, to ask for parts), and he and Wozniak became friends.
Their first business together was the construction and sale of “blue
boxes,” illegal devices that allowed the user to make long-distance
telephone calls for free.
After attending college, the two took jobs within the electronics
industry. Wozniak began working at Hewlett-Packard, where he
studied calculator design, and Jobs took a job at Atari, the video
company. The friendship paid off again when Wozniak, at Jobs’s request,
designed the game “Breakout” for Atari, and the pair was
paid seven hundred dollars.
In 1975, the Altair computer, a personal computer in kit form,
was introduced by Micro Instrumentation and Telemetry Systems
(MITS). Shortly thereafter, the first personal computer club, the
Homebrew Computer Club, began meeting in Menlo Park, near
Stanford University. Wozniak and Jobs began attending the meeting
regularly. Wozniak eagerly examined the Altairs that others
brought. He thought that the design could be improved. In only a
few more weeks, he produced a circuit board and interfaces that
connected it to a keyboard and a video monitor. He showed the machine
at a Homebrew meeting and distributed photocopies of the
design.
In this new machine, which he named an “Apple,” Jobs saw a big
opportunity. He talked Wozniak into forming a partnership to develop
personal computers. Jobs sold his car, and Wozniak sold his
two Hewlett-Packard calculators; with the money, they ordered
printed circuit boards made. Their break came when Paul Terrell, a
retailer, was so impressed that he ordered fifty fully assembled Apples.
Within thirty days, the computers were completed, and they
sold for a fairly high price: $666.66.
During the summer of 1976, Wozniak kept improving the Apple.
The new computer would come with a keyboard, an internal power
supply, a built-in computer language called the Beginner’s All-Purpose Symbolic Instruction Code (BASIC), hookups for adding
printers and other devices, and color graphics, all enclosed in a plastic
case. The output would be seen on a television screen. The machine
would sell for twelve hundred dollars.
Selling the Apple
Regis McKenna was the head of the Regis McKenna Public Relations agency, the best of the public relations firms serving the valley’s high-technology industries, and Jobs wanted it to handle the Apple account. At first, McKenna rejected the offer, but
Jobs’s constant pleading finally convinced him. The agency’s first
contributions to Apple were the colorful striped Apple logo and a
color ad in Playboy magazine.
In February, 1977, the first Apple Computer office was opened in
Cupertino, California. By this time, two of Wozniak’s friends from
Homebrew, Randy Wigginton and Chris Espinosa—both high school
students—had joined the company. Their specialty was writing software.
Espinosa worked through his Christmas vacation so that BASIC
(the built-in computer language) could ship with the computer.
The team pushed ahead to complete the new Apple in time to
display it at the First West Coast Computer Faire in April, 1977. At
this time, the name “Apple II” was chosen for the new model. The
Apple II computer debuted at the convention and included many
innovations. The “motherboard” was far simpler and more elegantly
designed than that of any previous computer, and the ease of
connecting the Apple II to a television screen made it that much
more attractive to consumers.
Consequences
The introduction of the Apple II computer launched what was to
be a wave of new computers aimed at the home and small-business
markets. Within a few months of the Apple II’s introduction, Commodore
introduced its PET computer and Tandy Corporation/Radio
Shack brought out its TRS-80. Apple continued to increase the
types of things that its computers could do and worked out a distribution
deal with the new ComputerLand chain of stores.
In December, 1977, Wozniak began work on creating a floppy
disk system for the Apple II. (A floppy disk is a small, flexible plastic
disk coated with magnetic material. The magnetized surface enables
computer data to be stored on the disk.) The cassette tape storage
on which all personal computers then depended was slow and
unreliable. Floppy disks, which had been introduced for larger computers
by the International Business Machines (IBM) Corporation in
1970, were fast and reliable. As he did with everything that interested
him, Wozniak spent almost all of his time learning about and
designing a floppy disk drive. When the final drive shipped in June,
1978, it made possible development of more powerful software for
the computer.
By 1980, Apple had sold 130,000 Apple II’s. That year, the company
went public, and Jobs and Wozniak, among others, became
wealthy. Three years later, Apple became the youngest company to
make the Fortune 500 list of the largest industrial companies. By
then, IBM had entered the personal computer field and had begun
to dominate it, but the Apple II’s earlier success ensured that personal
computers would not be a market fad. By the end of the
1980’s, 35 million personal computers would be in use.
Steven Jobs
While IBM and other corporations were devoting massive
resources and talent to designing a small computer in 1975,
Steven Paul Jobs and Stephen Wozniak, members of the tiny
Homebrew Computer Club, put together the first truly user-friendly
personal computer in Wozniak’s home. Jobs admitted
later that “Woz” was the engineering brains. Jobs himself was
the brains of design and marketing. Both had to scrape together
money for the project from their small salaries as low-level electronics
workers. Within eight years, Jobs headed the most progressive
company in the new personal computer industry and
was worth an estimated $210 million.
Little in his background foretold such fast, large material
success. Jobs was born in 1955 to parents who put him up for adoption. Adopted
by Paul and Clara Jobs, he grew up in California towns near the
area that became known as Silicon Valley. He did not like school
much and was considered a loner, albeit one who always had a
distinctive way of thinking about things. Still in high school, he
impressed William Hewlett, cofounder of Hewlett-Packard in
Palo Alto, and won a summer job at the company, as well as
some free equipment for one of his school projects.
However, he dropped out of Reed College after one semester
and became a hippie. He studied philosophy and Chinese
and Indian mysticism. He became a vegetarian and practiced
meditation. He even shaved his head and traveled to India on a
spiritual pilgrimage. When he returned to America, however,
he also returned to his interest in electronics and computers.
Through various jobs at his original company, Apple, and elsewhere, he stayed with that interest.
See also: BINAC computer; Colossus computer; ENIAC computer; Floppy disk; Hard disk; IBM Model 1401 Computer; Personal computer; Wikipedia - Steven Jobs
Thursday, November 27, 2008
Antibacterial drugs
Mechanisms of genetic resistance to antimicrobial agents:
Bacteria have developed, or will develop, genetic resistance to all known antimicrobial agents now in the marketplace. The five main mechanisms that bacteria use to resist antibacterial drugs are listed below.
a | The site of action (an enzyme, a ribosome, or a cell-wall precursor) can be altered. For example, acquiring a plasmid or transposon that codes for a resistant dihydrofolate reductase confers trimethoprim resistance on the bacterium.
b | The inhibited steps can be bypassed.
c | Bacteria can reduce the intracellular concentration of the antimicrobial agent, either by reducing membrane permeability, as Pseudomonas aeruginosa does, or by actively pumping the agent out of the cell.
d | They can inactivate the drug. For example, some bacteria produce beta-lactamase, which destroys the penicillin beta-lactam ring.
e | The target enzyme can be overproduced by the bacteria.
The invention:
Sulfonamides and other drugs that have proved effective
in combating many previously untreatable bacterial diseases.
The people behind the invention:
Gerhard Domagk (1895-1964), a German physician who was
awarded the 1939 Nobel Prize in Physiology or Medicine
Paul Ehrlich (1854-1915), a German chemist and bacteriologist
who was the cowinner of the 1908 Nobel Prize in Physiology
or Medicine.
The Search for Magic Bullets
Although quinine had been used to treat malaria long before the
twentieth century, Paul Ehrlich, who discovered a large number of
useful drugs, is usually considered the father of modern chemotherapy.
Ehrlich was familiar with the technique of using dyes to stain
microorganisms in order to make them visible under a microscope,
and he suspected that some of these dyes might be used to poison
the microorganisms responsible for certain diseases without hurting
the patient. Ehrlich thus began to search for dyes that could act
as “magic bullets” that would destroy microorganisms and cure
diseases. From 1906 to 1910, Ehrlich tested numerous compounds
that had been developed by the German dye industry. He eventually
found that a number of complex trypan dyes would inhibit the
protozoans that caused African sleeping sickness.
Ehrlich and his coworkers also synthesized hundreds of organic
compounds that contained arsenic. In 1910, he found that one of
these compounds, salvarsan, was useful in curing syphilis, a sexually
transmitted disease caused by the bacterium Treponema pallidum. This
was an important discovery, because syphilis killed thousands of
people each year. Salvarsan, however, was often toxic to patients,
because it had to be taken in large doses for as long as two years to
effect a cure. Ehrlich thus searched for and found a less toxic arsenic
compound, neosalvarsan, which replaced salvarsan in 1912.
In 1915, tartar emetic (a compound containing the metal antimony)
was found to be useful in treating kala-azar, which was
caused by a protozoan. Kala-azar affected millions of people in Africa,
India, and Asia, causing much suffering and many deaths each
year. Two years later, it was discovered that injection of tartar emetic
into the blood of persons suffering from bilharziasis killed the
flatworms infecting the bladder, liver, and spleen. In 1920, suramin,
a colorless compound developed from trypan red, was introduced
to treat African sleeping sickness. It was much less toxic to the patient
than any of the drugs Ehrlich had developed, and a single dose
would give protection for more than a month. From the dye methylene
blue, chemists made mepacrine, a drug that was effective
against the protozoans that cause malaria. This chemical was introduced
in 1933 and used during World War II; its principal drawback
was that it could cause a patient’s skin to become yellow.
Well Worth the Effort
Gerhard Domagk had been trained in medicine, but he turned to
research in an attempt to discover chemicals that would inhibit or
kill microorganisms. In 1927, he became director of experimental
pathology and bacteriology at the Elberfeld laboratories of the German
chemical firm I. G. Farbenindustrie. Ehrlich’s discovery that
trypan dyes selectively poisoned microorganisms suggested to Domagk
that he look for antimicrobials in a new group of chemicals
known as azo dyes. A number of these dyes were synthesized
from sulfonamides and purified by Fritz Mietzsch and Josef Klarer.
Domagk found that many of these dyes protected mice infected
with the bacterium Streptococcus pyogenes. In 1932, he discovered that
one of these dyes was much more effective than any tested previously.
This red azo dye containing a sulfonamide was named prontosil
rubrum.
From 1932 to 1935, Domagk began a rigorous testing program to
determine the effectiveness and dangers of prontosil use at different
doses in animals. Since all chemicals injected into animals or humans
are potentially dangerous, Domagk determined the doses that
harmed or killed. In addition, he worked out the lowest doses that
would eliminate the pathogen. The firm supplied samples of the drug to physicians to carry out clinical trials on humans. (Animal
experimentation can give only an indication of which chemicals
might be useful in humans and which doses are required.)
Domagk thus learned which doses were effective and safe. This
knowledge saved his daughter’s life. One day while knitting, Domagk’s
daughter punctured her finger with a needle and was infected
with virulent bacteria, which quickly multiplied and spread
from the wound into neighboring tissues. In an attempt to alleviate
the swelling, the infected area was lanced and allowed to drain, but
this did not stop the infection from spreading. The child became
critically ill with developing septicemia, or blood poisoning.
In those days, more than 75 percent of those who acquired blood
infections died. Domagk realized that the chances for his daughter’s
survival were poor. In desperation, he obtained some of the powdered
prontosil that had worked so well on infected animals. He extrapolated
from his animal experiments how much to give his
daughter so that the bacteria would be killed but his daughter
would not be poisoned. Within hours of the first treatment, her fever
dropped, and she recovered completely after repeated doses of
prontosil.
Impact
Directly and indirectly, Ehrlich’s and Domagk’s work served to
usher in a new medical age. Prior to the discovery that prontosil
could be used to treat bacterial infection and the subsequent development
of a series of sulfonamides, or “sulfa drugs,” there was no
chemical defense against this type of disease; as a result, illnesses
such as streptococcal infection, gonorrhea, and pneumonia held terrors
of which they have largely been shorn. A small injury could easily
lead to death.
By following the clues presented by the synthetic sulfa drugs and
how they worked to destroy bacteria, other scientists were able to
develop an even more powerful type of drug, the antibiotic. When
the American bacteriologist Rene Dubos discovered that natural organisms
could also be used to fight bacteria, interest was renewed in
an earlier discovery by the Scottish bacteriologist Sir Alexander Fleming: penicillin.
Antibiotics such as penicillin and streptomycin have become
some of the most important tools in fighting disease. Antibiotics
have replaced sulfa drugs for most uses, in part because they cause
fewer side effects, but sulfa drugs are still used for a handful of purposes.
Together, sulfonamides and antibiotics have offered the possibility
of a cure to millions of people who previously would have
had little chance of survival.
Labels: agents, Antibacterial, antimicrobial, Domagk, drugs, genetic, Gerhard, impact, inventor, Mechanisms, resistance, to
Sunday, November 23, 2008
Amniocentesis
The invention:
A technique for removing amniotic fluid from
pregnant women, amniocentesis became a life-saving tool for diagnosing
fetal maturity, health, and genetic defects.
The people behind the invention:
Douglas Bevis, an English physician
Aubrey Milunsky (1936- ), an American pediatrician
How Babies Grow
For thousands of years, the inability to see or touch a fetus in the
uterus was a staggering problem in obstetric care and in the diagnosis
of the future mental and physical health of human offspring. A
beginning to the solution of this problem occurred on February 23,
1952, when The Lancet published a study called “The Antenatal Prediction
of Hemolytic Disease of the Newborn.” This study, carried
out by physician Douglas Bevis, described the use of amniocentesis
to assess the risk factors found in the fetuses of Rh-negative women
impregnated by Rh-positive men. The article is viewed by many as a
landmark in medicine that led to the wide use of amniocentesis as a
tool for diagnosing fetal maturity, fetal health, and fetal genetic
defects.
At the beginning of a human pregnancy (conception) an egg and
a sperm unite to produce the fertilized egg that will become a new
human being. After conception, the fertilized egg passes from the
oviduct into the uterus, while dividing and becoming an organized
cluster of cells capable of carrying out different tasks in the nine-month-long series of events leading up to birth.
About a week after conception, the cluster of cells, now a “vesicle”
(a fluid-filled sac containing the new human cells), attaches
to the uterine lining, penetrates it, and becomes intimately intertwined
with uterine tissues. In time, the merger between the vesicle
and the uterus results in formation of a placenta that connects the
mother and the embryo, and an amniotic sac filled with the amniotic
fluid in which the embryo floats.
Eight weeks after conception, the embryo (now a fetus) is about 2.5 centimeters long and possesses all the anatomic elements it will have when it is born.
At this time, about two and one-half months after her last menstruation,
the expectant mother typically visits a physician and finds out she is pregnant.
Also at this time, expecting mothers often begin to worry about possible birth
defects in the babies they carry.
Diabetic mothers and mothers older than thirty-five years have higher than usual
chances of delivering babies who have birth defects.
Many other factors inferred from the medical history an expecting
mother provides to her physician can indicate the possible appearance
of birth defects. In some cases, knowledge of possible
physical problems in a fetus may allow their treatment in the uterus
and save the newborn from problems that could persist throughout
life or lead to death in early childhood. Information is obtained
through the examination of the amniotic fluid in which the fetus is
suspended throughout pregnancy. The process of obtaining this
fluid is called “amniocentesis.”
Diagnosing Diseases Before Birth
Amniocentesis is carried out in several steps. First, the placenta
and the fetus are located by the use of ultrasound techniques. Next,
the expecting mother may be given a local anesthetic; a long needle
is then inserted carefully into the amniotic sac. As soon as amniotic
fluid is seen, a small sample (about four teaspoons) is drawn into a
hypodermic syringe and the syringe is removed. Amniocentesis is
nearly painless, and most patients feel only a little abdominal pressure
during the procedure.
The amniotic fluid of early pregnancy resembles blood serum.
As pregnancy continues, its content of substances from fetal urine
and other fetal secretions increases. The fluid also contains fetal cells
from skin and from the gastrointestinal, reproductive, and respiratory
tracts. Therefore, it is of great diagnostic use. Immediately after
the fluid is removed, the fetal cells are separated out.
Then, the cells are used for genetic analysis and the amniotic fluid is
examined by means of various biochemical techniques.
One important use of the amniotic fluid from amniocentesis is
the determination of its lecithin and sphingomyelin content. Lecithins
and sphingomyelins are two types of body lipids (fatty molecules)
that are useful diagnostic tools. Lecithins are important because
they are essential components of the so-called pulmonary
surfactant of mature lungs. The pulmonary surfactant acts at lung
surfaces to prevent the collapse of the lung air sacs (alveoli) when a
person exhales.
Subnormal lecithin production in a fetus indicates that it most
likely will exhibit respiratory distress syndrome or a disease called
“hyaline membrane disease” after birth. Both diseases can be fatal,
so it is valuable to determine whether fetal lecithin levels are adequate
for appropriate lung function in the newborn baby. This is
particularly important in fetuses being carried by diabetic mothers,
who frequently produce newborns with such problems. Often, when
the risk of respiratory distress syndrome is identified through amniocentesis,
the fetus in question is injected with hormones that help it
produce mature lungs. This effect is then confirmed by the repeated
use of amniocentesis. Many other problems can also be identified by
the use of amniocentesis and corrected before the baby is born.
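In clinical practice, the lecithin measurement described above is usually read together with the sphingomyelin level as a lecithin-to-sphingomyelin (L/S) ratio. The sketch below is only an illustration of that calculation; the 2.0 maturity cutoff is a common clinical convention assumed here, not a figure given in this entry.

```python
# Illustrative sketch of the lecithin/sphingomyelin (L/S) ratio check.
# The 2.0 cutoff is a common clinical convention and is an assumption here,
# not a value taken from the original text.

def ls_ratio(lecithin: float, sphingomyelin: float) -> float:
    """L/S ratio from the two measured concentrations (same units)."""
    if sphingomyelin <= 0:
        raise ValueError("sphingomyelin concentration must be positive")
    return lecithin / sphingomyelin

def lungs_likely_mature(lecithin: float, sphingomyelin: float,
                        cutoff: float = 2.0) -> bool:
    """True if the L/S ratio meets or exceeds the assumed maturity cutoff."""
    return ls_ratio(lecithin, sphingomyelin) >= cutoff

# Hypothetical measurements (arbitrary but consistent units):
print(lungs_likely_mature(5.0, 2.9))  # ratio ~1.7 -> False, higher risk of RDS
print(lungs_likely_mature(7.5, 3.0))  # ratio 2.5  -> True
```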
Consequences
In the years that have followed Bevis’s original observation, many
improvements in the methodology of amniocentesis and in the techniques
used in gathering and analyzing the genetic and biochemical
information obtained have led to good results. Hundreds of debilitating hereditary diseases can be diagnosed, and some ameliorated, by the examination of amniotic fluid and fetal cells isolated by amniocentesis.
For many parents who have had a child afflicted by some hereditary
disease, the use of the technique has become a major consideration
in family planning. Furthermore, many physicians recommend strongly
that all mothers over the age of thirty-four be tested by amniocentesis
to assist in the diagnosis of Down syndrome, a congenital but nonhereditary
form of mental deficiency.
There remains the question of whether such solutions are morally
appropriate, but parents—and society—now have a choice resulting
from the techniques that have developed since Bevis’s 1952
observation. It is also hoped that these techniques will lead to
means for correcting and preventing diseases and preclude the need
for considering the therapeutic termination of any pregnancy.
See also: Abortion pill; Birth control pill; CAT scanner; Electrocardiogram; Electroencephalogram; Mammography; Nuclear magnetic resonance; Pap test.
Wednesday, November 19, 2008
Ammonia
The invention:
The first successful method for converting nitrogen
from the atmosphere and combining it with hydrogen to synthesize
ammonia, a valuable compound used as a fertilizer.
The person behind the invention:
Fritz Haber (1868-1934), a German chemist who won the 1918
Nobel Prize in Chemistry
The Need for Nitrogen
The nitrogen content of the soil, essential to plant growth, is
maintained normally by the deposition and decay of old vegetation
and by nitrates in rainfall. If, however, the soil is used extensively
for agricultural purposes, more intensive methods must be used to
maintain soil nutrients such as nitrogen. One such method is crop
rotation, in which successive divisions of a farm are planted in rotation
with clover, corn, or wheat, for example, or allowed to lie fallow
for a year or so. The clover is able to absorb nitrogen from the air and
deposit it in the soil through its roots. As population has increased,
however, farming has become more intensive, and the use of artificial
fertilizers—some containing nitrogen—has become almost universal.
Nitrogen-bearing compounds, such as potassium nitrate and
ammonium chloride, have been used for many years as artificial fertilizers.
Much of the nitrate used, mainly potassium nitrate, came
from Chilean saltpeter, of which a yearly amount of half a million
tons was imported at the beginning of the twentieth century into
Europe and the United States for use in agriculture. Ammonia was
produced by dry distillation of bituminous coal and other low-grade
fuel materials. Originally, coke ovens discharged this valuable
material into the atmosphere, but more economical methods
were found later to collect and condense these ammonia-bearing
vapors.
At the beginning of the twentieth century, Germany had practically
no source of fertilizer-grade nitrogen; almost all of its supply
came from the deserts of northern Chile. As demand for nitrates increased,
it became apparent that the supply from these vast deposits
would not be enough. Other sources needed to be found, and the almost
unlimited supply of nitrogen in the atmosphere (80 percent nitrogen)
was an obvious source.
Temperature and Pressure
When Fritz Haber and his coworkers began their experiments on ammonia
production in 1904, Haber decided to repeat the experiments
of the British chemists Sir William Ramsay and Sydney Young, who
in 1884 had studied the decomposition of ammonia at about 800 degrees
Celsius. They had found that a certain amount of ammonia
was always left undecomposed. In other words, the reaction between
ammonia and its constituent elements—nitrogen and hydrogen—
had reached a state of equilibrium.
Haber decided to determine the point at which this equilibrium
took place at temperatures near 1,000 degrees Celsius. He tried several
approaches, reacting pure hydrogen with pure nitrogen, and
starting with pure ammonia gas and using iron filings as a catalyst.
(Catalytic agents speed up a reaction without otherwise changing its outcome.)
Having determined the point of equilibrium, he next tried different
catalysts and found nickel to be as effective as iron, and calcium
and manganese even better. At 1,000 degrees Celsius, the rate of reaction
was high enough to produce practical amounts of ammonia continuously.
Further work by Haber showed that increasing the pressure also
increased the percentage of ammonia at equilibrium. For example,
at 300 degrees Celsius, the percentage of ammonia at equilibrium at
1 atmosphere of pressure was very small, but at 200 atmospheres,
the percentage of ammonia at equilibrium was far greater. A pilot
plant was constructed and was successful enough to impress a
chemical company, Badische Anilin- und Soda-Fabrik (BASF). BASF
agreed to study Haber’s process and to investigate different catalysts
on a large scale. Soon thereafter, the process became a commercial
success.
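A small numerical sketch helps make the pressure effect concrete. The
reaction N2 + 3 H2 ⇌ 2 NH3 turns four volumes of gas into two, so raising
the total pressure pushes the equilibrium toward ammonia. In the Python
sketch below, the equilibrium constant Kp is an assumed round figure
chosen only to show the trend; it is not one of Haber’s measured values.

    # Illustrative only: why raising the pressure raises the equilibrium
    # ammonia fraction for N2 + 3 H2 <-> 2 NH3. Kp is an assumed round number.

    def ammonia_fraction(total_pressure_atm, kp=1e-5):
        """Equilibrium NH3 mole fraction for a 1:3 nitrogen-hydrogen feed,
        found by bisecting on x, the fraction of the nitrogen converted."""
        def excess(x):
            n_n2, n_h2, n_nh3 = 1 - x, 3 * (1 - x), 2 * x
            total = n_n2 + n_h2 + n_nh3              # = 4 - 2x moles of gas
            p = total_pressure_atm
            y_n2, y_h2, y_nh3 = n_n2 / total, n_h2 / total, n_nh3 / total
            # Kp = P_NH3^2 / (P_N2 * P_H2^3); a positive result means the
            # reaction has not yet gone far enough forward.
            return kp - (y_nh3 * p) ** 2 / ((y_n2 * p) * (y_h2 * p) ** 3)
        lo, hi = 0.0, 1.0 - 1e-9
        for _ in range(100):                         # simple bisection
            mid = (lo + hi) / 2
            if excess(mid) > 0:
                lo = mid
            else:
                hi = mid
        x = (lo + hi) / 2
        return 2 * x / (4 - 2 * x)

    for pressure in (1, 200):
        print(pressure, "atm:", round(ammonia_fraction(pressure), 3))

With this assumed Kp, the ammonia fraction climbs from roughly 0.1 percent
at 1 atmosphere to roughly 15 percent at 200 atmospheres, which is the
qualitative behavior described above; Haber’s actual yields depended on
the temperature and the catalyst as well.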
Impact
With the beginning of World War I, nitrates were needed more
urgently for use in explosives than in agriculture. After the fall of
Antwerp, 50,000 tons of Chilean saltpeter were discovered in the
harbor and fell into German hands. Because the ammonia from
Haber’s process could be converted readily into nitrates, it became
an important war resource. Haber’s other contribution to the German
war effort was his development of poison gas, which was used
for the chlorine gas attack on Allied troops at Ypres in 1915. He also
directed research on gas masks and other protective devices.
At the end of the war, the 1918 Nobel Prize in Chemistry was
awarded to Haber for his development of the process for making
synthetic ammonia. Because the war was still fresh in everyone’s
memory, it became one of the most controversial Nobel awards ever
made. A headline in The New York Times for January 26, 1920, stated:
“French Attack Swedes for Nobel Prize Award: Chemistry Honor
Given to Dr. Haber, Inventor of German Asphyxiating Gas.” In a letter
to the Times on January 28, 1920, the Swedish legation in Washington,
D.C., defended the award.
Haber left Germany in 1933 under duress from the anti-Semitic
policies of the Nazi authorities. He was invited to accept a position
with the University of Cambridge, England, and died on a trip to
Basel, Switzerland, a few months later, a great man whose spirit had
been crushed by the actions of an evil regime.
Fritz Haber
Fritz Haber’s career is a warning to inventors: Beware of
what you create, even if your intentions are honorable.
Considered a leading chemist of his age, Haber was born in
Breslau (now Wroclaw, Poland) in 1868. A brilliant student, he
earned a doctorate quickly, specializing in organic chemistry,
and briefly worked as an industrial chemist. Although he soon
took an academic job, throughout his career Haber believed
that science must benefit society—new theoretical discoveries
must find practical applications.
Beginning in 1904, he applied new chemical techniques
to fix atmospheric nitrogen in the form of ammonia.
Nitrogen in the form of nitrates was urgently
sought because nitrates were necessary to fertilize
crops and natural sources were becoming rare. Only
artificial nitrates could sustain the amount of agriculture
needed to feed expanding populations.
In 1908 Haber succeeded in finding an efficient, cheap process
to make ammonia and convert it to nitrates, and
by 1910 German manufacturers had built large plants
to exploit his techniques.
He was lauded as a great benefactor to humanity.
However, his efforts to help Germany during World War I,
even though he hated war, turned his life into a nightmare. His
wife committed suicide because of his chlorine gas research,
which also poisoned his international reputation and tainted
his 1918 Nobel Prize in Chemistry. After the war he redirected
his energies to helping Germany rebuild its economy. Eight
years of experiments in extracting gold from seawater ended in
failure, but he did raise the Kaiser Wilhelm Institute for Physical
Chemistry, which he directed, to international prominence.
Nonetheless, Haber had to flee Adolf Hitler’s Nazi regime in
1933 and died a year later, better known for his war research
than for his fundamental service to agriculture and industry.
See also: Fuel cell; Refrigerant gas; Silicones.
Monday, November 17, 2008
Alkaline storage battery
The invention:
The nickel-iron alkaline battery was a lightweight,
inexpensive portable power source for vehicles with electric motors.
The people behind the invention:
Thomas Alva Edison (1847-1931), American chemist, inventor,
and industrialist
Henry Ford (1863-1947), American inventor and industrialist
Charles F. Kettering (1876-1958), American engineer and
inventor
A Three-Way Race
The earliest automobiles were little more than pairs of bicycles
harnessed together within a rigid frame, and there was little agreement
at first regarding the best power source for such contraptions.
The steam engine, which was well established for railroad and ship
transportation, required an external combustion area and a boiler.
Internal combustion engines required hand cranking, which could
cause injury if the motor backfired. Electric motors were attractive
because they did not require the burning of fuel, but they required
batteries that could store a considerable amount of energy and
could be repeatedly recharged. Ninety percent of the motorcabs in
use in New York City in 1899 were electrically powered.
The first practical storage battery, which was invented by the
French physicist Gaston Planté in 1859, employed electrodes (conductors
that bring electricity into and out of a conducting medium)
of lead and lead oxide and a sulfuric acid electrolyte (a solution
that conducts electricity). In somewhat improved form, this
remained the only practical rechargeable battery at the beginning
of the twentieth century. Edison considered the lead acid cell (battery)
unsuitable as a power source for electric vehicles because using
lead, one of the densest metals known, resulted in a heavy
battery that added substantially to the power requirements of a
motorcar. In addition, the use of an acid electrolyte required that
the battery container be either nonmetallic or coated with a nonmetal,
which made it less dependable than a steel container.
The Edison Battery
In 1900, Edison began experiments aimed at developing a rechargeable
battery with inexpensive and lightweight metal electrodes and an
alkaline electrolyte so that a metal container could be used. He had already
been involved in manufacturing the nonrechargeable battery
known as the Lalande cell, which had zinc and copper oxide electrodes
and a highly alkaline sodium hydroxide electrolyte. Zinc electrodes
could not be used in a rechargeable cell because the zinc would
dissolve in the electrolyte. The copper electrode also turned out to be
unsatisfactory. After much further experimentation, Edison settled
on the nickel-iron system for his new storage battery. In this system,
the power-producing reaction involved the conversion of nickel oxide
to nickel hydroxide together with the oxidation of iron metal to
iron oxide, with both materials in contact with a potassium hydroxide
solution. When the battery was recharged, the nickel hydroxide
was converted into oxide and the iron oxide was converted back to
the pure metal. Although the basic ingredients of the Edison cell were
inexpensive, they could not readily be obtained in adequate purity
for battery use.
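In modern textbook shorthand, the overall discharging reaction of a
nickel-iron cell is usually written as shown below, with hydroxides
rather than the simple “oxides” of the description above; the potassium
hydroxide electrolyte carries the current but is not consumed, which is
why an alkali-resistant steel container could be used. The figure of
about 1.2 volts is the commonly quoted nominal cell voltage, not a
number taken from Edison’s own records.

    Fe + 2 NiO(OH) + 2 H2O  →  Fe(OH)2 + 2 Ni(OH)2      (discharge, about 1.2 volts per cell)
    Fe(OH)2 + 2 Ni(OH)2  →  Fe + 2 NiO(OH) + 2 H2O      (recharge)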
Edison set up a new chemical works to prepare the needed materials.
He purchased impure nickel alloy, which was then dissolved
in acid, purified, and converted to the hydroxide. He prepared
pure iron powder by using a multiple-step process. For use
in the battery, the reactant powders had to be packed in pockets
made of nickel-plated steel that had been perforated to allow
the iron and nickel powders to come into contact with the electrolyte.
Because the nickel compounds were poor electrical conductors,
a flaky type of graphite was mixed with the nickel hydroxide at
this stage.
Sales of the new Edison storage battery began in 1904, but within
six months it became apparent that the battery was subject to losses
in power and a variety of other defects. Edison took the battery off
the market in 1905 and offered full-price refunds for the defective
batteries. Not a man to abandon an invention, however, he spent the
next five years examining the failed batteries and refining his design.
He discovered that the repeated charging and discharging of
the battery caused a shift in the distribution of the graphite in the
nickel hydroxide electrode. By using a different type of graphite, he
was able to eliminate this problem and produce a very dependable
power source.
The Ford Motor Company, founded by Henry Ford, a former
Edison employee, began the large-scale production of gasoline-powered
automobiles in 1903 and introduced the inexpensive, easy-to-drive
Model T in 1908. The introduction of the improved Edison
battery in 1910 gave a boost to electric car manufacturers, but their
new position in the market would be short-lived. In 1911, Charles
Kettering invented an electric starter for gasoline-powered vehicles
that eliminated the need for troublesome and risky hand cranking.
By 1915, this device was available on all gasoline-powered automobiles,
and public interest in electrically powered cars rapidly diminished.
Although the Kettering starter required a battery, it needed far less
capacity than an electric motor would have, and it was almost
ideally suited to the six-volt lead-acid battery.
Impact
Edison lost the race to produce an electrical power source that
would meet the needs of automotive transportation. Instead, the internal
combustion engine, championed by Henry Ford, became the standard.
Interest in electrically powered transportation diminished as
immense reserves of crude oil, from which gasoline could be obtained,
were discovered first in the southwestern United States and
then on the Arabian peninsula. Nevertheless, the Edison cell found
a variety of uses and has been manufactured continuously throughout
most of the twentieth century much as Edison designed it.
Electrically powered trucks proved to be well suited for local deliveries,
and some department stores maintained fleets of such
trucks into the mid-1920’s. Electrical power is still preferable to internal
combustion for indoor use, where exhaust fumes are a significant
problem, so forklifts in factories and passenger transport vehicles
at airports still make use of the Edison-type power source. The
Edison battery also continues to be used in mines, in railway signals,
in some communications equipment, and as a highly reliable
source of standby emergency power.
Thomas Alva Edison
Thomas Alva Edison (1847-1931) was America’s most famous
and prolific inventor. His astonishing success story, rising
from a home-schooled child who worked as a newsboy to
a leader in American industry, was celebrated in children’s
books, biographies, and movies. Corporations still bear his
name, and his inventions and improvements of others’ inventions—
such as the light bulb, phonograph, and motion picture—
shaped the way Americans live, work, and entertain
themselves. The U.S. Patent Office issued Edison 1,093 patents
during his lifetime, the most granted to one person.
Hailed as a genius, Edison himself emphasized the value of
plain determination. “Genius is one percent inspiration and ninety-nine
percent perspiration,” he insisted. He also understood the value
of working with others. In fact, one of his greatest contributions
to American technology involved organized research. At age
twenty-three he sold the rights to his first major invention,
an improved ticker-tape machine for Wall Street brokers, for
$40,000. He invested the money in building an industrial research
laboratory, the first ever. It led to his large facilities at
Menlo Park, New Jersey, and, later, labs in other locations. At
times as many as one hundred people worked for him, some of
whom, such as Nikola Tesla and Reginald Fessenden, became
celebrated inventors in their own right.
At his labs Edison not only developed electrical items, such
as the light bulb and storage battery; he also produced an efficient
mimeograph and worked on innovations in metallurgy,
organic chemistry, photography and motion pictures, and phonography.
The phonograph, he once said, was his favorite invention.
Edison never stopped working. He was still receiving patents
the year he died.
Saturday, November 15, 2008
Airplane
The invention:
The first heavier-than-air craft to fly, the airplane
revolutionized transportation and symbolized the technological
advances of the twentieth century.
The people behind the invention:
Wilbur Wright (1867-1912), an American inventor
Orville Wright (1871-1948), an American inventor
Octave Chanute (1832-1910), a French-born American civil
engineer
A Careful Search
Although people have dreamed about flying since the time of the
ancient Greeks, it was not until the late eighteenth century that hot-air
balloons and gliders made human flight possible. It was not until
the late nineteenth century that enough experiments had been done
with kites and gliders that people could begin to think seriously
about powered, heavier-than-air flight. Two of these people were
Wilbur and Orville Wright.
The Wright brothers were more than just tinkerers who accidentally
found out how to build a flying machine. In 1899, Wilbur wrote
the Smithsonian Institution for a list of books to help them learn
about flying. They used the research of people such as George
Cayley, Octave Chanute, Samuel Langley, and Otto Lilienthal to
help them plan their own experiments with birds, kites, and gliders.
They even built their own wind tunnel. They never fully trusted the
results of other people’s research, so they repeated the experiments
of others and drew their own conclusions. They shared these results
with Octave Chanute, who was able to offer them much useful advice.
They were continuing a tradition of excellence in engineering
that began with careful research and avoided dangerous trial and
error.
Slow Success
Before the brothers had set their minds to flying, they had built
and repaired bicycles. This was a great help to them when they put
their research into practice and actually built an airplane. From
building bicycles, they knew how to work with wood and metal to
make a lightweight but sturdy machine. Just as important, from riding
bicycles, they got ideas about how an airplane needed to work.
They could see that both bicycles and airplanes needed to be fast
and light. They could also see that airplanes, like bicycles, needed to
be kept under constant control to stay balanced, and that this control
would probably take practice. This was a unique idea. Instead
of building something solid that was controlled by levers and wheels
like a car, the Wright brothers built a flexible airplane that was controlled
partly by the movement of the pilot, like a bicycle.
The result was the 1903 Wright Flyer. The Flyer had two sets of
wings, one above the other, which were about 12 meters from tip to
tip. They made their own 12-horsepower engine, as well as the two
propellers the engine spun. The craft had skids instead of wheels.
On December 14, 1903, the Wright brothers took the Wright Flyer to
the shores of Kitty Hawk, North Carolina, where Wilbur Wright
made the first attempt to fly the airplane.
The first thing Wilbur found was that flying an airplane was not
as easy as riding a bicycle. One wrong move sent him tumbling into
the sand only moments after takeoff. Wilbur was not seriously hurt,
but a few more days were needed to repair the Wright Flyer.
On December 17, 1903, at 10:35 a.m., after eight years of research
and planning, Orville Wright took to the air for a historic twelve
seconds. He covered 37 meters of ground and 152 meters of air space.
Both brothers took two flights that morning. On the fourth flight,
Wilbur flew for fifty-nine seconds over 260 meters of ground and
through more than 800 meters of air space. After he had landed, a
sudden gust of wind struck the plane, damaging it beyond repair.
Yet no one was able to beat their record for three years.
Impact
Those first flights in 1903 got little publicity. Only a few people,
such as Octave Chanute, understood the significance of the Wright
brothers’ achievement. For the next two years, they continued to
work on their design, and by 1905 they had built the Wright Flyer III.
Although Chanute tried to get them to enter flying contests, the
brothers decided to be cautious and try to get their machine patented
first, so that no one would be able to steal their ideas.
News of their success spread slowly through the United States
and Europe, giving hope to others who were working on airplanes
of their own. When the Wright brothers finally went public with the
Wright Flyer III, they inspired many new advances. By 1910, when
the brothers started flying in air shows and contests, their feats were
matched by another American, Glenn Hammond Curtiss. The age of
the airplane had arrived.
Later in the decade, the Wright brothers began to think of military
uses for their airplanes. They signed a contract with the U.S.
Army Signal Corps and agreed to train military pilots.
Aside from these achievements, the brothers from Dayton, Ohio,
set the standard for careful research and practical experimentation.
They taught the world not only how to fly but also how to design
airplanes. Indeed, their methods of purposeful, meaningful, and
highly organized research had an impact not only on airplane design
but also on the field of aviation science in general.
The Wright Brothers
Orville and his older brother Wilbur first got interested in
aircraft when their father gave them a toy helicopter in 1878.
Theirs was a large, supportive family. Their father, a minister,
and their mother, a college graduate and inventor of household
gadgets, encouraged all five of the children to be creative. Although Wilbur,
born in 1867, was four years older than Orville,
they were close as children. While in high school, they put out a
weekly newspaper together, West Side News, and they opened
their bicycle shop in 1892. Orville was the mechanically adept
member of the team, the tinkerer; Wilbur was the deliberative
one, the planner and designer.
Since the bicycle business was seasonal, they had time to
pursue their interest in aircraft, puzzling out the technical problems
and studying the successes and failures of others. They
started with gliders, flying their first, which had a five-foot
wing span, in 1899. They developed their own technique to control
the gliders, the “wing-warping technique,” after watching
how birds fly. They attached wires to the trailing edges of the
wings and pulled the wires to deform the wings’ shape. They
built a sixteen-foot glider in 1900 and spent a vacation in North
Carolina gaining flying experience. Further designs and many
more tests followed, including more than two hundred shapes
of wing studied in their home-built wind tunnel, before their
first successful engine-powered flight in 1903.
Neither man ever married. After Wilbur died of typhoid in
1912, Orville was stricken by the loss of his brother but continued
to run their business until 1915. He last piloted an airplane
himself in 1918 and died thirty years later.
Their first powered airplane, the Wright Flyer, lives on at the
National Air and Space Museum in Washington, D.C. Small
parts from the aircraft were taken to the Moon by Neil Armstrong
and Edwin Aldrin when they made the first landing
there in 1969.