Tuesday, January 27, 2009

Assembly line





The invention: A manufacturing technique pioneered in the automobile
industry by Henry Ford that lowered production costs
and helped bring automobile ownership within the reach of millions
of Americans in the early twentieth century.
The people behind the invention:
Henry Ford (1863-1947), an American carmaker
Eli Whitney (1765-1825), an American inventor
Elisha King Root (1808-1865), the developer of division of labor
Oliver Evans (1755-1819), the inventor of power conveyors
Frederick Winslow Taylor (1856-1915), an efficiency engineer
A Practical Man
Henry Ford built his first “horseless carriage” by hand in his
home workshop in 1896. In 1903, the Ford Motor Company was
born. Ford’s first product, the Model A, sold for less than one thousand
dollars, while other cars at that time were priced at five to ten
thousand dollars each. When Ford and his partners tried, in 1905, to
sell a more expensive car, sales dropped. Then, in 1907, Ford decided
that the Ford Motor Company would build “a motor car for
the great multitude.” It would be called the Model T.
The Model T came out in 1908 and was everything that Henry Ford
said it would be. Ford’s Model T was a low-priced (about $850), practical
car that came in one color only: black. In the twenty years during
which the Model T was built, the basic design never changed. Yet the
price of the Model T, or “Tin Lizzie,” as it was affectionately called,
dropped over the years to less than half the original figure. As
the price dropped, sales increased, and the Ford Motor Company
quickly became the world’s largest automobile manufacturer.
The last of more than 15 million Model T’s was made in 1927. Although
the last Model T looked and drove almost exactly like the first,
the two automobiles were built in entirely different ways: the
first was custom-built, while the last came off an assembly line.
At first, Ford had built his cars in the same way everyone else
did: one at a time. Skilled mechanics would work on a car from start
to finish, while helpers and runners brought parts to these highly
paid craftsmen as they were needed. After finishing one car, the mechanics
and their helpers would begin the next.
The Quest for Efficiency
Custom-built products are good when there is little demand and
buyers are willing to pay the high labor costs. This was not the case
with the automobile. Ford realized that in order to make a large
number of quality cars at a low price, he had to find a more efficient
way to build cars. To do this, he looked to the past and the work of
others. He found four ideas: interchangeable parts, continuous flow,
division of labor, and elimination of wasted motion.
Eli Whitney, the inventor of the cotton gin, was the first person to
use interchangeable parts successfully in mass production. In 1798, the
United States government asked Whitney to make several thousand
muskets in two years. Instead of finding and hiring gunsmiths to make
the muskets by hand, Whitney used most of his time and money to design
and build special machines that could make large numbers of identical parts—one machine for each part that was needed to build a
musket. These tools, and others Whitney made for holding, measuring,
and positioning the parts, made it easy for semiskilled, and even
unskilled, workers to build a large number of muskets.
Production can be made more efficient by carefully arranging the
different stages of production to create a “continuous flow.” Ford
borrowed this idea from at least two places: the meat-packing
houses of Chicago and an automatic grain mill run by Oliver Evans.
Ford’s idea for a moving assembly line came from Chicago’s
great meat-packing houses in the late 1860’s. Here, the bodies of animals
were moved along an overhead rail past a number of workers,
each of whom made a certain cut, or handled one part of the packing
job. This meant that many animals could be butchered and packaged
in a single day.
Ford looked to Oliver Evans for an automatic conveyor system.
In 1783, Evans had designed and operated an automatic grain mill
that could be run by only two workers. As one worker poured grain
into a funnel-shaped container, called a “hopper,” at one end of the
mill, a second worker filled sacks with flour at the other end. Everything
in between was done automatically, as Evans’s conveyors
passed the grain through the different steps of the milling process
without any help.
The idea of “division of labor” is simple: When one complicated
job is divided into several easier jobs, some things can be made
faster, with fewer mistakes, by workers who need fewer skills than
ever before. Elisha King Root had used this principle to make the famous
Colt “Six-Shooter.” In 1849, Root went to work for Samuel
Colt at his Connecticut factory and proved to be a manufacturing
genius. By dividing the work into very simple steps, with each step
performed by one worker, Root was able to make many more guns
in much less time.
Before Ford applied Root’s idea to the making of engines, it took
one worker one day to make one engine. By breaking down the
complicated job of making an automobile engine into eighty-four
simpler jobs, Ford was able to make the process much more efficient.
By assigning one person to each job, Ford’s company was able
to make 352 engines per day—an increase of more than 400 percent.
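The arithmetic behind that figure can be checked directly. Here is a minimal sketch in Python; the 84-worker staffing of the line is an assumption for illustration, since the text gives only the output figures:

    # Division of labor: compare custom building with the divided line.
    # Assumption (not from the text): the line is staffed by 84 workers,
    # one per simplified job.
    WORKERS = 84
    CUSTOM_RATE = 1            # engines per worker per day, custom-built
    LINE_OUTPUT = 352          # engines per day on the divided line

    custom_output = WORKERS * CUSTOM_RATE       # 84 engines per day
    ratio = LINE_OUTPUT / custom_output         # about 4.2
    print(f"{custom_output} -> {LINE_OUTPUT} engines/day "
          f"({ratio:.0%} of the custom rate)")

Read this way, 352 engines against 84 is roughly 419 percent of the old rate, which squares with the “more than 400 percent” claim if that figure is taken as output relative to the old rate.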
Frederick Winslow Taylor has been called the “original efficiency
expert.” His idea was that inefficiency was caused by wasted time
and wasted motion. So Taylor studied ways to eliminate wasted
motion. He proved that, in the long run, doing a job too quickly was
as bad as doing it too slowly. “Correct speed is the speed at which
men can work hour after hour, day after day, year in and year out,
and remain continuously in good health,” he said. Taylor also studied
ways to streamline workers’ movements. In this way, he was
able to keep wasted motion to a minimum.
Impact
The changeover from custom production to mass production
was an evolution rather than a revolution. Henry Ford applied the
four basic ideas of mass production slowly and with care, testing
each new idea before it was used. In 1913, the first moving assembly
line for automobiles was being used to make Model T’s. Ford was
able to make his Tin Lizzies faster than ever, and his competitors
soon followed his lead. He had succeeded in making it possible for
millions of people to buy automobiles.
Ford’s work gave a new push to the Industrial Revolution. It
showed Americans that mass production could be used to improve
quality, cut the cost of making an automobile, and improve profits.
In fact, the Model T was so profitable that in 1914 Ford was able to
double the minimum daily wage of his workers, so that they too
could afford to buy Tin Lizzies.
Although Americans account for only about 6 percent of the
world’s population, they now own about 50 percent of its wealth.
There are more than twice as many radios in the United States as
there are people. The roads are crowded with more than 180 million
automobiles. Homes are filled with the sounds and sights emanating
from more than 150 million television sets. Never have the people of
one nation owned so much. Where did all the products—radios,
cars, television sets—come from? The answer is industry, which still
depends on the methods developed by Henry Ford.

Sunday, January 25, 2009

Aspartame





The invention

An artificial sweetener with a comparatively natural taste widely used in carbonated beverages.



The people behind the invention

Arthur H. Hayes, Jr. (1933- ), a physician and commissioner of the U.S. Food
and Drug Administration (FDA)

James M. Schlatter (1942- ), an American chemist

Michael Sveda (1912- ), an American chemist and inventor

Ludwig Frederick Audrieth (1901- ), an American chemist and educator

Ira Remsen (1846-1927), an American chemist and educator

Constantin Fahlberg (1850-1910), a German chemist



Sweetness Without Calories


People have sweetened food and beverages since before recorded
history. The most widely used sweetener is sugar, or sucrose. The
only real drawback to the use of sucrose is that it is a nutritive sweetener:
In addition to adding a sweet taste, it adds calories. Because sucrose is
readily absorbed by the body, an excessive amount can be life-threatening to diabetics. This fact alone would make the development of nonsucrose
sweeteners attractive.
There are three common nonsucrose sweeteners in use around the world:
saccharin, cyclamates, and aspartame. Saccharin was the first of this group
to be discovered, in 1879. Constantin Fahlberg synthesized saccharin based
on the previous experimental work of Ira Remsen using toluene (derived from petroleum).
This product was found to be three hundred to five hundred times as sweet as
sugar, although some people could detect a bitter aftertaste.
In 1944, the chemical family of cyclamates was discovered by Ludwig Frederick Audrieth and Michael Sveda. Although these compounds are only thirty to eighty times as sweet as sugar, there was no detectable aftertaste.
By the mid-1960’s, cyclamates had replaced saccharin as the leading nonnutritive sweetener in the United States.
Although cyclamates are still in use throughout the world, in October, 1969, the FDA removed them from the list of approved food additives because of tests that indicated possible health hazards.



A Political Additive


Aspartame is the latest artificial sweetener derived from natural ingredients—in this case, two amino acids, one from milk and one from bananas. Discovered by accident in 1965 by American chemist James M. Schlatter when he licked his fingers during an experiment, aspartame is 180 times as sweet as sugar. In 1974, the FDA approved its use in dry foods such as gum and cereal and as a sugar replacement.
Shortly after its approval for this limited application, the FDA held public hearings on the safety concerns raised by John W. Olney, a professor of neuropathology at Washington University in St. Louis.
There was some indication that aspartame, when combined with the common food additive monosodium glutamate, caused brain damage in children. These fears were confirmed, but the risk of brain damage was limited to a small percentage of individuals with a rare genetic disorder.
At this point, the public debate took a political turn:
Senator William Proxmire charged FDA Commissioner Alexander M. Schmidt with public misconduct.
This controversy resulted in aspartame being taken off the market in 1975.
In 1981, the new FDA commissioner, Arthur H. Hayes, Jr., reapproved aspartame for use in the same applications: as a tabletop sweetener, as a cold-cereal additive, in chewing gum, and for other miscellaneous uses.
In 1983, the FDA approved aspartame for use in carbonated beverages, its largest application to date.
Later safety studies revealed that children with a rare metabolic disease, phenylketonuria, could not ingest this sweetener without severe health
risks because of the presence of phenylalanine in aspartame.
This condition results in a rapid buildup of phenylalanine in the blood.
Laboratories simulated this condition in rats and found that high doses of aspartame inhibited the synthesis of dopamine, a neurotransmitter.
Once this happens, an increase in the frequency of seizures can occur.
There was no direct evidence, however, that aspartame actually caused
seizures in these experiments.
Many other compounds are being tested for use as sugar replacements,
the sweetest being a relative of aspartame. This compound is seventeen
thousand to fifty-two thousand times sweeter than sugar.



Impact


The business fallout from the approval of a new low-calorie sweetener occurred
over a short span of time. In 1981, sales of this artificial sweetener by G. D. Searle and Company were $74 million.
In 1983, sales rose to $336 million and exceeded half a billion dollars
the following year.
These figures represent sales of more than 2,500 tons of this product.
In 1985, 3,500 tons of aspartame were consumed.
Clearly, this product’s introduction was a commercial success for Searle.
During this same period, the percentage of reduced-calorie carbonated
beverages containing saccharin declined from 100 percent to 20 percent in an industry that had $4 billion in sales.
Universally, consumers preferred products containing aspartame; the bitter aftertaste of saccharin was rejected in favor of the new, less
powerful sweetener.
There is a trade-off in using these products. The FDA found evidence linking
both saccharin and cyclamates to an elevated incidence of cancer.
Cyclamates were banned in the United States for this reason. Public resistance
to this measure caused the agency to back away from its position.
The rationale was that, compared to other health risks associated with the consumption of sugar (especially for diabetics and overweight persons),
the chance of getting cancer was slight and therefore a risk that many people
would choose to ignore. The total domination of aspartame in the sweetener
market seems to support this assumption.

Friday, January 16, 2009

Artificial satellite





The invention

Sputnik 1, the first object put into orbit around the
Earth, which began the exploration of space.

The people behind the invention

Sergei P. Korolev (1907-1966), a Soviet rocket scientist

Konstantin Tsiolkovsky (1857-1935), a Soviet schoolteacher and the founder of rocketry in the Soviet Union

Robert H. Goddard (1882-1945), an American scientist and the founder of rocketry in the United States

Wernher von Braun (1912-1977), a German who worked on rocket projects

Arthur C. Clarke (1917- ), the author of more than fifty books and the visionary behind telecommunications satellites




A Shocking Launch

In Russian, sputnik means “satellite” or “fellow traveler.”
On October 4, 1957, Sputnik 1, the first artificial satellite to orbit Earth,
was successfully placed into orbit by the Soviet Union. The launch of this
small aluminum sphere, 0.58 meter in diameter and weighing 83.6
kilograms, opened the doors to the frontiers of space.
Orbiting Earth every 96 minutes, at 28,962 kilometers per hour,
Sputnik 1 came within 215 kilometers of Earth at its closest point and
939 kilometers away at its farthest point. It carried equipment to
measure the atmosphere and to experiment with the transmission
of electromagnetic waves from space. Equipped with two radio
transmitters (at different frequencies) that broadcast for twenty-one
days, Sputnik 1 was in orbit for ninety-two days, until January 4,
1958, when it disintegrated in the atmosphere.
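The quoted figures are mutually consistent, as Kepler’s third law shows. Here is a quick check in Python; the Earth radius and gravitational parameter are standard values assumed here, not taken from the text:

    import math

    MU = 3.986e14       # Earth's gravitational parameter, m^3/s^2 (assumed)
    R_EARTH = 6371e3    # mean Earth radius, m (assumed)

    perigee = R_EARTH + 215e3   # closest point, measured from Earth's center
    apogee = R_EARTH + 939e3    # farthest point, measured from Earth's center
    a = (perigee + apogee) / 2  # semi-major axis of the elliptical orbit

    # Kepler's third law: T = 2*pi*sqrt(a^3/mu)
    period_min = 2 * math.pi * math.sqrt(a**3 / MU) / 60
    print(f"orbital period: {period_min:.1f} minutes")   # about 96

The computed period of roughly 96 minutes matches the figure reported for Sputnik 1.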
Sputnik 1 was launched using a Soviet intercontinental ballistic
missile (ICBM) modified by Soviet rocket expert Sergei P. Korolev.
After the launch of Sputnik 2, less than a month later, Chester
Bowles, a former United States ambassador to India and Nepal,
wrote: “Armed with a nuclear warhead, the rocket which launched
Sputnik 1 could destroy New York, Chicago, or Detroit 18 minutes
after the button was pushed in Moscow.”
Although the launch of Sputnik 1 came as a shock to the general
public, it came as no surprise to those who followed rocketry. In
June, 1957, the United States Air Force had issued a nonclassified
memo stating that there was “every reason to believe that the Russian
satellite shot would be made on the hundredth anniversary” of
Konstantin Tsiolkovsky’s birth.




Thousands of Launches

Rockets have been used since at least the twelfth century, when
Europeans and the Chinese were using black powder devices. In
1659, the Polish engineer Kazimir Semenovich published his Roketten
für Luft und Wasser (rockets for air and water), which had a drawing
of a three-stage rocket. Rockets were used and perfected for warfare
during the nineteenth and twentieth centuries. Nazi Germany’s V-2
rocket (thousands of which were launched by Germany against England
during the closing years of World War II) was the model for
American and Soviet rocket designers between 1945 and 1957. In
the Soviet Union, Tsiolkovsky had been thinking about and writing
about space flight since the last decade of the nineteenth century,
and in the United States, Robert H. Goddard had been thinking
about and experimenting with rockets since the first decade of the
twentieth century.
Wernher von Braun had worked on rocket projects for Nazi Germany
during World War II, and, as the war was ending in May, 1945,
von Braun and several hundred other people involved in German
rocket projects surrendered to American troops in Europe. Hundreds
of other German rocket experts ended up in the Soviet Union
to continue with their research. Tom Bower pointed out in his book
The Paperclip Conspiracy: The Hunt for the Nazi Scientists (1987)—so
named because American “recruiting officers had identified [Nazi]
scientists to be offered contracts by slipping an ordinary paperclip
onto their files”—that American rocketry research was helped
tremendously by Nazi scientists who switched sides after World
War II.
The successful launch of Sputnik 1 convinced people that space
travel was no longer simply science fiction. The successful launch of
Sputnik 2 on November 3, 1957, carrying the first space traveler, a
dog named Laika (who was euthanized in orbit because there were
no plans to retrieve her), showed that the launch of Sputnik 1 was
only the beginning of greater things to come.


Consequences


After October 4, 1957, the Soviet Union and other nations launched
more experimental satellites. On January 31, 1958, the United
States sent up Explorer 1, after failing to launch a Vanguard satellite
on December 6, 1957.
Arthur C. Clarke, most famous for his many books of science fiction,
published a technical paper in 1945 entitled “Extra-Terrestrial
Relays: Can Rocket Stations Give World-Wide Radio Coverage?” In
that paper, he pointed out that a satellite placed in orbit at the correct
height and speed above the equator would be able to hover over
the same spot on Earth. The placement of three such “geostationary”
satellites would allow radio signals to be transmitted around
the world. By the 1990’s, communications satellites were numerous.
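The “correct height” Clarke had in mind follows from the same law: choose the circular orbit whose period equals one rotation of Earth. A minimal sketch in Python, with the sidereal day length and physical constants assumed rather than taken from the text:

    import math

    MU = 3.986e14       # Earth's gravitational parameter, m^3/s^2 (assumed)
    R_EARTH = 6371e3    # mean Earth radius, m (assumed)
    T = 86164           # one sidereal day in seconds (assumed)

    # Invert Kepler's third law: a = (mu * T^2 / (4*pi^2))^(1/3)
    a = (MU * T**2 / (4 * math.pi**2)) ** (1 / 3)
    print(f"altitude: {(a - R_EARTH) / 1e3:,.0f} km")   # about 35,800 km

That altitude, roughly 35,800 kilometers above the equator, is where geostationary communications satellites operate today.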
In the first twenty-five years after Sputnik 1 was launched, from
1957 to 1982, more than two thousand objects were placed into various
Earth orbits by more than twenty-four nations. On the average,
something was launched into space every 3.82 days for this twenty-five-year
period, all beginning with Sputnik 1.

Thursday, January 8, 2009

Artificial kidney






The invention

A machine that removes waste end-products and poisons from the blood when human kidneys are not working properly.


The people behind the invention

John Jacob Abel (1857-1938), a pharmacologist and biochemist known as the “father of American pharmacology”

Willem Johan Kolff (1911- ), a Dutch American clinician who pioneered the artificial kidney and the artificial heart.





Cleansing the Blood


In the human body, the kidneys are the dual organs that remove waste matter from the bloodstream and send it out of the system as urine. If the kidneys fail to work properly, this cleansing process must be done artificially—such as by a machine.
John Jacob Abel was the first professor of pharmacology at Johns Hopkins University School of Medicine. Around 1912, he began to study the by-products of metabolism that are carried in the blood.
This work was difficult, he realized, because it was nearly impossible to detect even the tiny amounts of the many substances in blood.
Moreover, no one had yet developed a method or machine for taking these substances out of the blood.
In devising a blood filtering system, Abel understood that he needed a saline solution and a membrane that would let some substances pass through but not others. Working with Leonard Rowntree and Benjamin B. Turner, he spent nearly two years figuring out how to build a machine that would perform dialysis—that is, remove metabolic by-products from blood. Finally their efforts succeeded.
The first experiments were performed on rabbits and dogs. In operating the machine, the blood leaving the patient was sent flowing through a celloidin tube that had been wound loosely around a drum. An anticlotting substance (hirudin, taken out of leeches) was added to the blood as it flowed through the tube. The drum, which was immersed in a saline and dextrose solution, rotated slowly. As blood flowed through the immersed tubing, diffusion removed urea and other substances, but not the plasma or cells, from the blood.
The celloidin membranes allowed oxygen to pass from the saline and dextrose solution into the blood, so that purified, oxygenated blood then flowed back into the arteries.
Abel studied the substances that his machine had removed from the blood, and he found that they included not only urea but also free amino acids. He quickly realized that his machine could be useful for taking care of people whose kidneys were not working properly.
Reporting on his research, he wrote, “In the hope of providing a substitute in such emergencies, which might tide over a dangerous crisis . . . a method has been devised by which the blood of a living animal may be submitted to dialysis outside the body,
and again returned to the natural circulation.” Abel’s machine removed large quantities of urea and other poisonous substances fairly quickly, so that the process, which he called “vividiffusion,” could serve as an artificial kidney during cases of kidney failure.
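The cleansing principle can also be stated quantitatively. As a rough illustration only (the clearance and volume figures below are hypothetical; Abel reported no such parameters), treat the blood as a single well-mixed pool whose urea concentration decays exponentially as the dialyzer clears it:

    import math

    # Hypothetical single-pool dialysis model; parameters for illustration only,
    # not measurements from Abel's apparatus.
    K = 0.2      # clearance: liters of blood fully cleared per minute
    V = 40.0     # volume of the well-mixed pool, liters
    C0 = 100.0   # starting urea concentration, mg/dL

    # Well-mixed pool: dC/dt = -(K/V) * C, so C(t) = C0 * exp(-K*t/V)
    for minutes in (0, 60, 120, 240):
        c = C0 * math.exp(-K * minutes / V)
        print(f"after {minutes:3d} min: {c:5.1f} mg/dL")

The longer the blood circulates past the membrane, the lower the urea level falls, which is the behavior Abel observed with vividiffusion.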
For his physiological research, Abel found it necessary to remove, study, and then replace large amounts of blood from living animals, all without dissolving the red blood cells, which carry oxygen to the body’s various parts. He realized that this process, which
he called “plasmaphaeresis,” would make possible blood banks, where blood could be stored for emergency use.
In 1914, Abel published these two discoveries in a series of three articles in the Journal of Pharmacology and Experimental Therapeutics, and he demonstrated his techniques in London, England, and Groningen, The Netherlands. Though he had suggested that his techniques could be used for medical purposes, he himself was interested mostly in continuing his biochemical research. So he turned to other projects in pharmacology, such as the crystallization of insulin, and never returned to studying vividiffusion.



Refining the Technique


Georg Haas, a German biochemist working in Giessen, Germany, was also interested in dialysis; in 1915, he began to experiment with “blood washing.” After reading Abel’s 1914 writings, Haas tried substituting collodion for the celloidin that Abel had used as a filtering membrane and using commercially prepared heparin instead of the homemade hirudin Abel had used to prevent blood clotting. He then used this machine on a patient and found that it showed promise, but he knew that many technical problems had to be worked out before the procedure could be used on many patients.
In 1937, Willem Johan Kolff was a young physician at Groningen. He felt sad to see patients die from kidney failure, and he wanted to find a way to cure others. Having heard his colleagues talk about the possibility of using dialysis on human patients, he decided to build a dialysis machine.
Kolff knew that cellophane was an excellent membrane for dialyzing, and that heparin was a good anticoagulant, but he also realized that his machine would need to be able to treat larger volumes of blood than Abel’s and Haas’s had. During World War II (1939-1945), with the help of the director of a nearby enamel factory, Kolff built an artificial kidney that was first tried on a patient on March 17, 1943. Between March, 1943, and July 21, 1944, Kolff used his secretly constructed dialysis machines on fifteen patients, of whom only one survived. He published the results of his research in Acta Medica Scandinavica. Even though most of his patients had not survived, he had collected information and developed the technique until he was sure dialysis would eventually work.
Kolff brought machines to Amsterdam and The Hague and encouraged other physicians to try them; meanwhile, he continued to study blood dialysis and to improve his machines.
In 1947, he brought improved machines to London and the United States. By the time he reached Boston, however, he had given away all of his machines. Still, he explained the technique to John P. Merrill, a physician at the Harvard Medical School, who soon became the leading American developer of kidney dialysis and kidney-transplant surgery.
Kolff himself moved to the United States, where he became an expert not only in artificial kidneys but also in artificial hearts. He helped develop the Jarvik-7 artificial heart (named for its chief inventor, Robert Jarvik), which was implanted in a patient in 1982.



Impact


Abel’s work showed that the blood carried some substances that had not been previously known and led to the development of the first dialysis machine for humans. It also encouraged interest in the possibility of organ transplants.
After World War II, surgeons had tried to transplant kidneys from one animal to another, but after a few days the recipient would begin to reject the kidney and die. In spite of these failures, researchers in Europe and America transplanted kidneys in several patients, and they used artificial kidneys to take care of the patients who were waiting for transplants.
In 1954, Merrill—to whom Kolff had demonstrated an artificial kidney—successfully transplanted kidneys in identical twins. After immunosuppressant drugs (used to prevent the body from rejecting newly transplanted tissue) were discovered in 1962, transplantation surgery became much more practical. After kidney transplants became common, the artificial kidney became simply a way of keeping a person alive until a kidney donor could be found.