Monday, August 31, 2009

Microwave cooking

The invention: A system of high-speed cooking that uses microwave radiation to agitate water molecules, raising temperatures by friction.
The people behind the invention:
Percy L. Spencer (1894-1970), an American engineer
Heinrich Hertz (1857-1894), a German physicist
James Clerk Maxwell (1831-1879), a Scottish physicist
The Nature of Microwaves
Microwaves are electromagnetic waves, as are radio waves, X rays, and visible light. Water waves and sound waves are wave-shaped disturbances of particles in the media through which they travel (water in the case of water waves; air or water in the case of sound waves). Electromagnetic waves, however, are wavelike variations of intensity in electric and magnetic fields. They were first studied in 1864 by James Clerk Maxwell, who explained their behavior and velocity mathematically.
Electromagnetic waves are described in terms of their “wavelength” and “frequency.” The wavelength is the length of one cycle, which is the distance from the highest point of one wave to the highest point of the next wave, and the frequency is the number of cycles that occur in one second. Frequency is measured in units called “hertz,” named for the German physicist Heinrich Hertz. The frequencies of microwaves run from 300 to 3,000 megahertz (1 megahertz equals 1 million hertz, or 1 million cycles per second), corresponding to wavelengths of 100 down to 10 centimeters. Microwaves travel in the same way that light waves do: they are reflected by metallic objects, absorbed by some materials, and transmitted by other materials.
When food is subjected to microwaves, it heats up because the microwaves make the water molecules in the food vibrate (water is the most common compound in foods). Water is a “dipole molecule,” which means that it contains both positive and negative charges. When the food is subjected to microwaves, the dipole water molecules try to align themselves with the alternating electromagnetic field of the microwaves. This causes the water molecules to collide with one another and with other molecules in the food, and heat is produced as a result of the friction.
Development of the Microwave Oven
Percy L. Spencer apparently discovered the principle of microwave cooking while he was experimenting with a radar device at the Raytheon Company: a candy bar in his pocket melted after being exposed to microwaves. After realizing what had happened, Spencer made the first microwave oven from a milk can and applied for two patents, “Method of Treating Foodstuffs” and “Means for Treating Foodstuffs,” on October 8, 1945, giving birth to microwave-oven technology. Spencer wrote that his invention “relates to the treatment of foodstuffs and, more particularly, to the cooking thereof through the use of electromagnetic energy.”
Though the use of electromagnetic energy for heating was recognized at that time, the frequencies in use were lower than 50 megahertz, and Spencer discovered that heating at such low frequencies takes a long time. He eliminated this disadvantage by using shorter wavelengths, in the microwave region. Wavelengths of 10 centimeters or shorter were comparable to the average dimensions of foods; with them, the heat generated became intense, the energy required was minimal, and the process became efficient enough to be exploited commercially.
Although Spencer’s patents refer to the cooking of foods with microwave energy, neither deals directly with a microwave oven. The actual basis for a microwave oven may be patents filed by other researchers at Raytheon. A patent by Karl Stiefel in 1949 may be the forerunner of the microwave oven, and in 1950, Fritz Gross received a patent entitled “Cooking Apparatus” that specifically describes an oven very similar to modern microwave ovens. Perhaps the first mention of a commercial microwave oven was made in the November, 1946, issue of Electronics magazine.
This article described the newly developed Radarange as a device that could bake biscuits in 29 seconds, cook hamburgers in 35 seconds, and grill a hot dog in 8 to 10 seconds. Another article that appeared a month later mentioned a unit that had been developed specifically for airline use; the frequency used in this oven was 3,000 megahertz. Within a year, a practical model 13 inches wide, 14 inches deep, and 15 inches high appeared, and several new models were operating in and around Boston. In June, 1947, Electronics magazine reported the installation of a Radarange in a restaurant, signaling the commercial use of microwave cooking. It was reported that this method more than tripled the speed of service. The Radarange became an important addition to a number of restaurants, and in 1948, Bernard Proctor and Samuel Goldblith used it for the first time to conduct research into microwave cooking.
In the United States, the radio frequencies that can be used for heating are allocated by the Federal Communications Commission (FCC). The two most popular frequencies for microwave cooking are 915 and 2,450 megahertz, and the 2,450-megahertz frequency is used in home microwave ovens. It is notable that patents filed by Spencer in 1947 already mention a frequency on the order of 2,450 megahertz, another example of Spencer’s vision in developing the principles of microwave cooking. The Raytheon Company concentrated on 2,450 megahertz, and in 1955 the first domestic microwave oven was introduced. It was not until the late 1960’s, however, that the price of the microwave oven decreased enough for the device to become popular. The first patent describing a microwave heating system used in conjunction with a conveyor was issued to Spencer in 1952; continuous industrial applications of microwaves were later developed from this work.
Impact
Initially, microwaves were viewed simply as an efficient means of rapidly converting electric energy to heat. Since that time, however, they have become an integral part of many applications. Because of the pioneering efforts of Percy L. Spencer, microwave applications in the food industry, for cooking and for other processing operations, have flourished. In the early 1970’s, there were eleven microwave oven companies worldwide, two of which specialized in food processing operations, and the growth of the microwave oven industry has paralleled the growth of the radio and television industries. In 1984, microwave ovens accounted for more shipments than had ever been achieved by any appliance: 9.1 million units. By 1989, more than 75 percent of the homes in the United States had microwave ovens, and in the 1990’s, microwavable foods were among the fastest-growing products in the food industry.
Microwave energy facilitates reductions in operating costs and required energy, higher-quality and more reliable products, and positive environmental effects. To some degree, the use of industrial microwave energy remains in its infancy; new and improved applications of microwaves will continue to appear.
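The relation between the frequencies and wavelengths quoted in this article is simply that wavelength equals the speed of light divided by frequency. Below is a minimal sketch of that arithmetic in Python, using an approximate value for the speed of light; the band limits and the two heating frequencies are taken from the text above.

    # Wavelength = speed of light / frequency, for the microwave band limits
    # and for the two frequencies allocated for microwave heating.
    SPEED_OF_LIGHT = 3.0e8  # meters per second, approximate

    for label, frequency_hz in [("300 MHz", 300e6), ("915 MHz", 915e6),
                                ("2,450 MHz", 2450e6), ("3,000 MHz", 3000e6)]:
        wavelength_cm = SPEED_OF_LIGHT / frequency_hz * 100  # meters to centimeters
        print(f"{label}: about {wavelength_cm:.0f} cm")

The band limits reproduce the 100- and 10-centimeter figures given earlier, and the home-oven frequency of 2,450 megahertz works out to roughly 12 centimeters, consistent with Spencer’s observation that wavelengths near 10 centimeters match the dimensions of foods.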

Memory metal

The invention: Known as nitinol, a metal alloy that returns to its original shape, after being deformed, when it is heated to the proper temperature.
The person behind the invention:
William Buehler (1923- ), an American metallurgist
The Alloy with a Memory
In 1960, William Buehler developed an alloy that consisted of 53 to 57 percent nickel (by weight), with the balance titanium. This alloy, which is called nitinol, turned out to have remarkable properties. Nitinol is a “memory metal,” which means that, given the proper conditions, objects made of nitinol can be restored to their original shapes even after they have been radically deformed. The return to the original shape is triggered by heating the alloy to a moderate temperature. As the metal “snaps back” to its original shape, considerable force is exerted and mechanical work can be done.
Alloys made of nickel and titanium have great potential in a wide variety of industrial and government applications. These include, for the computer market, a series of high-performance electronic connectors; for the medical market, intravenous fluid devices that feature precise fluid control; for the consumer market, eyeglass frame components; and, for the industrial market, power cable couplings that provide durability at welded joints.
The Uncoiling Spring
At one time, the “uncoiling spring experiment” was used to amuse audiences, and a number of scientists have had fun with nitinol in front of unsuspecting viewers. It is now generally recognized that the shape memory effect involves a thermoelastic transformation at the atomic level. This process is unique in that the transformation back to the original shape occurs as a result of stored elastic energy assisting the chemical driving force that is unleashed by heating the metal. The mechanism, simply stated, is that shape memory alloys are rather easily deformed below their “critical temperature.” Provided that the extent of the deformation is not too great, the original, undeformed state can be recovered by heating the alloy to a temperature just below the critical temperature. It is also significant that substantial stresses are generated when a deformed specimen “springs back” to its original shape. This behavior is very peculiar compared with that of most ordinary materials.
Researchers at the Naval Ordnance Laboratory discovered nitinol by accident while trying to learn how to make titanium less brittle. They tried adding nickel, and when they were showing a wire of the alloy to some administrators, someone smoking a cigar held his match too close to the sample, causing the nitinol to spring back into shape. One of the first applications of the discovery was a new way to link hydraulic lines on the Navy’s F-14 fighter jets. A nitinol “sleeve” was cooled with liquid nitrogen, which enlarged it, and it was then slipped into place between two pipes. When the sleeve warmed up, it contracted, clamping the pipes together and keeping them clamped with a force of nearly 50,000 pounds per square inch.
Nitinol is not an easy alloy with which to work. When it is drilled or turned on a lathe, it becomes hardened and resists change. Welding and electroplating nitinol have become manufacturing nightmares. It also resists taking on a desired shape: the frictional forces of many processes heat the nitinol, which activates its memory. Its remarkable elasticity also causes difficulties. If it is placed in a press with too little force, the spring comes out of the die unchanged; with too much force, the metal breaks into fragments. Using oil as a cooling lubricant and taking a step-wise approach to altering the alloy, however, allows it to be fashioned into particular shapes.
One unique use of nitinol occurs in cardiac surgery. Surgical tools made of nitinol can be bent up to 90 degrees, allowing them to be passed into narrow vessels and then retrieved.
The tools are then straightened out in an autoclave so that they can be reused. Many of the technical problems of working with nitinol have been solved, and manufacturers of the alloy are selling more than twenty different nitinol products to countless companies in the fields of medicine, transportation, consumer products, and toys. Nitinol toys include blinking movie posters, butterflies with flapping wings, and dinosaurs whose tails move; all these applications are driven by a contracting bit of wire connected to a watch battery. The “Thermobile” and the “Icemobile” are toys whose wheels are set in motion by hot water or by ice cubes.
Orthodontists sometimes use nitinol wires and springs in braces because the alloy pulls with a force that is more gentle and even than that of stainless steel, thus causing less pain. Nitinol does not react with organic materials, and it is also useful as a new type of blood-clot filter. Best of all, however, is the use of nitinol for eyeglass frames. If the wearer deforms the frames by sitting on them (and people do so frequently), the optometrist simply dips the crumpled frames in hot water and the frames regain their original shape.
From its beginnings as an “accidental” discovery, nitinol has gone on to affect various fields of science and technology, from the “Cryofit” couplings used in the hydraulic tubing of aircraft to the pin-and-socket contacts used in electrical circuits. Nitinol has also found its way into integrated circuit packages. In an age of energy conservation, the unique phase transformation of nickel-titanium alloys allows them to be used in low-temperature heat engines. The world has abundant resources of low-grade thermal energy, and the recovery of this energy can be accomplished by the use of materials such as nitinol. Despite the limitations imposed on heat engines working at low temperatures across a small temperature change, sources of low-grade heat are so widespread that the economical conversion of even a fractional percentage of that energy could have a significant impact on the world’s energy supply. Nitinol has also become useful as a material capable of absorbing internal vibrations in structural materials, and it has been used as “Harrington rods” to treat scoliosis (curvature of the spine).
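As a rough check on the composition quoted above, converting 53 to 57 percent nickel by weight into an atomic ratio shows that nitinol contains close to one nickel atom for every titanium atom. Below is a minimal sketch of that conversion, assuming standard atomic masses for nickel and titanium; the variable names are illustrative.

    # Convert nitinol's weight-percent composition to an approximate atomic
    # ratio, using standard atomic masses for nickel and titanium.
    NI_ATOMIC_MASS = 58.69   # grams per mole of nickel
    TI_ATOMIC_MASS = 47.87   # grams per mole of titanium

    for nickel_weight_percent in (53.0, 55.0, 57.0):
        titanium_weight_percent = 100.0 - nickel_weight_percent  # balance is titanium
        ni_moles = nickel_weight_percent / NI_ATOMIC_MASS
        ti_moles = titanium_weight_percent / TI_ATOMIC_MASS
        print(f"{nickel_weight_percent:.0f}% Ni by weight -> "
              f"about {ni_moles / ti_moles:.2f} Ni atoms per Ti atom")

Across the quoted range the ratio runs from roughly 0.9 to 1.1 nickel atoms per titanium atom, which is why nitinol is often described as a near-equiatomic nickel-titanium alloy.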

Mass spectrograph

The invention: The first device used to measure the mass of atoms, which was found to be the result of the combination of isotopes.
The people behind the invention:
Francis William Aston (1877-1945), an English physicist who was awarded the 1922 Nobel Prize in Chemistry
Sir Joseph John Thomson (1856-1940), an English physicist
William Prout (1785-1850), an English biochemist
Ernest Rutherford (1871-1937), a New Zealand-born British physicist
Same Element, Different Weights
Isotopes are different forms of a chemical element that act similarly in chemical or physical reactions. Isotopes differ in two ways: they possess different atomic weights and undergo different radioactive transformations. In 1803, John Dalton proposed a new atomic theory of chemistry that claimed that chemical elements in a compound combine by weight in whole-number proportions to one another. By 1815, William Prout had taken Dalton’s hypothesis one step further and claimed that the atomic weights of elements were integral multiples of the weight of the hydrogen atom (an integer being a positive or negative whole number or zero). For example, if the weight of hydrogen was 1, then the weight of carbon was 12, and that of oxygen 16.
Over the next decade, several carefully controlled experiments were conducted to determine the atomic weights of a number of elements. Unfortunately, the results of these experiments did not support Prout’s hypothesis; the atomic weight of chlorine, for example, was found to be 35.5. It took the theory of isotopes, developed in the early part of the twentieth century, to vindicate Prout’s original hypothesis.
After his discovery of the electron, Sir Joseph John Thomson, the leading physicist at the Cavendish Laboratory in Cambridge, England, devoted much of his remaining research years to determining the nature of “positive electricity.” (Since electrons are negatively charged, most electricity is negative.) While developing an instrument sensitive enough to analyze this positive electricity, Thomson invited Francis William Aston to work with him at the Cavendish Laboratory. Recommended by J. H. Poynting, who had taught Aston physics at Mason College, Aston began a lifelong association at Cavendish, and Trinity College became his home.
When electrons are stripped from an atom, the atom becomes positively charged. Through the use of magnetic and electrical fields, it is possible to channel the resulting positive rays into parabolic tracks. By examining photographic plates of these tracks, Thomson was able to identify the atoms of different elements. Aston’s first contribution at Cavendish was to improve the instrument used to photograph the parabolic tracks. He developed a more efficient pump to create the required vacuum and devised a camera that would provide sharper photographs. By 1912, the improved apparatus had provided proof that the individual molecules of a substance have the same mass. While working on the element neon, however, Thomson obtained two parabolas, one with a mass of 20 and the other with a mass of 22, which seemed to contradict the previous finding that molecules of any substance have the same mass. Aston was given the task of resolving this mystery.
Treating Particles Like Light
In 1919, Aston began to build a device called a “mass spectrograph.” The idea was to treat ionized, or positive, atoms like light. He reasoned that, because light can be dispersed into a rainbowlike spectrum and analyzed by means of its different colors, the same procedure could be used with atoms of an element such as neon. By creating a device that used magnetic fields to focus the stream of particles emitted by neon, he was able to create a mass spectrum and record it on a photographic plate. The heavier neon (the first neon isotope) was collected on one part of the spectrum, and the lighter neon (the second neon isotope) showed up on another. The mass spectrograph was a magnificent apparatus: the masses could be analyzed without reference to the velocity of the particles, which had been a problem with the parabola method devised by Thomson. Neon possessed two isotopes, one with a mass of 20 and the other with a mass of 22, in a ratio of 10:1. When combined, this gave the atomic weight 20.2, which was the accepted weight of neon. Aston’s accomplishment in developing the mass spectrograph was recognized immediately by the scientific community. His was a simple device that was capable of accomplishing a large amount of research quickly.
The field of isotope research, which had been opened up by Aston’s work, ultimately played an important part in other areas of physics.
Impact
The years following 1919 were highly charged with excitement, as month after month new isotopes were announced. Chlorine had two; bromine had isotopes of 79 and 81, which gave an almost exact atomic weight of 80; krypton had six isotopes; and xenon had even more. In addition to the discovery of nonradioactive isotopes, the “whole-number rule” for chemistry was verified: protons were the basic building blocks of different atoms, and they occurred exclusively in whole numbers.
Aston’s original mass spectrograph had an accuracy of 1 part in 1,000. In 1927, he built an instrument that was ten times more accurate; the new apparatus was sensitive enough to measure Albert Einstein’s law of mass-energy conversion during a nuclear reaction. Between 1927 and 1935, Aston reviewed all the elements that he had worked on earlier and published updated results. He also began to build a still more accurate instrument, which proved to be of great value to nuclear chemistry.
The discovery of isotopes opened the way to further research in nuclear physics and completed the speculations begun by Prout during the previous century. Although radioactivity was discovered separately, isotopes played a central role in the field of nuclear physics and chain reactions.
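The whole-number rule can be checked with simple weighted averages of the isotope masses and abundance ratios quoted above. A minimal sketch in Python, assuming roughly equal amounts of the two bromine isotopes (the neon ratio is the 10:1 figure from the text):

    # Weighted-average atomic weight from isotope masses and relative abundances.
    def average_atomic_weight(isotopes):
        """isotopes is a list of (mass, relative_abundance) pairs."""
        total_abundance = sum(abundance for _, abundance in isotopes)
        return sum(mass * abundance for mass, abundance in isotopes) / total_abundance

    neon = [(20, 10), (22, 1)]    # neon-20 and neon-22 in a 10:1 ratio
    bromine = [(79, 1), (81, 1)]  # bromine-79 and bromine-81, assumed roughly equal
    print(average_atomic_weight(neon))     # about 20.18, close to the accepted 20.2
    print(average_atomic_weight(bromine))  # exactly 80.0

The 10:1 neon mixture averages to about 20.18, which rounds to the accepted 20.2, and the bromine pair averages to almost exactly 80, as noted above.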

Mark I calculator

The invention: An early digital calculator designed to solve differential equations that was a forerunner of modern computers.
The people behind the invention:
Howard H. Aiken (1900-1973), Harvard University professor and architect of the Mark I
Clair D. Lake (1888-1958), a senior engineer at IBM
Francis E. Hamilton (1898-1972), an IBM engineer
Benjamin M. Durfee (1897-1980), an IBM engineer
The Human Computer
The physical world can be described by means of mathematics. In principle, one can accurately describe nature down to the smallest detail.

In practice, however, this is impossible except for the simplest of atoms. Over the years, physicists have had great success in creating simplified models of real physical processes whose behavior can be described by the branch of mathematics called “calculus.” Calculus relates quantities that change over a period of time. The equations that relate such quantities are called “differential equations,” and some can be solved precisely to yield information about those quantities. Most natural phenomena, however, can be described only by differential equations that can be solved only approximately. These equations are solved by numerical means that involve performing a tremendous number of simple arithmetic operations (repeated additions and multiplications). It has been the dream of many scientists since the late 1700’s to find a way to automate the process of solving these equations.
In the early 1900’s, people who spent day after day performing the tedious operations required to solve differential equations were known as “computers.” During the two world wars, these human computers created ballistics tables by solving the differential equations that described the hurling of projectiles and the dropping of bombs from aircraft. The war effort was largely responsible for accelerating the push to automate the solution of these problems.
The ten-year period from 1935 to 1945 can be considered the prehistory of the development of the digital computer. (In a digital computer, digits represent magnitudes of physical quantities, and these digits can take only certain values.) Before this time, all machines for automatic calculation were either analog in nature (in which case physical quantities such as current or voltage represent the numerical values of the equation and can vary in a continuous fashion) or were simplistic mechanical or electromechanical adding machines.
This was the situation that faced Howard Aiken. At the time, he was a graduate student working on his doctorate in physics. His dislike for the tremendous effort required to solve the differential equations used in his thesis drove him to propose, in the fall of 1937, constructing a machine that would automate the process. He proposed taking existing business machines that were commonly used in accounting firms and combining them into one machine that would be controlled by a series of instructions. One goal was to eliminate all manual intervention in the process in order to maximize the speed of the calculation.
Aiken’s proposal came to the attention of Thomas Watson, who was then the president of International Business Machines Corporation (IBM). At that time, IBM was a major supplier of business machines and did not see much of a future in such “specialized” machines. It was the pressure of the computational needs of the military in World War II that led IBM to invest in building automated calculators. In 1939, a contract was signed in which IBM agreed to use its resources (personnel, equipment, and finances) to build a machine for Howard Aiken and Harvard University.
IBM brought together a team of seasoned engineers to fashion a working device from Aiken’s sketchy ideas. Clair D. Lake, who was selected to manage the project, called on two talented engineers, Francis E. Hamilton and Benjamin M. Durfee, to assist him. After four years of effort, interrupted at times by the demands of the war, a machine was constructed that worked remarkably well.
Completed in January, 1943, at Endicott, New York, it was then disassembled and moved to Harvard University in Cambridge, Massachusetts, where it was reassembled. Known as the IBM automatic sequence controlled calculator (ASCC), it began operation in the spring of 1944 and was formally dedicated and revealed to the public on August 7, 1944. Its name indicates the machine’s distinguishing feature: the ability to load automatically the instructions that control the sequence of the calculation. This capability was provided by punching holes, representing the instructions, in a long, ribbonlike paper tape that could be read by the machine.
Computers of that era were big, and the ASCC was particularly impressive. It was 51 feet long by 8 feet tall, and it weighed 5 tons. It contained more than 750,000 parts, and when it was running, it sounded like a room filled with sewing machines. The ASCC later became known as the Harvard Mark I.
Impact
Although this machine represented a significant technological achievement at the time and contributed ideas that would be used in subsequent machines, it was almost obsolete from the start. It was electromechanical, since it relied on relays, but it was built at the dawn of the electronic age. Fully electronic computers offered better reliability and faster speeds. Howard Aiken continued, without the help of IBM, to develop successors to the Mark I. Because he resisted using electronics, however, his machines did not significantly affect the direction of computer development.
For all its complexity, the Mark I operated reasonably well, first solving problems related to the war effort and then turning its attention to the more mundane tasks of producing specialized mathematical tables. It remained in operation at the Harvard Computational Laboratory until 1959, when it was retired and disassembled. Parts of this landmark computational tool are now kept at the Smithsonian Institution.
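The repetitive arithmetic that human computers performed by hand, and that the Mark I automated, can be illustrated by stepping a differential equation forward in small increments. The sketch below uses Euler’s method on a textbook equation; it illustrates the numerical idea only and is not the machine’s actual procedure.

    # Approximate a differential equation dy/dt = -y by repeated small steps:
    # each step is nothing more than a multiplication and an addition.
    def euler_solve(dydt, y_start, t_start, t_end, steps):
        y, t = y_start, t_start
        dt = (t_end - t_start) / steps
        for _ in range(steps):
            y = y + dt * dydt(t, y)  # one multiply and one add per step
            t = t + dt
        return y

    # y(0) = 1, dy/dt = -y; the exact value of y(1) is 1/e, about 0.3679.
    print(euler_solve(lambda t, y: -y, 1.0, 0.0, 1.0, 1000))

Even this trivial equation takes a thousand multiply-and-add steps at this step size; realistic ballistics problems required vastly more, which is why automating the arithmetic mattered.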

Monday, August 24, 2009

Mammography




The invention: The first X-ray procedure for detecting and diagnosing
breast cancer.
The people behind the invention:
Albert Salomon, the first researcher to use X-ray technology
instead of surgery to identify breast cancer
Jacob Gershon-Cohen (1899-1971), a breast cancer researcher
Studying Breast Cancer
Medical researchers have been studying breast cancer for more
than a century. At the end of the nineteenth century, however, no one
knew how to detect breast cancer until it was quite advanced. Often,
by the time it was detected, it was too late for surgery; many patients
who did have surgery died. So after X-ray technology first appeared
in 1896, cancer researchers were eager to experiment with it.
The first scientist to use X-ray techniques in breast cancer experiments
was Albert Salomon, a German surgeon. Trying to develop a
biopsy technique that could tell which tumors were cancerous and
thereby avoid unnecessary surgery, he X-rayed more than three
thousand breasts that had been removed from patients during breast
cancer surgery. In 1913, he published the results of his experiments,
showing that X rays could detect breast cancer. Different types of X-ray
images, he said, showed different types of cancer.
Though Salomon is recognized as the inventor of breast radiology,
he never actually used his technique to diagnose breast cancer.
In fact, breast cancer radiology, which came to be known as “mammography,”
was not taken up quickly by other medical researchers.
Those who did try to reproduce his research often found that their
results were not conclusive.
During the 1920’s, however, more research was conducted in Leipzig,
Germany, and in South America. Eventually, the Leipzig researchers,
led by Erwin Payr, began to use mammography to diagnose
cancer. In the 1930’s, a Leipzig researcher named W. Vogel
published a paper that accurately described differences between
cancerous and noncancerous tumors as they appeared on X-ray photographs. Researchers in the United States paid little attention to
mammography until 1926. That year, a physician in Rochester, New
York, was using a fluoroscope to examine heart muscle in a patient
and discovered that the fluoroscope could be used to make images of
breast tissue as well. The physician, Stafford L. Warren, then developed
a stereoscopic technique that he used in examinations before
surgery. Warren published his findings in 1930; his article also described
changes in breast tissue that occurred because of pregnancy,
lactation (milk production), menstruation, and breast disease. Yet
Warren’s technique was complicated and required equipment that
most physicians of the time did not have. Eventually, he lost interest
in mammography and went on to other research.
Using the Technique
In the late 1930’s, Jacob Gershon-Cohen became the first clinician
to advocate regular mammography for all women to detect breast
cancer before it became a major problem. Mammography was not
very expensive, he pointed out, and it was already quite accurate. A
milestone in breast cancer research came in 1956, when Gershon-
Cohen and others began a five-year study of more than 1,300 women
to test the accuracy of mammography for detecting breast cancer.
Each woman studied was screened once every six months. Of the
1,055 women who finished the study, 92 were diagnosed with benign
tumors and 23 with malignant tumors. Remarkably, out of all
these, only one diagnosis turned out to be wrong.
During the same period, Robert Egan of Houston began tracking
breast cancer X rays. Over a span of three years, one thousand X-ray
photographs were used to make diagnoses. When these diagnoses
were compared to the results of surgical biopsies, it was confirmed
that mammography had produced 238 correct diagnoses of cancer,
out of 240 cases. Egan therefore joined the crusade for regular breast
cancer screening.
Once mammography was finally accepted by doctors in the late
1950’s and early 1960’s, researchers realized that they needed a way
to teach mammography quickly and effectively to those who would
use it. A study was done, and it showed that any radiologist could
conduct the procedure with only five days of training.
In the early 1970’s, the American Cancer Society and the National
Cancer Institute joined forces on a nationwide breast cancer
screening program called the “Breast Cancer Detection Demonstration
Project.” Its goal in 1971 was to screen more than 250,000
women over the age of thirty-five.
Since the 1960’s, however, some people had argued that mammography
was dangerous because it used radiation on patients. In
1976, Ralph Nader, a consumer advocate, stated that women who
were to undergo mammography should be given consent forms
that would list the dangers of radiation. In the years that followed,
mammography was refined to reduce the amount of radiation
needed to detect cancer. It became a standard tool for diagnosis, and
doctors recommended that women have a mammogram every two
or three years after the age of forty.
Impact
Radiology is not a science that concerns only breast cancer screening.
While it does provide the technical facilities necessary to practice
mammography, the photographic images obtained must be interpreted
by general practitioners, as well as by specialists. Once Gershon-Cohen had demonstrated the viability of the technique, a
means of training was devised that made it fairly easy for clinicians
to learn how to practice mammography successfully. Once all these
factors—accuracy, safety, simplicity—were in place, mammography
became an important factor in the fight against breast cancer.
The progress made in mammography during the twentieth century
was a major improvement in the effort to keep more women
from dying of breast cancer. The disease has always been one of the
primary contributors to the number of female cancer deaths that occur
annually in the United States and around the world. This high
figure stems from the fact that women had no way of detecting the
disease until tumors were in an advanced state.
Once Salomon’s procedure was utilized, physicians had a means
by which they could look inside breast tissue without engaging in
exploratory surgery, thus giving women a screening technique that
was simple and inexpensive. By 1971, a quarter million women over
age thirty-five had been screened. Twenty years later, that number
was in the millions.

Monday, August 17, 2009

Long-distance telephone




The invention: System for conveying voice signals via wires over
long distances.
The people behind the invention:
Alexander Graham Bell (1847-1922), a Scottish American
inventor
Thomas A. Watson (1854-1934), an American electrical engineer
The Problem of Distance
The telephone may be the most important invention of the nineteenth
century. The device developed by Alexander Graham Bell
and Thomas A. Watson opened a new era in communication and
made it possible for people to converse over long distances for the
first time. During the last two decades of the nineteenth century and
the first decade of the twentieth century, the American Telephone
and Telegraph (AT&T) Company continued to refine and upgrade
telephone facilities, introducing such innovations as automatic dialing
and long-distance service.
One of the greatest challenges faced by Bell engineers was to
develop a way of maintaining signal quality over long distances.
Telephone wires were susceptible to interference from electrical
storms and other natural phenomena, and electrical resistance
and radiation caused a fairly rapid drop-off in signal strength,
which made long-distance conversations barely audible or unintelligible.
By 1900, Bell engineers had discovered that signal strength could
be improved somewhat by wrapping the main wire conductor with
thinner wires called “loading coils” at prescribed intervals along
the length of the cable. Using this procedure, Bell extended long-distance
service from New York to Denver, Colorado, which was
then considered the farthest point that could be reached with acceptable
quality. The result, however, was still unsatisfactory, and
Bell engineers realized that some form of signal amplification would
be necessary to improve the quality of the signal.
A breakthrough came in 1906, when Lee de Forest invented the
“audion tube,” which could send and amplify radio waves. Bell scientists
immediately recognized the potential of the new device for
long-distance telephony and began building amplifiers that would
be placed strategically along the long-distance wire network.
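The case for amplifiers can be sketched with decibel arithmetic: loss along a line accumulates with distance, while each repeater contributes a fixed gain. The figures below are invented for illustration only and are not measurements of the actual 1915 circuit.

    # Illustrative only: cumulative line loss in decibels versus repeater gain.
    # These numbers are assumptions, not historical measurements of the line.
    LOSS_DB_PER_KM = 0.03    # assumed loss of the open-wire line, dB per kilometer
    REPEATER_GAIN_DB = 20.0  # assumed gain of one vacuum-tube repeater, in dB
    DISTANCE_KM = 4800       # roughly the New York-San Francisco distance

    total_loss_db = LOSS_DB_PER_KM * DISTANCE_KM
    repeaters_needed = total_loss_db / REPEATER_GAIN_DB
    print(f"Accumulated loss: {total_loss_db:.0f} dB")
    print(f"Offset by about {repeaters_needed:.0f} repeaters of {REPEATER_GAIN_DB:.0f} dB each")

Whatever the real figures were, the loss grows with distance while each repeater’s gain is fixed, so some number of amplifiers spaced along the route was unavoidable.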
Work progressed so quickly that by 1909, Bell officials were predicting
that the first transcontinental long-distance telephone service,
between New York and San Francisco, was imminent. In that
year, Bell president Theodore N. Vail went so far as to promise the
organizers of the Panama-Pacific Exposition, scheduled to open in
San Francisco in 1914, that Bell would offer a demonstration at
the exposition. The promise was risky, because certain technical
problems associated with sending a telephone signal over a 4,800-
kilometer wire had not yet been solved. De Forest’s audion tube was
a crude device, but progress was being made.
Two more breakthroughs came in 1912, when de Forest improved
on his original concept and Bell engineer Harold D. Arnold
improved it further. Bell bought the rights to de Forest’s vacuum-tube
patents in 1913 and completed the construction of the New
York-San Francisco circuit. The last connection was made at the
Utah-Nevada border on June 17, 1914.
Success Leads to Further Improvements
Bell’s long-distance network was tested successfully on June 29,
1914, but the official demonstration was postponed until January
25, 1915, to accommodate the Panama-Pacific Exposition, which
had also been postponed. On that date, a connection was established
between Jekyll Island, Georgia, where Theodore Vail was recuperating
from an illness, and New York City, where Alexander
Graham Bell was standing by to talk to his former associate Thomas
Watson, who was in San Francisco. When everything was in place,
the following conversation took place. Bell: “Hoy! Hoy! Mr. Watson?
Are you there? Do you hear me?” Watson: “Yes, Dr. Bell, I hear
you perfectly. Do you hear me well?” Bell: “Yes, your voice is perfectly
distinct. It is as clear as if you were here in New York.”
The first transcontinental telephone conversation transmitted
by wire was followed quickly by another that was transmitted via radio. Although the Bell company was slow to recognize the potential
of radio wave amplification for the “wireless” transmission
of telephone conversations, by 1909 the company had made a significant
commitment to conduct research in radio telephony. On
April 4, 1915, a wireless signal was transmitted by Bell technicians
from Montauk Point on Long Island, New York, to Wilmington,
Delaware, a distance of more than 320 kilometers. Shortly thereafter,
a similar test was conducted between New York City and
Brunswick, Georgia, via a relay station at Montauk Point. The total
distance of the transmission was more than 1,600 kilometers. Finally,
in September, 1915, Vail placed a successful transcontinental radiotelephone
call from his office in New York to Bell engineering chief
J. J. Carty in San Francisco.
Only a month later, the first telephone transmission across the
Atlantic Ocean was accomplished via radio from Arlington, Virginia,
to the Eiffel Tower in Paris, France. The signal was detectable,
although its quality was poor. It would be ten years before true
transatlantic radio-telephone service would begin.
The Bell company recognized that creating a nationwide long-distance
network would increase the volume of telephone calls simply
by increasing the number of destinations that could be reached
from any single telephone station. As the network expanded, each
subscriber would have more reason to use the telephone more often,
thereby increasing Bell’s revenues. Thus, the company’s strategy
became one of tying local and regional networks together to create
one large system.
Impact
Just as the railroads had interconnected centers of commerce, industry,
and agriculture all across the continental United States in the
nineteenth century, the telephone promised to bring a new kind of
interconnection to the country in the twentieth century: instantaneous
voice communication. During the first quarter century after
the invention of the telephone and during its subsequent commercialization,
the emphasis of telephone companies was to set up central
office switches that would provide interconnections among
subscribers within a fairly limited geographical area. Large cities were wired quickly, and by the beginning of the twentieth century
most were served by telephone switches that could accommodate
thousands of subscribers.
The development of intercontinental telephone service was a
milestone in the history of telephony for two reasons. First, it was a
practical demonstration of the almost limitless applications of this
innovative technology. Second, for the first time in its brief history,
the telephone network took on a national character. It became clear
that large central office networks, even in large cities such as New
York, Chicago, and Baltimore, were merely small parts of a much
larger, universally accessible communication network that spanned
a continent. The next step would be to look abroad, to Europe and
beyond.

Long-distance radiotelephony




The invention: The first radio transmissions from the United States
to Europe opened a new era in telecommunications.
The people behind the invention:
Guglielmo Marconi (1874-1937), Italian inventor of transatlantic
telegraphy
Reginald Aubrey Fessenden (1866-1932), an American radio
engineer
Lee de Forest (1873-1961), an American inventor
Harold D. Arnold (1883-1933), an American physicist
John J. Carty (1861-1932), an American electrical engineer
An Accidental Broadcast
The idea of commercial transatlantic communication was first
conceived by Italian physicist and inventor Guglielmo Marconi, the
pioneer of wireless telegraphy. Marconi used a spark transmitter to
generate radio waves that were interrupted, or modulated, to form
the dots and dashes of Morse code. The rapid generation of sparks
created an electromagnetic disturbance that sent radio waves of different
frequencies into the air—a broad, noisy transmission that was
difficult to tune and detect.
The inventor Reginald Aubrey Fessenden produced an alternative
method that became the basis of radio technology in the twentieth
century. His continuous radio waves kept to one frequency,
making them much easier to detect at long distances. Furthermore,
the continuous waves could be modulated by an audio signal, making
it possible to transmit the sound of speech.
Fessenden used an alternator to generate electromagnetic waves
at the high frequencies required in radio transmission. It was specially
constructed at the laboratories of the General Electric Company.
The machine was shipped to Brant Rock, Massachusetts, in
1906 for testing. Radio messages were sent to a boat cruising offshore,
and the feasibility of radiotelephony was thus demonstrated.
Fessenden followed this success with a broadcast of messages and music between Brant Rock and a receiving station constructed at
Plymouth, Massachusetts.
The equipment installed at Brant Rock had a range of about 160
kilometers. The transmission distance was determined by the strength
of the electric power delivered by the alternator, which was measured
in watts. Fessenden’s alternator was rated at 500 watts, but it
usually delivered much less power.
Yet this was sufficient to send a radio message across the Atlantic.
Fessenden had built a receiving station at Machrihanish, Scotland,
to test the operation of a large rotary spark transmitter that he
had constructed. An operator at this station picked up the voice of
an engineer at Brant Rock who was sending instructions to Plymouth.
Thus, the first radiotelephone message had been sent across
the Atlantic by accident. Fessenden, however, decided not to make
this startling development public. The station at Machrihanish was
destroyed in a storm, making it impossible to carry out further tests.
The successful transmission undoubtedly had been the result of exceptionally
clear atmospheric conditions that might never again favor
the inventor.
One of the parties following the development of the experiments
in radio telephony was the American Telephone and Telegraph
(AT&T) Company. Fessenden entered into negotiations to sell his
system to the telephone company, but, because of the financial panic
of 1907, the sale was never made.
Virginia to Paris and Hawaii
The English physicist John Ambrose Fleming had invented a two-element
(diode) vacuum tube in 1904 that could be used to generate
and detect radio waves. Two years later, the American inventor Lee
de Forest added a third element to the diode to produce his “audion”
(triode), which was a more sensitive detector. John J. Carty, head of a
research and development effort at AT&T, examined these new devices
carefully. He became convinced that an electronic amplifier, incorporating
the triode into its design, could be used to increase the
strength of telephone signals and to carry them over long distances.
On Carty’s advice, AT&T purchased the rights to de Forest’s
audion. A team of about twenty-five researchers, under the leadership of physicist Harold D. Arnold, was assigned the job of perfecting
the triode and turning it into a reliable amplifier. The improved
triode was responsible for the success of transcontinental cable telephone
service, which was introduced in January, 1915. The triode
was also the basis of AT&T’s foray into radio telephony.
Carty’s research plan called for a system with three components:
an oscillator to generate the radio waves, a modulator to add the
audio signals to the waves, and an amplifier to transmit the radio
waves. The total power output of the system was 7,500 watts,
enough to send the radio waves over thousands of kilometers.
The apparatus was installed in the U.S. Navy’s radio tower in
Arlington, Virginia, in 1915. Radio messages from Arlington were
picked up at a receiving station in California, a distance of 4,000 kilometers,
then at a station in Pearl Harbor, Hawaii, which was 7,200
kilometers from Arlington. AT&T’s engineers had succeeded in
joining the company telephone lines with the radio transmitter at
Arlington; therefore, the president of AT&T, Theodore Vail, could
pick up his telephone and talk directly with someone in California.
The next experiment was to send a radio message from Arlington
to a receiving station set up in the Eiffel Tower in Paris. After several
unsuccessful attempts, the telephone engineers in the Eiffel Tower
finally heard Arlington’s messages on October 21, 1915. The AT&T
receiving station in Hawaii also picked up the messages. The two receiving
stations had to send their reply by telegraph to the United
States because both stations were set up to receive only. Two-way
radio communication was still years in the future.
Impact
The announcement that messages had been received in Paris was
front-page news and brought about an outburst of national pride in
the United States. The demonstration of transatlantic radio telephony
was more important as publicity for AT&T than as a scientific
advance. All the credit went to AT&T and to Carty’s laboratory.
Both Fessenden and de Forest attempted to draw attention to their
contributions to long-distance radio telephony, but to no avail. The
Arlington-to-Paris transmission was a triumph for corporate public
relations and corporate research.
The development of the triode had been achieved with large
teams of highly trained scientists—in contrast to the small-scale efforts
of Fessenden and de Forest, who had little formal scientific
training. Carty’s laboratory was an example of the new type of industrial
research that was to dominate the twentieth century. The
golden days of the lone inventor, in the mold of Thomas Edison or
Alexander Graham Bell, were gone.
In the years that followed the first transatlantic radio telephone
messages, little was done by AT&T to advance the technology or to
develop a commercial service. The equipment used in the 1915 demonstration was more a makeshift laboratory apparatus than a prototype
for a new radio technology. The messages sent were short and
faint. There was a great gulf between hearing “hello” and “goodbye”
amid the static and offering a dependable commercial service. The many predictions of a direct telephone
connection between New York and other major cities overseas were
premature. It was not until 1927 that a transatlantic radio circuit was
opened for public use. By that time, a new technological direction
had been taken, and the method used in 1915 had been superseded
by shortwave radio communication.

Laser vaporization




The invention: Technique using laser light beams to vaporize the
plaque that clogs arteries.
The people behind the invention:
Albert Einstein (1879-1955), a German-born American theoretical physicist
Theodore Harold Maiman (1927- ), inventor of the laser
Light, Lasers, and Coronary Arteries
Visible light, a type of electromagnetic radiation, is actually a
form of energy. The fact that the light beams produced by a light
bulb can warm an object demonstrates that this is the case. Light
beams are radiated in all directions by a light bulb. In contrast, the
device called the “laser” produces light that travels in the form of a
“coherent” unidirectional beam. Coherent light beams can be focused
on very small areas, generating sufficient heat to melt steel.
The term “laser” was coined in 1957 by R. Gordon Gould of
Columbia University. It stands for light amplification by stimulated
emission of radiation, the means by which laser light beams are
made. Many different materials—including solid ruby gemstones,
liquid dye solutions, and mixtures of gases—can produce such
beams in a process called “lasing.” The different types of lasers yield
light beams of different colors that have many uses in science, industry,
and medicine. For example, ruby lasers, which were developed
in 1960, are widely used in eye surgery. In 1983, a group of
physicians in Toulouse, France, used a laser for cardiovascular treatment.
They used the laser to vaporize the “atheroma” material that
clogs the arteries in the condition called “atherosclerosis.” The technique
that they used is known as “laser vaporization surgery.”
Laser Operation, Welding, and Surgery
Lasers are electronic devices that emit intense beams of light
when a process called “stimulated emission” occurs. The principles
of laser operation, including stimulated emission, were established
by Albert Einstein and other scientists in the first third of the twentieth century. In 1960, Theodore H. Maiman of the Hughes Research
Center in Malibu, California, built the first laser, using a ruby crystal
to produce a laser beam composed of red light.
All lasers are made up of three main components. The first of
these, the laser’s “active medium,” is a solid (like Maiman’s ruby
crystal), a liquid, or a gas that can be made to lase. The second component
is a flash lamp or some other light energy source that puts
light into the active medium. The third component is a pair of mirrors
that are situated on both sides of the active medium and are designed
in such a way that one mirror transmits part of the energy
that strikes it, yielding the light beam that leaves the laser.
Lasers can produce energy because light is one of many forms of
energy that are called, collectively, electromagnetic radiation (among
the other forms of electromagnetic radiation are X rays and radio
waves). These forms of electromagnetic radiation have different wavelengths;
the smaller the wavelength, the higher the energy. This
energy is carried in discrete packets called “quanta.” The emission of
light quanta from atoms that are said to be in the “excited state” releases
energy, and the absorption of quanta by unexcited atoms—
atoms said to be in the “ground state”—excites those atoms.
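To attach rough numbers to the statement that shorter wavelengths mean higher energy, each quantum carries an energy equal to Planck’s constant times the speed of light divided by the wavelength. A quick sketch with approximate constants, comparing the ruby laser’s red light with a typical X ray:

    # Photon energy E = h * c / wavelength: the shorter the wavelength,
    # the more energy each quantum carries.
    PLANCK = 6.626e-34         # Planck's constant, joule-seconds
    SPEED_OF_LIGHT = 3.0e8     # meters per second, approximate
    JOULES_PER_EV = 1.602e-19  # joules per electron volt

    for label, wavelength_m in [("red ruby-laser light, 694 nm", 694e-9),
                                ("a typical X ray, 0.1 nm", 0.1e-9)]:
        energy_ev = PLANCK * SPEED_OF_LIGHT / wavelength_m / JOULES_PER_EV
        print(f"{label}: roughly {energy_ev:.3g} eV per quantum")

Each X-ray quantum carries thousands of times more energy than a quantum of red laser light, which is the sense in which shorter wavelengths correspond to higher energy.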
The familiar light bulb spontaneously and haphazardly emits
light of many wavelengths from excited atoms. This emission occurs
in all directions and at widely varying times. In contrast, the
light reflection between the mirrors at the ends of a laser causes all
of the many excited atoms present in the active medium simultaneously
to emit light waves of the same wavelength. This process is
called “stimulated emission.”
Stimulated emission ultimately causes a laser to yield a beam of
coherent light, which means that the wavelength, emission time,
and direction of all the waves in the laser beam are the same. The
use of focusing devices makes it possible to convert an emitted laser
beam into a point source that can be as small as a few thousandths of
an inch in diameter. Such focused beams are very hot, and they can
be used for such diverse functions as cutting or welding metal objects
and performing delicate surgery. The nature of the active medium
used in a laser determines the wavelength of its emitted light
beam; this in turn dictates both the energy of the emitted quanta and
the appropriate uses for the laser.
Maiman’s ruby laser, for example, has been used since the 1960’s
in eye surgery to reattach detached retinas. This is done by focusing
the laser on the tiny retinal tear that causes a retina to become detached.
The very hot, high-intensity light beam then “welds” the
retina back into place, bloodlessly, by burning it to produce scar tissue.
The burning process has no effect on nearby tissues. Other
types of lasers have been used in surgeries on the digestive tract and
the uterus since the 1970’s.
In 1983, a group of physicians began using lasers to treat cardiovascular
disease. The original work, which was carried out by a
number of physicians in Toulouse, France, involved the vaporization
of atheroma deposits (atherosclerotic plaque) in a human artery. This very exciting event added a new method to medical science’s
arsenal of life-saving techniques.
Consequences
Since their discovery, lasers have been used for many purposes
in science and industry. Such uses include the study of the laws of
chemistry and physics, photography, communications, and surveying.
Lasers have been utilized in surgery since the mid-1960’s, and
their use has had a tremendous impact on medicine. The first type
of laser surgery to be conducted was the repair of detached retinas
via ruby lasers. This technique has become the method of choice for
such eye surgery because it takes only minutes to perform rather
than the hours required for conventional surgical methods. It is also
beneficial because the lasing of the surgical site cauterizes that site,
preventing bleeding.
In the late 1970’s, the use of other lasers for abdominal cancer
surgery and uterine surgery began and flourished. In these
forms of surgery, more powerful lasers are used. In the 1980’s,
laser vaporization surgery (LVS) began to be used to clear atherosclerotic
plaque (atheromas) from clogged arteries. This methodology
gives cardiologists a useful new tool. Before LVS was
available, surgeons dislodged atheromas by means of “transluminal
angioplasty,” which involved pushing small, fluoroscope-guided
inflatable balloons through clogged arteries.

Wednesday, August 12, 2009

Laser eye surgery





The invention: The first significant clinical ophthalmic application
of any laser system was the treatment of retinal tears with a
pulsed ruby laser.
The people behind the invention:
Charles J. Campbell (1926- ), an ophthalmologist
H. Christian Zweng (1925- ), an ophthalmologist
Milton M. Zaret (1927- ), an ophthalmologist
Theodore Harold Maiman (1927- ), the physicist who
developed the first laser
Monkeys and Rabbits
The term “laser” is an acronym for light amplification by the
stimulated emission of radiation. The development of the laser for
ophthalmic (eye) surgery arose from the initial concentration
of conventional light by magnifying lenses.
Within a laser, atoms are highly energized. When one of these atoms
loses its energy in the form of light, it stimulates other atoms to
emit light of the same frequency and in the same direction. A cascade
of these identical light waves is soon produced, which then oscillate
back and forth between the mirrors in the laser cavity. One
mirror is only partially reflective, allowing some of the laser light to
pass through. This light can be concentrated further into a small
burst of high intensity.
On July 7, 1960, Theodore Harold Maiman made public his discovery
of the first laser—a ruby laser. Shortly thereafter, ophthalmologists
began using ruby lasers for medical purposes.
The first significant medical uses of the ruby laser occurred in
1961, with experiments on animals conducted by Charles J. Campbell
in New York, H. Christian Zweng, and Milton M. Zaret. Zaret and his
colleagues produced photocoagulation (a thickening or drawing together
of substances by use of light) of the eyes of rabbits by flashes
from a ruby laser. Sufficient energy was delivered to cause immediate
thermal injury to the retina and iris of the rabbit. The beam also was directed to the interior of the rabbit eye, resulting in retinal coagulations.
The team examined the retinal lesions and pointed out both
the possible advantages of laser as a tool for therapeutic photocoagulation
and the potential applications in medical research.
In 1962, Zweng, along with several of his associates, began experimenting
with laser photocoagulation on the eyes of monkeys
and rabbits in order to establish parameters for the use of lasers on
the human eye.
Reflected by Blood
The vitreous humor, a transparent jelly that usually fills the vitreous
cavity of the eyes of younger individuals, commonly shrinks with age,
with myopia, or with certain pathologic conditions. As these conditions
occur, the vitreous humor begins to separate from the adjacent
retina. In some patients, the separating vitreous humor produces a
traction (pulling), causing a retinal tear to form. Through this opening in
the retina, liquefied vitreous humor can pass to a site underneath the
retina, producing retinal detachment and loss of vision.
A laser can be used to cause photocoagulation of a retinal tear. As a
result, an adhesive scar forms between the retina surrounding the
tear and the underlying layers so that, despite traction, the retina
does not detach. If more than a small area of retina has detached, the
laser often is ineffective and major retinal detachment surgery must
be performed. Thus, in the experiments of Campbell and Zweng, the
ruby laser was used to prevent, rather than treat, retinal detachment.
In subsequent experiments with humans, all patients were treated
with the experimental laser photocoagulator without anesthesia.
Although usually no attempt was made to seal holes or tears, the
diseased portions of the retina were walled off satisfactorily so that
no detachments occurred. One problem that arose involved microaneurysms.
A “microaneurysm” is a tiny aneurysm, or blood-filled
bubble extending from the wall of a blood vessel. When attempts to
obliterate microaneurysms were unsuccessful, the researchers postulated
that the color of the ruby pulse so resembled the red of blood
that the light was reflected rather than absorbed. They believed that
another lasing material emitting light in another part of the spectrum
might have performed more successfully.
Previously, xenon-arc lamp photocoagulators had been used to
treat retinal tears. The long exposure time required of these systems,
combined with their broad spectral range emission (versus
the single wavelength output of a laser), however, made the retinal
spot on which the xenon-arc could be focused too large for many
applications. Focused laser spots on the retina could be as small as
50 microns.
Consequences
The first laser in ophthalmic use by Campbell, Zweng, and Zaret,
among others, was a solid laser—Maiman’s ruby laser. While the results
they achieved with this laser were more impressive than with
the previously used xenon-arc, in the decades following these experiments,
argon gas replaced ruby as the most frequently used material
in treating retinal tears.
Argon laser energy is delivered to the area around the retinal tear
through a slit lamp or by using an intraocular probe introduced directly
into the eye. The argon wavelength is transmitted through the
clear structures of the eye, such as the cornea, lens, and vitreous.
This beam is composed of blue-green light that can be effectively
aimed at the desired portion of the eye. Nevertheless, the beam can
be absorbed by cataracts and by vitreous or retinal blood, decreasing
its effectiveness.
Moreover, while the ruby laser was found to be highly effective
in producing an adhesive scar, it was not useful in the treatment of
vascular diseases of the eye. A series of laser sources, each with different
characteristics, was considered, investigated, and used clinically
for various durations during the period that followed Campbell
and Zweng’s experiments.
Other laser types that are being adapted for use in ophthalmology
are carbon dioxide lasers for scleral surgery (surgery on the
tough, white, fibrous membrane covering the entire eyeball except
the area covered by the cornea) and eye wall resection, dye lasers to
kill or slow the growth of tumors, excimer lasers for their ability to
break down corneal tissue without heating, and pulsed erbium lasers
used to cut intraocular membranes.

Laser-diode recording process




The invention: Video and audio playback system that uses a low-power
laser to decode information digitally stored on reflective
disks.
The organization behind the invention:
The Philips Corporation, a Dutch electronics firm
The Development of Digital Systems
Since the advent of the computer age, it has been the goal of
many equipment manufacturers to provide reliable digital systems
for the storage and retrieval of video and audio programs. A need
for such devices was perceived for several reasons. Existing storage
media (movie film and 12-inch, vinyl, long-playing records) were
relatively large and cumbersome to manipulate and were prone to
degradation, breakage, and unwanted noise. Thus, during the late
1960’s, two different methods for storing video programs on disc
were invented. A mechanical system was demonstrated by the
Telefunken Company, while the Radio Corporation of America
(RCA) introduced an electrostatic device (a device that used static
electricity). The first commercially successful system, however, was
developed during the mid-1970’s by the Philips Corporation.
Philips devoted considerable resources to creating a digital video
system, read by light beams, which could reproduce an entire feature-
length film from one 12-inch videodisc. An integral part of this
innovation was the fabrication of a device small enough and fast
enough to read the vast amounts of greatly compacted data stored
on the 12-inch disc without introducing unwanted noise. Although
Philips was aware of the other formats, the company opted to use an
optical scanner with a small “semiconductor laser diode” to retrieve
the digital information. The laser diode is only a fraction of a millimeter
in size, operates quite efficiently with high amplitude and relatively
low power (0.1 watt), and can be used continuously. Because
this configuration operates at a high frequency, its information-carrying
capacity is quite large.
Although the digital videodisc system (called “LaserVision”) worked
well, the low level of noise and the clear images offered by this system
were masked by the low quality of the conventional television
monitors on which they were viewed. Furthermore, the high price
of the playback systems and the discs made them noncompetitive
with the videocassette recorders (VCRs) that were then capturing
the market for home systems. VCRs had the additional advantage
that programs could be recorded or copied easily. The Philips Corporation
turned its attention to utilizing this technology in an area
where low noise levels and high quality would be more readily apparent—
audio disc systems. By 1979, they had perfected the basic
compact disc (CD) system, which soon revolutionized the world of
stereophonic home systems.
Reading Digital Discs with Laser Light
Digital signals (signals composed of numbers) are stored on
discs as “pits” impressed into the plastic disc and then coated with a
thin reflective layer of aluminum. A laser beam, manipulated by
delicate, fast-moving mirrors, tracks and reads the digital information
as changes in light intensity. These data are then converted to a
varying electrical signal that contains the video or audio information.
The data are then recovered by means of a sophisticated
pickup that consists of the semiconductor laser diode, a polarizing
beam splitter, an objective lens, a collective lens system, and a
photodiode receiver. The beam from the laser diode is focused by a
collimator lens (a lens that collects and focuses light) and then
passes through the polarizing beam splitter (PBS). This device acts
like a one-way mirror mounted at 45 degrees to the light path. Light
from the laser passes through the PBS as if it were a window, but the
light emerges in a polarized state (which means that the vibration of
the light takes place in only one plane). For the beam reflected from
the CD surface, however, the PBS acts like a mirror, since the reflected
beam has an opposite polarization. The light is thus deflected
toward the photodiode detector. The objective lens is needed
to focus the light onto the disc surface. On the outer surface of the
transparent disc, the main spot of light has a diameter of 0.8 millimeter,
which narrows to only 0.0017 millimeter at the reflective surface. At the surface, the spot is about three times the size of the microscopic
pits (0.0005 millimeter).
The data encoded on the disc determine the relative intensity of
the reflected light, on the basis of the presence or absence of pits.
When the reflected laser beam enters the photodiode, a modulated
light beam is changed into a digital signal that becomes an analog
(continuous) audio signal after several stages of signal processing
and error correction.
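The chain from reflected light to recovered bits can be pictured with a toy model. The sketch below is only an illustration, with invented function names, intensity levels, and threshold; a real player uses eight-to-fourteen modulation and far more elaborate decoding, but the basic idea is the same: a pit edge shows up as a change in reflected intensity, and those changes are read as binary data.

    # Toy model of optical readout: surface features ("pit" or "land") become
    # photodiode intensity samples, and each transition at a pit edge is read
    # as a binary 1. Names, levels, and the threshold are illustrative; a real
    # CD player uses eight-to-fourteen modulation and more elaborate decoding.

    def read_surface(features, land_level=1.0, pit_level=0.3):
        """Map each surface feature to a reflected-light intensity sample."""
        return [land_level if f == "land" else pit_level for f in features]

    def decode_transitions(samples, threshold=0.65):
        """Emit a 1 wherever the intensity crosses the threshold (a pit edge)."""
        bits = []
        previous_high = samples[0] > threshold
        for sample in samples[1:]:
            high = sample > threshold
            bits.append(1 if high != previous_high else 0)
            previous_high = high
        return bits

    surface = ["land", "land", "pit", "pit", "pit", "land", "pit", "land"]
    print(decode_transitions(read_surface(surface)))   # [0, 1, 0, 0, 1, 1, 1]

In the real player, this recovered bit stream then passes through demodulation, error correction, and digital-to-analog conversion, as described above.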
Consequences
The development of the semiconductor laser diode and associated
circuitry for reading stored information has made CD audio
systems practical and affordable. These systems can offer the quality
of a live musical performance with a clarity that is undisturbed
by noise and distortion. Digital systems also offer several other significant
advantages over analog devices. The dynamic range (the
difference between the softest and the loudest signals that can be
stored and reproduced) is considerably greater in digital systems. In
addition, digital systems can be copied precisely; the signal is not
degraded by copying, as is the case with analog systems. Finally,
error-correcting codes can be used to detect and correct errors in
transmitted or reproduced digital signals, allowing greater precision
and a higher-quality output sound.
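The error-correcting codes mentioned here can be illustrated with the simplest possible scheme, a threefold repetition code decoded by majority vote. This is not the code used on compact discs, which rely on a much stronger cross-interleaved Reed-Solomon code; the sketch below only shows how added redundancy lets a single corrupted value be detected and repaired.

    # Minimal illustration of error correction: every bit is stored three
    # times and recovered by majority vote, so any single corrupted copy is
    # repaired. Real compact discs use a far stronger cross-interleaved
    # Reed-Solomon code; this only demonstrates the principle of redundancy.

    def encode(bits):
        """Repeat each bit three times."""
        return [b for bit in bits for b in (bit, bit, bit)]

    def decode(coded):
        """Majority-vote each group of three copies back to a single bit."""
        return [1 if sum(coded[i:i + 3]) >= 2 else 0
                for i in range(0, len(coded), 3)]

    original = [1, 0, 1, 1]
    stored = encode(original)
    stored[4] ^= 1                      # simulate a read error in one copy
    assert decode(stored) == original   # the damaged bit is corrected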
Besides laser video systems, there are many other applications
for laser-read CDs. Compact disc read-only memory (CD-ROM) is
used to store computer text. One standard CD can store 500 megabytes
of information, which is about twenty times the storage of a
hard-disk drive on a typical home computer. Compact disc systems
can also be integrated with conventional televisions (called CD-V)
to present twenty minutes of sound and five minutes of sound with
picture. Finally, CD systems connected with a computer (CD-I) mix
audio, video, and computer programming. These devices allow the
user to stop at any point in the program, request more information,
and receive that information as sound with graphics, film clips, or
as text on the screen.

Laser



The invention: Taking its name from the acronym for light amplification
by the stimulated emission of radiation, a laser is a
beam of electromagnetic radiation that is monochromatic, highly
directional, and coherent. Lasers have found multiple applications
in electronics, medicine, and other fields.
The people behind the invention:
Theodore Harold Maiman (1927- ), an American physicist
Charles Hard Townes (1915- ), an American physicist who
was a cowinner of the 1964 Nobel Prize in Physics
Arthur L. Schawlow (1921-1999), an American physicist,
cowinner of the 1981 Nobel Prize in Physics
Mary Spaeth (1938- ), the American inventor of the tunable
laser
Coherent Light
Laser beams differ from other forms of electromagnetic radiation
in consisting of a single wavelength, being highly directional,
and having waves whose crests and troughs are aligned. A laser
beam launched from Earth has produced a spot a few kilometers
wide on the Moon, nearly 400,000 kilometers away. Ordinary light
would have spread much more and produced a spot several times
wider than the Moon. Laser light can also be concentrated so as to
yield an enormous intensity of energy, more than that of the surface
of the Sun, an impossibility with ordinary light.
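The scale of that lunar spot follows from simple diffraction: a beam of wavelength λ leaving an aperture of width D spreads by an angle of roughly λ/D. As a rough worked example (the 10-centimeter aperture is an assumed value, and ruby light has a wavelength near 0.7 micron), the spread over the Earth-Moon distance comes out to a spot a few kilometers wide:

    % Rough diffraction-limited estimate (illustrative numbers, not data
    % from the original experiments): lambda = ruby wavelength, D = assumed
    % 10-centimeter beam aperture, L = Earth-Moon distance.
    \theta \approx \frac{\lambda}{D}
           = \frac{7 \times 10^{-7}\,\mathrm{m}}{0.1\,\mathrm{m}}
           = 7 \times 10^{-6}\ \mathrm{rad},
    \qquad
    d_{\mathrm{spot}} \approx \theta L
           \approx 7 \times 10^{-6} \times 4 \times 10^{8}\,\mathrm{m}
           \approx 3\,\mathrm{km}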
In order to appreciate the difference between laser light and ordinary
light, one must examine how light of any kind is produced. An
ordinary light bulb contains atoms of gas. For the bulb to light up,
these atoms must be excited to a state of energy higher than their
normal, or ground, state. This is accomplished by sending a current
of electricity through the bulb; the current jolts the atoms into the
higher-energy state. This excited state is unstable, however, and the
atoms will spontaneously return to their ground state by ridding
themselves of excess energy.
As these atoms emit energy, light is produced. The light emitted
by a lamp full of atoms is disorganized and emitted in all directions
randomly. This type of light, common to all ordinary sources, from
fluorescent lamps to the Sun, is called “incoherent light.”
Laser light is different. The excited atoms in a laser emit their excess
energy in a unified, controlled manner. The atoms remain in the
excited state until there are a great many excited atoms. Then, they
are stimulated to emit energy, not independently, but in an organized
fashion, with all their light waves traveling in the same direction,
crests and troughs perfectly aligned. This type of light is called
“coherent light.”
Theory to Reality
In 1958, Charles Hard Townes of Columbia University, together
with Arthur L. Schawlow, explored the requirements of the laser in
a theoretical paper. In the Soviet Union, F. A. Butayeva and V. A.
Fabrikant had amplified light in 1957 using mercury; however, their
work was not published for two years, and then not in a
scientific journal. The work of the Soviet scientists, therefore, received virtually no attention in the Western world.
In 1960, Theodore Harold Maiman constructed the first laser in
the United States using a single crystal of synthetic pink ruby,
shaped into a cylindrical rod about 4 centimeters long and 0.5 centimeter
across. The ends, polished flat and made parallel to within
about a millionth of a centimeter, were coated with silver to make
them mirrors.
It is a property of stimulated emission that stimulated light
waves will be aligned exactly (crest to crest, trough to trough, and
with respect to direction) with the radiation that does the stimulating.
From the group of excited atoms, one atom returns to its ground state, emitting light. That light hits one of the other excited atoms and
stimulates it to fall to its ground state and emit light. The two light
waves are exactly in step. The light from these two atoms hits other
excited atoms, which respond in the same way, “amplifying” the total
sum of light.
If the first atom emits light in a direction parallel to the length of
the crystal cylinder, the mirrors at both ends bounce the light waves
back and forth, stimulating more light and steadily building up an
increasing intensity of light. The mirror at one end of the cylinder is
constructed to let through a fraction of the light, enabling the light to
emerge as a straight, intense, narrow beam.
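A toy calculation conveys how the light builds up between the mirrors and leaks out through the partially reflective end. The loop below uses invented numbers for the round-trip gain and mirror transmission; it is a schematic illustration of amplification with partial output coupling, not a model of Maiman’s actual ruby rod.

    # Schematic model of light bouncing between the two mirrors of a laser
    # cavity: each round trip it is amplified by the excited atoms and a
    # fraction leaks through the partially reflective mirror as the output
    # beam. All numbers are invented for illustration, not measured values.

    gain_per_pass = 1.5      # amplification supplied by stimulated emission
    output_mirror = 0.90     # partially reflective end: 10% escapes each pass

    intensity = 1.0          # arbitrary starting intensity
    for round_trip in range(1, 11):
        intensity *= gain_per_pass                 # amplified by excited atoms
        emitted = intensity * (1 - output_mirror)  # fraction leaving the cavity
        intensity *= output_mirror                 # remainder keeps oscillating
        print(f"trip {round_trip:2d}: inside {intensity:8.1f}  emitted {emitted:6.1f}")

As long as the gain outweighs the mirror losses, the intensity inside the cavity, and with it the emitted beam, grows on every round trip.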
Consequences
When the laser was introduced, it was an immediate sensation. In
the eighteen months following Maiman’s announcement that he had
succeeded in producing a working laser, about four hundred companies
and several government agencies embarked on work involving
lasers. Activity centered on improving lasers, as well as on exploring
their applications. At the same time, there was equal activity in publicizing
the near-miraculous promise of the device, in applications covering
the spectrum from “death” rays to sight-saving operations. A
popular film in the James Bond series, Goldfinger (1964), showed the
hero under threat of being sliced in half by a laser beam—an impossibility
at the time the film was made because of the low power-output
of the early lasers.
In the first decade after Maiman’s laser, there was some disappointment.
Successful use of lasers was limited to certain areas of
medicine, such as repairing detached retinas, and to scientific applications,
particularly in connection with standards: The speed of
light was measured with great accuracy, as was the distance to the
Moon. By 1990, partly because of advances in other fields, essentially
all the laser’s promise had been fulfilled, including the death
ray and James Bond’s slicer. Yet the laser continued to find its place
in technologies not envisioned at the time of the first laser. For example,
lasers are now used in computer printers, in compact disc
players, and even in arterial surgery.

Monday, August 10, 2009

Laminated glass




The invention: Double sheets of glass separated by a thin layer of
plastic sandwiched between them.
The people behind the invention:
Edouard Benedictus (1879-1930), a French artist
Katherine Burr Blodgett (1898-1979), an American physicist
The Quest for Unbreakable Glass
People have been fascinated for centuries by the delicate transparency
of glass and the glitter of crystals. They have also been frustrated
by the brittleness and fragility of glass. When glass breaks, it
forms sharp pieces that can cut people severely. During the 1800’s
and early 1900’s, a number of people demonstrated ways to make
“unbreakable” glass. In 1855 in England, the first “unbreakable”
glass panes were made by embedding thin wires in the glass. The
embedded wire grid held the glass together when it was struck or
subjected to the intense heat of a fire. Wire glass is still used in windows
that must be fire resistant. The concept of embedding the wire
within a glass sheet so that the glass would not shatter was a predecessor
of the concept of laminated glass.
A series of inventors in Europe and the United States worked on
the idea of using a durable, transparent inner layer of plastic between
two sheets of glass to prevent the glass from shattering when it was
dropped or struck by an impact. In 1899, Charles E. Wade of Scranton,
Pennsylvania, obtained a patent for a kind of glass that had a sheet or
netting of mica fused within it to bind it. In 1902, Earnest E. G. Street
of Paris, France, proposed coating glass battery jars with pyroxylin
plastic (celluloid) so that they would hold together if they cracked. In
Swindon, England, in 1905, John Crewe Wood applied for a patent
for a material that would prevent automobile windshields from shattering
and injuring people when they broke. He proposed cementing
a sheet of material such as celluloid between two sheets of glass.
When the window was broken, the inner material would hold the
glass splinters together so that they would not cut anyone.
Remembering a Fortuitous Fall
In his patent application, Edouard Benedictus described himself
as an artist and painter. He was also a poet, musician, and
philosopher who was descended from the philosopher Baruch
Benedictus Spinoza; he seemed an unlikely contributor to the
progress of glass manufacture. In 1903, Benedictus was cleaning his laboratory when he dropped a glass bottle that held a nitrocellulose
solution. The solvents, which had evaporated during the
years that the bottle had sat on a shelf, had left a strong celluloid
coating on the glass. When Benedictus picked up the bottle, he was
surprised to see that it had not shattered: It was starred, but all the
glass fragments had been held together by the internal celluloid
coating. He looked at the bottle closely, labeled it with the date
(November, 1903) and the height from which it had fallen, and put
it back on the shelf.
One day some years later (the date is uncertain), Benedictus became
aware of vehicular collisions in which two young women received
serious lacerations from broken glass. He wrote a poetic account
of a daydream he had while he was thinking intently about
the two women. He described a vision in which the faintly illuminated
bottle that had fallen some years before but had not shattered
appeared to float down to him from the shelf. He got up, went into
his laboratory, and began to work on an idea that originated with his
thoughts of the bottle that would not splinter.
Benedictus found the old bottle and devised a series of experiments
that he carried out until the next evening. By the time he had
finished, he had made the first sheet of Triplex glass, for which he
applied for a patent in 1909. He also founded the Société du Verre
Triplex (The Triplex Glass Society) in that year. In 1912, the Triplex
Safety Glass Company was established in England. The company
sold its products for military equipment in World War I, which began
two years later.
Triplex glass was the predecessor of laminated glass. Laminated
glass is composed of two or more sheets of glass with a thin
layer of plastic (usually polyvinyl butyral, although Benedictus
used pyroxylin) laminated between the glass sheets using pressure
and heat. The plastic layer will yield rather than rupture when subjected
to loads and stresses. This prevents the glass from shattering
into sharp pieces. Because of this property, laminated glass is also
known as “safety glass.”
Impact
Even after the protective value of laminated glass was known, the product was not widely used for some years. There were a number
of technical difficulties that had to be solved, such as the discoloring
of the plastic layer when it was exposed to sunlight; the relatively
high cost; and the cloudiness of the plastic layer, which
obscured vision—especially at night. Nevertheless, the expanding
automobile industry and the corresponding increase in the number
of accidents provided the impetus for improving the qualities and
manufacturing processes of laminated glass. In the early part of the
century, almost two-thirds of all injuries suffered in automobile accidents
involved broken glass.
Laminated glass is used in many applications in which safety is
important. It is typically used in all windows in cars, trucks, ships,
and aircraft. Thick sheets of bullet-resistant laminated glass are
used in banks, jewelry displays, and military installations. Thinner
sheets of laminated glass are used as security glass in museums, libraries,
and other areas where resistance to break-in attempts is
needed. Many buildings have large ceiling skylights that are made
of laminated glass; if the glass is damaged, it will not shatter, fall,
and hurt people below. Laminated glass is used in airports, hotels,
and apartments in noisy areas and in recording studios to reduce
the amount of noise that is transmitted. It is also used in safety goggles
and in viewing ports at industrial plants and test chambers.
Edouard Benedictus’s recollection of the bottle that fell but did not
shatter has thus helped make many situations in which glass is used
safer for everyone.

Iron lung




The invention: A mechanical respirator that saved the lives of victims
of poliomyelitis.
The people behind the invention:
Philip Drinker (1894-1972), an engineer who made many
contributions to medicine
Louis Shaw (1886-1940), a respiratory physiologist who
assisted Drinker
Charles F. McKhann III (1898-1988), a pediatrician and
founding member of the American Board of Pediatrics
A Terrifying Disease
Poliomyelitis (polio, or infantile paralysis) is an infectious viral
disease that damages the central nervous system, causing paralysis
in many cases. Its effect results from the destruction of neurons
(nerve cells) in the spinal cord. In many cases, the disease produces
crippled limbs and the wasting away of muscles. In others, polio results
in the fatal paralysis of the respiratory muscles. It is fortunate
that use of the Salk and Sabin vaccines beginning in the 1950’s has
virtually eradicated the disease.
In the 1920’s, poliomyelitis was a terrifying disease. Paralysis of
the respiratory muscles caused rapid death by suffocation, often
within only a few hours after the first signs of respiratory distress
had appeared. In 1929, Philip Drinker and Louis Shaw, both of Harvard
University, reported the development of a mechanical respirator
that would keep those afflicted with the disease alive for indefinite
periods of time. This device, soon nicknamed the “iron lung,”
helped thousands of people who suffered from respiratory paralysis
as a result of poliomyelitis or other diseases.
Development of the iron lung arose after Drinker, then an assistant
professor in Harvard’s Department of Industrial Hygiene, was
appointed to a Rockefeller Institute commission formed to improve
methods for resuscitating victims of electric shock. The best-known
use of the iron lung—treatment of poliomyelitis—was a result of
numerous epidemics of the disease that occurred from 1898 until the 1920’s, each leaving thousands of Americans paralyzed.
The concept of the iron lung reportedly arose from Drinker’s observation
of physiological experiments carried out by Shaw and
Drinker’s brother, Cecil. The experiments involved the placement
of a cat inside an airtight box—a body plethysmograph—with the
cat’s head protruding from an airtight collar. Shaw and Cecil Drinker
then measured the volume changes in the plethysmograph to identify
normal breathing patterns. Philip Drinker then placed cats paralyzed
by curare inside plethysmographs and showed that they
could be kept breathing artificially by use of air from a hypodermic
syringe connected to the device.
Next, they proceeded to build a human-sized, plethysmograph-like
machine, with a five-hundred-dollar grant from the New York
Consolidated Gas Company. This was done by a tinsmith and the
Harvard Medical School machine shop.
Breath for Paralyzed Lungs
The first machine was tested on Drinker and Shaw, and after several
modifications were made, a workable iron lung was made
available for clinical use. This machine consisted of a metal cylinder
large enough to hold a human being. One end of the cylinder, which
contained a rubber collar, slid out on casters along with a stretcher
on which the patient was placed. Once the patient was in position
and the collar was fitted around the patient’s neck, the stretcher was
pushed back into the cylinder and the iron lung was made airtight.
The iron lung then “breathed” for the patient by using an electric
blower to remove and replace air alternately inside the machine.
In the human chest, inhalation occurs when the diaphragm contracts
and powerful muscles (which are paralyzed in poliomyelitis
sufferers) expand the rib cage. This lowers the air pressure in the
lungs and allows inhalation to occur. In exhalation, the diaphragm
and chest muscles relax, and air is expelled as the chest cavity returns
to its normal size. In cases of respiratory paralysis treated with
an iron lung, the air coming into or leaving the iron lung alternately
compressed the patient’s chest, producing artificial exhalation, and
then allowed it to expand so that the chest could fill with air. In this
way, iron lungs “breathed” for the patients using them.
Careful examination of each patient was required to allow technicians
to adjust the rate of operation of the machine. A cooling system
and ports for drainage lines, intravenous lines, and the other
apparatus needed to maintain a wide variety of patients were included
in the machine.
The first person treated in an iron lung was an eight-year-old girl
afflicted with respiratory paralysis resulting from poliomyelitis. The
iron lung kept her alive for five days. Unfortunately, she died from
heart failure as a result of pneumonia. The next iron lung patient, a
Harvard University student, was confined to the machine for several
weeks and later recovered enough to resume a normal life.

The Internet




The invention: A worldwide network of interlocking computer
systems, developed out of a U.S. government project to improve
military preparedness.
The people behind the invention:
Paul Baran, a researcher for the RAND Corporation
Vinton G. Cerf (1943- ), an American computer scientist
regarded as the “father of the Internet”
Cold War Computer Systems
In 1957, the world was stunned by the launching of the satellite
Sputnik I by the Soviet Union. The international image of the United
States as the world’s technology superpower and its perceived edge
in the Cold War were instantly brought into question. As part of the
U.S. response, the Defense Department quickly created the Advanced
Research Projects Agency (ARPA) to conduct research into
“command, control, and communications” systems. Military planners
in the Pentagon ordered ARPA to develop a communications
network that would remain usable in the wake of a nuclear attack.
The solution, proposed by Paul Baran, a scientist at the RAND Corporation,
was the creation of a network of linked computers that
could route communications around damage to any part of the system.
Because the centralized control of data flow by major “hub”
computers would make such a system vulnerable, the system could
not have any central command, and all surviving points had to be
able to reestablish contact following an attack on any single point.
This redundancy of connectivity (later known as “packet switching”)
would not monopolize a single circuit for communications, as
telephones do, but would automatically break up computer messages
into smaller packets, each of which could reach a destination
by rerouting along different paths.
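The packet idea can be made concrete with a small sketch. The code below is only an illustration with invented names and an invented packet format, not the actual ARPAnet software: it breaks a message into addressed, numbered packets, lets them arrive in any order, and reassembles the original text from the sequence numbers.

    # Illustrative packet switching: a message is broken into small, addressed,
    # numbered packets that may travel (and arrive) in any order, then is
    # reassembled at the destination from the sequence numbers. The names and
    # packet format are invented; this is not the actual ARPAnet software.
    import random

    def split_into_packets(message, destination, size=8):
        """Break a message into addressed, numbered packets."""
        return [{"to": destination, "seq": i // size, "data": message[i:i + size]}
                for i in range(0, len(message), size)]

    def reassemble(packets):
        """Restore the original message regardless of arrival order."""
        return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    packets = split_into_packets("Message routed around damaged links.", "host-B")
    random.shuffle(packets)        # packets may take different paths to host-B
    print(reassemble(packets))     # the original message is recovered intact

Because each packet carries its own address and sequence number, damage to any single route affects only the packets traveling over it, and those can be re-sent along surviving paths.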

ARPA then began attempting to link university computers over
telephone lines. The historic connecting of four sites conducting
ARPA research was accomplished in 1969 at a computer laboratory
at the University of California at Los Angeles (UCLA), which was
connected to computers at the University of California at Santa
Barbara, the Stanford Research Institute, and the University of Utah.
UCLA graduate student Vinton Cerf played a major role in establishing
the connection, which was first known as “ARPAnet.” By
1971, more than twenty sites had been connected to the network, including
supercomputers at the Massachusetts Institute of Technology
and Harvard University; by 1981, there were more than two
hundred computers on the system.
The Development of the Internet
Because factors such as equipment failure, overtaxed telecommunications
lines, and power outages can quickly reduce or abort
(“crash”) computer network performance, the ARPAnet managers
and others quickly sought to build still larger “internetting” projects.
In the late 1980’s, the National Science Foundation built its
own network of five supercomputer centers to give academic researchers
access to high-power computers that had previously been
available only to military contractors. The “NSFnet” connected university
networks by linking them to the closest regional center; its
development put ARPAnet out of commission in 1990. The economic
savings that could be gained from the use of electronic mail
(“e-mail”), which reduced postage and telephone costs, were motivation
enough for many businesses and institutions to invest in
hardware and network connections.
The evolution of ARPAnet and NSFnet eventually led to the creation
of the “Internet,” an international web of interconnected government,
education, and business computer networks that has been
called “the largest machine ever constructed.” Using appropriate
software, a computer terminal or personal computer can send and
receive data via an “Internet Protocol” packet (an electronic envelope
with an address). Communications programs on the intervening
networks “read” the addresses on packets moving through the
Internet and forward the packets toward their destinations, as the
sketch following this paragraph illustrates. From
approximately one thousand networks in the mid-1980’s, the Internet
grew to an estimated thirty thousand connected networks by
1994, with the majority of Internet users living in the United States and
Europe, but the Internet has continued to expand internationally as
telecommunications lines are improved in other countries.
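A minimal sketch of that forwarding step, using invented host, network, and gateway names, might look like the following: each relay simply inspects the destination address written on a packet and hands the packet to the next hop listed for that network.

    # Illustrative address-based forwarding: each relay inspects the
    # destination written on a packet and passes it toward the next hop listed
    # for that network. Hosts, networks, and gateways are invented names.

    forwarding_table = {
        "net-A": "gateway-1",
        "net-B": "gateway-2",
        "net-C": "deliver locally",
    }

    def forward(packet):
        """Pick the next hop from the packet's destination network."""
        network = packet["to"].split(".")[0]          # "net-B.host-7" -> "net-B"
        return forwarding_table.get(network, "drop")  # unknown destinations drop

    packet = {"to": "net-B.host-7", "data": "hello"}
    print(forward(packet))    # gateway-2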



Impact
Most individual users access the Internet through modems attached
to their home personal computers by subscribing to local area
networks. These services make available information sources such as
on-line encyclopedias and magazines and include electronic discussion
groups and bulletin boards on nearly every specialized interest
area imaginable. Many universities converted large libraries to electronic
form for Internet distribution, with an ambitious example being
Cornell University’s conversion to electronic form of more than
100,000 books on the development of America’s infrastructure.
Numerous corporations and small businesses soon began to
market their products and services over the Internet. Problems soon
became apparent with the commercial use of the new medium,
however, as the protection of copyrighted material proved to be difficult;
data and other text available on the system can be “downloaded,”
or electronically copied. To protect their resources from
unauthorized use via the Internet, therefore, most companies set up
a “firewall” computer to screen incoming communications.
The economic policies of the Bill Clinton administration highlighted
the development of the “information superhighway” for
improving the delivery of social services and encouraging new
businesses; however, many governmental agencies and offices, including
the U.S. Senate and House of Representatives, have been
slow to install high-speed fiber-optic network links. Nevertheless,
the Internet soon came to contain numerous information sites to improve
public access to the institutions of government.




Vinton Cerf
Although Vinton Cerf is widely hailed as the “father of the
Internet,” he himself disavows that honor. He has repeatedly
emphasized that the Internet was built on the work of countless
others, and that he and his partner merely happened to make a
crucial contribution at a turning point in Internet development.
The path leading Cerf to the Internet began early. He was
born in New Haven, Connecticut, in 1943. He read widely, devouring
L. Frank Baum’s Oz books and science fiction novels—
especially those dealing with real-science themes. When he was
ten, a book called The Boy Scientist fired his interest in science.
After starting high school in Los Angeles in 1958, he got his first
glimpse of computers, which were very different devices in
those days. During a visit to a Santa Monica lab, he inspected a
computer filling three rooms with wires and vacuum tubes that
analyzed data from a Canadian radar system built to detect
sneak missile attacks from the Soviet Union. Two years later he
and a friend began programming a paper-tape computer at
UCLA while they were still in high school.
After graduating from Stanford University in 1965 with a
degree in computer science, Cerf worked for IBM for two years,
then entered graduate school at UCLA. His work on multiprocessing
computer systems got sidetracked when a Defense
Department request came in asking for help on a packet-switching
project. This new project drew him into the brand-new field
of computer networking on a system that became known as the
ARPAnet. In 1972 Cerf returned to Stanford as an assistant professor.
There he and a colleague, Robert Kahn, developed the
concepts and protocols that became the basis of the modern Internet—
a term they coined in a paper they delivered in 1974.
Afterward Cerf made development of the Internet the focus
of his distinguished career, and he later moved back into the
business world. In 1994 he returned to MCI as senior vice president
of Internet architecture. Meanwhile, he founded the Internet
Society in 1992 and the Internet Societal Task Force in 1999.







See also: Cell phone; Communications satellite; Fax machine;
Personal computer.