Wednesday, March 31, 2010

Richter scale







The invention:



A scale for measuring the strength of earthquakes

based on their seismograph recordings.



The people behind the invention:



Charles F. Richter (1900-1985), an American seismologist

Beno Gutenberg (1889-1960), a German American seismologist

Kiyoo Wadati (1902- ), a pioneering Japanese seismologist

Giuseppe Mercalli (1850-1914), an Italian physicist, volcanologist, and meteorologist







Earthquake Study by Eyewitness Report



Earthquakes range in strength from barely detectable tremors to

catastrophes that devastate large regions and take hundreds of thousands

of lives. Yet the human impact of earthquakes is not an accurate

measure of their power; minor earthquakes in heavily populated regions

may cause great destruction, whereas powerful earthquakes in

remote areas may go unnoticed. To study earthquakes, it is essential

to have an accurate means of measuring their power.

The first attempt to measure the power of earthquakes was the

development of intensity scales, which relied on damage effects

and reports by witnesses to measure the force of vibration. The

first such scale was devised by geologists Michele Stefano de Rossi

and François-Alphonse Forel in 1883. It ranked earthquakes on a

scale of 1 to 10. The de Rossi-Forel scale proved to have two serious

limitations: Its level 10 encompassed a great range of effects, and its

description of effects on human-made and natural objects was so specifically

European that it was difficult to apply the scale elsewhere.

To remedy these problems, Giuseppe Mercalli published a revised

intensity scale in 1902. The Mercalli scale, as it came to be

called, added two levels to the high end of the de Rossi-Forel scale,

making its highest level 12. It also was rewritten to make it more

globally applicable. With later modifications by Charles F. Richter,

the Mercalli scale is still in use.

Intensity measurements, even though they are somewhat subjective, are very useful in mapping the extent of earthquake effects.

Nevertheless, intensity measurements are still not ideal measuring

techniques. Intensity varies from place to place and is strongly influenced

by geologic features, and different observers frequently report

different intensities. There is a need for an objective method of

describing the strength of earthquakes with a single measurement.





Measuring Earthquakes One Hundred Kilometers Away



An objective technique for determining the power of earthquakes

was devised in the early 1930’s by Richter at the California Institute

of Technology in Pasadena, California. The eventual usefulness of

the scale that came to be called the “Richter scale” was completely

unforeseen at first.

In 1931, the California Institute of Technology was preparing to

issue a catalog of all earthquakes detected by its seismographs in the

preceding three years. Several hundred earthquakes were listed,

most of which had not been felt by humans, but detected only by instruments.

Richter was concerned about the possible misinterpretations

of the listing. With no indication of the strength of the earthquakes,

the public might overestimate the risk of earthquakes in

areas where seismographs were numerous and underestimate the

risk in areas where seismographs were few.

To remedy the lack of a measuring method, Richter devised the

scale that now bears his name. On this scale, earthquake force is expressed

in magnitudes, which in turn are expressed in whole numbers

and decimals. Each increase of one magnitude indicates a tenfold jump

in the amplitude of ground motion recorded on the seismogram. These measurements were defined for a

standard seismograph located one hundred kilometers from the earthquake.

By comparing records of the same earthquakes made on different instruments at different distances,

Richter was able to create conversion tables

for measuring magnitudes with any instrument at any distance.
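
For reference, the quantity Richter defined is conventionally written (in standard textbook notation, not quoted from this text) as

    M_L = \log_{10} A - \log_{10} A_0(\delta),

where A is the largest wave amplitude traced by the standard instrument and A_0(\delta) is the amplitude a reference (magnitude 0) earthquake would produce at the same distance \delta. The conversion tables mentioned above amount to supplying \log_{10} A_0(\delta) for other instruments and distances.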





Impact



Richter had hoped to create a rough means of separating small,

medium, and large earthquakes, but he found that the scale was capable

of making much finer distinctions. Most magnitude estimates

made with a variety of instruments at various distances from earthquakes

agreed to within a few tenths of a magnitude. Richter formally

published a description of his scale in January, 1935, in the

Bulletin of the Seismological Society of America. Other systems of estimating

magnitude had been attempted, notably that of Kiyoo Wadati,

published in 1931, but Richter’s system proved to be the most workable

scale yet devised and rapidly became the standard.

Over the next few years, the scale was refined. One critical refinement

was in the way seismic recordings were converted into magnitude.

Earthquakes produce many types of waves, but it was not

known which type should be the standard for magnitude. So-called

surface waves travel along the surface of the earth. It is these waves

that produce most of the damage in large earthquakes; therefore, it

seemed logical to let these waves be the standard. Earthquakes deep

within the earth, however, produce few surface waves. Magnitudes

based on surface waves would therefore be too small for these earthquakes.

Deep earthquakes produce mostly waves that travel through

the solid body of the earth; these are the so-called body waves.

It became apparent that two scales were needed: one based on

surface waves and one on body waves. Richter and his colleague

Beno Gutenberg developed scales for the two different types of

waves, which are still in use. Magnitudes estimated from surface

waves are symbolized by a capital M, and those based on body

waves are denoted by a lowercase m.

From a knowledge of Earth movements associated with seismic

waves, Richter and Gutenberg succeeded in defining the energy

output of an earthquake in measurements of magnitude. A magnitude

6 earthquake releases about as much energy as a one-megaton

nuclear explosion; a magnitude 0 earthquake releases about as

much energy as a small car dropped off a two-story building.
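
Because magnitude is a logarithmic measure of amplitude, the released energy climbs even faster. A standard seismological rule of thumb (not stated explicitly above) is

    E \propto 10^{1.5 M},  so  E(M+1) / E(M) = 10^{1.5} \approx 32,

meaning each additional unit of magnitude corresponds to roughly thirty times more released energy, even though the recorded amplitude grows only tenfold.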











Charles F. Richter

























Charles Francis Richter was born in Ohio in 1900. After his

mother divorced his father, she moved the family to Los Angeles

in 1909. A precocious student, Richter entered the University of

Southern California at sixteen and transferred to Stanford University

a year later, majoring in physics. He graduated in 1920

and finished a doctorate in theoretical physics at the California

Institute of Technology in 1928.

While Richter was a graduate student at Caltech, Nobel laureate

Robert A. Millikan lured him away from his original interest,

astronomy, to become an assistant at the seismology laboratory.

Richter realized that seismology was then a relatively new

discipline and that he could help it mature. He stayed with it—

and Caltech—for the rest of his university career, retiring as

professor emeritus in 1970. In 1971 he opened a consulting

firm—Lindvall, Richter and Associates—to assess the earthquake

readiness of structures.

Richter published more than two hundred articles about

earthquakes and earthquake engineering and two influential

books, Elementary Seismology and Seismicity of the Earth (with

Beno Gutenberg). These works, together with his teaching,

trained a generation of earthquake researchers and gave them a

basic tool, the Richter scale, to work with. He died in California

in 1985.

Tuesday, March 23, 2010

Rice and wheat strains





The invention:



Artificially created high-yielding wheat and rice

varieties that are helping food producers in developing countries

keep pace with population growth

The people behind the invention:



Orville A. Vogel (1907-1991), an agronomist who developed

high-yielding semidwarf winter wheats and equipment for

wheat research

Norman E. Borlaug (1914- ), a distinguished agricultural

scientist

Robert F. Chandler, Jr. (1907-1999), an international agricultural

consultant and director of the International Rice Research

Institute, 1959-1972

William S. Gaud (1907-1977), a lawyer and the administrator of

the U.S. Agency for International Development, 1966-1969



The Problem of Hunger



In the 1960’s, agricultural scientists created new, high-yielding

strains of rice and wheat designed to fight hunger in developing

countries. Although the introduction of these new grains raised levels

of food production in poor countries, population growth and

other factors limited the success of the so-called “Green Revolution.”

Before World War II, many countries of Asia, Africa, and Latin

America exported grain to Western Europe. After the war, however,

these countries began importing food, especially from the United

States. By 1960, they were importing about nineteen million tons of

grain a year; that level nearly doubled to thirty-six million tons in

1966. Rapidly growing populations forced the largest developing

countries—China, India, and Brazil in particular—to import huge

amounts of grain. Famine was averted on the Indian subcontinent

in 1966 and 1967 only by the United States shipping wheat to the region.

The United States then changed its food policy. Instead of contributing

food aid directly to hungry countries, the U.S. began working to help such countries feed themselves.

The new rice and wheat strains were introduced just as countries

in Africa and Asia were gaining their independence from the European

nations that had colonized them. The Cold War was still going

strong, and Washington and other Western capitals feared that the

Soviet Union was gaining influence in the emerging countries. To

help counter this threat, the U.S. Agency for International Development

(USAID) was active in the Third World in the 1960’s, directing

or contributing to dozens of agricultural projects, including building

rural infrastructure (farm-to-market roads, irrigation projects,

and rural electric systems), introducing modern agricultural techniques,

and importing fertilizer or constructing fertilizer factories in

other countries. By raising the standard of living of impoverished

people in developing countries through applying technology to agriculture,

policymakers hoped to eliminate the socioeconomic conditions

that would support communism.





The Green Revolution



It was against this background that William S. Gaud, administrator

of USAID from 1966 to 1969, first talked about a “green revolution”

in a 1968 speech before the Society for International Development

in Washington, D.C. The term “green revolution” has

been used to refer to both the scientific development of high-yielding

food crops and the broader socioeconomic changes in a

country’s agricultural sector stemming from farmers’ adoption of

these crops.

In 1947, S. C. Salmon, a United States Department of Agriculture

(USDA) scientist, brought a wheat-dwarfing gene to the United

States. Developed in Japan, the gene produced wheat on a short

stalk that was strong enough to bear a heavy head of grain. Orville

Vogel, another USDA scientist, then introduced the gene into local

wheat strains, creating a successful dwarf variety known as Gaines

wheat. Under irrigation, Gaines wheat produced record yields. After

hearing about Vogel’s work, Norman Borlaug, who headed

the Rockefeller Foundation’s wheat-breeding program in Mexico,

adapted Gaines wheat, later called “miracle wheat,” to a variety of

growing conditions in Mexico.

Success with the development of high-yielding wheat varieties

persuaded the Rockefeller and Ford foundations to pursue similar

ends in rice culture. The foundations funded the International Rice

Research Institute (IRRI) in Los Banos, Philippines, appointing as director

Robert F. Chandler, Jr., an international agricultural consultant.

Under his leadership, IRRI researchers cross-bred Peta, a tall variety

of rice from Indonesia, with Deo-geo-woo-gen, a dwarf rice from Taiwan,

to produce a new strain, IR-8. Released in 1966 and dubbed

“miracle rice,” IR-8 produced yields double those of other Asian rice

varieties and in a shorter time, 120 days in contrast to 150 to 180 days.

Statistics from India illustrate the expansion of the new grain varieties.

During the 1966-1967 growing season, Indian farmers planted

improved rice strains on 900,000 hectares, or 2.5 percent of the total

area planted in rice. By 1984-1985, the surface area planted in improved

rice varieties stood at 23.4 million hectares, or 56.9 percent of

the total. The rate of adoption was even faster for wheat. In 1966-

1967, improved varieties covered 500,000 hectares, comprising 4.2

percent of the total wheat crop. By the 1984-1985 growing season,

the surface area had expanded to 19.6 million hectares, or 82.9 percent

of the total wheat crop.

To produce such high yields, IR-8 and the other newly bred high-yielding

varieties of rice and wheat required the use of irrigation, fertilizers,

and pesticides. Irrigation further increased food production

by allowing year-round farming and the planting of multiple crops

on the same plot of land, either two crops of high-yielding grain varieties

or one grain crop and another food crop.

Expectations

The rationale behind the introduction of high-yielding grains in

developing countries was that it would start a cycle of improvement

in the lives of the rural poor. High-yielding grains would lead to

bigger harvests and better-nourished and healthier families. If better

nutrition enabled more children to survive, the need to have large

families to ensure care for elderly parents would ease. A higher survival

rate of children would lead couples to use family planning,

slowing overall population growth and allowing per capita food intake

to rise.

The greatest impact of the Green Revolution has been seen in

Asia, which experienced dramatic increases in rice production, and

on the Indian subcontinent, with increases in rice and wheat yields.

Latin America, especially Mexico, enjoyed increases in wheat harvests.

Sub-Saharan Africa initially was left out of the revolution, as

scientists paid scant attention to increasing the yields of such staple

food crops as yams, cassava, millet, and sorghum. By the 1980’s,

however, this situation was being remedied with new research directed

toward millet and sorghum.

Research is conducted by a network of international agricultural

research centers. Backed by both public and private funds, these

centers cooperate with international assistance agencies, private

foundations, universities, multinational corporations, and government

agencies to pursue and disseminate research into improved

crop varieties to farmers in the Third World. IRRI and the International

Maize and Wheat Improvement Center (CIMMYT) in Mexico

City are two of these agencies.



Impact



Expectations went unrealized in the first few decades following

the Green Revolution. Despite the higher yields from millions of

tons of improved grain seeds imported into the developing world,

lower-yielding grains still accounted for much of the surface area

planted in grain. The reasons for this explain the limits and impact

of the Green Revolution.

The subsistence mentality dies hard. The main targets of Green

Revolution programs were small farmers, people whose crops provide

barely enough to feed their families and provide seed for the

next crop. If an experimental grain failed, they faced starvation.

Such farmers hedged their bets when faced with a new proposition,

for example, by intercropping, alternating rows of different grains

in the same field. In this way, even if one crop failed, another might

feed the family.

Poor farmers in developing countries also were likely to be illiterate

and not eager to try something they did not fully understand.

Also, by definition, poor farmers often did not have the means to

purchase the inputs—irrigation, fertilizer, and pesticides—required

to grow the improved varieties.

In many developing countries, therefore, rich farmers tended to be

the innovators. More likely than poor farmers to be literate, they also

had the money to exploit fully the improved grain varieties. They

also were more likely than subsistence-level farmers to be in touch

with the monetary economy, making purchases from the agricultural

supply industry and arranging sales through established marketing

channels, rather than producing primarily for personal or family use.

Once wealthy farmers adopted the new grains, it often became

more difficult for poor farmers to do so. Increased demand for limited

supplies, such as pesticides and fertilizers, raised costs, while

bigger-than-usual harvests depressed market prices. With high sales

volumes, owners of large farms could withstand the higher costs and

lower-per-unit profits, but smaller farmers often could not.

Often, the result of adopting improved grains was that small

farmers could no longer make ends meet solely by farming. Instead,

they were forced to hire themselves out as laborers on large farms.

Surges of laborers into a limited market depressed rural wages,

making it even more difficult for small farmers to eke out a living.

The result was that rich farmers got richer and poor farmers got

poorer. Often, small farmers who could no longer support their

families would leave rural areas and migrate to the cities, seeking

work and swelling the ranks of the urban poor.



Mixed Results



The effects of the Green Revolution were thus mixed. The dissemination

of improved grain varieties unquestionably increased

grain harvests in some of the poorest countries of the world. Seed

companies developed, produced, and sold commercial quantities of

improved grains, and fertilizer and pesticide manufacturers logged

sales to developing countries thanks to USAID-sponsored projects.

Along with disrupting the rural social structure and encouraging

rural flight to the cities, the Green Revolution has had other negative

effects. For example, the millions of tube wells sunk in India to

irrigate crops reduced groundwater levels in some regions faster

than they could be recharged. In other areas, excessive use of pesticides

created health hazards, and fertilizer use led to streams and

ponds being clogged by weeds. The scientific community became

concerned that the use of improved varieties of grain, many of

which were developed from the same mother variety, reduced the

genetic diversity of the world’s food crops, making them especially

vulnerable to attack by disease or pests.

Perhaps the most significant impact of the Green Revolution is

the change it wrought in the income and class structure of rural areas;

often, malnutrition was not eliminated in either the countryside

or the cities. Almost without exception, the relative position of peasants

deteriorated. Many analysts admit that the Green Revolution

did not end world hunger, but they argue that it did buy time. The

poorest of the poor would be even worse off without it.

Reserpine





The invention: A drug with unique hypertension-decreasing effects

that provides clinical medicine with a versatile and effective

tool.

The people behind the invention:

Robert Wallace Wilkins (1906- ), an American physician and

clinical researcher

Walter E. Judson (1916- ), an American clinical researcher

Treating Hypertension

Excessively elevated blood pressure, clinically known as “hypertension,”

has long been recognized as a pervasive and serious human

malady. In a few cases, hypertension is recognized as an effect

brought about by particular pathologies (diseases or disorders). Often,

however, hypertension occurs as the result of unknown causes.

Despite the uncertainty about its origins, unattended hypertension

leads to potentially dramatic health problems, including increased

risk of kidney disease, heart disease, and stroke.

Recognizing the need to treat hypertension in a relatively straightforward

and effective way, Robert Wallace Wilkins, a clinical researcher

at Boston University’s School of Medicine and the head of

Massachusetts Memorial Hospital’s Hypertension Clinic, began to

experiment with reserpine in the early 1950’s. Initially, the samples

that were made available to Wilkins were crude and unpurified.

Eventually, however, a purified version was used.

Reserpine has a long and fascinating history of use—both clinically

and in folk medicine—in India. The source of reserpine is the

root of the shrub Rauwolfia serpentina, first mentioned in Western

medical literature in the 1500’s but virtually unknown, or at least

unaccepted, outside India until the mid-twentieth century. Crude

preparations of the shrub had been used for a variety of ailments in

India for centuries prior to its use in the West.

Wilkins’s work with the drug did not begin on an encouraging

note, because reserpine does not act rapidly—a fact that had been

noted in Indian medical literature. The standard observation in

Western pharmacotherapy, however, was that most drugs work

rapidly; if a week has elapsed without positive effects being shown

by a drug, the conventional Western wisdom is that it is unlikely

to work at all. Additionally, physicians and patients alike tend to

look for rapid improvement or at least positive indications. Reserpine

is deceptive in this temporal context, and Wilkins and his

coworkers were nearly deceived. In working with crude preparations

of Rauwolfia serpentina, they were becoming very pessimistic,

when a patient who had been treated for many consecutive

days began to show symptomatic relief. Nevertheless, only after

months of treatment did Wilkins become a believer in the drug’s

beneficial effects.





The Action of Reserpine



When preparations of pure reserpine became available in 1952,

the drug did not at first appear to be the active ingredient in the

crude preparations. When patients’ heart rate and blood pressure

began to drop after weeks of treatment, however, the investigators

saw that reserpine was indeed responsible for the improvements.

Once reserpine’s activity began, Wilkins observed a number of

important and unique consequences. Both the crude preparations

and pure reserpine significantly reduced the two most meaningful

measures of blood pressure. These two measures are systolic blood

pressure and diastolic blood pressure. Systolic pressure represents

the peak of pressure produced in the arteries following a contraction

of the heart. Diastolic pressure is the low point that occurs

when the heart is resting. To lower the mean blood pressure in the

system significantly, both of these pressures must be reduced. The

administration of low doses of reserpine produced an average drop

in pressure of about 15 percent, a figure that was considered less

than dramatic but still highly significant. The complex phenomenon

of blood pressure is determined by a multitude of factors, including

the resistance of the arteries, the force of contraction of the

heart, and the heartbeat rate. In addition to lowering the blood pressure,

reserpine reduced the heartbeat rate by about 15 percent, providing

an important auxiliary action.
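
One common way to see how the two pressures combine is the clinical rule of thumb for mean arterial pressure (an approximation supplied here for illustration, not part of Wilkins's report):

    MAP \approx P_{diastolic} + (P_{systolic} - P_{diastolic}) / 3.

For a hypothetical hypertensive reading of 160/100 millimeters of mercury this gives about 120; lowering both pressures by 15 percent, to roughly 136/85, brings the estimate down to about 102.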

In the early 1950’s, various therapeutic drugs were used to treat

hypertension. Wilkins recognized that reserpine’s major contribution

would be as a drug that could be used in combination with

drugs that were already in use. His studies established that reserpine,

combined with at least one of the drugs already in use, produced

an additive effect in lowering blood pressure. Indeed, at

times, the drug combinations produced a “synergistic effect,” which

means that the combination of drugs created an effect that was more

effective than the sum of the effects of the drugs when they were administered

alone. Wilkins also discovered that reserpine was most

effective when administered in low dosages. Increasing the dosage

did not increase the drug’s effect significantly, but it did increase the

likelihood of unwanted side effects. This fact meant that reserpine

was indeed most effective when administered in low dosages along

with other drugs.

Wilkins believed that reserpine’s most distinctive effects were not

those found directly in the cardiovascular system but those produced

indirectly by the brain. Hypertension is often accompanied

by neurotic anxiety, which is both a consequence of the justifiable

fears of future negative health changes brought on by

prolonged hypertension and contributory to the hypertension itself.

Wilkins’s patients invariably felt better mentally, were less

anxious, and were sedated, but in an unusual way. Reserpine

made patients drowsy but did not generally cause sleep, and if

sleep did occur, patients could be awakened easily. Such effects

are now recognized as characteristic of tranquilizing drugs, or

antipsychotics. In effect, Wilkins had discovered a new and important

category of drugs: tranquilizers.



Impact



Reserpine holds a vital position in the historical development of

antihypertensive drugs for two reasons. First, it was the first drug

that was discovered to block activity in areas of the nervous system

that use norepinephrine or its close relative dopamine as transmitter

substances. Second, it was the first hypertension drug to be

widely accepted and used. Its unusual combination of characteristics

made it effective in most patients.

Since the 1950’s, medical science has rigorously examined cardiovascular

functioning and diseases such as hypertension. Many

new influences, such as diet and stress, have been recognized as factors

in hypertension. Controlling diet and life-style helps tremendously

in treating hypertension, but if the nervous system could not be partially

controlled, many cases of hypertension would continue to be

problematic. Reserpine has made that control possible.

Thursday, March 11, 2010

Refrigerant gas





The invention: A safe refrigerant gas for domestic refrigerators,

dichlorodifluoromethane helped promote a rapid growth in the

acceptance of electrical refrigerators in homes.

The people behind the invention:

Thomas Midgley, Jr. (1889-1944), an American engineer and

chemist

Charles F. Kettering (1876-1958), an American engineer and

inventor who was the head of research for General Motors

Albert Henne (1901-1967), an American chemist who was

Midgley’s chief assistant

Frédéric Swarts (1866-1940), a Belgian chemist

Toxic Gases

Refrigerators, freezers, and air conditioners have had a major impact

on the way people live and work in the twentieth century. With

them, people can live more comfortably in hot and humid areas,

and a great variety of perishable foods can be transported and

stored for extended periods. As recently as the early nineteenth century,

the foods most regularly available to Americans were bread

and salted meats. Items now considered essential to a balanced diet,

such as vegetables, fruits, and dairy products, were produced and

consumed only in small amounts.





Through the early part of the twentieth century, the pattern of

food storage and distribution evolved to make perishable foods

more available. Farmers shipped dairy products and frozen meats

to mechanically refrigerated warehouses. Smaller stores and most

American households used iceboxes to keep perishable foods fresh.

The iceman was a familiar figure on the streets of American towns,

delivering large blocks of ice regularly.

In 1930, domestic mechanical refrigerators were being produced

in increasing numbers. Most of them were vapor compression machines,

in which a gas was compressed in a closed system of pipes

outside the refrigerator by a mechanical pump and condensed to a liquid. The liquid was pumped into a sealed chamber in the refrigerator

and allowed to evaporate to a gas. The process of evaporation

removes heat from the environment, thus cooling the interior of the

refrigerator.
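
The effectiveness of such a vapor-compression cycle is usually described by its coefficient of performance, the heat removed per unit of work driving the pump; the ideal (Carnot) limit depends only on the two temperatures involved. As an illustrative calculation (the temperatures are assumed, not taken from the text),

    COP_{ideal} = T_{cold} / (T_{hot} - T_{cold}),

so a cabinet held near -5 degrees Celsius (268 kelvins) in a 25-degree (298-kelvin) kitchen has an ideal limit of 268 / 30, or about 9; real machines achieve considerably less.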

The major drawback of early home refrigerators involved the

types of gases used. In 1930, these included ammonia, sulfur dioxide,

and methyl chloride. These gases were acceptable if the refrigerator’s

gas pipes never sprang a leak. Unfortunately, leaks sometimes

occurred, and all these gases are toxic. Ammonia and sulfur

dioxide both have unpleasant odors; if they leaked, at least they

would be detected rapidly. Methyl chloride, however, can form a

dangerously explosive mixture with air, and it has only a very faint,

and not unpleasant, odor. In a hospital in Cleveland during the

1920’s, a refrigerator with methyl chloride leaked, and there was a

disastrous explosion of the methyl chloride-air mixture. After that,

methyl chloride for use in refrigerators was mixed with a small

amount of a very bad-smelling compound to make leaks detectable.

(The same tactic is used with natural gas.)

Three-Day Success

General Motors, through its Frigidaire division, had a substantial

interest in the domestic refrigerator market. Frigidaire refrigerators

used sulfur dioxide as the refrigerant gas. Charles F. Kettering,

director of research for General Motors, decided that Frigidaire

needed a new refrigerant gas that would have good thermal properties

but would be nontoxic and nonexplosive. In early 1930, he sent

Lester S. Keilholtz, chief engineer of General Motors’ Frigidaire division,

to Thomas Midgley, Jr., a mechanical engineer and self-taught

chemist. Kettering challenged them to develop such a new gas.

Midgley’s associates, Albert Henne and Robert McNary, researched

what types of compounds might already fit Kettering’s specifications.

Working with research that had been done by the Belgian

chemist Frédéric Swarts in the late nineteenth and early twentieth

centuries, Midgley, Henne, and McNary realized that dichlorodifluoromethane

would have ideal thermal properties and the right

boiling point for a refrigerant gas. The only question left to be answered

was whether the compound was toxic.

The chemists prepared a few grams of dichlorodifluoromethane

and put it, along with a guinea pig, into a closed chamber. They

were delighted to see that the animal seemed to suffer no ill effects

at all and was able to breathe and move normally. They were briefly

puzzled when a second batch of the compound killed a guinea pig

almost instantly. Soon, they discovered that an impurity in one of

the ingredients had produced a potent poison in their refrigerant

gas. A simple washing procedure completely removed the poisonous

contaminant.

This astonishingly successful research project was completed in

three days. The boiling point of dichlorodifluoromethane is about -30 degrees

Celsius. It is nontoxic and nonflammable and possesses excellent

thermal properties. When Midgley was awarded the Perkin

Medal for industrial chemistry in 1937, he gave the audience a

graphic demonstration of the properties of dichlorodifluoromethane:

He inhaled deeply of its vapors and exhaled gently into a jar

containing a burning candle. The candle flame promptly went out.

This visual evidence proved that dichlorodifluoromethane was not

poisonous and would not burn.

Impact

The availability of this safe refrigerant gas, which was renamed

Freon, led to drastic changes in the United States. The current patterns

of food production, distribution, and consumption are a direct

result, as is air conditioning. Air conditioning was developed early

in the twentieth century; by the late 1970’s, most American cars and

residences were equipped with air conditioning, and other countries

with hot climates followed suit. Consequently, major relocations

of populations and businesses have become possible. Since

World War II, there have been steady migrations to the “Sun Belt,”

the states spanning the United States from southeast to southwest,

because air conditioners have made these areas much more livable.

Freon is a member of a family of chemicals called “chlorofluorocarbons.”

In addition to refrigeration, it is also used as a propellant

in aerosols and in the production of polystyrene plastics. In 1974,

scientists began to suspect that chlorofluorocarbons, when released

into the air, might have a serious effect on the environment. They

speculated that the compounds might migrate into the stratosphere,

where they could be decomposed by the intense ultraviolet light

that the thin but vital ozone layer normally prevents from reaching

the earth’s surface. In the process,

large amounts of the ozone layer might also be destroyed—

letting in the dangerous ultraviolet light. In addition to possible climatic

effects, the resulting increase in ultraviolet light reaching the

earth’s surface would raise the incidence of skin cancers. As a result,

chemical manufacturers are trying to develop alternative refrigerant

gases that will not harm the ozone layer.

Radio interferometer



The invention: An astronomical instrument that combines multiple

radio telescopes into a single system that makes possible the

exploration of distant space.

The people behind the invention:

Sir Martin Ryle (1918-1984), an English astronomer

Karl Jansky (1905-1950), an American radio engineer

Hendrik Christoffel van de Hulst (1918- ), a Dutch radio

astronomer

Harold Irving Ewen (1922- ), an American astrophysicist

Edward Mills Purcell (1912-1997), an American physicist

Seeing with Radio

Since the early 1600’s, astronomers have relied on optical telescopes

for viewing stellar objects. Optical telescopes detect the

visible light from stars, galaxies, quasars, and other astronomical

objects. Throughout the late twentieth century, astronomers developed

more powerful optical telescopes for peering deeper into the

cosmos and viewing objects located hundreds of millions of light-years

away from the earth.





In 1933, Karl Jansky, an American radio engineer with Bell Telephone

Laboratories, constructed a radio antenna receiver for locating

sources of telephone interference. Jansky discovered a daily radio

burst that he was able to trace to the center of the Milky Way

galaxy. In 1935, Grote Reber, another American radio engineer, followed

up Jansky’s work with the construction of the first dish-shaped

“radio” telescope. Reber used his 9-meter-diameter radio

telescope to repeat Jansky’s experiments and to locate other radio

sources in space. He was able to map precisely the locations of various

radio sources in space, some of which later were identified as

galaxies and quasars.

Following World War II (that is, after 1945), radio astronomy

blossomed with the help of surplus radar equipment. Radio astronomy

tries to locate objects in space by picking up the radio waves that they emit. In 1944, the Dutch astronomer Hendrik Christoffel

van de Hulst had proposed that hydrogen atoms emit radio waves

with a 21-centimeter wavelength. Because hydrogen is the most

abundant element in the universe, van de Hulst’s discovery had explained

the nature of extraterrestrial radio waves. His theory later

was confirmed by the American radio astronomers Harold Irving

Ewen and Edward Mills Purcell of Harvard University.

By coupling the newly invented computer technology with radio

telescopes, astronomers were able to generate a radio image of a star

almost identical to the star’s optical image. A major advantage of radio

telescopes over optical telescopes is the ability of radio telescopes

to detect extraterrestrial radio emissions day or night, as well as their

ability to bypass the cosmic dust that dims or blocks visible light.

More with Less

After 1945, major research groups were formed in England, Australia,

and The Netherlands. Sir Martin Ryle was head of the Mullard

Radio Astronomy Observatory of the Cavendish Laboratory,

University of Cambridge. He had worked with radar for the Telecommunications

Research Establishment during World War II.

The radio telescopes developed by Ryle and other astronomers

operate on the same basic principle as satellite television receivers.

A constant stream of radio waves strikes the parabolic-shaped reflector

dish, which aims all the radio waves at a focusing point

above the dish. The focusing point directs the concentrated radio

beam to the center of the dish, where it is sent to a radio receiver,

then an amplifier, and finally to a chart recorder or computer.

With large-diameter radio telescopes, astronomers can locate

stars and galaxies that cannot be seen with optical telescopes. This

ability to pick out finer detail, and hence more distant objects, is called “resolution.” Like

optical telescopes, large-diameter radio telescopes have better resolution

than smaller ones. Very large radio telescopes were constructed

in the late 1950’s and early 1960’s (Jodrell Bank, England;

Green Bank, West Virginia; Arecibo, Puerto Rico). Instead of just

building larger radio telescopes to achieve greater resolution, however,

Ryle developed a method called “interferometry.” In Ryle’s

method, a computer is used to combine the incoming radio waves of two or more movable radio telescopes pointed at the same stellar

object.

Suppose that one had a 30-meter-diameter radio telescope. Its radio

wave-collecting area would be limited to its diameter. If a second

identical 30-meter-diameter radio telescope was linked with

the first, then one would have an interferometer. The two radio telescopes

would point exactly at the same stellar object, and the radio

emissions from this object captured by the two telescopes would be

combined by computer to produce a higher-resolution image. If the

two radio telescopes were located 1.6 kilometers apart, then their

combined resolution would be equivalent to that of a single radio

telescope dish 1.6 kilometers in diameter.
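
To make the gain concrete, here is a back-of-the-envelope sketch using the standard diffraction-limit estimate (angular resolution in radians is roughly wavelength divided by aperture). The formula and the choice of the 21-centimeter hydrogen wavelength are supplied for illustration; only the 30-meter and 1.6-kilometer figures come from the example above.

    import math

    # Back-of-the-envelope diffraction limit: resolution (radians) ~ wavelength / aperture.
    # The 21-centimeter hydrogen line is used as a representative observing wavelength.
    WAVELENGTH_M = 0.21

    def resolution_arcsec(aperture_m):
        """Approximate resolution, in arcseconds, for a dish or baseline of the given size."""
        return math.degrees(WAVELENGTH_M / aperture_m) * 3600.0

    print("single 30-meter dish:  ", round(resolution_arcsec(30.0)), "arcseconds")
    print("1.6-kilometer baseline:", round(resolution_arcsec(1600.0)), "arcseconds")

The two-dish pair resolves detail roughly fifty times finer than either dish alone, which is the sense in which it behaves like a single 1.6-kilometer dish (for resolution, though not for collecting area).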

Ryle constructed the first true radio telescope interferometer at

the Mullard Radio Astronomy Observatory in 1955. He used combinations

of radio telescopes to produce interferometers containing

about twelve radio receivers. Ryle’s interferometer greatly improved

radio telescope resolution for detecting stellar radio sources, mapping

the locations of stars and galaxies, assisting in the discovery of “quasars” (quasi-stellar radio sources), measuring the earth’s motion

around the Sun, and measuring the motion of the solar system

through space.

Consequences

Following Ryle’s discovery, interferometers were constructed at

radio astronomy observatories throughout the world. The United

States established the National Radio Astronomy Observatory (NRAO)

in rural Green Bank, West Virginia. The NRAO is operated by nine

eastern universities and is funded by the National Science Foundation.

At Green Bank, a three-telescope interferometer was constructed,

with each radio telescope having a 26-meter-diameter

dish. During the late 1970’s, the NRAO constructed the largest radio

interferometer in the world, the Very Large Array (VLA). The VLA,

located approximately 80 kilometers west of Socorro, New Mexico,

consists of twenty-seven 25-meter-diameter radio telescopes linked

by a supercomputer. The VLA has a resolution equivalent to that of

a single radio telescope 32 kilometers in diameter.

Even larger radio telescope interferometers can be created with

a technique known as “very long baseline interferometry” (VLBI).

VLBI has been used to construct a radio telescope having an effective

diameter of several thousand kilometers. Such an arrangement

involves the precise synchronization of radio telescopes located

in several different parts of the world. Supernova 1987A in

the Large Magellanic Cloud was studied using a VLBI arrangement

between observatories located in Australia, South America,

and South Africa.

Launching radio telescopes into orbit and linking them with

ground-based radio telescopes could produce a radio telescope

whose effective diameter would be larger than that of the earth.

Such instruments will enable astronomers to map the distribution

of galaxies, quasars, and other cosmic objects, to understand the

origin and evolution of the universe, and possibly to detect meaningful

radio signals from extraterrestrial civilizations.

Radio crystal sets











The invention: The first primitive radio receivers, crystal sets led

to the development of the modern radio.

The people behind the invention:

H. H. Dunwoody (1842-1933), an American inventor

Sir John A. Fleming (1849-1945), a British scientist-inventor

Heinrich Rudolph Hertz (1857-1894), a German physicist

Guglielmo Marconi (1874-1937), an Italian engineer-inventor

James Clerk Maxwell (1831-1879), a Scottish physicist

Greenleaf W. Pickard (1877-1956), an American inventor

From Morse Code to Music

In the 1860’s, James Clerk Maxwell demonstrated that electricity

and light had electromagnetic and wave properties. The conceptualization

of electromagnetic waves led Maxwell to propose that

such waves, made by an electrical discharge, would eventually be

sent long distances through space and used for communication

purposes. Then, near the end of the nineteenth century, the technology

that produced and transmitted the needed Hertzian (or radio)

waves was devised by Heinrich Rudolph Hertz, Guglielmo Marconi

(inventor of the wireless telegraph), and many others. The resultant

radio broadcasts, however, were limited to the dots and

dashes of the Morse code.





Then, in 1901, H. H. Dunwoody and Greenleaf W. Pickard invented

the crystal set. Crystal sets were the first radio receivers

that made it possible to hear music and the many other types of

now-familiar radio programs. In addition, the simple construction

of the crystal set enabled countless amateur radio enthusiasts

to build “wireless receivers” (the name for early radios) and

to modify them. Although, except as curiosities, crystal sets were

long ago replaced by more effective radios, they are where it all

began.

Crystals, Diodes, Transistors, and Chips

Radio broadcasting works by means of electromagnetic radio

waves, which are low-energy cousins of light waves. All electromagnetic

waves have characteristic vibration frequencies and wavelengths.

This article will deal mostly with long radio waves of frequencies

from 550 to 1,600 kilocycles (kilohertz), which can be seen

on amplitude-modulation (AM) radio dials. Frequency-modulation

(FM), shortwave, and microwave radio transmission use higher-energy

radio frequencies.

The broadcasting of radio programs begins with the conversion

of sound to electrical impulses by means of microphones. Then, radio

transmitters turn the electrical impulses into radio waves that

are broadcast together with higher-energy carrier waves. The combined

waves travel at the speed of light to listeners. Listeners hear

radio programs by using radio receivers that pick up broadcast

waves through antenna wires and reverse the steps used in broadcasting.

This is done by converting those waves to electrical impulses

and then into sound waves. The two main types of radio

broadcasting are AM and FM, which encode the program by varying (modulating)

either the strength (amplitude) or the vibration rate (frequency) of the carrier

wave.
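
Written out, amplitude modulation has a simple standard form (given here for illustration rather than quoted from the article):

    s(t) = A_c [1 + m \, x(t)] \cos(2\pi f_c t),

where x(t) is the audio signal from the microphone, f_c is the carrier frequency (somewhere in the 550 to 1,600 kilocycle band for AM broadcasting), A_c is the carrier amplitude, and m is the modulation depth. The receiver's task is to recover x(t) from the combined wave s(t).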

The crystal set radio receiver of Dunwoody and Pickard had

many shortcomings. These led to the major modifications that produced

modern radios. Crystal sets, however, began the radio industry

and fostered its development. Today, it is possible to purchase

somewhat modified forms of crystal sets, as curiosity items. All

crystal sets, original or modern versions, are crude AM radio receivers

that are composed of four components: an antenna wire, a crystal

detector, a tuning circuit, and a headphone or loudspeaker.

Antenna wires (aerials) pick up radio waves broadcast by external

sources. Originally simple wires, today’s aerials are made to

work better by means of insulation and grounding. The crystal detector

of a crystal set is a mineral crystal that allows radio waves to

be selected (tuned). The original detectors were crystals of a lead-sulfur

mineral, galena. Later, other minerals (such as silicon and carborundum)

were also found to work. The tuning circuit is composed

of 80 to 100 turns of insulated wire, wound on a 0.33-inch support. Some surprising supports used in homemade tuning circuits

include cardboard toilet-paper-roll centers and Quaker Oats

cereal boxes. When realism is desired in collector crystal sets, the

coil is usually connected to a wire probe selector called a “cat’s

whisker.” In some such crystal sets, a condenser (capacitor) and additional

components are used to extend the range of tunable signals.

Headphones convert chosen radio signals to sound waves that are

heard by only one listener. If desired, loudspeakers can be used to

enable a roomful of listeners to hear chosen programs.
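
To make the role of the tuning circuit concrete, the sketch below computes the resonant frequency of a coil-and-capacitor tuner from the standard relation f = 1/(2*pi*sqrt(L*C)). The coil and capacitor values are typical of homemade sets and are assumed for illustration; they are not specified in the text.

    import math

    def resonant_frequency_hz(inductance_h, capacitance_f):
        # Resonant frequency of a simple LC circuit: f = 1 / (2 * pi * sqrt(L * C)).
        return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

    COIL_H = 230e-6  # assumed hand-wound coil of roughly 230 microhenries
    for capacitance_f in (365e-12, 100e-12, 40e-12):  # assumed variable-capacitor settings
        freq_khz = resonant_frequency_hz(COIL_H, capacitance_f) / 1000.0
        print(f"C = {capacitance_f * 1e12:5.0f} pF  ->  about {freq_khz:5.0f} kHz")

Sweeping the capacitor moves the resonant frequency across roughly 550 to 1,650 kilocycles, which is how the set picks one station out of the AM band described earlier.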

An interesting characteristic of the crystal set is the fact that its

operation does not require an external power supply. Offsetting

this are its short reception range and a great difficulty in tuning or

maintaining tuned-in radio signals. The short range of these radio

receivers led to, among other things, the use of power supplies

(house current or batteries) in more sophisticated radios. Modern

solutions to tuning problems include using manufactured diode

vacuum tubes to replace crystal detectors, which are a kind of natural

diode. The first manufactured diodes, used in later crystal sets

and other radios, were invented by John Ambrose Fleming, a colleague

of Marconi’s. Other modifications of crystal sets that led to

more sophisticated modern radios include more powerful aerials,

better circuits, and vacuum tubes. Then came miniaturization,

which was made possible by the use of transistors and silicon chips.

Impact

The impact of the invention of crystal sets is almost incalculable,

since they began the modern radio industry. These early radio receivers

enabled countless radio enthusiasts to build radios, to receive radio

messages, and to become interested in developing radio communication

systems. Crystal sets can be viewed as having spawned all

the variant modern radios. These include boom boxes and other portable

radios; navigational radios used in ships and supersonic jet

airplanes; and the shortwave, microwave, and satellite networks

used in the various aspects of modern communication.

The later miniaturization of radios and the development of sophisticated

radio system components (for example, transistors

and silicon chips) set the stage for both television and computers.

Certainly, if one tried to assess the ultimate impact of crystal sets by

simply counting the number of modern radios in the United States,

one would find that few Americans more than ten years old own

fewer than two radios. Typically, one of these is run by house electric

current and the other is a portable set that is carried almost everywhere.