Does Government Spending Harm The Environment?

A market failure occurs when there is a gap between the private and social costs of an activity, that is, when the social costs are higher than the private costs. The spillover itself is what economists call an externality. For example, consider a factory whose production process throws off disgusting waste. If the factory dumps this junk into a handy river instead of disposing of it in a less convenient, less harmful place, the resulting pollution is an externality.

Of course, the problem is that by using the river as a dump, production costs are lower than if the factory disposed of its waste in a socially responsible way, so the factory owner has little incentive not to pollute. This leaves the people and businesses along the river, who suffer the bad effects, to bear the cost–hence the term “social cost.”

The failure of the market to cover the costs of an externality is taken as an invitation for the government to step in and make private parties deal with the social costs. In this example, the government could promulgate regulations to stop river pollution by making it unlawful for the factory to dump waste in the river, which, in turn, would raise the private cost of production but lower the social costs borne by society.

At any rate, during the 1970s, the “correction” of market failures accelerated the pace of social regulation. In 1970, both the Occupational Safety and Health Act and the Clean Air Act were passed. Two years later came the Marine Protection Act, the Water Pollution Act, and the Federal Insecticide, Fungicide, and Rodenticide Act. These were followed by the Safe Drinking Water Act in 1974 and the Toxic Substances Control Act of 1976. There were also amendments to the Clean Air Act in 1977. A welter of new bureaucracies was created, among others: the Environmental Protection Agency, the Occupational Safety and Health Administration, the Consumer Product Safety Commission, and the National Highway Traffic Safety Administration.

The extravaganza continued through the 1980s. In environmental regulation, more than half a dozen laws were passed, including CERCLA (the Comprehensive Environmental Response, Compensation, and Liability Act–the notorious Superfund) in 1980; the Hazardous and Solid Waste Amendments in 1984; the Radon Gas and Indoor Air Quality Research Act in 1986; the Radon Pollution Control Act in 1988; FIFRA (the Federal Insecticide, Fungicide, and Rodenticide Act Amendments) in 1988; and the Clean Air Act Amendments in 1990.

Nineteen-ninety was a stellar year for regulatory excess. First came the mother of expensive regulation–additional amendments to the Clean Air Act. The law, which covers many businesses from giant utilities and auto companies to tiny bakeries and dry cleaners, could cost as much as $60 billion annually when fully implemented in the late 1990s. Then came the Americans with Disabilities Act, which requires owners of private businesses, stores, hotels, restaurants, and apartments to make specified physical modifications to accommodate the disabled. The initial conversion costs to bring these establishments into compliance range between $60 billion and $70 billion. There was also legislation requiring food manufacturers to affix labels carrying various nutritional information to their products.

A good guide to assessing the growth of regulation is the number of pages in the Federal Register, where all new regulations are published. The annual page count rose steadily during the 1970s, until it reached an all-time high of 87,000 in 1979. Indeed, not until the 1980s was there a respite, when the Reagan administration seriously slowed the trajectory of ever-more regulation; the number of pages in the Federal Register actually declined by 34,000, the number of federal workers involved in regulation fell, and the cost of administering federal regulatory programs was about flat.

Then the great reversal. During the first two years of the Bush administration, regulation got out of hand. The number of pages in the Federal Register increased to 70,000. The number of federal employees busy issuing and enforcing the stuff reached an all-time high of almost 125,000, and the amount of money devoted to administering these programs grew at double-digit rates. It got so bad, in fact, that, in 1992, Bush had to announce a regulatory reform against his own administration–no new rules for 90 days, later extended another 90 days. No matter. In came the Clinton administration and the growth of new pages resumed apace.

There is no one correct and clear way to measure the cost of regulation on the economy. But there are plenty of estimates. One of the most reasonable figures is that in 1992, regulation cost one-half trillion dollars–roughly 8% of a $6.5 trillion economy. This figure includes social regulation such as environment, health, and safety; economic regulation like trade restrictions, federal labor laws, and farm programs; and the paperwork involved in filling out forms, keeping records, and paying accountants and lawyers.

Who Pays for All This? In general, the cost of regulating is initially expressed as a cost of doing business. Okay, but who pays this tariff? We all do, in one way or another.

Consider a standard situation in which a law requires certain practices to be followed in hiring or procedures to be used to assure product quality. The former will raise costs by forcing employers to expand their job search and fill out forms to prove compliance; the latter will raise costs by requiring changes in the production process. Sometimes, firms can pass these costs to consumers, making them pay more; sometimes, firms can’t pass them along at all, so they will have lower profits, which means that owners or shareholders foot the bill. But, often, employers pass these costs down the line with lower wages and salaries. Other times, when costs cannot be directly passed off to employees, employers will respond by either hiring fewer people or laying off those already employed. Either way, higher business costs from regulation will result in lower wages and/or higher unemployment.

Excessive regulation also discourages investment in domestic business: Why plop a factory down on regulated soil when unregulated opportunities beckon abroad? Moreover, the threat of regulatory changes creates uncertainty, which scares investors, who then demand higher returns, and tends to make planning horizons more short term.

Further, regulation stymies innovation. This has been especially true in the drug and medical-device industry. Long approval periods shorten the effective patent time for the results of expensive research and development and thus diminish returns on discoveries without lowering risk. A larger gap between risk and return renders many research and development projects too unprofitable to undertake.

And last, all of the above make it harder for domestic firms to compete in international markets in which many foreign-based firms do not have to contend with the effects of excessive regulation.

What makes all the direct and indirect costs, to say nothing of the mind-numbing frustration, even more wicked, is that taken all together, these costs slow economic growth. Rather than detailing the many subtle ways in which regulation impedes growth that are so favored by economists, here instead is just one basic example: Innovation requires research, but research might not be undertaken if regulation makes the fruit of that research susceptible to liability action. Innovation requires development, but development might not be undertaken if regulation draws out the period before approval is granted. In other words, firms need a secure environment for innovation before they will commit money, time, and other scarce resources to the process. Lacking that environment, there will be less business innovation, which leads to slower productivity growth, and that creates slower economic growth.

There are, of course, offsetting benefits to regulation, especially those that concern health and safety. People are less susceptible to sickness, injury, and death because of the many workplace and drug and food laws. Ditto for those who fly airplanes and drive cars. And certainly everybody who breathes the air, or swims in rivers or lakes, is better off than they would be if cities still generated dense smogs of pollution or waterways still caught on fire. But no respectable effort has been made to quantify these benefits overall.

It is, however, possible to estimate benefits in specific categories. For example, trade restrictions such as quotas on imported cars or textiles benefit the workers who keep their jobs–so just multiply the number of jobs “kept” by annual salaries to arrive at total benefit. The irony is that often when these benefits are quantified, the costs of the regulations outweigh them. For example, the total figure for the benefits from restricting imports that accrue to domestic car and textile industries, in profits and jobs, is smaller than the costs to consumers in the form of higher prices for cars and clothes.

Then there are regulations that don’t seem to produce any benefits at all. Consider, for instance, the billions of dollars spent to remove asbestos from buildings, although the very removal may be as health-threatening as if the stuff were just left alone, or the billions it costs to clean up hazardous-waste sites, despite the lack of evidence that those sites constitute a hazard. And, finally, there are categories in which regulation actually results in negative benefits–that is, in harm. The Food and Drug Administration, for instance, has delayed approval of many drugs and medical devices that could have saved lives; critics point to long approval times (measured in years) for Interleukin 2 for kidney cancer, for example.

For any readers isolated from the impact of regulatory zeal and therefore of the opinion that I am overstating the case, there is a real-world example of the positive impact of unraveling regulation. Consider the move in the 1970s to deregulate large sectors of the economy. This effort was aimed at freeing a few very basic industries in the country’s infrastructure–some of which had been seriously regulated as far back as the 19th century.

The Benefits of Deregulation. A real breakthrough came in 1975 when President Ford, asking Congress for fundamental changes in the laws regulating transportation–railroads, airlines, and trucking firms–used the word “cost” to discuss regulation. Ford complained that “regulation has been used to protect and support the growth of established firms rather than to promote competition.” And he pointed to the proliferation of commissions, agencies, bureaus, and offices to oversee new programs and the cumbersome and costly procedures to license, certify, review, and approve of new technologies and products.

This spirit continued in force under President Carter: Along with deregulation of transportation, there were major steps to deregulate the financial markets and telecommunications, and to decontrol energy prices. Early in his tenure, Carter was apparently impressed by the argument that an increase in regulation showed up as cost increases followed by price increases and ultimately as wage increases. In the president’s 1978 Economic Report, Carter wrote: “There is no question that the scope of regulation has become excessive and that too little attention is given to its economic costs…wherever possible, the extent of regulation should be reduced.” True to his word, in 1978 came the Airline Deregulation Act and a strong push for similar deregulation in trucking and rails. In 1979, Carter presented deregulation bills for telecommunications and financial services.

Deregulation of these infrastructure industries was the most significant policy event of the 1970s–and surely an important one in providing a long-term boost to the economy in the 1980s. Total benefits to the economy are probably around $40 billion a year. Consumers pay lower prices, enjoy more choices and better quality in goods and services from the deregulated industries, and the rate of economic growth has been much stronger than it would have been absent this effort.

How Far Should We Go? In fact, most of the global experience with deregulation has been so positive that some people think we should deregulate everything in sight. Should we? There are two equally appealing answers: yes and no. Yes, because of all the bad things that regulation does, as argued above. No, because notwithstanding all those bad things, people have already made their adjustments and expect to operate in a regulated environment; a sudden deregulation would create all sorts of new problems.

As attractive as both yes-and-no responses are, however, they don’t offer a good guide for future regulation. Since the yen to regulate things–and to hope for good outcomes–is not going to go away, we should figure out some way to regulate while limiting the damage that will surely result.

Take the category that offers the smallest contribution to social welfare and the largest amount of per capita suffering–paperwork. A good approach would be to cut paperwork requirements in half (at least) and then maintain that number of pages of forms, no matter what. If new paperwork is mandated by some new rule-making outburst, then the same number of pages of old paperwork would have to go. (We could take a giant step toward a reduction simply by changing the federal income tax system to a flat-rate system so that the tax form could fit on a postcard.)

Second, consider economic regulation–that category that is the most unnecessary and is aimed almost exclusively at cheering up special interests. We could just drop it entirely. What’s the good of having the discipline of the market if we won’t let the market discipline? Minimum wages? Free employers to pay employees based on the value of their work, not on the political calculations of Congress. Trade protection? Let firms and industries fail if they can’t cut the mustard internationally. Farm supports and other forms of economic welfare? Dismantle them.

Finally, what about the most expensive category–the one that, no surprise, turns out to have the biggest, most enthusiastic constituency–social regulation. There are plenty of ideas on how to make social regulation more cost-effective. Two are neat variations on the theme of making the federal government put its money near its mouth: making the government fund the mandates it imposes on state and local governments, and making it compensate owners for takings of property (for example, when it declares property inviolate as wetlands or as having historic interest).

Currently, the most fashionable idea for reform of social regulation is something called risk assessment or risk management. It attacks the presumption behind most social regulation, especially for environmental rules–that risk is controllable; that if breathing bad air or eating apples sprayed with pesticide increases the risk of cancer, then decreasing air pollution or banning pesticides will cut down on that risk.

Fine. Except, what if the cost of reducing risk was a million times the benefit derived? Or what if the cost of reducing risk was reasonable, but the risk itself was trivial? Or if the risk itself was grave, but the amount it could be reduced was trivial? Under any of these conditions, would it still make sense to go ahead and regulate?

That’s where risk assessment comes in. Using what its proponents like to call sound science, or good economics, the risk of a particular activity is measured against the cost of reducing it. And presumably, if the costs and/or benefits are way out of whack, the regulation will not be undertaken.

There are lots of problems with this approach–beyond the obvious of defining “sound” science or “good” economics–like how to quantify human life, since human life is what is at risk. But requiring a risk assessment investigation before approving new regulations would be a start toward reversing over 35 years of indulging an attitude most easily characterized as: “Eek! Look, a risk! Let’s regulate it away–right this minute–no matter the cost–hurry!”

The Solar Cycle, Earth’s Temperature And Climate Response

The Earth’s upper atmospheric winds are tied to the solar cycle. Above the equatorial zone the air temperature and wind direction change with a period of about 24 to 36 months, a variation called the Quasi-Biennial Oscillation (QBO). Karin Labitzke (Free University, Berlin) and Harry van Loon (National Center for Atmospheric Research) found that when these high-altitude winds come from the west, the upper-air temperature follows the 11-year solar cycle; when the QBO winds are from the east, the stratospheric temperatures anticorrelate with the cycle. Finally, Brian Tinsley (University of Texas, Richardson) and his colleagues see a link between changes in solar magnetism and changes in the global electrical circuit of the Earth, with influences on cloud properties and thunderstorms, for example.

Cycle Of The Sun’s Magnetism

The observational record of sunspots begins around 1610 with systematic telescopic counts by Galileo, Christoph Scheiner, and others. In 1843, after 17 years of observing the Sun for evidence of the fictitious planet Vulcan, Samuel H. Schwabe noted roughly a 10-year periodicity in the number of sunspot groups and in the strings of days when no sunspots were seen. Then in 1908 George Ellery Hale at Mount Wilson Observatory found sunspots to have strong magnetic fields of up to several thousand gauss. Thus the historical record of sunspot numbers details the strength and extent of the Sun’s magnetic fields, with their 11-year cycle.

But strictly speaking, the period of the sunspot cycle is not 11 years; it varies from eight to 15 years. Other periods are also present, for instance the 80- to 90-year “Gleissberg Cycle” and a 200-year cycle. A period of roughly 2,200 years is suggested by records of solar magnetism from the radiocarbon abundance in bristlecone-pine tree rings that can be traced over several millenniums.

It wasn’t until the 1980s that satellites measured slight changes in the Sun’s total energy output, or irradiance, associated with the 11-year cycle. Although it may seem counterintuitive, near the height of the sunspot cycle, when dark spots are most numerous, the Sun is brighter than at sunspot minimum. Careful study of the surface by several researchers showed that the dark spots are more than offset by larger bright areas of strong magnetism, or plages. Hence the sunspot cycle can be viewed as a curve of the Sun’s varying brightness, making our star a variable star with a roughly 11-year period.

The best records of solar brightness changes, however, go back only 20 years, too short to determine whether these variations spur climate change. And the observed solar energy variations – 0.1 percent over a cycle – seem too small to be influential. But since the brightness changes correlate closely with changes in the Sun’s magnetic activity, and records of solar magnetic variability go back several millenniums, we may yet be able to learn something about the effects that long-term solar brightness changes may have on Earth.
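A back-of-envelope check makes the point. In a zero-dimensional energy-balance picture, a planet’s equilibrium temperature scales as the fourth root of the solar input, so a fractional brightness change maps onto a temperature change of roughly a quarter of that fraction. The sketch below is ours, not the authors’: it assumes a 288 K global mean surface temperature and ignores climate feedbacks entirely.

```python
# Zero-dimensional energy balance: T ~ (S * (1 - albedo) / (4 * sigma)) ** 0.25,
# so to first order dT / T = 0.25 * dS / S.
# Assumptions (ours, not the article's): 288 K mean surface temperature,
# no amplifying or damping feedbacks.

T_SURFACE = 288.0  # kelvins, global mean

def warming(fractional_brightness_change: float) -> float:
    """First-order temperature response to a fractional change in solar output."""
    return 0.25 * T_SURFACE * fractional_brightness_change

print(warming(0.001))  # 0.1% over an 11-year cycle -> ~0.07 K: tiny indeed
print(warming(0.005))  # 0.5% from a Maunder-like lull to an active Sun ->
                       # ~0.36 K, approaching the several-tenths-of-a-degree
                       # changes discussed later in the article
```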

Solar Magnetism Through Time


In 1890 Edward Maunder examined the historical records of sunspots and commented on a lull in the 11-year cycle from roughly 1640 to 1720, which had been noted first by Gustav Sporer, Jean Picard, and Gian Domenico Cassini. And William Herschel had written in 1801 that sunspot records from 1695 to 1700 showed that “no spot could be found on the sun.” Thus, the Sun’s magnetic variability has an additional complexity – the potential for diminished magnetic activity over the course of decades.

The long-term record of the Sun’s magnetic activity is deduced from radioisotopes such as carbon-14 and beryllium-10, which form as byproducts of energetic cosmic rays hitting the Earth’s upper atmosphere. The Sun’s magnetic field, carried out past Earth by the solar wind, deflects some of these cosmic rays. With high solar activity, and therefore strong magnetic fields, more cosmic rays are deflected and the formation of carbon-14 and beryllium-10 is inhibited. Some of these isotopes are incorporated in geologic records. They can be measured in the laboratory from tree rings and ice cores, yielding the history of solar magnetism. Both isotope records confirm that the sharp decrease of sunspots in the 17th century, the Maunder Minimum, indeed matched a very low level of solar magnetic activity.

Also, during the 17th century the Earth was globally about 1°C cooler than today. Especially hard hit was Northern Europe, where glaciers expanded and winters lengthened. In the last several thousand years similar periods of low solar magnetism have occurred every few centuries or so, and nearly all correspond to a cooling of the Earth by about a degree.

The Sun’s history is also marked by unusually high levels of magnetism. The last four sunspot cycles were among the most active in our 350-year observational record, but they were not as intense as the average level of solar activity in the 11th and 12th centuries as indicated by the isotope records. During that time the Earth seems to have been warmer than at present. Vineyards, for instance, grew in areas of Great Britain that cannot support them today. This is more evidence that changes in the Sun’s magnetism can alter terrestrial climate.

Sun-Like Stars

Studies of solar variability can be augmented with measurements of Sun-like stars – those close in mass, age, and magnetic activity to the Sun. We can thus watch a sample of many stars over a short time interval to deduce long-term information on one star, the Sun. But how do we observe the equivalent of sunspots – starspots – on stars too distant for their surfaces to be seen?

In the late 19th century astronomers noted that the spectra of cool stars contain emission lines of singly ionized calcium. Hale and others successfully photographed the Sun in the calcium H and K lines (3968 and 3934 angstroms, respectively), which originate in the Sun’s chromosphere. During sunspot maximum, patches of bright calcium emission dapple the surface; at low activity levels this emission is sparse. Thus, without seeing the Sun’s surface, we can take the presence of bright H and K lines as surrogates of surface magnetic features. Since records of changes in these lines have proved to be good proxies for varying magnetic activity on the Sun, the same is presumably true for Sun-like stars.

Hale built the 60-inch telescope on Mount Wilson in part because he wanted to explore solarlike phenomena that he thought were discoverable on other stars. He wrote: “Thousands of stars, in the same stage of evolution as the sun, doubtless exhibit similar phenomena, which are hidden from us by distance. . . . In spite of the necessity, because of their feeble brightness, of basing our conclusions on spectra a few inches long, representing the combined light from all parts of the stellar disks, material progress could be made in this way.”

In the early 1930s Mount Wilson astronomer Seth Nicholson showed his chart of the 11-year solar cycle as seen in the calcium K line to Olin Wilson, a young astronomer newly arrived from Caltech. From the relatively large changes summed over the Sun’s surface, Wilson reasoned that the calcium emission from the plages changed by 20 percent or more during the cycle. Thus cycles on other stars might indeed be detectable in the varying calcium H and K lines.

Wilson obtained some photographic calcium H and K spectra of about two dozen dwarf stars in the 1930s, intending to reobserve them about a decade later. The follow-up was delayed until after World War II, but in 1954 he reported no changes between the two sets of photographic spectra.

Undaunted, Wilson bided his time until more sensitive electronic detectors came along. In March 1966, he began a monthly survey of 91 stars on or near the lower main sequence, stars not radically different from the Sun. He used the coudé scanning spectrograph of the 100-inch telescope on Mount Wilson, which was equipped with a photocell. The stars ranged from spectral type early F to early M and sported weak to strong H and K emission.

Wilson discovered three general classes of long-term variability in lower-main-sequence stars: cyclic variations similar to the Sun’s 11-year cycle; flat, or essentially no, variations; and erratic variations, meaning substantial changes with no clear period. Roughly one-third of the stars fell in each group. Wilson’s bet that the chromospheric H and K lines would contribute to studies of solar and stellar magnetism proved remarkably farsighted.

In 1977, as Wilson’s retirement approached, Arthur Vaughan and George Preston, also at Mount Wilson, built a second-generation instrument to continue Olin Wilson’s program at the 60-inch telescope, where Hale had first envisioned such work 60 years earlier. Since 1980 the HK Project has made observations almost nightly of flux changes that reveal stellar rotation. Wilson’s program has been extended to giant stars, as well as to a large census of solar-neighborhood stars.

In addition to the use of H and K lines as benchmarks of surface magnetic activity, parallel observations are made of associated brightness changes. G. Wes Lockwood, Brian Skiff (Lowell Observatory), and Richard Radick (Sacramento Peak Observatory) obtained highly precise photometry of three dozen Mount Wilson stars. More recently, Greg Henry and Michael Busby (Tennessee State University) collaborated with us to construct 0.75-meter and 0.8-meter Automatic Photoelectric Telescopes (APTs) for cost-effective differential photometry of the entire list of stars. The APTs have achieved the astonishing precision of 100 to 200 millionths of a magnitude over a season. Such measurements allow the detection of brightness changes on Sun-like stars as small as 0.1 percent over a decade – comparable to the solar irradiance change.
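For a sense of what those numbers mean, the magnitude scale is logarithmic (Pogson’s relation, m = -2.5 log10 F), so a precision quoted in magnitudes converts directly into a fractional brightness change. A quick sketch, with the 150-millionths midpoint as our own assumption:

```python
import math

def delta_mag(flux_ratio: float) -> float:
    """Magnitude change for a given flux ratio (Pogson's relation)."""
    return -2.5 * math.log10(flux_ratio)

signal = abs(delta_mag(1.001))  # a 0.1 percent brightening: ~0.0011 magnitude
noise = 150e-6                  # mag per season; midpoint of the quoted range
print(signal, signal / noise)   # a solar-sized signal sits ~7x above the noise
```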

Like the Sun, solar-type stars brighten and dim as the calcium H and K emission lines trace their starspot cycles. However, the cycles do not repeat themselves exactly – the total brightness change over a cycle is proportional to the intensity of the star’s activity. This, in turn, suggests that the amount the Sun’s brightness varies will change as its activity increases or decreases from one cycle to another.

The observations of Sun-like stars in the Mount Wilson sample say that short cycles are intense. Sustained intervals of short cycles could result in more solar energy reaching the Earth – effecting a global warming. Conversely, intervals of long cycles and continued low magnetic activity should result in cooling.

Climate Response

Our most recent estimates suggest that the Sun changes in brightness by about a half percent between a phase similar to the Maunder Minimum and an active phase. Simulations of the Earth’s climate indicate that changes of several tenths of one percent in the Sun’s brightness sustained over several decades could cause temperature changes of 0.5°C or so on Earth. Thus solar-brightness variations can explain most of the past record of terrestrial global temperature fluctuations. Our understanding of the climate is far from complete, but one fact stands out: a varying Sun is one of the drivers of the Earth’s changing climate.

Sallie Baliunas and Willie Soon are scientists at the Harvard-Smithsonian Center for Astrophysics and Mount Wilson Institute. The astronomer Olin Wilson is buried in the shade of the 100-inch telescope.

The Struggle Continues For Environmentalists Everywhere

Arriving in San Francisco last year, fresh from campaigning against road-building in Britain, I thought I was pretty aware of the global nature of the issues we’ve been tackling. But through getting to know the other winners of the 1995 Goldman Environmental Prize and talking to them about their work, I came to realize just how much the issues of “car culture,” and more broadly, of over-consumption and development, resonate around the world.


In Britain over the past few years, the largest road-building program since the Romans threatened to reshape the country, from the white cliffs of Dover to the suburban streets of Manchester. But ordinary people have defended their landscapes and communities and have succeeded in putting the roads program into reverse. For the moment, the supertanker of British transport policy really does seem to be turning round, although there is still much to do to ensure that positive transport alternatives are implemented.

In the US, car culture is endemic. There are, for example, more cars in Los Angeles than in the whole of China, India, and sub-Saharan Africa put together. Aurora Castillo, the winner for North America, comes from East LA – an area criss-crossed by seven freeways which literally split her community and pollute the air breathed by the children and grandchildren for whom she has fought so hard against environmental threats of all kinds.

With the roads lobby increasingly turning to the developing world in search of new markets, such problems are no longer confined to the West. Yul Choi of South Korea bears witness to this. He has seen the number of cars in his country grow from 40,000 in 1965 to 7 million in 1995, with the consequent increases in pollution, health problems, and landscape destruction. These are just some of the negative effects of the rapid industrialization of his country which he is tackling.

Ricardo Navarro from El Salvador is striving to help his people develop a cleaner, more affordable technology than motor vehicles as part of his struggle against poverty and environmental degradation. As he sees it, the bicycle offers a solution – to everything from transport to milling corn.

Noah Idechong from the beautiful islands of Palau in the Pacific is working to prevent his nation’s despoliation by potential developers. Palau is a member of the Alliance of Small Island States – the group of nations most threatened by any rise in sea-level caused by climate change. The major greenhouse gas is carbon dioxide, and, in the West, transport is the fastest growing source of carbon dioxide.

It was a source of great sadness that none of us had the opportunity to meet Ken Saro-Wiwa. At the time of the ’95 Goldman Award ceremony, he was in prison in Nigeria, being denied medical treatment and awaiting trial for protesting the devastation of his homelands by Shell Oil–devastation caused in the process of producing the petrol that we in the West daily put in our cars. Ken is now dead. He and his co-defendants were hanged at the end of last year. His death sent waves of shock and outrage around the world. It has galvanized environmentalists everywhere and has demonstrated in the most stark and horrific way imaginable just what the car culture can mean.

Ken’s protest, together with those of the other ’95 Prize winners, has put anti-roads campaigning in Britain only too sharply into context. We are all involved – like Ken was – in the same struggle: tackling different aspects and from different angles, but, nevertheless, the same struggle. And there’s an awful lot left to do!

Reforestation And The Story Of The Brown Brothers

The Brown brothers, David and Peter, were iconoclasts – they loved the forests, and wanted to create their own. So they did – a 100-tree redwood patch as well as a variety of exotic and native trees.

All the while they were hounded by a community of homesteaders who had trouble comprehending the sense of putting that time and money into trees that would never turn a profit. Today the value of the forest cannot be measured in board feet. In terms of age and variety, says Roy Forster of Vancouver’s VanDusen Botanical Gardens, there is nothing quite like it in Canada. “As an early private botanical collection, it’s important to have these trees to study. It shows what a unique climate we have in the Lower Mainland that allows giant redwoods and monkey puzzle trees from the coast of Chile to grow side by side.”

Illness set the twins apart from other children at an early age, when scarlet fever struck them deaf. Though proficient lip readers, they had the habit of speaking aloud to others but conversing silently between themselves – when talking to each other, their lips moved but no sound spilled from their mouths. It tended to unnerve people.

The twins’ father was enough of a horticulturist to earn the nickname “Cherry.” David “Cherry” Brown Sr. had crisscrossed the continent prospecting for gold before settling in southern British Columbia. He made his home on a hill that looked over 90-metre-tall Douglas fir in the Hazelmere Valley, a stone’s throw from the American border. He taught his sons at an early age the delicate art of coaxing a slip into a sapling. On their 21st birthday, in 1893, he offered David and Peter 32 hectares of southern-exposed hillside property.

The intention was that the boys would plant and operate a modest fruit and nut farm. The land was certainly ready for such enterprise – loggers had denuded it three years earlier, and a Pacific gale reportedly finished off what little the saws had missed.

Shortly after inheriting the property, David and Peter took a trip by train to northern California to visit a cousin. What they saw on the western slopes of the Sierra Nevada changed their lives: they were overwhelmed by the majesty of the giant redwoods. When they boarded the train for the return trip to Canada, their pockets were filled with tiny specks of magic, the big tree’s seed.

The experience among the redwoods was the beginning of a worldwide search for seeds. The twins imported them from four continents and are said to have travelled as far as New York for candidates suitable for their burgeoning arboretum. They planted the most prized seeds close to their house. Most bizarre are the monkey puzzle trees. Thick, tangled rope-like branches mingle haphazardly. Sharp, tough, pointed green leaves leave little doubt about why the trees are avoided by small primates.

For contrast, they planted the incense cedar with its dense and narrow pyramidal crown; Japanese cryptomeria, a graceful skyline tree with a columnar trunk of peeling red bark; and Atlas cedar from North Africa.

Moved, perhaps, by some strange botanical premonition, they planted the dawn redwood from China. Years later, drill core samples taken from the hill revealed Tertiary Period fossils of the same tree.

Around the turn of the century three of the twins’ houses burned to the ground. The Brown family – David and Peter had six sisters and three brothers – blamed the losses as much on poor housekeeping as anything else: the twins’ residences were strewn with plants, herbs, drying grass and newspapers. After the third house was gutted, the brothers became convinced that malevolent forces were conspiring against them.

So in 1912, a tree house, built on four-metre posts, went up among the young Douglas fir. Rough-hewn cedar planks kept most of the wind and rain out. A kitchen on the first floor held a stove, a washbasin, scores of house plants and David’s collection of bric-a-brac. The twins slept on the top floor in beds made of newspapers and old coats. Rows of rusted barbed wire circled the base of the house, for by now their property and possessions had become the target of vandals.

A nearby elementary school harboured a few pranksters who thought it great sport to sneak up on the deaf twins. Sometimes the intruders were malicious, stealing tools, raiding gardens and trampling flower beds. To protect their interests, the twins fashioned a crow’s-nest atop one of the large Douglas fir adjacent to the tree house. From there they could survey their domain and, on occasion, fire a shotgun blast of rock salt at intruders. Local constables answered numerous complaints from parents, and the twins were often reprimanded. Trespassers sometimes left their bicycles behind in their rush for daylight, and over the years many rusted frames were plucked from the thick bush. During Prohibition in the United States, word had it that the twins were gun-toting rumrunners, the forest a staging area for booze which they allegedly carted across the border. Others with less imagination simply called them crazy.

The trees grew taller and, as they grew, they shut out the light of the outside world. Over time, the twins lost their trust in family, community and government. In the 1940s, the municipality of Surrey came close to seizing 16 hectares of the hillside in lieu of back taxes. The experience reinforced the brothers’ belief that a conspiracy was afoot to force them off their land.

Although the tree house continued to be their main residence, the brothers would retreat to a shack built on the ground whenever the weather worsened. Eventually, it became Peter’s home quite by accident. When he prematurely ignited a dynamite charge meant to clear roots, he and the roots were launched skyward. He returned to earth with two broken legs, one of which never mended properly. Climbing the ladder to the tree house became impractical, and Peter settled in the shack.

There he lived until the age of 86, reportedly spry and difficult to the end. He died in 1957, surviving David by eight years, his sanity still in question. Historian Stan McKinnon wrote in 1973: “Peter became nuttier than a fruitcake.”

According to Surrey officials, Peter had a verbal agreement with them to leave the land to the community to be preserved in its natural state, free of baseball diamonds and soccer pitches. His will said otherwise. In it he left the forest to the Jehovah’s Witnesses, a church that he felt shared his basic distrust of government. Surrey sued and the matter was eventually settled out of court. The ribbon to the city’s 30-hectare Redwood Park was cut in 1960.

Two decades later, as Peter and David had long feared, their home aloft was demolished by the city. The tree house was a magnet for kids, Surrey argued, and rotting timbers presented a safety risk. An interpretation centre was built in its place. Where Peter and David Brown had once scrambled down a ladder, shotguns at the ready, park patrons now read brochures and gaze in wonder at the towering redwoods.

The brothers’ dream that took root a century ago survived decades of antagonism and ridicule. As true stewards of the earth – and martyrs too – David and Peter demonstrated that in the end all that matters is the condition in which you leave the land.

Are The Keys To World Climate In Antarctica?

(From a classic Popular Science Article)

Getting to Antarctica requires an eight-hour flight to McMurdo Research Station from Christchurch, New Zealand, in the equivalent of a B-52 on skis. You wear full polar gear, including a pair of massive rubber moon boots, two sets of long underwear, polyester overalls, and a fur-trimmed parka. Passengers strap in, shoulder-to-shoulder, on benches made from cloth webbing.

After landing at McMurdo, I took a second flight to Shackleton Camp, where the dynamists’ team had been working for the past two months. Stiff from the long, cold ride, I stood up, stretched, and looked through the window for any sign of life in this foreboding, desolate land. I saw six Quonset huts, a handful of tents, and two outhouses perched on a slab of ice in the middle of a mountain valley. Once outside, my parka rippling against the freezing wind, I tried to imagine a land that the dynamists say once looked like the green fjords of Chile. I quickly gave up, and walked as fast as I could toward food and shelter. On the way to the hut, I ran into David Harwood of the University of Nebraska, who was returning to McMurdo to pack off his samples. There on the airstrip, he laid out the details of his story. By the end of the tale, the idea of water in valleys like this one seemed almost plausible.


It all began in 1983 when Peter Webb, a geologist at Ohio State University, and Harwood, then a graduate student, performed a laboratory analysis of glacial sediment from the Transantarctic Mountains. What they discovered was astonishing: The sediment, known as the Sirius Group, contained diatoms, or marine microfossils, that were only three million years old. This suggested that the climate during the Pliocene may have been warm enough to melt the ice cap and allow the ocean that surrounds Antarctica to flood its subglacial basins. Later that year, Harwood left for the frozen continent, where he dug up additional sediment samples from the Reedy Glacier area. They too contained diatoms of three-million-year-old vintage.

At first the findings were dismissed. The few scientists who gave them any thought said that the diatoms must have somehow blown into the sediment from the sea floor. It was a fluke, they said.

Back in 1983, it was generally agreed that after Antarctica splintered from the southern supercontinent Gondwana over 100 million years ago, it slid into a deep freeze, accumulating an ice cap that has remained about the same size for the past 15 million years. There was no reason to doubt this picture. It was supported by geological studies of Antarctica, as well as deep-sea oxygen isotope measurements that reflect ice volume and temperature changes.

But in 1985, Webb and Harwood visited Beardmore Glacier, where they found bits of wood later identified as southern beech. This made them think that perhaps small trees once grew there. They confirmed this hypothesis in 1990 by unearthing beech tree leaves and roots. The survival of these plants, they say, indicates a prolonged warm period.

Three years later, beetle remains turned up in the rubble. With this find, Harwood and Webb’s notion of a dynamic ice sheet and climatic shifts finally grabbed the interest of a cadre of experts. “The survival of a beetle during the Pliocene in Antarctica implies that temperatures were significantly warmer than present,” says micropaleontologist Allan Ashworth of North Dakota State University in Fargo.

In the field season that ended in 1996, Harwood and Webb took further steps to investigate their theory. First, they revisited the Dominion Range to collect more fossils. Then, they scoured a new site called Bennett Platform, and studied its geology. Finally, they collected samples from 15 other Sirius sediment sites throughout the Transantarctic Mountains to sift for diatoms. “If they blew in, then there should be a uniform distribution of them,” Harwood says. They should also vary in age.

It will take several years to analyze the diatom populations. In the meantime, Harwood and his team are puzzling over a mosslike plant colony found at Bennett Platform. Harwood says it probably grew in wetlands that were eventually covered by silt from a nearby river or stream during a glacial advance. The geology of the area, which is part of the Sirius Group, suggests that a lot of water was present when the rock face formed.

Ashworth also made a fascinating discovery: He found seeds and sea shells in the box of rocks he brought back from the field last year. “They’re interesting in their own right,” he says, because neither fossil group had previously been found on Antarctica. But their true value may be in their ability to date the sediment in which they were found.

Meanwhile, a variety of other research has begun to yield supportive evidence. One study suggests that warm-blooded sea creatures, including dolphins, may have migrated closer to Antarctica during the Pliocene, which would have meant warmer water. Other studies have found evidence that, about three million years ago, sea levels were between 25 and 30 meters higher than they are now, perhaps due to ice cap melt.

“The evidence is quite compelling,” says paleontologist Brian Huber of the Smithsonian’s Museum of Natural History. But not to the stablists. A period of climatic warming at a time when other evidence suggests Antarctica was completely covered by ice remains unacceptable to many scientists. To hear the stablists’ side of the story, I flew to Antarctica’s Dry Valleys to meet with George Denton of the University of Maine, and David Marchant of Boston University.

As I flew by helicopter over McMurdo Sound toward the Dry Valleys, I soon realized that this place was like no other I had seen in Antarctica. There is no ice or snow in the Dry Valleys. Mummified seals lie in heaps. Wind-sculpted rocks decorate the hillsides. This area, which is about 1,260 square miles, has rifts and valleys no less impressive than the Grand Canyon, plus ephemeral streams, levees, and sand beaches.

Denton has worked here for more than 20 years, and knows more about this landscape than anyone else in the world. Now, to learn more about the climate during the Pliocene, Denton and Marchant are combining their study of the landscape (geomorphology) with that of ash that blows into the area from offshore volcanoes.

I land in Bull Pass, a desert pavement 20 miles from Mt. Fleming. It is so flat and so barren that it has been likened to Mars before the big freeze. Indeed, there is no visible sign of life except for a tiny camp consisting of two tents and a portable stove. As we hike over what look like the rounded backs of dinosaurs, Denton tells me that he believes Antarctica’s massive ice sheet has remained fairly stable for 10 million to 15 million years.

Marchant then strolls over to a sandy mound where he has been digging for ash with a trowel. He says that he reads the history of climate by examining how rocks weather, and where they are found. Then, for a chronological framework, he looks for nearby volcanic ash. Ash, like rocks, can reveal environmental clues through its content and current condition. And because it is the last thing to land on the surface of a formation, it can pinpoint a minimum age for the rocks.

Marchant and Denton have collected rock samples from Mt. Fleming and Table Mountain, nearby peaks with the same Sirius Group glacial sediments as the area around Shackleton Camp. The two scientists say these rocks have been preserved in dry and cold conditions, and show no sign of erosion. This suggests that the Dry Valleys have remained a cold desert for much longer than three million years.

The team has also mapped 75 ash deposits, and obtained dates for 50 of them. The samples show that the ash formed in a cold, dry environment, and remained undisturbed. It was found on rocks that are unmarked by any massive ice-sheet movement.

In addition, geologist David Sugden of the University of Edinburgh in Scotland has found an eight-million-year-old ice slice in the Dry Valleys that he says could not have survived a warming period. And cores extracted from the ocean floor around Antarctica show no curtailment of sediment from melting icebergs, as would be expected if the continent was partly ice free during the Pliocene.


Taken together, these results speak volumes in favor of a dry, cold, steady state. They indicate that the Pliocene temperatures were only three to eight degrees Celsius above today’s temperatures, and that the ice sheet covering the Transantarctic Mountains overrode the area more than 10 million years ago.

“We do not deny that Harwood is finding the fossils,” says Marchant. In fact, he says, “we’d expect it.” Before Antarctica split from Gondwana, it supported many of the same plants and insects that are now found in South America and New Zealand. So these fossils may represent the last vestige of these life forms. “What we disagree with is the timing of it all,” Marchant says.

While Harwood and Webb contend that their miniature forest withered three million years ago, Denton and Marchant argue for at least 23 million years ago, when Antarctica was clenched firmly by the icy grip that still holds it today. “Their entire argument relies on the diatoms, which we think may have blown in,” Marchant says. Meanwhile, Harwood questions the validity of the Dry Valleys results, saying “that area is anomalous now, maybe it was then too.”

For now, the argument remains unsettled. Though Harwood says that both camps may eventually prove to be correct – that there was a long period of cold punctuated by relatively short bursts of heat – Marchant disagrees, saying “these are mutually exclusive positions and there is no evidence for an intermediate theory.”

About the only thing both camps agree on is that the key to solving this mystery is to establish, once and for all, the age of the Sirius sediment fossils. Although this will be difficult, because there is no definitive technology on which to rely, an attempt will be made this year. An independent group of researchers will drill a core from the Sirius Group rocks to determine whether diatoms exist below the surface (and therefore are unlikely to have been windblown), or only near cracks or at the top.

Both groups recognize the urgency of coming to a firm conclusion. The Antarctic ice sheet affects not only global sea level, but also world climate. So agreement on a clear picture of the past could help to cast a more accurate vision of Earth’s future. If the stablists are correct, and the eastern ice sheet remained frozen during the Pliocene, then there is little reason to worry about the fate of our coastal cities. A major temperature increase would be required to have any effect. But if the dynamists are right, and the ice sheet did melt down, then a moderate rise in the mercury could one day bring on the floods. For now, however, the answer lies hidden in Antarctica’s frozen landscape.

While the fate of Antarctica’s eastern ice sheet is uncertain, scientists have plenty of reasons to believe that the smaller western sheet could eventually slip into the ocean. It is the world’s only remaining marine ice sheet. The others, which existed in the Northern Hemisphere, disintegrated and melted away during the Pleistocene epoch.

There are signs that the west Antarctic ice sheet is already breaking up. A huge iceberg broke free of the Larsen ice shelf in 1995. Shortly thereafter, a 40-mile-long crack opened in the adjoining shelf area. Now, ice streams that flow through the sheet are behaving erratically.

Last year, Stanley Jacobs of Columbia University’s Lamont Doherty Earth Observatory made the first oceanographic measurements across a deep channel beneath the leading edge of Pine Island Glacier. His findings show that the west Antarctic ice sheet is losing mass to the oceans. But whether this instability is symptomatic of an impending collapse remains unknown.

To further study the current state of the west Antarctic ice sheet, and to predict its future, the National Science Foundation is sponsoring a variety of Antarctica-based research projects. For example, Caltech scientists at Upstream Bravo Camp are sinking ice strings and digging ice cores to study the movement of fast-flowing ice streams. They’re also burying seismic monitors in snow fields to listen for “ice quakes” set off by colliding ice sheets.

How Antarctic Ice Affects World Climate

Think of the Antarctic ice sheet as Earth’s refrigeration unit: It exerts a major two-way control over today’s global environment.

First, the ice sheet (along with a raft of ice that surrounds it in the southern ocean) reflects back into space about 80 to 85 percent of the sunlight that hits it. So icy Antarctica, which records the coldest temperatures on Earth, helps to reduce the world’s overall heat budget.

Second, the near-freezing meltwater that runs off the ice cap, along with the water from melting icebergs, sinks to the ocean floor and surges northward. This surge affects deep-sea circulation, which in turn influences climate. So, a major meltdown would not only raise sea level worldwide, but could also modify weather patterns.

For a better fix on the details, the National Oceanic and Atmospheric Administration is monitoring weather, ozone depletion, and long-term climate trends at the South Pole. In addition, scientists are refining their models of the oceans and the atmosphere by studying bottom water in Antarctica’s Weddell Sea, and satellite images of sea ice.

The Cycles Of Climate Change

According to the rock record, until about a million years ago the climate followed a neat 41,000-year cycle of ups and downs. Then, abruptly, that cycle stopped operating, and another one — 100,000 years long — took over. Why? Some scientists speculated that a cosmic catastrophe had reset the climatic clock. Others simply shrugged. Now two paleoclimatologists — Steven Clemens, at Brown University in Rhode Island, and Ralf Tiedemann, at the University of Kiel, Germany — have shown that people have been asking the wrong question. The 100,000-year cycle was there, unnoticed, all along; the real mystery is not where it came from, but what turned up the volume.

The 41,000-year cycle was no mystery at all. That’s the time it takes the tilt of Earth’s axis to wobble from 22 degrees to 25 degrees from the vertical and back again. Those changes in tilt, or obliquity, affect the amount of sun each region of the planet gets, especially near the poles. And that, in turn, affects the climate.

The 100,000-year climate cycle was harder to explain. Astrophysicists had found three orbital cycles — 95,000 years, 124,000 years, and 404,000 years long — which, in combination, cause Earth’s orbit to stretch from nearly circular to slightly elliptical and back again about every 100,000 years. The shape of the orbit (which physicists call, charmingly, “eccentricity”) determines how close Earth gets to the sun. In principle, that could affect the climate. But while cycles such as the 41,000-year rhythm change the amount of radiation from the sun by several percent, the eccentricity cycle changes it less than a tenth of a percent — too little, climatologists thought, to have much effect.

Clemens and Tiedemann’s results may change that view. The pair studied a 460-foot-long cylinder of mud bored from the ocean floor near the Cape Verde Islands, off the northwest coast of Africa. The mud was deposited continuously between 5.2 million and 1.2 million years ago. Much of it was made up of the remains of foraminifera — tiny one-celled organisms whose heaped-up shells record the vagaries of Earth’s changing climate.

It works like this: The shells are made of calcium carbonate, which is made, in part, of oxygen atoms absorbed from seawater. A normal oxygen atom has 16 particles in its nucleus. But a small proportion of oxygen atoms have 18. Water with O-16 is lighter than water with O-18, so it evaporates more easily. Usually the O-16 water rains back down and returns to the ocean. But when the climate turns cold, a lot of the water gets trapped on land as ice and snow, leaving the O-18 water in the ocean. And when there’s a lot of O-18 in the ocean, there’s a lot of O-18 in the forams’ shells.
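Paleoclimatologists usually express this enrichment in standard “delta” notation: the per-mil deviation of a sample’s O-18/O-16 ratio from a reference standard. A minimal sketch, assuming the commonly quoted VSMOW ocean-water reference ratio, with sample ratios invented for illustration:

```python
def delta_o18(r_sample: float, r_standard: float = 0.0020052) -> float:
    """Per-mil deviation of a sample's O-18/O-16 ratio from a standard
    (default: the commonly quoted VSMOW ocean-water reference ratio)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Cold climate: light O-16 water is locked up on land as ice and snow,
# leaving the ocean -- and the foram shells grown in it -- richer in O-18.
print(delta_o18(0.0020072))  # positive delta: glacial, lots of ice
print(delta_o18(0.0020032))  # negative delta: warmer world, less ice
```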

Clemens and Tiedemann measured the O-18 in foram skeletons to see how it varied over the 4-million-year time span of the sediment core. To tease apart the several climate patterns that might show up in the core, they used a mathematical tool known as a Fourier transform. Just as a prism splits a beam of light into different colors, a Fourier transform takes a jumble of cycles and splits it into bands representing the underlying time periods. The researchers found highs and lows that, as expected, matched well-known orbital shifts, including the 41,000-year cycle. But three other patterns showed up as well. They were at 95,000 years, 124,000 years, and 404,000 years — just the ones predicted by eccentricity cycles.

They came through loud and clear, even though the core spanned a time before the mysterious climate switchover. The tiny 0.1-percent difference in solar radiation seems to have made a difference after all. The 100,000-year cycle was ticking away much more than a million years ago; it was just too faint to be detected.
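The prism analogy translates almost directly into code. The sketch below is a hedged illustration, not the researchers’ actual analysis: synthetic “core data” built from faint sine waves at the article’s four periods plus noise, with a windowed Fourier transform pulling the periods back out. All amplitudes and the noise level are invented.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic "sediment core": one sample per 1,000 years over 4 million years.
t = np.arange(0.0, 4000.0)                    # time in kyr
periods = [41.0, 95.0, 124.0, 404.0]          # kyr: obliquity + eccentricity bands
amps = [1.0, 0.25, 0.25, 0.35]                # eccentricity deliberately faint
rng = np.random.default_rng(0)
core = sum(a * np.sin(2.0 * np.pi * t / p) for a, p in zip(amps, periods))
core = core + 0.3 * rng.standard_normal(t.size)   # measurement noise

# The "prism": a windowed Fourier transform splits the jumble into bands.
power = np.abs(np.fft.rfft(core * np.hanning(t.size))) ** 2
freq = np.fft.rfftfreq(t.size, d=1.0)         # cycles per kyr

# Report significant spectral peaks as periods in kyr.
peaks, _ = find_peaks(power, height=power.max() / 50.0)
print(np.round(1.0 / freq[peaks]))            # -> roughly [404., 124., 95., 41.]
```

Even with the eccentricity bands buried in noise at a quarter of the obliquity amplitude, the transform recovers all four periods, which is the gist of what the core analysis showed.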

Why it got stronger is anybody’s guess. Rich Muller, a physicist at Lawrence Berkeley National Laboratory in California, thinks that orbital change is too feeble to do the job and that Clemens and Tiedemann’s evidence is an artifact of data processing. “The 100,000-year cycle of the ice ages could not possibly be explained by eccentricity,” Muller says. Instead, he believes that a million years ago, two asteroids collided, creating a large dust cloud, and that the 100,000-year climate cycle is caused by Earth’s periodic passage into and out of that cloud.

Most other scientists reject Muller’s theory, but no one yet has convincingly explained why the 100,000-year eccentricity cycle suddenly grew powerful enough to overwhelm the 41,000-year obliquity cycle. “The thing that Muller’s right about is that we don’t know how eccentricity works,” says David Thomson, a mathematician at Bell Labs in Murray Hill, New Jersey, who has worked extensively with ocean sediment cores. “The few explanations I’ve heard seem contrived.”

Looking At El Ninos As A Climate Factor

How Does an El Nino Start?

The answer lies in the interactions between the equatorial Pacific Ocean and the overlying atmosphere. A small change in the usual sea surface temperature pattern can produce a change in the winds along the equator. In turn, these wind changes affect the currents, which change the pattern of sea surface temperatures even more. In some years, for reasons still not completely clear, this process continues, with ocean temperatures affecting winds, which affect currents, and in turn ocean temperatures. The small changes thus become larger and larger. Eventually, in the biggest El Nino events, the difference in temperature between the western and eastern equatorial Pacific Ocean can disappear altogether. This is what happened in the 1982-83 event. As a result, the whole pattern of climate and atmospheric circulation across the Pacific and Indian Oceans and the surrounding continents was disrupted, with droughts in normally wet areas and heavy rains over normally arid regions.

The changes associated with El Nino continue to grow for about a year. Then they usually collapse quite quickly. Sometimes a mirror-image pattern of climate disturbances–with flooding in Australia, India, Indonesia, and northeastern Brazil and dry conditions on the Pacific coast of South America–occurs. This set of conditions is called La Nina. La Nina episodes also usually last about a year or so. The world was in a weak La Nina through much of 1995 and ’96.

The tendency for El Nino and La Nina episodes to last about 12 months means that, once we have determined that an episode is under way, we can often predict how the climate will develop in countries where this phenomenon is a major climatic influence. So we carefully monitor what is going on in the equatorial Pacific and the overlying atmosphere. Buoys are moored along the equator to collect information about ocean temperatures at the surface and below. These can tell us a lot about whether an El Nino (or La Nina) is developing. Recently, computer models have been developed to predict the behavior of El Nino a year or more in advance. These models of the ocean-atmosphere system in the tropical Pacific have been quite successful in predicting El Nino episodes of the past decade.
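
The flavor of such models can be conveyed by a toy “delayed oscillator,” a conceptual equation theorists have long used to mimic El Nino’s growth and collapse: warm anomalies feed on themselves, saturate, and are later undercut by slow ocean waves that cross the Pacific and reflect back. The sketch below, in Python, uses invented parameter values and is nothing like an operational forecast model:

import numpy as np

# Toy delayed oscillator: dT/dt = a*T - b*T**3 - c*T(t - lag).
# Growth (a*T), saturation (-b*T**3), and a delayed negative
# feedback (-c*T(t - lag)) standing in for reflected ocean waves.
a, b, c = 1.0, 1.0, 0.75         # illustrative values only
lag, dt, steps = 3.0, 0.01, 20_000
n_lag = int(lag / dt)

T = np.zeros(steps)
T[0] = 0.1                       # small initial warm anomaly
for i in range(1, steps):
    delayed = T[i - n_lag] if i >= n_lag else T[0]
    change = a * T[i - 1] - b * T[i - 1] ** 3 - c * delayed
    T[i] = T[i - 1] + dt * change

# The anomaly grows, overshoots, and settles into quasi-regular
# swings, a cartoon of the El Nino cycle's rise and collapse.
print(f"anomaly range: {T.min():+.2f} to {T.max():+.2f}")

Real forecast models replace these few lines with full equations for ocean currents, temperatures, and winds, but the underlying idea of delayed feedback is the same.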

El Nino is not the only phenomenon causing climate variations. Some climate variations have occurred at much longer time scales. Ice ages and some other long-term disruptions of the climate system can be attributed to changes in solar radiation, due to long-term variations in the tilt of Earth’s axis and in its orbit around the Sun. These variations appear to be responsible for much of the variation in global ice cover during the past 1.5 million years. Other mechanisms that affect climate include volcanic activity, the uplifting and wearing away of the land surface, and continental drift (which affects the size and relative positions of continents and oceans).

Not all the longer-term climate variations are understood, however. Nor have they all been gradual. There were quite rapid changes in climate during the last ice age, up to about 10,000 years ago. Changes of about 5°C took place in only a few decades, at least in Greenland and the North Atlantic. These changes were linked to changes in the ocean circulation. Similar changes in the future could bring rapid climate changes to the world. Over the last 10,000 years climate variations have been smaller than during the last ice age.

A Global Warming?

One variation that has taken place recently is a gradual global warming. Although the data we have to check changes in global climate are by no means perfect, they do indicate that temperatures have risen about 0.5°C since late last century. The rise in temperature is much the same whether we look at air temperatures measured routinely by weather services over the land or sea surface temperatures. Merchant and navy ships have been measuring sea surface temperatures routinely since the middle of the last century. In the early years, temperatures were measured by scooping up water in a bucket and reading a thermometer when the bucket reached the deck. Nowadays, temperatures are measured at the intakes that draw seawater in to cool the ships’ engines.

The different ways of measuring mean that care needs to be taken in comparing temperatures from the last century with those of today. The same applies to temperatures measured over land. The way thermometers were sited in the last century differed from current practice, and again care must be taken to remove possible biases arising from the different methods of measurement. But when these factors are taken into account, it does appear that the world has been warming. At the same time, precipitation appears to have increased in the high latitudes of the Northern Hemisphere and decreased in the tropics, although these changes have not been as consistent or clear as the temperature increases. Given the relatively stable climate of the past 10,000 years, this recent global warming appears quite unusual.

Some scientists have suggested that changes in sunspot numbers (which may reflect changes in solar radiation) may be causing the recent warming, but the general consensus is that at least part of the warming is due to the enhanced greenhouse effect, caused largely by the burning of fossil fuels. The greenhouse effect occurs because some atmospheric gases (especially water vapor, carbon dioxide, and methane) affect the radiation balance of the atmosphere. As described earlier, Earth absorbs radiation from the Sun, mainly at the surface. This energy is then redistributed by the atmospheric and oceanic circulation and radiated back to space at longer (“infrared”) wavelengths. Anything that alters the radiation received from the Sun or lost to space–or that alters the redistribution of energy within the atmosphere, and between the atmosphere, land, and ocean–can affect the climate. The greenhouse gases cause outgoing infrared radiation from the surface to be absorbed and reemitted by the atmosphere. This acts to warm the lower atmosphere and Earth’s surface. Therefore, an increase in greenhouse gases (and the atmospheric content of carbon dioxide has increased about 30 percent over the past couple of centuries) should lead to warming. There is considerable uncertainty about exactly how much warming should result from increased greenhouse gases, but it seems likely that continuing to increase the carbon dioxide in the atmosphere will lead to continued warming over the next few decades.
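
The size of the natural greenhouse effect can be seen in a textbook radiation balance (a back-of-the-envelope calculation, not from the article). With a solar constant S of about 1,368 watts per square meter, a planetary albedo α of about 0.3, and the Stefan-Boltzmann constant σ, a greenhouse-free Earth would settle at an effective temperature of

\[
T_e = \left[ \frac{S(1-\alpha)}{4\sigma} \right]^{1/4} \approx \left[ \frac{1368 \times 0.7}{4 \times 5.67 \times 10^{-8}} \right]^{1/4} \approx 255\ \mathrm{K},
\]

about 33 degrees colder than the observed mean surface temperature of roughly 288 K. That 33-degree gap is the warming already supplied by water vapor, carbon dioxide, and the other greenhouse gases.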

Is the Climate Becoming More Extreme?

If this enhanced greenhouse effect leads to a global climate warmer than the present by a couple of degrees, the major effects on society and the economy would likely be felt through more extreme weather and climate events. For instance, changes in the number or intensity of tropical cyclones, or changes in the frequency of droughts, could have major consequences. In recent years, meteorologists have started to examine how such extreme events might change if we continue to increase the amount of greenhouse gases in the atmosphere. This is, of course, a difficult task. At present, the climate models we use to investigate such questions do only a rudimentary job of reproducing extreme weather events. For example, they cannot reproduce tropical cyclones with the intensity of observed cyclones, so it is difficult to extrapolate to a future climate. Tropical cyclone activity appears to have increased in the northwestern Pacific in recent decades but has decreased in the Atlantic. It is not clear whether these changes are due to human interference in the climate system, or whether they will continue in the future.

Some of the climate models used to investigate possible changes due to an enhanced greenhouse effect have suggested that intense rainfall events may increase in frequency. There is some observational evidence that such an increase may already be happening, at least in Australia and the United States, but there is no certainty that this observed increase is the result of the enhanced greenhouse effect. One change in extreme events that could be confidently expected to accompany a general warming would be a drop in the number of cold nights (including frosts). This does appear to have taken place in several parts of the world in recent decades.

Other human actions also appear to have the potential to affect global climate. Aerosols (small particles) in the atmosphere are increasing, mainly as a result of the emission of sulfur dioxide from fossil fuel burning and also from biomass burning. These particles can absorb and reflect solar radiation. In addition, changes in aerosol concentrations can alter cloud amount and cloud reflectivity. These processes tend to produce cooling, which can, in some areas, offset the warming due to an enhanced greenhouse effect. The lifetime of these aerosols in the atmosphere is much shorter (days to weeks) than that of most greenhouse gases (decades to centuries), so their concentrations (and thus their climatic impact) respond much faster to changes in emissions.

The description above indicates some of the many processes that can affect global climate. Evaluating just how much each process is contributing to recent climate trends, such as the recent warming, is difficult. The Intergovernmental Panel on Climate Change, after a thorough examination of the evidence, determined that the balance of evidence suggested that there has been a discernible human influence on global climate, and that the climate was expected to continue to change in the future, due to human influences on the atmosphere. The interaction of the atmosphere with the ocean is one factor complicating the detection and prediction of climate change. The oceans store large amounts of heat and carbon dioxide, which delays any warming that would be caused by an enhanced greenhouse effect. Understanding this delay is crucial for good predictions of future climate change.
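
The delaying role of the ocean can be captured in a one-line energy-balance model (a simplified sketch, with symbols defined here rather than in the article). If C is the heat capacity of the ocean’s well-mixed upper layer, F the extra greenhouse forcing, and λ the rate at which a warmer planet sheds additional heat to space, then

\[
C \,\frac{dT}{dt} = F - \lambda T ,
\]

and the warming T approaches its equilibrium value F/λ only over a timescale τ = C/λ. The greater the ocean’s heat capacity, the longer the lag between adding greenhouse gases and feeling their full effect.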

Will El Nino Change?

One aspect of the climate system whose response to global climate change we particularly need to predict is El Nino itself. If future climate change alters the frequency of El Nino episodes as intense as those of 1877-78 or 1982-83, would this change the frequency of famines around the world? Unfortunately, the models we currently use cannot predict how El Nino might react to an enhanced greenhouse effect, but better models are being developed. These models need to reproduce the way the oceans and the atmosphere interact, so that they can reproduce the current behavior of El Nino. Meteorologists and oceanographers have certainly recognized the importance of this question. As well as using models to look to the future, they are examining evidence of ancient El Nino episodes to gauge how liable the phenomenon is to disruption in a changing climate. This work is still in its infancy, but together with the new models of the ocean-atmosphere system under development, it should lead to clearer answers over the next few years.