USC Dana and David Dornsife College of Letters, Arts & Sciences > Blog

March 19, 2013

The Natural History of Catalina Island

Filed under: Uncategorized — admin @ 6:41 pm

Santa Catalina Island, commonly referred to as Catalina Island, lies some 20 miles off the coast of Southern California. The island’s proximity to the mainland and its large size give it an interesting geology and wide biodiversity.

The geology of Santa Catalina Island varies by region. The island’s basement rock, Catalina schist, is a combination of crystalline metamorphic rocks: blueschist, greenschist, and garnet amphibolite.
In the southeast region of the island, a young pluton of quartz hornblende diorite porphyry, dated to about 19 million years ago, intruded into the roughly 200-million-year-old Catalina schist.
Volcanically formed rocks, mainly located in the high center of the island, are the third most abundant form of rocks found on Catalina Island.
Sedimentary rocks such as sandstone are also found on Catalina Island across a wide range of elevations. Relatively young alluvial deposits sit at lower elevations, while other sedimentary rocks cap high peaks. The uplift responsible for these high-elevation sedimentary rocks is evidenced by the stair-like terraces visible along parts of the island.
Useful minerals such as gold, silver, lead, zinc, and steatite (soapstone) exist on Catalina. Interestingly, Native Americans used the soapstone to produce goods such as bowls and pipes.



Terrestrial vegetation
Catalina has a wide variety of plant species due in part to its proximity (20 miles) to North America. When species arrived at Catalina they evolved without predators and with many ecosystem niches to fill. In total there are 606 species of wild plants on Catalina: 421 are native to the island, while 185 were introduced. The island has six endemic plant species, and certain native species, like Malva Rosa, survive only on offshore rocks. “Natural History of the Islands of California” separates Catalina’s plant communities into six groups.
Coastal sage scrub
Coastal sage scrub thrives where fog is frequent and temperatures remain above freezing, which is why it is the dominant community on Catalina. Catalina coastal sage scrub includes plants from the sunflower, snapdragon, broom-rape, mallow, and nightshade families.
Coastal bluff scrub
As the name suggests, coastal bluff scrub is a plant community found along the coastal cliffs of Catalina. Species specific to coastal bluff communities are the Sea Dahlia (Coreopsis gigantea), Nevin’s Eriophyllum (Eriophyllum nevinii), Catalina Crossosoma (Crossosoma californicum), and the Santa Catalina Island Live-forever (Dudleya hassei); interestingly, Dudleya hassei is endemic to Santa Catalina Island.
Island chaparral
The island chaparral community is generally found at higher elevations than coastal sage scrub, and on the northeast slopes of the island. Many of its species, such as island scrub oak, rely on fire as a natural part of their life cycle.
Island woodlands
Island woodlands tend to exist in areas that collect extra moisture, such as canyon floors and north-facing slopes. Catalina woodland species such as the Catalina Ironwood and Catalina Cherry are not in fact trees but gigantic shrubs.
Riparian woodland
Riparian lands, or areas of land near sources of continuous water, have enough moisture to support tree species.  Catalina’s tree species include Black Cottonwood, Blue Elderberry, and Red Willow.
Coastal grassland
Beneath the gigantic shrubs grows a myriad of invasive grass species and, to a lesser extent, native grasses. Invasive grasses and weeds reached Catalina Island via domestic livestock brought to the island by settlers.

Native Animals

In addition to vegetation, Catalina Island is home to a variety of animal species, including more than 50 endemic species – species found only on the island and nowhere else. The endemic animals include five mammals, three birds, and various invertebrates. One of the island’s more famous mammals is the Catalina Island Fox. Although it is the largest endemic mammal on the island, the fox exhibits dwarfism – a decrease in size seen in larger species as a result of limited resources. In the early 2000s the Island Fox population declined sharply due to disease, and the species was listed as endangered. Thanks to breeding and vaccination efforts by the Catalina Island Conservancy, however, fox numbers have increased in recent years, and recovery efforts continue on the island today. Another endemic mammal is the Catalina Beechey Ground Squirrel. Unlike the Catalina Island Fox, the ground squirrel exhibits gigantism – an increase in size that results from few predators and abundant resources. Other native species include the island deer mouse, a scarab beetle, and the Catalina California Quail.

Nonnative Animals

Catalina is also home to numerous nonnative species, largely as a result of the island’s human population and high visitation rates. In the late 1800s and early 1900s, ranchers brought goats and sheep to the island. Their introduction had devastating impacts on the island’s ecosystems as they overgrazed the grasses; these animals have since been removed.

The largest mammal currently living on Catalina Island is the bison, introduced in 1924 when a film crew shooting a movie on the island brought fourteen bison across and failed to remove them after finishing filming. Since their introduction, the bison have grown greatly in number, at one point exceeding 600 animals. Today the Catalina Island Conservancy maintains a population of 150-200 bison, controlling growth by shipping animals off the island when necessary and by administering birth-control shots to the females. Although they remain the largest mammals on the island, Catalina’s bison, like the Island Fox, exhibit dwarfism and are smaller than their mainland relatives.

Other introduced animals include feral cats, descendants of domestic cats released by their owners on the island. In recent years, raccoons have also been accidentally introduced by humans traveling to the island by boat.

These nonnative species threaten the island’s native species and place additional pressure on its resources. The Catalina Island Conservancy’s efforts focus largely on managing the impact of these species on the island.


By Katie Peters & Casey Frost

Works Cited

“Animal Species.” Catalina Island Conservancy. 2009. Web. 19 Feb. 2013.

Cockerell, T.D.A. “Natural History of Santa Catalina Island.” The Scientific Monthly, Vol. 48, No. 4, pp. 308-318.

Schoenherr, Allan A., C. Robert Feldmeth, and Michael J. Emerson. Natural History of the Islands of California. Berkeley: University of California Press, 1999. Print.

Schumann, R. Randall, Scott A. Minor, Daniel R. Muhs, Lindsey T. Groves, and John P. McGeehin. “Tectonic influences on the preservation of marine terraces: Old and new evidence from Santa Catalina Island, California.” Geomorphology, Volume 179, 15 December 2012, pp. 208-224. ISSN 0169-555X, doi:10.1016/j.geomorph.2012.08.012.



The Central Valley Water Project: A Plan Gone Wrong

Filed under: Uncategorized — admin @ 6:39 pm

The Forty-niners who moved to the Great Central Valley during the California gold rush of the 1850s would not recognize the landscape of the Central Valley today. What was once covered in yellow grasslands in the summer and sprawling marshes in the winter and spring was quickly converted to the “largest semicontinuous expanse of irrigated farmland in the world” (Reisner, 1993, p. 335) by the mid-1920s. Today the Central Valley is responsible for 62 percent of California’s $37.5 billion annual agricultural production and more than 20 percent of U.S. food production (Stene, Introduction). Clearly the Central Valley is vital to our food security and our national economy, but the development of this region has come with local environmental and economic costs.


Irrigated Farmland in California’s Central Valley (Photo courtesy of Underwood)



The Central Valley became a hub for irrigation farming with the invention of the centrifugal pump after World War I. Suddenly there was an explosion of water pumping and by the mid-1920s California was the richest agricultural state. With hundreds of gallons of water per minute being pumped from the ground, it was only a matter of time before the water table dropped significantly. That time came at the end of the Great Drought of the 1930s.

In an attempt to save farmers from the catastrophic repercussions of continued pumping at then-current levels, and to protect them from devastating floods, the California state legislature authorized the Central Valley Project (CVP) in 1933, approving a $170 million plan to begin it. The CVP still required further funds, so California turned to the Federal Emergency Administration of Public Works, which approved a $12 million grant for the project, and then to the Rivers and Harbors Act of 1937, whose committee authorized an additional $12 million (Stene, Introduction). Fully funded and now headed by the Bureau of Reclamation, the CVP began its hard path of securing and managing water distribution through the construction of dams, reservoirs, and canals around the Sacramento, San Joaquin, and Stanislaus Rivers. The goal of this program was to alleviate the environmental problem of groundwater depletion and to financially help the small farmers in the area. The plan ultimately backfired: the CVP has exacerbated existing environmental and economic problems in the Central Valley and created new ones.

Under the Reclamation Act, farmers receiving subsidized water were only allowed to own or lease a maximum of 160 acres of land, and they were required to live on the land. Instead of creating new farms, the CVP saved thousands of farms that had gone out of production after running out of water. Though originally intended to ensure that the greatest number of people could benefit from the irrigable land, the acreage limitation was viewed as a government attempt to convert private lands into federal plots; it was eventually changed and never strictly enforced (Odell, 1992, p. 3). Today, a few large companies, including Chevron USA, Tejon Ranch, and Shell, own the majority of the farmland in the Central Valley. These companies hire farmers to work the land, and as a result the small farmers the project was originally intended to benefit are no longer its primary beneficiaries.

The cheap price of the water has affected the repayment plan of the CVP. Since the water is sold at such low prices, the “payments for water and power have not been sufficient even to cover the operation and maintenance costs of the project” (Reisner, 1993, p. 482).


Source: Environmental Working Group, Virtual Flood: CVP Water Is Heavily Subsidized, 2005



The graph shows that only 11 percent of the roughly $1 billion in CVP costs allocated to farmers had been repaid as of 2002. The nation’s richest farmers, residing in the Central Valley, are essentially making their money off taxpayer subsidies rather than the sale of food commodities.
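The arithmetic behind these repayment figures can be sketched in a couple of lines. This is an illustrative toy calculation using only the numbers quoted above, not the EWG’s actual accounting:

```python
# Illustrative repayment math from the figures cited above: farmers'
# allocated share of CVP costs of roughly $1 billion, of which about
# 11% had been repaid as of 2002.
farmers_cost = 1_000_000_000   # dollars (approximate, from the EWG graph)
fraction_repaid = 0.11

repaid = farmers_cost * fraction_repaid
outstanding = farmers_cost - repaid

print(f"Repaid by 2002: ${repaid:,.0f}")       # $110,000,000
print(f"Outstanding:    ${outstanding:,.0f}")  # $890,000,000
```

On these numbers, roughly nine dollars in ten of the farmers’ share of the project costs remained unpaid, which is the taxpayer subsidy the graph illustrates.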

The cheap water encouraged agricultural expansion, creating even more pressure on the water table. Even though the CVP was delivering more surface water throughout the San Joaquin Valley, the pressure on the aquifer remained: half of the agricultural water being pumped came from groundwater sources, and farmers were still pumping water from their personal wells. This unsustainable water use persists because low prices make any effort at conserving water comparatively expensive, so it is financially beneficial for farmers to pump all the water they can. According to the Environmental Working Group (2005), “while the average acre of U.S. farmland gets 2.48 acre-feet of water each year, the average acre in California gets 36 percent more, or 3.37 acre-feet”.
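The 36 percent figure in the EWG quote checks out with one line of arithmetic:

```python
# Verify the EWG comparison: average U.S. vs. average California irrigation,
# in acre-feet of water per acre per year.
us_avg = 2.48
ca_avg = 3.37

extra = (ca_avg - us_avg) / us_avg
print(f"California receives {extra:.0%} more water per acre")  # 36%
```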

The inefficient use of water has led to shortages for wildlife and urban customers, along with salinity and toxicity problems. With such a large agricultural demand for water, there is less for those in urban areas and for the wildlife populations reliant on the rivers. As the rivers dry up and artificial facilities and diversions are constructed, the ecosystems are being altered. Fish populations have been severely affected: of the 29 native fish species, two are extinct, three are endangered or threatened, three are considered “species of special concern”, five are rare, and nine are declining. The delta smelt, an endangered indicator species, was being sucked up and killed by the pumps delivering water from the San Joaquin River to the Central Valley, so the pumping was stopped. Today, less than 10 percent of the original wetlands remain, and 20 percent of the wintering waterfowl in the US depend on them (Congressional Budget Office, 1997, p. 333).

Because water is so cheap, the naturally arid land that makes up the Central Valley continues to be farmed even though it is poorly suited to agriculture. As a result, there are drainage and toxicity problems. One example is the poisoning of waterfowl at the Kesterson National Wildlife Refuge, caused by farms discharging selenium that ended up in the Kesterson Reservoir. Heavy irrigation and the lack of proper drainage have also increased the amount of salt in the soil, and today high salinity is among the most underrecognized problems in California.

The Central Valley has been transformed into a regional garden of fruits and vegetables and is referred to as “America’s Breadbasket” because of its vital agricultural role. The area should never have been farmed to the extent it is today, but it was, and the US now depends on it. The only way to ensure that it continues to be profitable is to alter farming techniques so that we do not further degrade the soils, alter the ecosystems, or deplete our water sources. Farmers may not be able to afford to conserve water at the cheap prices they purchase it for; however, they must begin to think in the long term. Continuing to use water unsustainably will be detrimental to them, to the local wildlife populations of the Central Valley, to urban users, and to the country as a whole.

 By: Iñaki Pedroarena-Leal and Kelsey Valentine


Congressional Budget Office. (1997, August). Water Use Conflicts in the West: Implications of Reforming the Bureau of Reclamation’s Water Supply Policies. Retrieved March 14, 2013.

Environmental Working Group. (2005, March). Virtual Flood: CVP Water Is Heavily Subsidized. Retrieved March 14, 2013.

Odell, D. (1992, December). The Transfer of the Central Valley Project. Environs, 16, 1-7. Retrieved March 14, 2013.

Reisner, M. (1993). Cadillac Desert. New York: Penguin Books.

Stene, Eric A. “Introduction.” The Central Valley Project. United States Bureau of Reclamation. Retrieved March 12, 2013.

Underwood, A. “In the Central Valley, Organic Farming Is Slowly Taking Hold.” Grow Switch News Blog. Retrieved March 12, 2013.



Fire in Southern California

Filed under: Uncategorized — admin @ 6:10 pm

Arid southern California, an area highly susceptible to fire, is one of the nation’s more concentrated centers of wealth and home to tens of millions of people. Minimizing the loss of life and property to fire is therefore a priority. Various methods have been tried and tested, yet a uniform fire-control policy has yet to emerge. Public opinion, rather than science, remains one of the driving forces behind fire policy, partly because scientific research has struggled to quantify the effects of the different methods used. This post looks at two methods that have been used in southern California and discusses the benefits, consequences, and effectiveness of each.

One method historically used to reduce fire risk is prescribed burning, a fire-management strategy in which humans purposefully burn an area under controlled conditions. The premise is that areas with reduced fuel inhibit later wildfires. While it is well established that fires spread less rapidly and less intensely where fuel is low, the regional scale at which fuel-reduction practices are effective is far less certain, so quantifying the effect is essential. One such quantification is “leverage”: the unit-area reduction in subsequent wildfire for each unit area treated. For example, studies of Australian forests dominated by eucalyptus trees found that three to four hectares of prescribed burning reduced subsequent wildfire burning by about one hectare, a treatment-to-reduction ratio of roughly 3:1.
Recent research suggests, however, that prescribed burning is generally ineffective at decreasing the area burned by wildfires in southern California. Price et al. (2012), taking changing weather patterns into account, studied the relationship between the area burned by wildfires and the area burned purposefully by humans in seven southern California counties dominated mostly by shrub and grassland fuel. Tropical savannas have high leverage, and Australian eucalypt forests have some, but Price et al. found that the seven counties had none whatsoever. In other words, prescribed burning, while it may be a useful tool elsewhere, is ineffective at reducing the amount of land burned by wildfires in southern California. Moreover, fire patterns and fuel age vary greatly from biome to biome, so data and conclusions drawn outside southern California are unlikely to apply to it.
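The leverage metric reduces to a simple ratio. A minimal sketch, using the illustrative figures from the text rather than data from Price et al. (2012):

```python
# Leverage: hectares of subsequent wildfire avoided per hectare of
# prescribed burning. Values below are the rough figures quoted in the text.

def leverage(wildfire_reduction_ha: float, area_treated_ha: float) -> float:
    """Unit-area reduction in wildfire per unit area of treatment."""
    return wildfire_reduction_ha / area_treated_ha

# Australian eucalypt forests: ~1 ha of reduction per 3-4 ha treated.
print(leverage(1, 3))  # ~0.33
print(leverage(1, 4))  # 0.25

# Southern California shrublands per Price et al. (2012): effectively zero.
print(leverage(0, 3))  # 0.0
```

A leverage of zero means that no amount of prescribed burning measurably reduces the area later burned by wildfire, which is the study’s central finding for the region.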

Another strategy long employed to protect life and property in southern California is fire suppression. Today it is widely believed that this strategy, in its attempt to reduce the frequency and severity of fires, has actually led to more intense ones. According to the fine-grain age patch model, fire suppression causes more high-intensity fires than would occur naturally: suppression disrupts the natural fuel assemblage, and the old-age fuels that build up as a result encourage larger fires. However, a recent study by Keeley et al. (2009) examined this hypothesis and found it unsupported. Keeley et al. observed that the last four mega-fires burned through a mix of fuel ages, implying that old-age fuel may not be responsible for larger fires. In essence, the idea that fire suppression has changed the fuel assemblage in ways that increase the severity of wildfire in southern California does not hold up.

The new studies show that regional-scale patterns of fire extent in southern California are not influenced by fuel age, so using prescribed burns as a fire-treatment method does very little to reduce the area of wildfires. This is not to say that a wildfire encountering a recently burned patch would not be slowed in its path and growth. Rather, prescribed burns are most efficiently focused on areas or assets that need protection, with fire suppression used in parallel to ensure that property losses are minimized in the event of a wildfire.

By: Farzad Bozorgzad and Amanda Alvarez

Works Cited:

Price, Owen F., Ross A. Bradstock, Jon E. Keeley, and Alexandra D. Syphard. “The impact of antecedent fire area on burned area in southern California coastal ecosystems.” Journal of Environmental Management 113 (2012): 301-307.

Keeley, Jon E., and Paul H. Zedler. “Large, High-Intensity Fire Events in Southern California Shrublands: Debunking the Fine-Grain Age Patch Model.” Ecological Applications 19 (2009): 69-94. Web.

March 13, 2013

The War Over Water: Owens Valley and the Los Angeles Aqueduct

Filed under: Uncategorized — admin @ 6:47 pm

As the 1800s drew to a close, the burgeoning population of arid Los Angeles began to outgrow its water supply. By 1900, the city’s population had doubled since 1890 and grown tenfold since 1880 (Hoffman 1977). William Mulholland, the superintendent of water for Los Angeles, recognized that water was the limiting factor of the city’s growth. He took note of the quality, quantity, and proximity of the water possessed by the Owens Valley, and realized that Los Angeles desperately needed that supply. Along with Fred Eaton, a former mayor of L.A., and Joseph Lippincott, the regional engineer of the U.S. Bureau of Reclamation, Mulholland began the process of acquiring water from the Owens Valley. Using deception, subterfuge, bribery, and a strategy of divide-and-conquer, Los Angeles essentially stole all the water it needed while the farmers of the Valley received barely a fraction of the fair value for their water rights.

Figure 1. Map showing the 223-mile path of the Los Angeles Aqueduct (Hoffman 1977)

At the beginning of the 20th century, the farmers and ranchers residing in the Owens Valley had their own plans for the river’s water and were seeking federal funding from the Bureau of Reclamation for a public irrigation project, which would have blocked Los Angeles from diverting the water (Hoffman 1977). Determined to keep the city’s Owens Valley plans alive, Eaton used his friendship with Lippincott to gain access to inside information about water rights, which he used to influence Bureau decisions in Los Angeles’s favor, while Mulholland focused on manipulating public opinion by misrepresenting the amount of water the Owens Valley would provide and by lying to the Valley’s residents about how much of the water would be diverted. Eaton then began buying up land in the Owens Valley under the false pretense that it would be used for the reclamation project, and by 1905 he had purchased enough property to secure the land and water rights needed to block the Bureau’s irrigation project and build the aqueduct (Libecap 2009).

Mulholland needed a way to store the surplus water from the aqueduct, especially because he feared that the residents of Owens Valley might claim back any water that went unused. However, having underestimated the cost of the aqueduct, Los Angeles couldn’t afford to also build a large reservoir. In fact, there wasn’t even enough money to build the aqueduct itself. Mulholland found a solution to both these issues in the San Fernando Valley. If the aqueduct traveled through the Valley on its way to the city, any water dumped in the Valley would drain into the L.A. River and its broad aquifer, creating a large, convenient, non-evaporative pool for the city to tap—essentially, it would become a big, free storage site. Adding the Valley and its residents to Los Angeles would also provide a means to fund the aqueduct by creating a new tax base. Thus, with the annexation of the San Fernando Valley, the Owens Valley project would finally be ready to move from conception to reality.

Construction of the Los Angeles Aqueduct began in 1908 and was completed in 1913. The enormous project, directed by Mulholland, employed more than 2,000 workers and spanned a distance of 223 miles. Once the aqueduct was completed, Los Angeles began to prosper and grow at an unprecedented rate as homes and businesses spread across the basin. The expanding population combined with demand from the San Fernando Valley forced Mulholland to take all available water from the Owens Valley, resulting in the rapid depletion of its water supply.

Figure 2. The Los Angeles Aqueduct took five years to complete and represented one of the greatest engineering feats of its time (Los Angeles Times)


Much of the land in the San Fernando Valley had been previously bought up for low prices by a syndicate of investors, who had inside knowledge of the plan to incorporate the Valley into the city and run the aqueduct through it. Unbeknownst to the public, the San Fernando Valley would be converted to agriculture and irrigated by water from the Owens Valley, drastically increasing the productivity and value of the land. This infuriated the farmers of Owens Valley, who were being robbed of their precious water to support agriculture in Los Angeles in addition to residential use.

By 1924, Owens Lake and about fifty miles of the Owens River were completely dry. Conditions were so bad that the farmers rebelled, culminating in the use of dynamite to blast out part of the aqueduct and return water to the river. The conflict between the farmers and the city of Los Angeles escalated until 1927, when the Inyo County Bank collapsed and brought down the Valley’s economy with it. Los Angeles officials continued to purchase private land holdings and their water rights; by 1928, the city owned 90 percent of the water in the Owens Valley and agriculture in the region had been reduced to a shadow of its former glory (Libecap 2009). It would be an understatement to say that Los Angeles won this water war, which will forever serve as an example of how economic demand can lead to the unsustainable, and sometimes unfair, exploitation of natural resources.


This post was authored by Katherine Moreno ’13 BA Environmental Studies and Miller Zou ’13 BS Environmental Studies  ’14 MA Environmental Studies.

Works Cited

Hoffman, Abraham. 1977. Origins of a Controversy: The U.S. Reclamation Service and the Owens Valley-Los Angeles Water Dispute. Arizona and the West 19.4: 333-46.

Libecap, Gary. 2009. Chinatown Revisited: Owens Valley and Los Angeles-Bargaining Costs and Fairness Perceptions of the First Major Water Rights Exchange. Journal of Law, Economics, and Organization 25.2: 311-38.

Hard water, Heavy water, Light water, Soft water

Filed under: Uncategorized — admin @ 6:38 pm


“There are dozens and dozens of nanotechnologies currently in development that will impact water. And for every amazing nanotech solution, there are mirroring developments in biotech. For every biotech solution, there’s a wastewater recycling solution equally as exciting. But many believe the most promising line of development isn’t even in the water space; it’s in the metatechnologies surrounding this space.”

-Peter Diamandis and Steven Kotler, Abundance p. 95, 2012


The continued failure to meet basic human needs for water calls for the implementation of soft-path water solutions. Freshwater management solutions fall into two categories: the hard path and the soft path. The hard path focuses on the construction of large infrastructure, such as dams, aqueducts, pipelines, and centralized treatment plants, to meet human demands. It carries extreme economic and environmental costs that can be minimized by moving toward soft-path solutions. Environmental impacts of the hard path include a decline in freshwater fauna and disruption of the hydrological cycle, which leads to multiple other problems (e.g., nutrient depletion and declining wildlife populations). Moreover, the annual cost of meeting basic human water needs through hard-path solutions is systematically higher than through soft-path solutions (Gleick).

The soft path takes into account both social and environmental concerns (Gleick). While the hard path focuses largely on increasing water supply, the soft path heeds the importance of reducing the demand for water as well (Brooks). Furthermore, the hard path encourages the exploitation of natural resources through its narrow focus on extracting as much water as possible. The soft path, in contrast, is more holistic because it accounts for the effects of natural resource extraction, use, and disposal on ecosystem health. It allows human water consumption to go hand in hand with environmentally sustainable economic development through the use of “human ingenuity rather than resource-intensive inputs to improve natural resource use patterns” (Brooks).

Artificially low water prices have led to minimal integration of demand management into water management solutions. When one considers the opportunity costs, however, a reduction in water demand is a source of water in itself: more efficient freshwater use means more available freshwater. Furthermore, reducing water demand is much more time-effective than any hard-path solution (Brooks). Overall, the soft path complements centralized physical infrastructure with lower-cost community-scale systems, decentralized and open decision-making, water markets and equitable pricing, efficient technology, and environmental protection (Gleick). Below, we discuss a few examples of these components of soft-path freshwater solutions.

Not only are surface sources and aquifers being drawn down faster than they are replenished, but roughly a billion people still live on untreated water outside cities and their infrastructure. Fortunately, a variety of social forces are taking heed of claims that the Earth is entering an era of heightened water scarcity, and the business model of social entrepreneurship is catching up with developing countries’ water crises. What began with a brand of “high-status” Ethos water bottles in American coffee shops has helped inspire creativity where development financing was once thought insurmountable. Starbucks’s revenues from Ethos (only $10 million) are dramatically insufficient to address the world’s water problems, but thanks to that precedent, and to entrepreneurs in areas like energy and agriculture, those historic impossibilities are being used as justification to renovate the entire water-distribution paradigm.

In their 2012 book Abundance, X Prize and Singularity University founder Peter Diamandis, along with author Steven Kotler, describe four key forces that can produce order-of-magnitude improvements in basic service industries like water: Moore’s Law, do-it-yourself inventors, technophilanthropists, and the “rising billion.” Economically, these forces translate into powerful technical capabilities, a newly tapped pool of high-skilled workers, vast pools of tech-savvy financing, and markets that have remained unsaturated for decades: a perfect storm for soft-path water development. Nor are Diamandis and Kotler alone in their forecasts; they are joined by economists like Jeremy Rifkin, author of The Third Industrial Revolution, and technologists like Ray Kurzweil, inventor of print-to-speech reading technology for the blind.

A quintessential example is the Lifesaver bottle invented by Michael Pritchard. It employs a filtration membrane with pores only 15 nanometers wide and can filter six thousand liters before the cartridge must be replaced. The device costs roughly $0.05 per day to run, and Pritchard has become famous for arguing that the Millennium Development Goals for water might be met for only $8 billion. Buttressing his claims is a thriving global nanotechnology industry, estimated to encompass $1 trillion in investments by 2015 (Abundance, p. 93). Future iterations of Pritchard’s device could feature additional nanotech components, such as particles with an affinity for heavy metals and arsenic, and scaled-up versions of these nanotechnologies could be commercialized more widely as desalination processes. NanoH2O, a Los Angeles company, is revamping reverse osmosis with a stated goal of creating 70% more water while using 20% as much energy. With such technology deployed, desalination would be unlikely to remain confined to energy-abundant regions like the Middle East.

It is easy to dismiss the techno-optimistic paradigm as a few anecdotes; however, Diamandis, Kotler, Rifkin, and Kurzweil envision a radical transformation of both technology and society as it grows more decentralized and more interconnected at the same time. "Smart grids" for water distribution to farmers have been implemented in Spain and, at the same time, are being researched by giant technology corporations like Hewlett-Packard (Abundance, p. 95). These stand in sharp contrast to the construct-at-all-costs mentality epitomized by Pat Brown's campaign for the California Aqueduct. Social entrepreneurship, Moore's Law, do-it-yourself invention, technophilanthropy, and demand from a rising billion people are not confined to a single industry or country. While ecologically minded activists express concern for the standards of living of "future generations," the world is well on its way to realizing massive efficiency and production gains within the lifespans of the current generation (Rifkin).

This post was authored by Sean Hernandez ’13 BA Economics and BA Environmental Studies, and Nazia Gangani ’13 BS Environmental Studies with a Minor in Business.


Brooks, David, and Susan Holtz. "Water Soft Path Analysis: From Principles to Practice." Water International 34.2 (2009): 158-69. Web.

Diamandis, Peter, and Steven Kotler. Abundance: The Future Is Better Than You Think (2012): 90-98. Print.

Gleick, P. H. "Global Freshwater Resources: Soft-Path Solutions for the 21st Century." Science 302.5650 (2003): 1524-528. Print.

Rifkin, Jeremy, et al. “The World in 2025: Ways to the Future.” European Business Forum 29 (Summer 2007): 15-27. Web.

April 26, 2012

A Tale of Two Ports

Filed under: Uncategorized — admin @ 6:25 pm

As part of our 495 experience we were able to see the Ports of Los Angeles and Long Beach on the way to Catalina Island. Despite a lack of natural beauty, the sheer size of the ports, from the cranes to giant tankers and racks on racks on racks, can be awe-inspiring. Los Angeles would not exist in its current form if the LA River had not been supplemented with "sweet water" from the Owens Valley. Arguably, the port has had a similar effect on the development of Los Angeles, something early city planners accounted for. But at what cost to the residents and environment surrounding the ports has this economic development come?

Ports can have large effects on air quality and marine ecosystems due to heavy industrial use of the area. Both ships and trucks travel to and from the ports regularly, and their emissions are much less regulated than those of personal vehicles in California.

(Mueller et al., 2011)

These types of emissions, especially the ultra-fine particulates, are known to cause various cardiac and respiratory illnesses (Dominici et al., 2006). The gaseous emissions are well-known contributors to ground-level ozone, acid rain, and the greenhouse effect. Unfortunately, air pollution is difficult to contain within the confines of the port, and it affects residents in San Pedro, Wilmington, and West Long Beach (Waldie, 2012), residents who live along highways with increased traffic due to port trade, and the workers of the port itself.

Ports also contribute a whole range of contaminants to the marine environment. Ships introduce contaminants by, among other means, using anti-fouling chemicals, spilling oil, gas, or diesel into the water during refueling and operation, or shedding paint as it breaks down. Many of these pollutants affect the health of marine organisms and humans' enjoyment of the resource (people recommend against swimming at the beach near downtown Long Beach). A breakwater is an important structure near ports and harbors, as it reduces wave action, allowing for easier docking and unloading of ships (E.B., 2012). Unfortunately, these structures intensify marine pollution by reducing water exchange between port and ocean, which has been demonstrated to concentrate pollutants. "This factor [limited water exchange], combined with sewage runoff from the coast and intensification of activity in the commercial port, accounts for significant water eutrophication and accumulation of pollutants in bottom sediments" (Selifonova, 2009).

Another significant source of port pollution, although not as common in the United States as in developing nations, is the business of ship building, repair, and recycling. It felt particularly relevant because, the last time I returned from Catalina, the smell of burning steel was wafting the Miss Christi's way from Al Larson's Boat Shop. Pollution from this activity is well documented (Chang et al., 2010; Coffin, 2003), and measures are taken to reduce its effects where the activity is practiced. For example, the boats at Al Larson's were dry-docked and their ramps were surrounded with floating booms, although somewhat haphazardly. Were there to be a rain event, however, most of the wastes would be washed into the water, and the booms would limit pollution that floats but do little for heavy metals, among the most common pollutants from boat repair, many of which settle into sediment or dissolve rather than remain on the surface (Maata & Singh, 2008).

These factors are related to the volume of traffic a port receives. Both the Port of Los Angeles and the Port of Long Beach have historically been among the highest-volume ports in the United States.

While the economy has certainly been stimulated by this activity, Los Angeles and Long Beach have endangered the health of their own and surrounding communities, reduced recreational opportunities, and altered the ecology and coastal morphology of the region.

In an attempt to mitigate the environmental impacts associated with ports, the San Pedro Bay Ports have developed plans focusing on reducing pollution. The Port of Long Beach adopted a Green Port Policy in January 2005, while the Port of Los Angeles initiated an Environmental Management System in 2003. These environmental management policies aim to engage the community, the port staff, and the customers, all while promoting sustainability, employing the best available technology and practices, monitoring performance, and complying with all environmental regulations. Some of the specific goals of the Green Port Policy are protecting wildlife, reducing harmful emissions, improving water quality, and removing and treating contaminated soils and sediments in the harbor.

One of the biggest accomplishments achieved under these environmental management plans is the Clean Air Action Plan. This plan focuses on reducing emissions from all five main port sources: trucks, vessels, cargo handling equipment, harbor craft, and rail. Highlighted in these plans are two long-term goals:

  1. By 2014, reduce port-related emissions by 22 percent for Nitrogen Oxides, 93 percent for Sulfur Oxides, and 72 percent for Diesel Particulate Matter.
  2. By 2023, reduce port-related emissions by 59 percent for Nitrogen Oxides, 92 percent for Sulfur Oxides and 77 percent for Diesel Particulate Matter.

The four main programs initiated under the Clean Air Action Plan are the Technology Advancement Program, the Alternative Maritime Power program, the Green Flag Program, and the Clean Trucks Program. Under the Technology Advancement Program, technologies that have a high probability of reducing pollutants are researched and tested for commercial viability. One of these new technologies is the hybrid tug boat, which pulls larger vessels and container ships into dock so that they do not have to run their larger, higher-polluting engines. Additionally, through the use of Alternative Maritime Power, container ships can use shoreside power at the terminal to unload cargo rather than continually running their energy-intensive diesel engines.

Hybrid Tug Boat System

The Clean Trucks Program is attempting to improve air quality in the community and for the greater Los Angeles area by easing into a ban on older, dirtier trucks. Trucks manufactured before 2007 are not permitted to operate within the ports. Both Ports expect that the use of newer and more efficient trucks will eliminate a large percentage of air pollution. The last Clean Air Action program is the Green Flag Program, which focuses on vessels coming into the Ports of Los Angeles and Long Beach. Basically, this program reduces emissions within a 40-mile limit of the ports by restricting ships from traveling faster than 12 knots. A speed reduction means a reduction in energy used by the ships, which ultimately reduces fuel consumption and therefore pollutant emissions as well.

Since the adoption of the Clean Air Action Plan in 2006, both Ports have compiled an Emissions Inventory to calculate emission levels by year. From 2005 to 2010, all of the main air pollutants from port sources were reduced. At the Port of Long Beach, in addition to a 72% decline in diesel particulates from 2005 to 2010, sulfur oxides fell by 73%, smog-forming nitrogen oxides lessened by 46%, and greenhouse gases dropped by 18%. At the Port of Los Angeles, diesel particulates declined by 39% from 2009, NOx emissions were down by 25%, and SOx emissions fell by 45%.

These numbers reflect not only a significant change in port impacts but an overall change in goals and progress for the future. In previous decades the surrounding communities had to bear the burden of port pollution, and they found it difficult to challenge the San Pedro Bay Ports because of the ports' importance to the regional and national economy. Now, and looking into the future, the San Pedro Bay Ports have promised to work together with the community to clean up their acts and encourage cleaner proposals and development.

Authored by Dan Kasang, '12, who is graduating with a BS in Environmental Studies, and Patrick Talbott, '12, who is graduating this spring with a BS in Environmental Studies and is pursuing a Progressive Master's in Environmental Studies and a certificate in Sustainable Cities.

Works Cited

Chang, Y-C. et al. 2010. Ship Recycling and Marine Pollution. In Marine Pollution Bulletin. 60: 9. Pages 1390-1396.
Coffin, B. 2003. Ghost Fleet Underscores Ship Recycling Hazards. In Risk Management. New York. 50: 12. Page 10.
Dominici, F. et al., 2006. Fine Particulate Air Pollution and Hospital Admission for Cardiovascular and Respiratory Diseases. In Journal of the American Medical Association. 295: 10. Pages 1127-1134.
"Breakwater." 2012. In Encyclopædia Britannica. Web.
Maata, M. & Singh, S. 2008. Heavy Metal Pollution in Suva Harbor Sediments, Fiji. In Environmental Chemistry Letters. 6: 2. Pages 113-118.
Mueller, D. et al. 2011. Ships, ports and particulate air pollution – an analysis of recent studies. In Journal of Occupational Medicine and Toxicology. 6: 31.
Selifonova, J. P. 2009. The ecosystem of the Black Seaport of Novorossiysk under conditions of heavy anthropogenic pollution. In Russian Journal of Ecology. 40: 7. Pages 510-515.
Waldie, D. J. 2012. Competition and Environmental Risks in Ports’ Future. In KCET’s SoCal Focus Blog. Web.
2010 Update San Pedro Bay Ports Clean Air Action Plan. Publication. San Pedro Bay Ports. Web. 5 Mar. 2011.
Braathen, Nils Axel. Environmental Impacts of International Shipping: The Role of Ports. Paris: OECD, 2011. Print.
“The Port of Los Angeles | Maritime.” The Port of Los Angeles: America’s Port. City of Los Angeles, 2012. Web. Feb.-Mar. 2012.
"San Pedro Bay Ports Clean Air Action Plan – Emissions Inventories." San Pedro Bay Ports Clean Air Action Plan. Ports of Los Angeles and Long Beach, 2012. Web. Mar. 2012.



Public Transportation Transformation in Southern California and the Environmental and Health Problems it has Caused

Filed under: Energy,Los Angeles Politics,Pollution — admin @ 6:15 pm

Los Angeles once had a thriving public transportation system, consisting mainly of electric streetcars owned and operated by the Pacific Electric Company. Pacific Electric's trains branched out from the heart of Los Angeles for a radius of 75 miles to San Fernando, San Bernardino, and Santa Ana, making it (at the time) the world's largest interurban electric railway system (see the Pacific Electric Railway picture for details). Snell argues Pacific Electric is responsible for the manner in which Los Angeles is geographically sprawled today. The electric railways were first constructed in 1911, and the system "established traditions of suburban living long before the automobile arrived" (Snell).

In 1940, General Motors (GM) purchased $100 million worth of the Pacific Electric system under the auspices of Pacific City Lines (a bus company backed by GM and Standard Oil of California). In 1944, GM and Standard Oil tasked American City Lines (also a GM affiliate) with motorizing Los Angeles, whereby American City Lines purchased the Los Angeles Railway (the local electric streetcar system), scrapped the electric transit cars, tore down power transmission lines, took out tracks, and established a system of buses. These buses were built by GM and ran on Standard Oil fuel (Snell).

GM had the ability to do this because of its serious influence throughout the United States. At the time, there were the "Big Three" car companies: GM, Chrysler, and Ford. GM, however, had by far the most power of them all. Snell argues Chrysler and Ford depended greatly on GM for the supply of various parts that were crucial to their automobiles. GM, Ford, and Chrysler at the time annually contributed around $14 million to lobbyists for the promotion of automotive transportation; their leading rivals could only afford about $1 million to lobby for rail transit. The magnitude of their sales, the number of American employees, government revenue from corporate taxes, and the near monopoly the Big Three held enabled them to wield serious political influence. The Big Three saw public transportation as standing in the way of selling cars: each public transportation vehicle carried up to 50 passengers per trip who might otherwise have purchased automobiles (Snell). It made sense, then, for the Big Three to use their power to move the United States toward personal automobiles.

GM had also built a solid grasp on city bus production; since GM already manufactured diesel engines, buses were an easy transition. In the 1920s, when the automobile market was saturated, GM expanded into other types of transportation, mainly city buses. Snell states, "Beginning in 1932, [GM] undertook the direct operation and conversion of interurban electric railway and local electric streetcar and trolley bus systems into city bus operations." GM formed an agreement with the Greyhound Bus Corporation, placing many GM executives on Greyhound's Board of Directors and aiding Greyhound financially; until 1948, GM was the single largest shareholder in the Greyhound Corporation (Snell). In 1928, Greyhound announced its intention to convert commuter rail operations to intercity bus services. In 1936, GM came together with Greyhound, Standard Oil, Firestone Tire, and a parts supplier to form National City Lines (intercity bus transportation). By 1939, GM and Greyhound had succeeded in converting electric streetcar lines to National City bus lines in Pennsylvania, New York, and St. Louis, among other places (Snell).

Interestingly enough, GM realized in the 1950s that it made more money selling cars than buses; 10 times more, to be exact (Snell). Buses also have higher operating costs, given that "diesel buses have 28 percent shorter economic lives, 40 percent higher operating costs, and 9 percent lower productivity than electric buses" (Snell). Thus, GM actually had an incentive to decrease bus ridership. Buses, moreover, are noisy, produce diesel smoke, and are slower than electric rail cars. Thus, Snell argues, the move to diesel buses may have had the long-term effect of selling more GM cars: public transportation was no longer a desirable option, so people purchased personal automobiles.

Slater, however, contradicts Snell's argument. Slater claims buses would have replaced streetcars regardless of GM's intervention. He argues that by 1944 bus lines were already carrying as many passengers as electric streetcars (58). In addition, he states, Pacific Electric had bus operations for public transit as well. However, he misses the point that the urban sprawl of Los Angeles was created by the electric railway system, and was thus perfectly suited to depend on it.

Regardless of which side one takes in the controversy, in 20/20 hindsight it is clear that public electric streetcar transportation would most likely have been the healthier option for the residents of Los Angeles. Traffic congestion and the number of cars on Los Angeles freeways and streets cause a serious amount of pollution that is damaging to human and environmental health.

Los Angeles is one of the largest cities in the nation in terms of population, all of whom need transportation. Transportation, however, encourages further development and settlement, as we saw with the direct correlation between urban sprawl and the extension of the Pacific Electric railway system. This extension can have positive influences on the economy through the growth of business and the transportation of goods, but it comes with a cost. Freeways have direct impacts on humans and their environment, ranging from health concerns to the disruption of ecological communities.

The construction of freeways can displace residents and small business owners. Local communities fight against freeways near their homes because they can bring down property values through noise, air pollution, and an overall loss of quality of life. Freeways can drastically alter the native landscape and ecological community. The loss of habitat can have a direct impact on the ecosystem and alter the genetic makeup of a species by separating populations, which can result in a loss of biodiversity, susceptibility to disease, and extinction. The construction of many freeways has resulted in the loss of wetlands and/or the contamination of waterways essential to a community's water supply, ultimately contributing to the decline in ocean water quality through surface runoff. Freeways also directly affect air quality through mobile air pollution, contributing to climate change, smog, and the overall air quality of the region. All these factors play a role in why stakeholders vehemently fight for their right to be heard in the transportation planning process.

An example of stakeholder involvement in transportation planning, specifically in regard to a freeway's environmental impact on the surrounding region, is the I-710 freeway that connects the two busiest ports in the United States, Long Beach and Los Angeles, to the rest of Southern California. The Ports of Long Beach and Los Angeles import 40% of all U.S. goods. Due to the mass movement of goods and an increasing volume of traffic, the environmental and health challenges facing the area are substantial. In 2005 the I-710 Corridor Project Study was commissioned to examine these challenges and to look for ways to improve traffic congestion and enhance the quality of life for residents and communities of the surrounding area. The findings of this report were staggering, and Los Angeles has since attempted to reduce the environmental and health risks the findings demonstrated.

The I-710 passes through 15 communities with 1 million residents; 70% of these residents belong to minority, low-income communities. These communities persistently exceed national air quality standards, largely due to the mass transit of goods from the ports to the rest of the state and country. One small example: diesel emissions, the report stated, caused 2,000 premature deaths.

In 2009, the American Lung Association identified Los Angeles as the most polluted city in the nation by ozone and particulate levels. Besides improving traffic congestion through the possible widening of lanes, building of tunnels, elevated ramps, and other infrastructural development, the city must also take into consideration the already damaged health of the communities. As mentioned earlier, there are a number of stakeholders in this type of project, and for the past three years the city has been trying to work with communities and local organizations to identify pollution problems and resources to solve them. This is an ongoing concern, and while the current focus is on the I-710, these problems apply to all highways.

This post was written by Jasmine Davis, ’12 who is graduating this spring with a BA in Environmental Studies, and Elise Fabro who is graduating this spring with a double major in Environmental Studies & Political Science, and she is pursuing a progressive Master’s in Environmental Studies.

Works Cited

"Environmental Justice: Los Angeles Area Environmental Enforcement Collaborative | Pacific Southwest, Region 9 | US EPA." US Environmental Protection Agency. N.p., n.d. Web. 25 Apr. 2012.
Goffman, Ethan. “Highways and Environmental Impact Issues.” CSA. N.p., n.d. Web. 25 Apr. 2012.
Slater, Cliff. "General Motors and the Demise of Streetcars." Transportation Quarterly 51.3 (1997): 45-66. Print.
Snell, Bradford C. “A Market Structure as the Determinant of Industry Conduct and Performance.” American Ground Transport. CarBusters, Mar. 2001. Web. 25 Apr. 2012.

April 20, 2012

A New Perspective on Clean Energy

Filed under: Energy — admin @ 9:07 pm

We want to propose a new perspective on "clean energy." As environmental studies majors, we have seen the impacts of fossil fuels and are often the first to advocate clean energy policies. But what are the true ramifications of what we advocate? Alex Epstein is the founder and director of the Center for Industrial Progress. He specializes in the energy debate and takes a fundamentally opposite view to the general "environmentalist" perspective. In his article "Four Dirty Secrets about Clean Energy," Epstein seeks to expose supposed truths about so-called "clean energy" and clean energy policy. While it is not necessarily new information, his points provoke some thought about the ultimate consequences that come with clean energy policies.

Epstein's first Dirty Secret is "If 'clean energy' were actually cheaper than fossil fuels, it wouldn't need a policy." Epstein quotes various clean energy proponents, like Al Gore, who claim that renewable energy sources are ultimately cheaper than fossil fuels. They say that while the initial implementation would be expensive, in the long run renewables would provide infinite amounts of energy. For instance, he cites the often-quoted fact that enough sunlight falls on the face of the Earth every forty minutes to satisfy our energy needs for a full year. If we could harness such energy, it would be "free forever." These same proponents argue that as fossil fuel supplies deplete, their prices will be driven higher and higher. Epstein argues that these things aren't true. He says that harnessing all the sunlight that lands on the Earth is nowhere close to feasible and would need to be implemented on a scale so massive it could never be achieved. He also says that if supplies of fossil fuels were diminishing as rapidly as claimed, then people in the energy market would be making fortunes in the futures markets. Clean energy proponents say that fossil-fuel companies are short-sighted and don't realize the imminent shortage we will soon face. Epstein argues that this is false, and that these companies spend billions of dollars on research to ensure the viability of their companies and industries.
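The "forty minutes of sunlight" figure is easy to sanity-check with back-of-envelope numbers. The two inputs below (solar power intercepted by the Earth, and world annual primary energy use circa 2010) are our own rough, order-of-magnitude assumptions, not figures from Epstein or his critics:

```python
# Rough illustrative figures (assumptions for this sketch):
SOLAR_POWER_W = 1.74e17      # ~174 petawatts of sunlight intercepted by Earth
ANNUAL_DEMAND_J = 5.5e20     # ~550 exajoules of world primary energy per year

# How many minutes of total global insolation equal one year of demand?
minutes_to_match_demand = ANNUAL_DEMAND_J / SOLAR_POWER_W / 60
print(round(minutes_to_match_demand))  # ~53 minutes
```

The result lands on the order of an hour, so the popular "forty minutes" figure is the right order of magnitude; the dispute in the text is about harnessing that energy, not about its raw quantity.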

There are many misconceptions within the environmental industry. One of the primary flaws of clean energy that is often overlooked is the financial feasibility of such sources. Analyzed from an economist's view, if renewable energies were so profitable then the markets would reflect this, and more investment and development would flow into these areas. However, the alternatives to renewables are far more attractive to investors because of the greater chances of profit. We are often confronted with the claim that fossil fuels are rapidly depleting and that people only care about short-term profits. However, if these claims were true, individuals would be making huge profits in the futures market. This does not appear to be happening.

If renewable energy sources were truly cheaper than fossil fuels despite their initial costs, history suggests they would win out as investors seek to place capital in the most profitable area. An example is the relationship between crude oil and natural gas. Previously, oil was the most profitable form of energy; in recent years, however, for a number of reasons including scarcity, natural gas has become more financially profitable, and future investment is accordingly being made in that field. If renewables were more profitable, history would suggest that they would already be heavily invested in. One could therefore infer that renewable energy is still not competitive because it is more expensive; it needs to become more efficient before its initial costs are offset by its long payback time and it can compete in a free market.


The price of natural gas falls below that of crude oil, which historically had been the cheaper source of energy.

The second Dirty Secret is "Clean energy advocates want to force us to use solar, wind, and biofuels, even though there is no evidence these can power modern civilization." Epstein cites the fact that only 1% of the world's energy needs are satisfied by the various renewable energy sources. He says that the reason renewable sources can't compete with fossil fuels is energy density. While there is a lot of energy in solar and wind, it is so dispersed that harnessing it to any effective degree requires far more land, labor, and equipment than fossil fuels. Epstein argues that such requirements will always keep renewable energy far more expensive than fossil fuels. He also says that these sources of energy are unreliable: sun depends on the weather, and wind can be intermittent. Therefore, the energy production of these sources isn't consistent and often requires backup sources, which are usually fossil fuels. Epstein refutes the often-quoted "conspiracy" theory that renewable energy isn't implemented because big, fossil-fuel-loving companies won't allow its adoption. Epstein argues it is the fundamental nature of this energy source that keeps it from being adopted: these sources can't satisfy human needs efficiently.

There are fundamental differences between the quality and density of energy provided by fossil fuels and those of sustainable sources. The energy of the wind and sun is far more dispersed than that of oil, coal, or even nuclear energy. This means that larger plants are required to harness the energy, creating a larger impact on the environment. The idea that there is ample energy out there to be harnessed is correct; however, the resources and land required to harvest this energy are substantial. Furthermore, because of intermittency in production, excess plants need to be built and geographically dispersed in order to compensate for fluctuations in supply.

Epstein's Dirty Secret #3 is "There are promising carbon-free energy sources, hydroelectric and nuclear, but 'clean energy' policies oppose them as not 'green' enough." He makes the argument that environmentalists and those concerned with reducing carbon emissions reject even zero-carbon-emission energy sources that actually have the potential to meet human energy needs. Epstein says these individuals attacked the nuclear power industry in its infancy with "lies and propaganda" to make its growth and expansion nearly impossible. He says these tactics are still being used today when people cite the situation in tsunami-stricken Japan and the problems at its nuclear reactors as another reason to fear nuclear power. He claims that anti-nuclear proponents usually say their main concern with nuclear power is safety, both with regard to the reactor plant itself and to the radioactive wastes the process produces.

He says these proponents cite the radioactive element of nuclear power as a danger for people living in the area surrounding a nuclear plant. He counters this concern with the fact that even solar energy is a form of radiation, and makes the point that an energy source being radioactive does not alone make it dangerous. He says a person receives more radiation exposure walking around during the day than living next to a nuclear plant. He then addresses the popular image of a failing reactor exploding or being bombed by terrorists, causing a "Hiroshima"-type scenario. He says this is a hyperbolic concern, mainly because the uranium in nuclear reactors is not explosive, so such an event would not cause the explosion that people fear. Epstein says that if these attackers' main concern were truly safety, they would see that nuclear power is one of the safest forms of energy currently available. He says the best indicator of a technology's safety is how many deaths it has caused per unit of energy produced, and that "In the capitalist world, nuclear power in its entire history has not led to a single death from meltdowns, radiation, or any of the allegedly intolerable dangers cited by nuclear critics."
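The metric Epstein proposes is straightforward to express. The inputs below are purely hypothetical numbers chosen to show why normalizing by energy produced matters; they are not real mortality statistics for any energy source:

```python
def deaths_per_twh(deaths, twh_produced):
    # Epstein's safety metric: fatalities per terawatt-hour of energy produced
    return deaths / twh_produced

# Hypothetical sources: B causes more total deaths but produces far more energy.
source_a = deaths_per_twh(10, 100)      # 0.1 deaths/TWh
source_b = deaths_per_twh(100, 10_000)  # 0.01 deaths/TWh
print(source_b < source_a)  # True: by this metric, B is the safer source
```

The point of the normalization is that raw death counts can rank sources backwards; a source with ten times the fatalities can still be an order of magnitude safer per unit of energy delivered.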

Epstein then addresses the concern people have with the waste produced in making nuclear power. He argues that the concern is not nearly as threatening as anti-nuclear proponents make it out to be. He says "the amount of waste is thousands of times smaller than for any other practical source of energy, that it can be safely stored, and that there are many technologies for utilizing the waste to generate even more energy." He labels these concerns simple hysteria that attacks nuclear power because it is "unnatural" and therefore must be bad. He attacks anti-nuclear proponents for advocating so much government regulation of nuclear power that they effectively halted the growth of a promising industry. He says the required safety regulations only work to hike up the price of this power source and make building a new power plant nearly impossible. He says that today anti-nuclear proponents cite the dying nuclear power industry as a result of natural market forces that make it unable to compete with other sources of power. Epstein argues that this isn't the case at all. He says that nuclear power was highly competitive when it first appeared as a viable energy source, and claims it had massive potential to provide large quantities of cheap, zero-emission energy until regulation effectively killed the industry.

Epstein says that nuclear power is not alone. These same advocates of zero-emission energy have spent just as much effort trying to dismantle hydroelectric dams. He says these dams can provide enormous quantities of energy with zero emissions. He argues that the attackers are not concerned simply with carbon emissions, but with having any impact on nature at all.

Epstein makes a few interesting points with his third dirty secret. Obviously by this point in his article we see that Epstein is concerned more with human progress than with the state of the natural world. So while he is not as concerned about finding zero-emission energy sources, he claims that even when people who are finally find a clean energy source that can actually meet human power needs, they still reject it. Well this is a valid point, I think Epstein is guilty of down playing the dangers of nuclear power just as much as the people he sites as hyping it. While he is right that unit of power produced per death caused is extremely low with regards to nuclear power, he makes the false statement that nuclear power has resulted in no deaths. If I had the chance, I’d like to ask how he can make such a claim when there are glaring examples of just that, the main of which being Chernobyl. We of course know that Chernobyl was a particular case because of the poorly built infrastructure and the lack of expertise, but that does not change the fact that people died because of it. The initial responders to the explosion didn’t wear protective gear and were exposed to high levels of radiation, dying within the next few weeks. While again deaths per unit of energy produced make these particular deaths statistically insignificant, it does not make it zero as Epstein boldly claimed. Epstein also seems to misunderstand people’s perception with radiation. The radiation from the sun and the radiation produced by nuclear power are drastically different. Both have the potential to cause physical damage, but it is all about degree of exposure. It takes far more solar radiation to cause the type of damage that the same amount of nuclear radiation would cause. 
This is not to neglect the fact that we are exposed to far more solar radiation; the concern of anti-nuclear proponents is those disastrous instances when enormous amounts of radiation are released and people are exposed. Again citing the Chernobyl event: billows of radioactive material were released into the atmosphere and spread throughout Europe.
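The deaths-per-unit-energy comparison above can be made concrete with a back-of-envelope calculation. Both figures below are illustrative placeholders I am assuming for the sketch, not sourced statistics: a rough count of acute deaths among Chernobyl’s initial responders and a rough order of magnitude for cumulative world nuclear generation.

```python
# Back-of-envelope deaths-per-TWh sketch. Both numbers are illustrative
# placeholder assumptions, not sourced figures.
acute_chernobyl_deaths = 30       # assumed: initial responders who died within weeks
cumulative_nuclear_twh = 90_000   # assumed: rough cumulative world nuclear output, TWh

deaths_per_twh = acute_chernobyl_deaths / cumulative_nuclear_twh
print(f"{deaths_per_twh:.6f} deaths per TWh")  # a tiny rate, but not zero
```

Under any plausible inputs the rate is minuscule, which is Epstein’s point; but the numerator is not zero, which is mine.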


The spread of radiation through the atmosphere across Russia.

Epstein also says that we can store nuclear waste safely. Again, I wish he had gone into detail in his article about how exactly he defines “safely” and how he believes nuclear waste is stored. From my studies, at least, I believe nuclear waste is usually stored on-site at nuclear plants in fairly basic structures. While I do not know this for sure, I have studied proposed storage plans like Yucca Mountain, and even that has been unable to provide storage. Epstein does make a valid overall point that nuclear power is a practical, zero-emission energy source. He does, however, seem to downplay its dangers just as much as others overplay them.

Epstein’s final “Dirty Secret” is “The environmentalists behind clean energy policy are anti-energy.” Epstein makes the argument that ultimately, environmentalists are concerned not with pollution but with halting human progress and development. He says the “minimal impact” approach they advocate is fundamentally “anti-energy.” Even if energy policy outlawed all fossil fuels and allowed only renewable forms of energy like solar, wind, and geothermal, he claims environmentalists would still be against it.

Because these renewable sources are so inefficient, they would have to be implemented on massive scales. Huge stretches of land would be covered in solar panels and wind farms, and geothermal requires drilling thousands of feet into the Earth. Just building these technologies would require fossil fuel consumption, and they would fundamentally alter the environments in which they are placed. Epstein argues the total impact on the environment would be greater than that of fossil fuels because of energy concentration: fossil fuels are so energy dense that their energy can be harnessed in a much smaller space with fewer resources, while renewable forms of energy require altering entire landscapes. He says environmentalists would never get behind such an impact.

He claims that ultimately, environmentalists want human development and progress to stop, and that when pushed, they say the only solution is conservation, population control, and the cessation of development. He quotes a few figures known for their “clean energy” stances who say people ultimately need to live more modestly, and he argues that the only way for that to happen is more government regulation in every aspect of our lives to make sure we are living modestly. He says the end result of this movement is “pure destruction.” Epstein argues that with industrial development, humans can respond and adapt to our environment.
He cites the catastrophes that environmentalists warn will inevitably come if nothing changes. Epstein argues that humans are not simply going to be subjected to these catastrophes with no defense. Instead, he says industrial energy and development make “catastrophes non-catastrophic.” He cites situations like droughts in Africa that kill thousands every year; in the U.S., by contrast, industrial development has led to irrigation that makes deserts some of the most productive and desirable places to live. Epstein says what the world needs is industrial development that betters the human condition, and that the only way to achieve this is to completely halt the pursuit of “green” policies that are fundamentally anti-development and anti-progress.
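The energy-concentration argument can be illustrated with a rough land-area estimate. Both inputs below are ballpark assumptions of mine, not figures from Epstein’s article.

```python
# Rough land-area estimate for the energy-density argument. Both inputs are
# ballpark assumptions, not figures from the article.
world_avg_power_demand_tw = 18          # assumed world average primary power demand, TW
solar_power_density_w_per_m2 = 10       # assumed delivered power density of a solar farm, W/m^2

area_m2 = (world_avg_power_demand_tw * 1e12) / solar_power_density_w_per_m2
area_km2 = area_m2 / 1e6
print(f"{area_km2:,.0f} km^2")  # 1,800,000 km^2
```

Under these assumptions, replacing all primary energy with solar alone would demand on the order of a couple of million square kilometers of panels, which is the scale of landscape alteration Epstein has in mind.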

Epstein’s final point is remarkable. The inefficiencies of renewable energy are no secret to anyone. What would happen if we actually did pursue them heavily and tried to replace fossil fuels completely? They would require implementation on a massive scale to meet human energy needs, and such implementation would undoubtedly have a huge impact on the environment. Is that what environmentalists really want? Seeing as the biggest problem environmentalists have with dams is their alteration of the environment, I will assume implementing solar and wind on the scale needed would face just as much opposition as dams do. So then what needs to be done? Humans have to stop development and live modestly. But how would that be enforced? What would that mean for our daily lives? How many luxuries that we enjoy and take for granted would we have to give up? Would we simply be monitored every moment of our lives to ensure that we are living modestly enough? What would be the punishment for non-compliance? And this only concerns already developed countries. What about developing countries? Do we have to stop them from developing too? Or do we allow them to reach a modest level and then stop them? Is it right or fair to impose such restrictions on other countries? These questions have to be answered if we really want to pursue renewable energy. Its inefficiencies allow only two outcomes: massive-scale implementation with a huge environmental impact, or halting development and forcing everyone to live modestly, however that is defined. As humans we respond to our environment and alter it to suit our needs; this makes us fundamentally different from all other organisms on the planet. Development is altering our environment to make it more suitable for our needs. Should we change our nature?
As environmental studies majors we should really consider the ultimate consequences of our actions, even those that sound very good on the surface. What will be the real cost if we get what we advocate for? How will it be enforced, and what will that mean for our individual liberties? How much are we willing to give up, and how do we feel morally about forcing our policies on others? We owe it to ourselves to examine all perspectives so that we can arrive at what we can truly say is the right approach to the problems we face today.


Corey Bustamante is a junior double majoring in Environmental Studies and Economics.

Richard Charlesworth is a senior majoring in Environmental Studies and minoring in Architecture.

A comparison between Catalina and Santa Cruz Islands

Filed under: Catalina Island,Santa Cruz Island — admin @ 9:01 pm

One characteristic shared by the two islands is how susceptible their ecosystems are to disturbance, as exhibited by the crashes of their island fox populations.  Although different in cause, each crash demonstrated that a small island ecosystem, evolving under shelter from mainland disturbances, can become unique and fragile and handle major disturbances poorly.  This is largely due to the population’s relatively small gene pool and the island’s small geographic range.


Island Fox

In other, more traditional geographic regions, a disturbance that leads to a population crash can often be followed by an easier recovery: either a large enough, well-adapted population survives and repopulates, or organisms from another region gradually recolonize the area.  On an island, however, often neither is possible.  If the species experiencing the crash is endemic, the crash may result in extinction, since no other members of the species exist anywhere in the world.  Even if some individuals survive the initial disturbance, a population that was small to begin with may be left with so few survivors that the gene pool lacks the diversity needed for a proper recovery, and the species may still die out.  As such, a disturbance in an island ecosystem is much more likely to lead to species extinction.
On Catalina Island, the collapse of the fox population was primarily due to the introduction of canine distemper virus.  In 1999 an outbreak caused the population to drop from 1,300 to only 100 animals.  The outbreak swept across the west side of the island but fortunately did not reach the east end, which is separated by a narrow isthmus.  In 2000 the Catalina Island Conservancy and the Institute for Wildlife Studies instituted the Catalina Island Fox Recovery Plan, which consisted of monitoring, captive breeding, vaccination, and relocation of the foxes.  The program was a success, and by 2004 the population had climbed back to 300.  Although it is not known exactly how the virus was introduced into the population, one theory is that it was brought to the island by an infected domesticated dog or a stowaway raccoon.

Feral Pig

On Santa Cruz Island, a collapse also occurred, but for different reasons.  Over-predation by the golden eagle, an exotic species, was discovered to be the primary cause, though indirect blame can be placed on the human introduction of pigs to the island.  A study by Roemer et al. indicated that the colonization of the island by golden eagles could only be sustained by the existence of a feral pig population.  Yet even though the foxes alone could not sustain the eagle population, they were much more affected by eagle predation than the pigs were: the foxes were ill-adapted to evade eagle predation and as such faced possible extinction.


Golden Eagle

Like the island fox’s unfortunate fate at the hands (or claws) of introduced species and viruses, many native and endemic plant species on both Catalina and Santa Cruz islands have suffered from grazers introduced by humans. While both islands have undergone some form of plant restoration to repair the damage done by introduced grazers, Catalina still has resident populations of non-native grazers while Santa Cruz Island does not. This provides an interesting contrast between the islands, because many of the same native plant species exist on both islands but in different quantities and forms. Through this comparison one can clearly see the tremendous impact that grazers have on the plant communities of the Channel Islands.


Catalina Island Bison

Catalina currently has a small population of 150-200 bison that roam the island. The population is controlled both by birth control, which limits the number of calves a female bison can have in a year, and by shipping bison back to the mainland to supplement herds on tribal lands. The birth control method was introduced in 2009 and was welcomed by animal rights activists who had opposed the Catalina Conservancy’s earlier eradication of feral goats and pigs with high-powered rifles fired from helicopters. The Los Angeles Times reported that the birth control option for controlling the bison herds was suggested by an animal-activist Avalon shop owner named Debbie Avellana. Other non-native grazers that continue to roam the island are mule deer, kept under control by recreational hunting and by the Conservancy, and a very small population of blackbuck antelope. Historically Catalina was also used for grazing goats, pigs, sheep, and cattle, which have since been eradicated.
Catalina’s native plants have suffered as a result of the non-native grazers currently on the island. The effect of the grazers can be seen all too clearly in the example of the native giant coreopsis (Coreopsis gigantea). On Catalina this “Dr. Seuss plant” is found only within the confines of the Ackerman Nursery, where grazers are kept out, though there are reports of some wild plants on sea bluffs or in steep gullies where grazers cannot reach them. On the whole, plants on Catalina tend to be bush-like where they would otherwise grow more like trees. The only “trees” you will find on Catalina are either non-native or are the native toyon, lemonade berry, sugar bush, or Catalina cherry, which survive because they are so resilient. Some native plants have even shifted their pollination season to try to outcompete not only the grazers but also invasive plants.
Restoration on Catalina is difficult because the island has a permanent human population and attracts around a million tourists a year. This constant stream of visitors makes the introduction of foreign species more likely. Fennel, an aggressive invasive species, is still a problem on the island, but a management strategy of weeding outward from campsites and populated areas seems to be working in its early stages. Another invasive species is the eucalyptus, which was brought to the island deliberately to beautify areas like Avalon and was a favorite of the Wrigleys. Santa Cruz Island also struggles with both eucalyptus and fennel.

Tourists in Avalon

Santa Cruz Island does not currently have any non-native grazers. Historically the island was a ranch that raised some of the best-known beef and sheep products on the West Coast. Since then it has been brought under the control of the National Park Service and the Nature Conservancy. The only human presence is that of campers, eco-tourists, and researchers, along with a few people who live there to maintain the research and historic ranch facilities. These conditions have allowed many native plants to recover and grow large: where on Catalina you may find a sparse bush, on Santa Cruz Island the same plant may stand as tall as a person. Giant coreopsis and bedstraw are significantly more common on Santa Cruz Island than on Catalina, as are buckwheats (including one species endemic to Santa Cruz Island), manzanita (also including an endemic species), and sunflower bush. Santa Cruz Island has around 600 native plant species.

Santa Cruz Island Landscape

These cases exhibit how incredibly susceptible island ecosystems are to disturbances, which are often brought on by human interference.  In the case of the Catalina Island fox, the introduction of a virus, possibly by a colonizer’s pet dog, is to blame for the collapse of a species.  Santa Cruz’s population collapse was brought on by the human introduction of pigs to the island, which facilitated the entry of yet another harmful invasive species.  It is believed that in both instances, had humans not brought in these disturbances, such collapses would not have occurred.  Likewise, one can use Santa Cruz Island to “see” how different Catalina’s landscape would be if human-introduced grazers were not still shaping its plant communities. These cases serve as a reminder that humans should exercise extreme caution when interacting with such isolated ecosystems, as they can be as fragile as they are unique and beautiful.

This post was written by Mariah Gill ’12 and Jefferey Nakashioya ’12 both seniors in Environmental Studies.

Carlos de la Rosa, Personal Communication/ Lecture

April 16, 2012

Climate change and the hydrological cycle

Filed under: climate,Water — admin @ 9:21 pm

A severe change in the hydrological cycle is expected as a result of increasing greenhouse gases, and it is expected to hit snow- and ice-dominated areas hardest. At first this change was predicted to increase the amount of potable water, but the dynamics have since been analyzed more closely. As temperatures rise, less precipitation will fall as snow, and snowmelt will occur earlier in the spring rather than in the summer or autumn when the water is needed most. Earlier snowmelt and rain will cause rivers to overflow, and where there are not sufficient reservoirs, potable water will be lost to the oceans.

It is not necessarily changes in precipitation that cause all this, because the total amount of precipitation generally remains the same. It is the change in temperature that alters seasonal runoff patterns in snowmelt-dominated areas: less water falls as snow and more falls as rain, which runs off quickly instead of being released gradually as snowmelt.

The Colorado River of the western United States was determined to be one of the four snowmelt-dominated rivers that also lack sufficient reservoir capacity to prevent overflow and loss to the ocean. To identify these rivers, snowmelt-dominated areas were first determined by the ratio R of accumulated annual snowfall to annual rainfall; areas with R greater than 0.5 were considered snowmelt-dominated. Next, annual runoff was compared to reservoir capacity. This screening underestimates the area and population affected, because populations downstream and in other, more distant areas also depend on water that originates in snowmelt-dominated regions.
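The two-step screening described above can be sketched as follows. The threshold R > 0.5 comes from the text; all basin data below are invented purely for illustration and are not values from the study.

```python
# Sketch of the two-step screening: (1) flag snowmelt-dominated basins by the
# snowfall/rainfall ratio R > 0.5, then (2) flag those whose reservoirs cannot
# hold a year's runoff. All basin data are invented for illustration.

def snow_ratio(annual_snowfall_mm: float, annual_rainfall_mm: float) -> float:
    """R = accumulated annual snowfall / annual rainfall."""
    return annual_snowfall_mm / annual_rainfall_mm

def at_risk(basin: dict, r_threshold: float = 0.5) -> bool:
    """Snowmelt-dominated AND under-stored (capacity < annual runoff)."""
    return (snow_ratio(basin["snow_mm"], basin["rain_mm"]) > r_threshold
            and basin["capacity_km3"] < basin["runoff_km3"])

basins = [  # hypothetical basins
    {"name": "Alpine basin", "snow_mm": 600, "rain_mm": 400,
     "runoff_km3": 20.0, "capacity_km3": 12.0},
    {"name": "Coastal basin", "snow_mm": 100, "rain_mm": 900,
     "runoff_km3": 15.0, "capacity_km3": 30.0},
]
print([b["name"] for b in basins if at_risk(b)])  # ['Alpine basin']
```

Only the first hypothetical basin is flagged: its R of 1.5 exceeds 0.5 and its reservoirs hold less than a year of runoff, the same combination that flagged the Colorado.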

The aspect of most importance is water supply. In the western United States, the Colorado River is the most important contributor to water supply. No changes in total precipitation are predicted, only the changes in seasonal snowpack and snowmelt discussed earlier: winter snow is expected to decrease, and melting is expected to occur a whole month earlier. On top of that, there is currently not enough reservoir capacity to prevent water loss to the ocean.

The Colorado River, along with the Rio Grande and the San Joaquin, supplies water to Wyoming, Colorado, New Mexico, Arizona, Nevada, California, Utah, Texas, and parts of Mexico. The Interior Department projected in 2011 that flows in these rivers, especially the Colorado, would decline by 8 to 14 percent over the next 40 years. A more optimistic short-term study done in 2009 at the University of Colorado at Boulder found that the risk of the Colorado River depleting its reservoirs remains below 10 percent at least through 2026. The study also concluded that even if the worst drought scenario were to occur, we would not feel the effects immediately, because storage capacity along the Colorado River is great, holding almost four times the annual flow of the river. Between 2026 and 2057, however, the risk of reservoir depletion increases sevenfold.
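The storage buffer mentioned above lends itself to a rough calculation of how long reservoirs could cover a sustained drought. The 30 percent shortfall below is a hypothetical figure of mine chosen only to illustrate the arithmetic; it does not come from either study.

```python
# Rough buffer arithmetic: if storage holds ~4x annual flow, how long could it
# cover a sustained drought deficit? The 30% shortfall is a hypothetical value.
annual_flow = 1.0                   # normalize the river's annual flow to 1 unit
storage = 4 * annual_flow           # "almost four times the annual flow"
drought_shortfall = 0.3             # assumed: drought delivers 30% less than demand each year

years_until_empty = storage / drought_shortfall
print(round(years_until_empty, 1))  # 13.3
```

This is why a full buffer delays the pain well past 2026 under a partial deficit, yet the same arithmetic shows the buffer is finite, which is consistent with the sharply rising depletion risk after 2026.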

These studies on the Colorado River can be somewhat comforting, since we know we are relatively safe until 2026, but 2026 is approaching fast and we cannot get comfortable. Large-scale changes such as the shift in seasonal snowmelt and the decrease in snowfall took decades to develop and will take decades to reverse, if that is possible at all. The most plausible solution for now is to find ways to direct and store this precipitation so we do not lose it to the ocean.


This post was authored by Alejandra Rocha ’12, a senior majoring in Environmental Studies.
