Friday, December 09, 2011

AGU day 4: diffusion

Dear AGU attendees:

It is not necessary to pack into the standing room at the back of the lecture hall, to the point of blocking people's way, when there are 10 rows of empty seats at the front. Diffusive equilibrium is a wonderful thing. Try approaching it.

Tuesday, December 06, 2011

AGU day 1: Geologists 1, Publisher 0

Here at AGU, the wireless is working very well. The only exception is when I try to access Elsevier's sciencedirect.com, which is moving at tectonic plate speeds. Last meeting, I noticed the guy next to me had his tablet out and downloaded each paper mentioned in a talk to follow along. That conference had 78 people. This one has 20,000.

Tuesday, November 29, 2011

US science tour schedule

I'll be in the SW USA for the next week and a half giving talks about SHRIMP and attending AGU. The talks are:

Arizona State University


Thursday 1 Dec time: TBA, Chemistry dept (probably) Title TBA (this one is pretty informal, if you haven't guessed)


University of Texas at Austin


Friday Dec. 2

Understanding the SIMS U/Pb calibration

Jackson School of Geosciences
JGB 3.222 at 11 AM.


American Geophysical Union Fall meeting, Moscone Center, San Francisco



SHRIMP geochronology using an 18O primary beam


Session Title: V33G. Innovations in Isotope Mass Spectrometry and Isotope Metrology II
Session Type: Oral
Date: 07-Dec-2011
Start Time: 03:10 PM
End Time: 03:25 PM
Location: Room 3022 (Moscone West)

Texas abstract:
In preparation for the production of the new SHRIMP IV, a number of experiments were run to characterize the behavior of the U/Pb calibration under various analytical conditions. Repetition of early SHRIMP One work showed that the calibration appears to primarily reflect the dependence of Pb ionization on oxygen activity. In order to constrain the effects of oxygen, further experiments were performed using an 18O primary beam, so that the relative contributions of oxygen from the beam (18O) and the natural samples (16O) could be discerned.

The use of the 18O primary has shown that the ratio of sample oxygen to primary oxygen in the secondary ions varies based on the target mineral and the primary beam impact energy. For baddeleyite, there is also an orientation effect. In some circumstances, the 18O/16O ratio can be used to correct for scatter along the calibration line, allowing sub-percent level accuracy for Paleozoic U/Pb dating of zircon.

The cause of calibration-related uncertainty is still not precisely known, but is probably related to a number of factors. This work, combined with the recent demonstration of SHRIMP geochronology of chemically abraded zircon, suggests several potential ways of improving calibration accuracy. In addition to the standard approaches of high quality sample preparation and wide energy windows, new approaches include the use of 18O to correct for source fluctuations, active charge neutralization using a medium energy electron gun, and chemical abrasion. These techniques have not yet been used simultaneously on unknown zircons.

AGU abstract:
The key constraint of uranium-lead geochronology is the variation in ionization efficiency of uranium and lead. As the ionization efficiency of Pb is dependent on oxygen availability, a calibration relating the UO/U or UO2/U ratio to the Pb/U ionization efficiency is commonly used. However, these calibrations have historically been limited to errors of about 1%.
We have identified the origin of the oxygen in the UO and UO2 species by feeding the primary column duoplasmatron source with 18O gas. This creates labeled UO and UO2 isotopologues at nominal masses of 254, 256, 270, 272, and 274, where the 18O isotopologues contain 18O from the isotopically labeled primary beam, and the 16O isotopologues contain oxygen from the natural silicate, phosphate, or oxide geochronology target mineral. The ratio of U18O to U16O depends on the target mineral, and primary ion species (atomic vs. molecular oxygen). In zircon, the variation in U18O vs. U16O can be used to correct for calibration scatter, allowing for more precise and accurate geochronology. This correction only applies to SIMS instruments such as SHRIMP, which can perform uranium-lead geochronology without the use of a third source of oxygen, such as oxygen flooding.
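As a sanity check, the nominal isotopologue masses quoted in the abstract above follow directly from the integer nominal masses of 238U, 16O, and 18O. A quick sketch (purely illustrative; the variable names are mine, not the abstract's):

```python
# Reproducing the nominal UO/UO2 isotopologue masses from the abstract,
# using integer nominal masses for 238U, 16O, and 18O.
U, O16, O18 = 238, 16, 18

masses = {
    "U16O":    U + O16,        # 254
    "U18O":    U + O18,        # 256
    "U16O2":   U + 2 * O16,    # 270
    "U16O18O": U + O16 + O18,  # 272
    "U18O2":   U + 2 * O18,    # 274
}
print(sorted(masses.values()))  # [254, 256, 270, 272, 274]
```

The mixed 272 peak is the interesting one: it carries one oxygen from the labeled beam and one from the sample, which is what lets the two sources be disentangled.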

Monday, November 28, 2011

Billions of years of conference badges

ASI was lucky enough to be the lanyard sponsor at the recent Biennial Geochemical SIMS workshop, held in Hawaii at the beginning of this month. The meeting was fantastic. But rather than do the usual slog of simply plastering our logo on the lanyard, we decided to give it a geological timescale. The scale is one year per angstrom. Various important events are designated by the isotopic system used to define them and/or cartoons. Being very long and skinny, it is hard to display in a blog, but here’s a photo of one curled up (click to enlarge).


We have some extras, so if anyone wants me to bring some to AGU for you guys to wear, let me know and I'll pack some.

Tuesday, November 22, 2011

Migrating dinosaurs and oxygen isotopes

A recent paper claims that dinosaurs must have migrated, based on the oxygen isotopes in dinosaur teeth. This paper is both awesome and flawed.

The awesome part:

It is hard to measure oxygen isotopes in teeth. Teeth are a mixture of organic matter and two minerals: calcium carbonate and hydroxyapatite. The organic matter contains oxygen, the carbonate portion contains oxygen, and the hydroxyapatite contains oxygen in two different parts of the mineral: bound water (the “hydroxyl” part) and phosphate ions.

None of these phases are stable in groundwater, and they generally get replaced by other minerals such as silica during the fossilization process. That is why most of the dinosaur teeth you see in museums are black. Even if the teeth aren’t fossilized, the different components will exchange oxygen with groundwater at different rates, depending on the groundwater chemistry. So simply finding appropriate samples from the Jurassic (150 million years ago) is not easy.

Secondly, teeth are hard to analyse for oxygen isotopes in the lab, because you need to make sure that the oxygen from the different materials isn’t mixed, especially if some of the oxygen has been compromised. This can also be tricky.

Most oxygen has an atomic mass of 16, from having 8 protons and 8 neutrons. However, about 2 in a thousand oxygen atoms have 2 extra neutrons, giving a mass of 18 amu. This oxygen-18 (abbreviated 18O) evaporates slightly less readily and condenses more easily than normal oxygen-16 (16O), so rainfall is generally depleted in 18O. This gives what scientists refer to as a negative δ18O value, which basically means that the rain water has a lower 18O/16O ratio than seawater. As air cools, more and more 18O rains out, so snow has a strongly negative δ18O value (figure 1).


Figure 1. Tropical lowland rainfall is generally slightly negative in δ18O (left), while mountain snow is generally highly negative (right).
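The δ18O notation can be made concrete with a few lines of arithmetic. A hedged sketch: the function below is the standard delta definition, the VSMOW 18O/16O reference ratio is the commonly quoted value, and the sample ratio in the example is invented for illustration.

```python
# delta-18O in per mil, relative to a standard ratio.
# R_VSMOW is the commonly quoted 18O/16O of Vienna Standard Mean Ocean Water.
R_VSMOW = 0.0020052

def delta18O(r_sample, r_standard=R_VSMOW):
    """Return delta-18O in per mil relative to the standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Rain depleted in 18O has a ratio below the standard, so delta is negative
# (the sample ratio here is made up for illustration):
print(delta18O(0.0019852))  # roughly -10 per mil
```

A ratio equal to the standard gives δ18O = 0 by definition, which is why seawater sits near zero and meteoric water plots negative.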

This leads to the flawed part of the paper.

Fricke et al. (2011) state that because their dinosaur teeth have a variation in δ18O, the dinosaurs must have migrated from lowlands to uplands (figure 2).


Figure 2. Oxygen isotopic variation in dinosaur teeth is interpreted as arising from migration.

But as the low δ18O snow melts, it forms low δ18O rivers (figure 3). In an environment with seasonal rainfall, local, tropical rain could give a modest δ18O depletion, while water draining from high mountains would have a strong δ18O depletion. This is exactly what Lambs et al. (2005) see in modern-day India: the Ganges river has a δ18O value of -5, while the Brahmaputra, which flows into India from Tibet, has a δ18O value of -11. Despite their different sources, both rivers empty out into the same river delta. So in this case, the water is migrating, by flowing downhill.


Figure 3: Water can move as well.

An animal which drank from locally fed streams and ponds during a wet season, but retreated to a river with a distal source in the dry season, would have a δ18O anomaly like that of a migrating dinosaur. This would also explain why the dinosaur had more negative δ18O values when it died: the rock which contained the fossils was a river sand.

This is seen by Dettman and Lohmann (2000) in Rocky Mountain oysters (fossilized bivalves, you pervs). Shellfish fossils have δ18O values that range from -5 to -23, all in the same sedimentary sequence. Nobody interprets this as evidence for oyster migration. Rather, it is thought to be caused by rivers with very different source characteristics feeding the same depositional setting. Just like the modern Ganges delta.

So my opinion is that the analytical work and sample selection were very good, but the interpretation is a bit simplistic.

Fricke, H., Hencecroth, J., & Hoerner, M. (2011). Lowland–upland migration of sauropod dinosaurs during the Late Jurassic epoch. Nature. DOI: 10.1038/nature10570

Dettman, D. L., & Lohmann, K. C. (2000). Oxygen isotope evidence for high-altitude snow in the Laramide Rocky Mountains of North America during the Late Cretaceous and Paleogene. Geology, 28 (3), 243-246.

Lambs, L., Balakrishna, K., Brunet, F., & Probst, J. (2005). Oxygen and hydrogen isotopic composition of major Indian rivers: a first global assessment. Hydrological Processes, 19 (17), 3345-3355. DOI: 10.1002/hyp.5974

Thursday, November 17, 2011

Even bigotry has a silver lining

Professor Anne Jefferson has been complaining about a banal sexist article that recently appeared in the prestigious scientific journal Nature. While it is understandable that she has been offended by this insipid and thoughtless piece of writing, there is an obvious lesson here for her and other scientists looking to advance their careers. It is so simple that I can lay it out in outline form:

1. Publishing in Nature is good for your career. There is no doubt about this. For better or worse, Nature has one of the highest profiles of any scientific journal. Some prestigious institutions, when looking for high-impact, earth-shaking original research, tally up only papers published in Nature and Science. So there is no doubt that getting a paper into Nature will be good for your career.
2. Nature will publish tripe. This is obvious from reading the article by Dr. Rybicki which has caused this kerfuffle. “Womanspace” contains no original thoughts, no new insights, and no hint of creativity or intellect.
3. Therefore, anyone looking to advance their career should submit anything and everything to Nature for publication. If you can string 700 words together in an incoherent, vaguely offensive story with jokes as flat as an abyssal plain, then you are at least Dr. Rybicki’s equal. And should you actually put a smidgeon of thought into your writing, well then you’re in like Flynn. So don’t hold back! That inconclusive master’s project? Submit it to Nature. Your high school science fair experiment? Nature. Your 7th grade essay on pea horticulture? Fire away.

Nature has sent a clear message to the scientific community that the standards which once gave their publication its prestige no longer apply. Sure, you could spend years leading a major research effort, like scientists Dea Slade, Jessica Alföldi, or Lisa Welp did. And their research efforts deserve major acclaim. But the publisher of Nature has put their gruelling scientific accomplishments side-by-side with:

“I'd been staying with my friend Russell in Canberra, trying to sort out how we were going to get our book on virus structure together, when Russell's wife Lilia decided that their youngest daughter needed new school knickers. She was too busy making supper…”


Nature has sent a clear message to the scientific community. Nature is no longer interested in keen intellectual arguments or brilliant insight. They now want to publish garbage. Submit it to them, and send the good stuff to Science.

Saturday, November 12, 2011

Some thoughts on the Penn State sex scandal

For anyone who has been living under a rock for the past week, a former Penn State assistant football coach has been charged with sexually assaulting 8 pre-teen boys over a 15 year period. Numerous other administrators have been charged with failure to report the incident, and others, including legendary football head coach Joe Paterno, have been fired.

Needless to say, there has been a bit of internet chatter about this. A lot of it has focused on the football program and the similarities between this incident and those of the Catholic Church. I think this emphasis is mistaken, and potentially damaging.

My take is this. All universities cover up sexual assaults as a matter of course. The key feature of the Penn State case is that this particular incident is simply not containable. The age of the victims means that, unlike most university situations, consent is out of the question. The number of victims means it is not a freak incident, and the multiple third party eyewitnesses preclude it from simply being a he-said-she-said. This is a once in a century campus sex crime.

The problem is that, in the 46 years that Paterno has been coaching Penn State, there have been scores of drunken field trip incidents, hundreds of late night library gropings, and thousands of off-campus drink-spiking rapes. And because those cases have not involved epic falls from grace, state-wide criminal probes, and shocking eyewitness descriptions of underage sex, they have been successfully covered up.

Because the fact of the matter is that universities are very good at sex crime cover-up. They form shadow justice systems designed to give victims whatever they require to stay quiet, they use freshman orientation to scare students into avoiding the cops, and they terrify overseas students by threatening to yank their visas and send them back to their country of origin before they can file charges.

Universities have no division of powers, or checks and balances, and they are driven to enhance and protect their institutional reputation at all costs. So their reaction to this case has a direct bearing on the health and safety of students worldwide. If they react by reinforcing their cover-up mechanisms so that nothing smaller than a Paterno-scale epic will ever see the light of day, then campus life will be degraded. If they react by redirecting all their administrators and counselors and lawyers towards helping the victims instead of protecting the institution’s image, then everyone who sets foot on a campus will be better off.

I am not optimistic.

Thursday, November 10, 2011

Energy from the sun


Our house uses a number of different technologies to harness energy from the sun. Three are pictured above. The newest and most expensive of these was just hooked up to the grid today, providing 11.4 kWh for the internet dawdlers of Australia. So long as I don't goof around on the computer all night, that should cover our home usage and then some. The Hills Hoist also efficiently utilized solar energy by drying three loads of washing. The seedlings in the pots on the black rack have yet to use sunlight to sequester atmospheric carbon dioxide in the form of tomatoes, but we are hopeful for the future.

Sunday, October 30, 2011

Don't hold your breath, folks

I actually spent a weekend bushwalking for the first time in five years this month. On top of that, I read a paperback novel. As a result, I'm so far behind that I can't even see the tunnel from here. I'll chip in on the dinosaur migration paper when I get a chance, but I won't get a chance anytime soon. And I should probably look at this new zinc isotope thing, but I haven't even DL'ed that yet. But I have a huge backlog of work stuff, taxes, a paper to review (for an editor, not you lot), and family stuff, so it could be a while.

Thursday, October 13, 2011

Orbital cycles, Australian lake levels, and the arrival of aborigines

Australia is a dry country. It is so dry that the largest drainage basin on the continent has rivers that only occasionally carry water, and drains into a salt pan. Imagine if the Missouri only flowed every third year, or if the Zambezi was generally a sand-filled channel that crossed a nondescript cliff at what we know as Victoria Falls.

Admittedly, the Lake Eyre basin is smaller than either of these drainages, but only slightly. But for the last 150 thousand years, it has told geologic tales which rival the best Swahili stories or Sioux legends. It describes the movement of the Earth against the stars, and the coming of the first people to Australia.

The reason it can tell these stories is that, as a closed basin, the water level of Lake Eyre varies dramatically with the amount of water flowing in from its major tributaries. So, although the part of central Australia around the lake and the southern part of the drainage is a desert, tropical rainfall in the northern rivers fills it occasionally today, and has in the past allowed a lake many times larger than the current lakebed to exist. Magee et al. (no relation) have carefully and painstakingly reconstructed the history of the lake level over time, and it tells a fascinating tale of alternating floods and aridification over the last 150 thousand years.

What they found is that there have been five periods where a large, permanent lake replaced the current playa. Comparing the lake record to the changes in the Earth’s orbital tilt and eccentricity shows that the lake-filling episodes, and the wetter conditions and more powerful Australian monsoon they imply, are correlated with high sea levels, low ice mass, and abundant northern hemisphere sunshine.

The exact reasons for this are not discussed in great detail. One possibility is that the outflow from the Asian winter monsoon might push moist tropical air towards Australia more strongly than Australia’s modest monsoon sucks air in. Another point (made mostly in related, referenced publications) is that the warm sea north of Australia, the Gulf of Carpentaria, is shallow, and during times of low sea level was land. So the northern edge of the Lake Eyre basin was a thousand kilometers from the sea instead of 150, due to the retreat of the Gulf shoreline.

But the other big feature is that the lake-filling events that occurred after 50,000 years ago were much smaller than those which occurred before. Climatically, the conditions 10,000 years ago should have been the same as the conditions 115,000 years ago. But the lake was only a fraction of the size. The authors find no natural causes which can explain this. So they suggest that the aridity starting around 50,000 years ago is related to the reduction in forest and increase in grasslands which occurred at this time. This vegetation change was a result of a huge increase in the frequency of fire in central Australia, which allowed fire-adapted plants to prosper at the expense of moisture-retaining forest. The increase in fire at this time is generally associated with the arrival of the first people on the Australian continent. It is known that much of Australia’s megafauna went extinct at this time, but Magee et al. (2004) show that even the tropical rains were affected by human migration, with drastic changes to the continent’s largest river basin.



Magee, J., Miller, G., Spooner, N., & Questiaux, D. (2004). Continuous 150 k.y. monsoon record from Lake Eyre, Australia: Insolation-forcing implications and unexpected Holocene failure. Geology, 32 (10). DOI: 10.1130/G20672.1

p.s. A few Gene Expression commenters asked a month ago if I could summarize this paper. I hope this helps.

Monday, October 03, 2011

CO2 sequestration in brines: what actually happens?

“I am flying home from Europe in late August with nothing but a notebook and the 2011 Goldschmidt conference Geology giveaway issue to keep me occupied. Using the old-fashioned method of reading and writing on paper, I will blog my way through the compilation of highlighted geochemistry papers as time allows. These will then be posted via time delay to keep the blog moving while preventing paper burnout.”

We humans are perplexing beasts. In order to power the computers and airplanes and steel mills and blogs of our increasingly technological society, we are digging up and burning every source of fossilized plant and algal matter that we can find. In the process, we are dumping CO2 into the atmosphere at the fastest rate since at least the Paleocene/Eocene thermal maximum 55 million years ago.

The accumulation of this gas in the atmosphere has been identified as a potential problem, so society is looking for alternative dump sites. Although some agriculturalists think that trees or soil or other surface effects can securely hold this excess carbon, geologists tend to concentrate on shoving it where the sun doesn't shine. In science talk, we replace ‘shove’ with ‘sequester’, since scientists like elongated words.

The theoreticians like to daydream about ‘sequestering’ their carbon in all sorts of fanciful places, but a perennial favorite is the deep, dark, hot, salty brines of saline aquifers. Often, these aquifers underlie current (or former) oil and gas reservoirs, in which case a fair amount is learned about them in the petroleum extraction process. However, carbon dioxide is supercritical at the pressures and temperatures of these deep reservoirs, and becomes highly reactive as a result. Brine can also be quite chemically reactive. In a nutshell, the supercritical CO2 dissolves into the brine to form carbonic acid. But predicting the details is tricky.
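The carbonic acid step can be sketched with first-year chemistry. Note the caveat: the constants below are textbook 25 °C, 1 atm values, nothing like actual reservoir conditions, so this only illustrates the direction of the effect, not the magnitude at depth.

```python
import math

# Why dissolved CO2 acidifies water, using approximate surface-condition
# constants (assumed values; real reservoir P-T behavior differs greatly).
K_H = 0.034     # Henry's law constant for CO2, mol/(L*atm), ~25 degC
K_A1 = 4.45e-7  # first dissociation constant of carbonic acid, ~25 degC

def ph_under_co2(p_co2_atm):
    """pH of pure water equilibrated with CO2 at a given partial pressure.

    CO2 + H2O <-> H2CO3 <-> H+ + HCO3-, so [H+] ~ sqrt(Ka1 * KH * pCO2).
    """
    h_plus = math.sqrt(K_A1 * K_H * p_co2_atm)
    return -math.log10(h_plus)

print(round(ph_under_co2(1.0), 1))  # ~3.9, well below neutral
```

Higher CO2 partial pressure pushes the pH lower still, which is the "pH plummets" behavior observed in the field experiment discussed below.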

In order to stop the theoreticians from talking smack about CO2-brine reactions, 1600 tons of CO2 was injected into a brine in an abandoned oil well. Carefully monitored aqueous geochemical hijinks ensued.

By measuring the composition of the reservoir fluids both before and after CO2 injection, Kharaka et al. are able to quantify these hijinks.

In short, the pH plummets as the HCO3- skyrockets, and dissolved alkaline earths, transition metals, and base metals increase as a result. This is interpreted as a result of dissolution of carbonate and iron hydroxide cement. Obviously, dissolving the intergranular cement should increase the porosity and permeability, making it easier for the fluids to migrate. Despite this, no leakage was observed into the overlying sandstone unit.
Another disturbing observation was the increase in dissolved organic molecules, some of which are quite toxic. This observation was unexpected, and not fully understood.

The last experiment was to use the δ18O values of the brine and CO2, which were initially quite different, to calculate mixing and to quantify residual supercritical CO2 that had not dissolved into the brine.

My only complaint is that they did not look at the behavior of sulfur. Sulfur can be present in brines and co-existing residual hydrocarbons in either oxidized or reduced forms, and can also form a variety of minerals. Sulfur oxidation is what generates acid mine drainage, and it is an important constraint on both the acidity of the fluids present and on the solubility of various metals.

Kharaka, Y., Cole, D., Hovorka, S., Gunter, W., Knauss, K., & Freifeld, B. (2006). Gas-water-rock interactions in Frio Formation following CO2 injection: Implications for the storage of greenhouse gases in sedimentary basins. Geology, 34 (7). DOI: 10.1130/G22357.1

Thursday, September 29, 2011

Early Earth awesomeness and middle Earth magic

One of the problems with making illustrated linear geologic timescales is that the middle 80% of the timescale generally looks fairly boring. Of course, all sorts of things were happening in the Archean and Proterozoic, but they aren't always as easy to sketch cartoons of as a trilobite. I'm currently doodling a cartoon illustrated timescale, and I was wondering: do any of you have any favorite Precambrian events that could take up the timeline space that would otherwise be white? If so, and you don't mind me stealing your favorites, please share.

Viewing imaginary spacecraft from the ground

I read and watched a lot of science fiction when I was young. I don’t much any more, mostly because I’m too busy, but every now and then I have a relapse. Also, for the most part, real science is more fun these days. But they aren’t necessarily mutually exclusive.

For example, this evening, I was thinking about the International Space Station. Under optimal viewing conditions, the ISS is the brightest thing in the sky, aside from the sun and moon. But while the station is surprisingly large (about the size of a football field), it is generally smaller than most science fiction spacecraft which are capable of interstellar travel.

Science fiction generally depicts people walking around on the ground, or starships floating close above a planet, but with little connection between the two. The only times I can recall people on the ground seeing spacecraft above are when the Death Star explodes in Return of the Jedi, and when the remains of the Enterprise re-enter the atmosphere in Star Trek III. But if you can see the ISS from here on Earth, then surely a larger science fiction (or alien) spacecraft would be brighter still.

Figure 1. Since you can see the ISS from your backyard, you don’t need the force to detect something much bigger in the same orbit. Click for larger image.




Thanks to Jeff Russell’s Starship dimensions, scaled profiles of most major starships can be easily compared to the ISS. That’s all well and good, and we can estimate areas and visual magnitudes in the -7 to -10 range for various popular starships. But since there isn’t anything of that brightness in our skies, it doesn’t mean much, except to tell us that they would be easily visible from the back yard of anyone looking for them (assuming they aren’t in Earth’s shadow). But there is a useful celestial yardstick.

Figure 2. Relative apparent sizes of various spacecraft (and the ISS) when directly overhead in a 350 km low Earth orbit, compared to the apparent size of the Moon. The Moon is of course 1000 times larger and 1000 times farther away. Click for larger image.




The Moon, which has a radius of about 1740 km, is about 1000 times farther away than low Earth orbit. So a spacecraft 1000 times smaller (say, a flying saucer with a 1.7 km radius) would have the same apparent size when directly overhead. Thirty degrees above the horizon, it would appear half as big. In other words, the shape of kilometer-scale spacecraft, such as a Star Destroyer or a Babylon 5 capital ship, would be easily discernible to people below. The 24 km-wide flying saucers from Independence Day would appear to be seven times larger than the Moon, and would blot out the sun for up to 4 seconds as they passed in front of it.
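The back-of-envelope comparison above is easy to reproduce. A sketch, using the same round numbers as the post (1740 km lunar radius, 350 km orbit, 24 km saucer), not precise ephemerides:

```python
import math

# Angular sizes of objects overhead: the Moon vs. a hypothetical
# Independence Day saucer parked in a 350 km low Earth orbit.
MOON_RADIUS_KM = 1740.0
MOON_DISTANCE_KM = 384_400.0
ORBIT_ALT_KM = 350.0

def apparent_diameter_deg(diameter_km, distance_km):
    """Angular diameter in degrees for an object seen face-on."""
    return math.degrees(2 * math.atan((diameter_km / 2) / distance_km))

moon = apparent_diameter_deg(2 * MOON_RADIUS_KM, MOON_DISTANCE_KM)
saucer = apparent_diameter_deg(24.0, ORBIT_ALT_KM)

print(round(moon, 2))           # ~0.52 degrees
print(round(saucer / moon, 1))  # roughly 7-8 times the Moon's apparent size
```

The same function shows why a 1.7 km radius saucer overhead matches the Moon: its size-to-distance ratio is the same, so its angular diameter is too.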

So forget all the garbage you hear about radar jamming and government cover-ups. When the alien invasion fleet comes for us, we’ll be able to watch it from the back yard.

Figure 1 is from STS 118 and Return of the Jedi.

Tuesday, September 27, 2011

Isotopic wins the 2011 Arctic sea ice minimum competition


The winner of the third annual Arctic sea ice prediction pool is “Isotopic”, with a winning guess of 4567 +/- 100 thousand square kilometers.

As usual, Isotopic’s prize is the chance to nominate a blog topic upon which I will try to write.

Congratulations, and thanks to all who played.

Thursday, September 22, 2011

Real scientists study climate

There is often an argument, usually heard from the math/engineering wing of the global warming skeptic industry, that climate scientists are a separate and distinct group of researchers: a cabal who don’t do real science, and who train and study in isolation, cut off from the rest of the scientific endeavor.

This is generally not the case. Most of the climate scientists I know started out doing something else. Some worked in the gold mining or oil & gas industries. Some studied the formation of continents, or the origin of granite. Some were volcanologists, or modeled deep mantle convection. A few were not even geoscientists at all, but came from disciplines such as chemistry, or nuclear physics. There are some people who go the other way, and move from climate science into archeology, or astrobiology.

There are several reasons for this. First of all, the analytical tools used to study non-climatological processes can often be applied to climate questions. And more importantly, when scientific discoveries of all types are first made, it is not necessarily clear where that discovery will have the most impact. It is not unusual for something in a seemingly unrelated field to get picked up by climate research.

There is also the funding aspect. Here in Australia, there has been an increasing reluctance to fund basic research. Most climate science is considered applied study, not basic science, so there has been a real trend for researchers chasing the funding dollar to go into areas like climate, mining, or forensics, where funds are easier to obtain.

But the point is that as professional scientists wind their way through the various scientific inquiries that lie on their career paths, they don’t turn off the analytical parts of their brains when there is a climatological implication to their studies. Ultimately, climate science is just like any other sort of science, and it is studied using many of the same tools and methods as the rest of Earth science.

Saturday, September 17, 2011

How long has the Atacama been dry?

“I am flying home from Europe in late August with nothing but a notebook and the 2011 Goldschmidt conference Geology giveaway issue to keep me occupied. Using the old-fashioned method of reading and writing on paper, I will blog my way through the compilation of highlighted geochemistry papers as time allows. These will then be posted via time delay to keep the blog moving while preventing paper burnout.”

The Atacama desert, on the west coast of South America, is the driest desert on Earth. The high Andes mountains block moisture transport from the Amazon basin, and the cold Humboldt current offshore provides little evaporative moisture.

Dunai et al. (2005) attempt to determine whether the hyperarid conditions are ancient (early Miocene) or more recent (late Miocene) by looking at the cosmic ray exposure ages of easily eroded sediment.

Cosmic rays are extremely high energy protons which are generated beyond our solar system (ask an Astronomer for details). They are energetic enough to penetrate the atmosphere and the first few meters of rock when they strike the Earth. When they do hit rock, they can cause nuclear reactions with the atoms in the rock. One of the products of these reactions, 21Ne, can be measured using noble gas mass spectrometry. So the amount of excess 21Ne a rock has is proportional to how long it has been close to the Earth’s surface, and to the cosmic ray flux.
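The exposure-dating logic reduces to N = P × t when erosion is negligible. Here is a toy sketch; the production rate is an assumed round number of roughly the right order for cosmogenic 21Ne in quartz, not a calibrated value, and the sample concentration is invented:

```python
# Toy exposure-age calculation, assuming zero erosion and a constant
# cosmogenic production rate P. Real work corrects for altitude, latitude,
# shielding, and erosion; the numbers here are illustrative only.
P = 20.0  # assumed 21Ne production, atoms per gram of quartz per year

def exposure_age_yr(excess_ne21_atoms_per_g, production_rate=P):
    """Age follows from N = P * t when erosion is negligible."""
    return excess_ne21_atoms_per_g / production_rate

# A rock carrying 5e8 excess 21Ne atoms/g would imply ~25 Myr at the surface,
# comparable to the 20-30 Myr exposure ages discussed in the post:
print(exposure_age_yr(5e8) / 1e6)  # 25.0 Myr
```

Erosion breaks this simple proportionality: a steadily eroding surface reaches a steady-state 21Ne concentration, which is why erosion-sensitive landforms were targeted.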

Dunai et al.’s (2005) sample sites were specifically chosen to exclude areas where the outwash from the high Andes east of the desert would erode or cover the local rocks. Only local rainfall could erode the selected areas, so only local, medium elevation, near-shore precipitation (or lack thereof) was relevant to the erosion rates.

Their results show that most of the rocks they sampled have been at or near the surface for 20-30 million years. These are among the oldest exposure ages in terrestrial rocks. The implication is that there has been negligible erosion since that time.

On the other hand, I wish the paper made more of an effort to explain why the results given were not within error of each other. Call me old fashioned, but a data table would be nice as well.

The other question that they ask is which came first, the aridity or the uplift? It is easy to see how uplift causes aridity- the rain shadow gets stronger. How aridity causes uplift is less obvious, and the reference given is not available on this aircraft. But the general idea (based on context) seems to be that with no fluvial input to the subduction trench, it accumulates very little sediment. Without sediment, the rocks are stronger, and can withstand more stress, pushing the mountains higher.

The problem with this conclusion is that it requires knowing the sediment flux from the entire drainage area. Presumably the sediment transport would be controlled mainly by erosion of the high (and higher precipitation) Andes.

Dunai et al. (2005) specifically chose a site that did not record the sediment flux from the eastern, mountainous part of the drainage basin. Instead they chose to focus on local conditions. By excluding the most important potential sediment source, they put themselves in the worst possible position to answer questions about sediment transfer in the rest of the Atacama desert, including total transport to the trench.

Dunai, T., González López, G., & Juez-Larré, J. (2005). Oligocene–Miocene age of aridity in the Atacama Desert revealed by exposure dating of erosion-sensitive landforms. Geology, 33 (4). DOI: 10.1130/G21184.1

Tuesday, September 13, 2011

An example of peer review

Dear Editor Smith,
I return the Doe et al. manuscript number 5623646 with numerous comments. In my opinion, the manuscript will not be fit for publication until all the flaws described below are corrected:

Title

The title of this paper does not reflect the sort of study which I would like to see done on this material. Please instruct the authors to change it, instead of using the title to push their own agendii.

Introduction

While the paper is nominally about solid solution in simple oxides, the narrow focus of the introduction has resulted in a failure to cite the well-known avian migration papers of Lemming et al. (2003) and Lemming and Aardvark (1998), both of which ought to be mentioned for completeness. Without tying mineral solid solution to bird migration (ibid), econometrics (Lemming and Wesson 2002), and mass spectrometry (Lemming et al. 2009), the authors fail to cite as wide a selection of my papers as they otherwise could. This indicates an inability to place the science in the broader context of society. Without this context, their results are neither novel nor interesting.

Methods

Like the title, the methods of this paper fail to pursue the angle of inquiry which I would have used, had I their skillsets and funding. This is obviously a serious error. Please require the authors to have done something other than the experiments whose results they are reporting. They would do well to cite Lemming et al. (2009) for the analytical procedures I prefer.

Results

In the first experiment, where the precision is twice as bad as Lemming and Stoat (2006), the data is obviously not precise enough to be worthy of presentation. The second experiment, with precision twice as good as Lemming and Stoat (2006), is obviously too good to be true, and must be the result of incorrect error propagation or outright forgery.

Discussion

Once again, the lack of citations to my unrelated papers is a serious flaw. In addition, the authors insist on drawing conclusions based on their data, and not my preconceptions of where the field was 15 years ago. Ignoring the work that they misguidedly performed renders the rest of their study irrelevant. In fact, their constraints and discussion of the experiments they DIDN’T do is practically nonexistent. This is clearly unscientific. There is a problem of nomenclature as well. The proposed mineral name in this paper is completely unacceptable. I require the authors to name their new mineral after my pet hamster instead.


While this paper is not suitable for publication in a top rate journal, it will be perfect for your rag, providing that the above revisions are undertaken.

Sincerely,
Dr. Lemming

Saturday, September 10, 2011

How do extinction events kill so effectively?

“I am flying home from Europe in late August with nothing but a notebook and the 2011 Goldschmidt conference Geology giveaway issue to keep me occupied. Using the old-fashioned method of reading and writing on paper, I will blog my way through the compilation of highlighted geochemistry papers as time allows. These will then be posted via time delay to keep the blog moving while preventing paper burnout.”

The dinosaurs are still alive!

This was the conclusion reached by a group of my fellow undergrads way back in the Pliocene when I was in college. As an independent study project, they investigated all the possible effects of a giant meteorite impact (dust, fires, tsunami, etc.), and concluded that none of these effects had the reach or duration to cause widespread global extinction.

Indeed, the actual kill mechanism is generally armwaved and/or hyperbolized (no sunlight for months, scorching acid rain, death from the skies!) under the circular reasoning that, “since everything died, these effects must have been lethal.” Understanding how entire niches get wiped out is actually rather tricky.

Enter Kump et al. (2005), who described a possible kill mechanism: poisoning from massive releases of H2S gas from an oxygen-starved ocean.

In the absence of oxygen, bacteria will happily metabolize sugars by turning sulfate (SO4--) into sulphide (S--), with the oxygen liberated from the sulfate used to burn sugar into CO2 + H2O. In the absence of iron or other base metals, this sulphide becomes H2S in an aqueous system like the ocean. The ocean is full of sulfate; it is the second most common dissolved salt anion, after chloride.
So under oxygen-free conditions, generating significant amounts of H2S is easy. Once this H2S mixes with oxygen-rich water, it oxidizes back into sulfate. Water with significant H2S content is called “euxinic”. While the modern ocean is well oxidized throughout, apart from a few closed basins like the Black Sea, in the geologic past some or all of the deep water may have been euxinic.

In their study, Kump et al. (2005) do two things. First, they determine the conditions under which H2S-bearing waters can upwell to the surface faster than oxygenated near-surface water can break down the H2S. This is important because oxygen and hydrogen sulphide react easily in water, but if the H2S exsolves into the atmosphere, then it can co-exist metastably with O2 gas in the air.

The second thing that Kump et al. (2005) do is to chemically model what happens to this H2S once it gets into the atmosphere, how it is broken down, and what other changes occur as a result.

Because H2S and O2 do not directly react under normal atmospheric conditions, H2S oxidation in the atmosphere is generally performed by the OH and O radicals, which are in turn generated by the UV or radiological breakdown of H2O and O2 molecules. These are the same radicals that break down methane (CH4), carbon monoxide (CO), and many other metastable gasses.

What Kump et al. (2005) find is that if the H2S flux into the atmosphere exceeds the present flux by about a factor of 1000, then the H2S accumulates faster than the OH and O radicals can break it down. This leads to a step-function increase in H2S atmospheric lifetime and concentration, and a drop in O and OH abundance.

This depletion of O and OH, in turn, reduces methane breakdown, so that methane concentrations and mean atmospheric lifetimes also increase. In addition, the lack of O means that ozone production is curtailed, so the ozone layer is reduced. The combination of reduced ozone protection and direct H2S toxicity is touted by Kump et al. as a highly effective kill mechanism, especially for land creatures and for sea creatures in near-surface waters.

Kump et al. then go on to show that there is evidence for anoxic waters reaching the surface during a number of Phanerozoic extinction events, and further hypothesize that the widespread euxinia in the Proterozoic inhibited the development of land life as a sort of “permanent extinction event” condition that persisted for most of Earth’s history, until mysteriously disappearing in the Cryogenian.

The H2S-based kill mechanism (catchily coined as a “chemocline upward excursion”) is way outta my field of expertise. So I don’t know if there are reasons outside of my knowledge base to reject it out of hand. However, the nice thing about this paper is that it proposes a mechanism with specific, testable effects which we analysts can go looking for. While determining paleo-ozone and methane levels could be a bit tricky, the study of paleoeuxinity is a significant and ongoing field of study. I don’t know if this paper has withstood the test of time, but I suspect that it has inspired a whole slew of clever experiments. What more could we ask of the theoreticians?
Kump, L., Pavlov, A., & Arthur, M. (2005). Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia. Geology, 33 (5). DOI: 10.1130/G21295.1

Friday, September 09, 2011

The National Hurricane Center's Y2K bug

The following is the current forecast discussion for Hurricane Katia. Note the last line:

ZCZC MIATCDAT2 ALL
TTAA00 KNHC DDHHMM

HURRICANE KATIA DISCUSSION NUMBER 45
NWS NATIONAL HURRICANE CENTER MIAMI FL AL122011
500 AM AST FRI SEP 09 2011

THE CLOUD PATTERN CONTINUES WELL ORGANIZED AND IN FACT A DRIFTING
BUOY NEAR THE CENTER OF THE HURRICANE RECENTLY REPORTED A MINIMUM
PRESSURE OF 968 MB. THE INITIAL INTENSITY IS KEPT AT 75 KNOTS.
HOWEVER WEAKENING IS INDICATED SINCE THE HURRICANE IS ALREADY
REACHING COOLER WATERS AND KATIA IS FORECAST TO BECOME
POST-TROPICAL IN ABOUT 36 HOURS.

THE HURICANE IS MOVING TOWARD THE NORTHEAST OR 050 DEGREES AT 21
KNOTS. SINCE THE HURRICANE IS ALREADY EMBEDDED WITHIN THE
MID-LATITUDE WESTERLIES....IT SHOULD CONTINUE ON THIS GENERAL TRACK
WITH AN INCREASE IN FORWARD SPEED FOR THE NEXT FEW DAYS.

NO 96-HOUR POINT IS BEING GIVEN BECAUSE FORECAST POINTS IN THE
EASTERN HEMISPHERE BREAK A LOT OF SOFTWARE.


FORECAST POSITIONS AND MAX WINDS

INIT 09/0900Z 37.6N 67.5W 75 KT 85 MPH
12H 09/1800Z 39.5N 64.5W 75 KT 85 MPH
24H 10/0600Z 42.0N 55.5W 70 KT 80 MPH
36H 10/1800Z 45.5N 43.0W 60 KT 70 MPH...POST-TROP/EXTRATROP
48H 11/0600Z 49.5N 30.5W 65 KT 75 MPH...POST-TROP/EXTRATROP
72H 12/0600Z 56.5N 10.5W 50 KT 60 MPH...POST-TROP/EXTRATROP
96H 13/0600Z...EAST OF ZERO DEGREES LONGITUDE

$$
FORECASTER AVILA

NNNN

Thursday, September 01, 2011

One day posters suck

We are talking about conference posters here, not posts on blogs. In general, there are two main types of presentations at scientific conferences: talks and posters. Talks are generally supposed to appeal to a large number of people, and feature limited feedback between individual audience members and the speaker (aside from the occasional front-row tantrum, of course).

Posters are generally a more personal affair, with the poster presenter and the one or two people listening engaging back and forth for as long as it takes to settle their differences.

Back in the good old days, when Dimetrodon was our most advanced predator and mollusks ruled the seas, posters stayed up for the entire conference. This was great, because speakers could put the nitty-gritty details of their methodology on their co-authors' posters, and posters describing methods used in a number of different talk sessions could all be centrally located. When I gave a poster at Goldschmidt in 2006 (my last presentation before starting work in exploration), I had one person come by during the dedicated poster presentation time, Monday morning. But as the conference continued, and more and more people discussed interesting science based on what the lab was doing, more and more people started to swing by to discuss things or ask me to walk them through the poster, and by Friday afternoon we had engineers from different mass spectrometer factories dueling with whiteboard markers over the details of the ion optics.

Nowadays, this can’t really happen. At all of the major conferences I have been to since then, posters have been a one day affair. They are put up the day of the discussion, and taken down afterwards. And I reckon that this is an inferior system.

Firstly, it makes coordinating posters that relate to multiple sessions difficult. Even within a session, if the timing of the talks and posters is not arranged well, you can have a speaker referring people to details in a poster that has already been taken down.

Of course, the flip side to one day posters is that you can have four times as many of them, with commensurate increases in attendance (and fees?). But is it really necessary to bring more and more people together for less and shallower interaction? I thought that was what the internet was for.

Tuesday, August 30, 2011

How odd is our solar system?

One of the most basic observations about the planets in our solar system is that there are two basic types. In the inner solar system, we have four rocky planets with radii less than 6500 km. In the outer solar system, there are four gaseous planets, with radii larger than 24,000 km. One long-held implication of this division is that there is some sort of significance in the lack of planets intermediate in diameter between Earth and Neptune.

One of the most striking observations from the list of planet candidates from the Kepler mission is just how unusual our terrestrial planetary size distribution is. The Kepler planetary radius distribution (figure 1) peaks in the middle of this gap; almost 70% of Kepler planet candidates are larger than Earth but smaller than Neptune.

Figure 1. Probability distribution of Kepler planet candidate radii.


So our solar system is unusual. But how unusual? A back-of-the-envelope calculation will tell us. If we accept the Kepler figures, then only 30.8% of planets are, like ours, either smaller than 6500 km or larger than 24,000 km. So the chance of an eight-planet system having zero planets in the intermediate size range is 0.308^8. This works out to about one in twelve thousand. So for every eight-planet system like ours, there should be 11,999 with at least one intermediate-sized planet.
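The arithmetic is easy to check; the only input is the 30.8% figure quoted from the Kepler candidate list:

```python
# If 30.8% of planets fall outside the Earth-to-Neptune size gap, the
# chance that all eight planets in a system do so is 0.308**8.

p_outside_gap = 0.308
p_all_eight = p_outside_gap ** 8
odds = 1 / p_all_eight  # about one in twelve thousand
```

This assumes planet sizes within a system are independent draws from the Kepler distribution, which is almost certainly not true in detail, but it is fine for an envelope-back estimate.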

With a hundred billion stars in the galaxy, there are still bound to be quite a few solar systems like ours. But with only about 1800 known planets and planetary candidates discovered so far, it is unlikely that we will discover a solar system analog any time soon.



Sunday, August 28, 2011

Time away


The lemming family has been on holidays. The geomorphologically curious are welcome to guess LLLL's location in the picture above, but the only hint I will give is that everything in the photo aside from the atmosphere is geologically young in the grand scheme of things, having formed in the last few percent of Earth's history. Scientifically meaningful content will return as time permits.

Thursday, August 25, 2011

Mass–independent isotopic fractionation

The whole point of geology is to figure out what happened in the past based on the rocks from that time which are still around today. It isn’t actually about the rocks. It’s about the story. The rocks are just the publishing medium. And the craft of geology is learning to read the language of stones.

Similarly, the purpose of geochemistry is to determine the story told by a rock’s chemical composition. The way we do this is somewhat counter-intuitive. We generally search for chemical relationships that are hard to change. The reason for this is that a ratio that is easy to change doesn’t tell us very much. The potassium/platinum ratio, for example, can be changed by just about any process, so measuring it doesn’t tell us what process was occurring.

This is why geochemists like to study systems like noble gasses, rare earth elements, and isotopes. These things are generally changed by only a few processes, so if a change is seen in a rock, there are relatively few processes that could have made the change.

For example, isotopes are atoms of the same element with different masses. They generally have similar chemical properties- all sulfur isotopes are still sulfur- so only a few processes can change them: evaporation, digestion by bacteria, and diffusion are some examples. This is the basis of all stable isotope geochemistry: using the limited number of possible processes to pin down a story by looking at isotopic changes.

In general, when isotopic ratios change, that change is mass dependent. That is, the change is a function of the difference in mass. For sulfur, for example, the change in the 33S/32S ratio should be about half of the change in the 34S/32S ratio. Mass-independent isotopic fractionation refers to a process that fractionates the different isotopes by a ratio that is not strictly mass-dependent. So instead of the 33S/32S change being half the 34S/32S change, it might be 0.6. Or 0.3.
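This deviation is usually expressed in capital-delta notation. A minimal sketch, assuming the commonly quoted mass-dependent exponent of 0.515 for sulfur (the exact exponent varies slightly from process to process):

```python
# Delta-33S quantifies mass-independent fractionation: the deviation of
# measured d33S from the value predicted by mass-dependent behaviour.

def capital_delta_33s(delta33s, delta34s, exponent=0.515):
    """Delta-33S in per mil: measured d33S minus the mass-dependent
    prediction derived from d34S."""
    predicted = 1000.0 * ((1.0 + delta34s / 1000.0) ** exponent - 1.0)
    return delta33s - predicted

# Mass-dependent sample: d33S ~ 0.515 * d34S, so Delta-33S ~ 0
md = capital_delta_33s(5.15, 10.0)
# Archean-style anomaly: d33S well off the mass-dependent line
mif = capital_delta_33s(8.0, 10.0)
```

Rocks with Delta-33S well away from zero carry the mass-independent signal discussed below.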

The number of causes of mass independent fractionation is exceedingly small- way smaller than the number of effects that cause normal mass dependent fractionation. So if mass independent fractionation is observed, you pretty much know that a particular unique process must have happened.

Most mass independent isotopic work at present is done in sulfur. This is because mass-independent fractionation of sulfur is ubiquitous in rocks from the first half of the Earth’s history, but is rare to nonexistent since that time. So this is a powerful tool that tells us that the Earth’s surface was fundamentally different in Archean time; a process (photolysis of atmospheric SO2) was occurring from 3800 to 2450 million years ago, and hasn’t happened since. SO2 is not stable in the presence of oxygen, and photolysis requires UV light that is currently blocked by the ozone layer, so the sulfur isotopic record is the best tool we have for determining just how different the early atmosphere was from the one we breathe today.

Tuesday, August 09, 2011

If you think this blog is inactive, visit a craton.

One unfortunate side effect of the wireless and handheld internet revolution of the past five years is that it has made internet cafes harder to find. So expect cratonic style inactivity to continue for a web epoch or two.

Tuesday, July 26, 2011

Peugeot trip computer overestimates fuel efficiency.

Two years ago, I suspected that the fuel consumption my car’s trip computer calculates was leaner than the actual consumption. Two years and 50,000 kilometers later, this appears to be the case. On average, the trip computer states that the car uses 0.75 fewer liters of fuel per 100 km than it actually consumes. That’s about 5 miles per gallon in American units. For whoever cares, the car averages 6.3 liters per 100 km, or 37 mpg, and drags a family of 3.4 people around town and on moderate, occasional road trips.
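For anyone checking the unit conversion, 235.215 is the constant relating liters per 100 km to US miles per gallon:

```python
# mpg (US) = 235.215 / (L/100 km)

def l_per_100km_to_mpg(l_per_100km):
    return 235.215 / l_per_100km

actual = l_per_100km_to_mpg(6.3)          # ~37 mpg
stated = l_per_100km_to_mpg(6.3 - 0.75)   # the trip computer's optimistic figure
overestimate_mpg = stated - actual        # ~5 mpg
```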


Figure 1. Actual fuel consumption (blue) vs. stated fuel consumption (pink).

Saturday, July 16, 2011

Terminology Question

Is there a less clunky term for "post-Archean"? Obviously, post-Archean has been around since at least the PAAS definition from Nance & Taylor (1976), and it has obvious utility as "the time in Earth's history when the atmosphere contained oxygen", but I was wondering if there is an officially recognized word for it. Anyone?

Saturday, July 09, 2011

Is this how circles are supposed to work in Google plus?


Google plus circles can be used to classify people in order of perfidy. Click to enlarge.



Or here is a blow-up if you didn't click the one above.



Warning: Associating geologists who study surface processes with "basement" may cause offence.

Friday, July 08, 2011

2011 Arctic Sea Ice minimum predictions


The 2011 sea ice extent betting pool has now closed. Unlike previous years, we no longer have a distinct multimodal betting population. However, despite my advertising the contest on denialist shill sites, nobody has guessed a final ice extent anywhere near or above the 1979-2000 average (on right). On the other hand, we no longer have large guess populations off-screen to the left, as we have had in previous years.

Tune in this October to see who wins.

Wednesday, July 06, 2011

Peugeot trip computer overestimates fuel efficiency.

Two years ago, I suspected that the fuel consumption my car’s trip computer calculates was leaner than the actual consumption. Two years and 50,000 kilometers later, this appears to be the case. On average, the trip computer states that the car uses 0.75 fewer liters of fuel per 100 km than it actually consumes. That’s about 5 miles per gallon in American units.


Figure 1. Actual fuel consumption (blue) vs. stated fuel consumption (pink).


Figure 2. Overestimation of mileage (mpg).

Monday, July 04, 2011

2011 Arctic sea ice extent minimum prediction pool

Update:
The contest so far:


Neven and Peter are reminded that contestants who have been mathematically eliminated at the 2 sigma level are entitled to guess again. It is up to you guys to keep track, though. This post will remain on top until the contest closes.

original post:

The 2011 Arctic sea ice extent minimum prediction pool is now open. A reminder that this is a competition on extent, not coverage.

The 2010 summed guess curve and final result are shown below:


Guesses are to be in the form of extent and sigma (a mathematical measure of uncertainty), in thousands of km2. You may use decimal places if you insist.

Your guess will define a Gaussian curve.

The function with the highest value for x=minimum daily measured ice extent (from IARC-JAXA) wins.
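A sketch of how such scoring might work; the entries shown are hypothetical, not real contestants' guesses:

```python
# Each guess (mean, sigma) defines a Gaussian; the entry whose curve is
# highest at the observed minimum extent wins.
import math

def gaussian(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def winner(guesses, observed_extent):
    """guesses: dict of name -> (mean, sigma), in thousands of km2."""
    return max(guesses, key=lambda name: gaussian(observed_extent, *guesses[name]))

entries = {"A": (4300, 200), "B": (4600, 100)}
champ = winner(entries, 4500)  # "B": closer mean and tighter sigma
```

Note that a tight sigma only pays off if the guess is close; an overconfident miss scores worse than a broad, honest guess.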

See the 2009 announcement, opening, and final curve for details.

This contest will close much sooner than last year's. Guesses must be submitted by the time the Earth reaches aphelion in its orbit, which the internet tells me is 3 pm on July 4 (UTC). Trash talking, dissembling, and boasting in the comments section is still encouraged.

The prize, as always, is the choice of a blog topic on which I will write.

Sunday, July 03, 2011

Wasting time on the internet?

Looking for something to read? Sorry, I'm running a mass to field calibration so that my isotopes appear at the correct apparent mass for tomorrow's visitor. I can't entertain you this evening. But if you're thinking you might want to do something vaguely useful to society, and you know something about geology (and I know a lot of y'all do), head over to Wikipedia's WikiProject Geology and see if you can apply your expertise to educating the world. It will be far more interesting than determining that mass 238.051 amu is centered in the detector when the magnet has a field value of 2219.269 gauss. Trust me. I'm thrilled because I'm starting at the top of the table and working down.
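For the curious, the mass-field relation being calibrated can be sketched under idealized assumptions (fixed accelerating voltage and turning radius, no hysteresis; real calibrations use many reference peaks, not the single pair quoted above):

```python
# For an idealized magnetic sector, mass scales with the square of the
# field: m = k * B**2. One calibration point (mass, field) fixes k, and
# fields for other masses follow.
import math

ref_mass, ref_field = 238.051, 2219.269  # amu, gauss (the pair from the post)

k = ref_mass / ref_field ** 2

def field_for_mass(mass_amu):
    return math.sqrt(mass_amu / k)

b_206 = field_for_mass(206.0)  # field to center mass 206, same assumptions
```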

Tuesday, June 28, 2011

Taking the niggers out of Huck Finn

At the beginning of this year, there was an internet brouhaha over the politically correct decision to replace all instances of the word “nigger” with "slave" in a forthcoming edition of the American classic novel, “The Adventures of Huckleberry Finn”. I have been vehemently opposed to political correctness ever since I was told at college orientation that exercising my first amendment rights was grounds for dismissal from university. So when I read about this controversy, I chose to react in the most politically incorrect manner possible: I decided to take the time to re-read the book instead of spouting off instantly and emotionally. Six months later, the internet has long since forgotten about this event, and I’m finally ready to comment.

Being politically incorrect, my first reaction was that, if they were going to start substituting words, they ought to be adding slurs, not removing them. For example, the text of Huck Finn contains 51 instances of “white”. What if we were to replace all these with “trailer trash”?

The answer, of course, is that the story starts to make less sense. For example, consider the introduction of Huck’s father:

There warn't no color in his face, where his face showed; it was trailer trash; not like another man's trailer trash, but a trailer trash to make a body sick, a trailer trash to make a body's flesh crawl -- a tree-toad trailer trash, a fish-belly trailer trash.

Since more than three quarters of the instances of “white” in the story do not refer to race, it requires a bit of picking and choosing to get this substitution to work:

And here comes the trailer trash woman running from the house, about forty-five or fifty year old, bareheaded, and her spinning-stick in her hand; and behind her comes her little trailer trash children, acting the same way the little niggers was going.

Of course, trailers are an anachronism, and as far as epithets go, “trailer trash” is about as mild as they get: the only people likely to be offended by it probably haven’t discovered the internet yet. And it isn't even politically incorrect, since the only people it is politically correct to slur are the uneducated rural whites. So let’s try something else.

One obvious use of politically uncorrect substitution is to determine whether words are being used derogatorily. For example, if "lady" is used in a class warfare, resentful, or sarcastic context, then substituting a politically incorrect derogatory synonym (of which there are many) should preserve the tone of the passage. Performing the substitution allows us to test this hypothesis, which the very first of the 12 substitutions finds wanting:

Three big men with guns pointed at me, which made me wince, I tell you; the oldest, gray and about sixty, the other two thirty or more -- all of them fine and handsome -- and the sweetest old gray-headed cunt, and back of her two young women which I couldn't see right well.

Rachel Grangerford (the grey-haired lady/cunt) is depicted as a sympathetic, motherly character. Even in outback Australian trucking circles, where every fourth word is cunt, the above passage would be incongruous. She is the subject of five of the first seven instances of ‘lady’, and the other two refer wistfully to drawings made by Emmeline Grangerford, another sympathetic character. Most of the rest of the uses of ‘lady’ refer to fine-looking circus performers, who are not viewed particularly badly.

In fact, there are relatively few horrible female characters in Huck Finn. The least sympathetic would probably be Miss Watson, the evangelizing sister of Huck’s legal guardian who tries to sell Jim downriver. The following passage retains most of its original meaning, for example:

I noticed dey wuz a nigger trader roun' de place considable lately, en I begin to git oneasy. Well, one night I creeps to de do' pooty late, en de do' warn't quite shet, en I hear de old cunt tell de widder she gwyne to sell me down to Orleans, but she didn' want to, but she could git eight hund'd dollars for me, en it 'uz sich a big stack o' money she couldn' resis'. De widder she try to git her to say she wouldn' do it, but I never waited to hear de res'. I lit out mighty quick, I tell you.

Of course, one has to be careful doing an automatic search and replace of all 110 instances of “miss”. Firstly, cunt is not a verb, and secondly, the story is set on the Missouri bank of the Mississippi river.

But I digress. The uproar about this book is not the cunts. It is the niggers. And if the politically correct way of addressing this is to turn all instances of nigger to slave, then the politically anticorrect response should be to change all the original uses of the word slave to nigger. And this is where things get interesting.

Despite the word nigger appearing in the text 212 times, slave only appears 11. Five of those are in “slavery”, and another refers to “slave country”. The remaining five are related to Jim’s sale to the Phelps family by the King, Huck stealing him, and the news that Miss Watson freed Jim in her will on account of feeling bad about trying to sell him.

The word slave is only used to specifically refer to the condition of someone (usually Jim) being owned. It is not used to refer to people as human beings. In the original text, it is simply not interchangeable with nigger, or black, or any other reference to people, African-American or otherwise.

Slavery is an institution that quite literally reduces people to mere property- it is the ultimate form of objectification. So the original text's refusal to label the people subjected to this institution with the word slave is probably important, given that the author had hundreds of chances to do so. In this way, the text condemns the institution in a way that would be lost in the politically correct rewording.

Of course, "The Adventures of Huckleberry Finn" is not the only form of literature to have been deniggered. For example, the well known nursery school rhyme:
Eeny, meeny, miny, moe,
Catch a nigger by the toe.
If it hollers let him go,
Eeny, meeny, miny, moe

has been edited to replace nigger with tiger. To the best of my knowledge, this did not precipitate howls of outrage among the self-appointed internet literati. However, such a substitution might be a bit awkward for Huck Finn:
Tigers would come miles to hear Jim tell about it, and he was more looked up to than any tiger in that country. Strange tigers would stand with their mouths open and look him all over, same as if he was a wonder. Tigers is always talking about witches in the dark by the kitchen fire; but whenever one was talking and letting on to know all about such things, Jim would happen in and say, "Hm! What you know 'bout witches?" and that tiger was corked up and had to take a back seat. Jim always kept that five-center piece round his neck with a string, and said it was a charm the devil give to him with his own hands, and told him he could cure anybody with it and fetch witches whenever he wanted to just by saying something to it; but he never told what it was he said to it. Tigers would come from all around there and give Jim anything they had, just for a sight of that five-center piece; but they wouldn't touch it, because the devil had had his hands on it.


It may be that politically correct readers wish to substitute something other than slave or tiger throughout this novel; feel free to leave suggestions, with examples of replaced text, in comments.

A more general point is that experimental search and replace is an interesting tool of textual analysis. Obviously it can only be used in the case of works available in an editable format, which for practical purposes means the public domain. But it shows that the traditional scientific method of exploring a system by changing one variable at a time can be applied to literature. There has been much in the news of late about the controversial practice of statistically mining literature. So experimentation is the next obvious step in the sciencification of literary analysis.
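For anyone who wants to repeat the experiment on another public-domain text, a minimal whole-word substitution sketch (the sample sentence is my own, not from the novel):

```python
# Whole-word, case-insensitive search and replace, with a match count,
# for running substitution experiments on any editable text.
import re

def substitute(text, old_word, new_word):
    pattern = re.compile(r"\b%s\b" % re.escape(old_word), re.IGNORECASE)
    return pattern.subn(new_word, text)  # (new_text, number_of_replacements)

sample = "The white fence was white-washed by a white woman."
new_text, n = substitute(sample, "white", "trailer trash")
```

The word-boundary anchors are what save you from accidentally cunting the Missouri bank of the Mississippi.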

Systematically changing the language of various masterpieces is a useful analytical tool. But I suspect that anti-science traditionalists will see this technique as blasphemous vandalism which is even more offensive than the derogatory words used freely in this post.

Saturday, June 25, 2011

Sphene

Is my favorite geological word. Or rather, my favorite formerly geologic word. From a lawyeristic point of view, it hasn’t been a geologic word since 1982, despite having been the preferred name for CaTiSiO5 for the 4,566,999,971 years prior to that date. These days, you don’t hear sphene spoken of much, as most of us who mutter it are busy yelling at the young whippersnappers to get off our psilophytopsid lawns. However, it has not totally disappeared from the scientific literature, despite the best efforts of the IMA to discredit it. And the materials scientists might actually still prefer sphene (God bless them).

Of course, most mineralogists these days dutifully go along with the official name (the “T-word”, since despite my use of cunt, nigger, and fuck in this blog, I do draw a line at really offensive words, like t*&#%ite). And why shouldn't they use the official name? They are just following orders. But there are still a few cowboy rock smashers around who got into this field because we were never particularly good at following the rules. And while I am not so old-fashioned as to refer to element 41 as columbium, I do prefer sphene.

The T-word is a stupid name. The mineral was known (and called sphene) long before the element titanium was discovered. The name is derived from the Greek word for wedge, which describes the shape perfectly. In contrast, commercial titanium is mined from ilmenite or rutile, not sphene. And the element was originally discovered (independently) by processing ilmenite (in the UK) and rutile (in Germany) in the 1790s.

Sphene has a domain name.
The t-word does not.
Sphene is pretty.

Finally, any sphene used in geochronology was almost certainly sphene when it crystallized, and only morphed into the T-word at a very late stage in its evolution. Fortunately, the sphene/T-word transformation does not upset the U-Pb isotopic system, so that dating of sphene is still possible today.

Sunday, June 19, 2011

Three reasons that Conservatives should fight global warming

Here in Australia, as in America, the conservative branch of politics remains firmly opposed to meaningful action to slow climate change. This is unfortunate, because climate change offers several opportunities to conservatives, were they to move aggressively to transition away from fossil fuels. I will list three below.

Note that on occasion, liberal opinionators will describe reasons that conservatives should act on climate change. Those reasons generally boil down to something along the lines of “well, basically they should stop being so conservative.” This is not one of those lists. Instead, I will describe three ways in which action on climate change would help conservative causes at the expense of the left.

1. Unravel the unions

The fossil fuel industry is generally more heavily unionized than the general population in most countries. In union-poor countries like the US, sectors like coal are among the few in private industry where unions remain relevant. In heavily unionized countries like Australia, union penetration of energy production is extremely high.

In contrast, many renewable energy companies are small, entrepreneurial, and union-free. As far as I know, there has never been a crippling strike by the united brotherhood of rooftop solar panel installers, because no such organization exists. If the left is allowed to guide the transition from fossil fuels to renewable energy, they will probably find a way of transferring union power into the new field. The shift away from fossil fuels therefore offers conservatives a once-in-a-generation opportunity to destroy unions by closing down the industries in which they operate.

2. Neuter the NIMBYs

Transitioning from fossil fuels to a renewable economy will require a lot of development. Power generation and transmission facilities will all have to be built, and built fast, in order to effect meaningful change before irreparable damage is done to the polar ice caps. This can’t happen if small numbers of highly connected recalcitrant people have the power to block development. A prime example of this is the Cape Wind fiasco in Massachusetts, where local opposition led by the liberal Kennedy political dynasty has stymied the project for a decade and added billions to its cost. Rapid and effective changes to the energy system will not be possible if the NIMBY obstruction industry is allowed to continue blocking it, so they will have to be disempowered.

3. Embarrass the United Nations

The United Nations has been trying to act on climate change for almost 20 years. Under its Kyoto protocol, emissions have actually increased faster than what was considered the worst case scenario at the time. This is due mostly to the industrialization of Asia, where the flight of Western industry has created wealth and opportunity for billions of people in countries that were once destitute. While this is great for Asia, it shows that the UN plan was completely useless in terms of slowing CO2 emissions. A smart, effective, conservative-based, locally controlled emissions plan that immediately cuts into emissions would show to the world that in general, the most effective thing the UN can do is to get out of the way.

Wednesday, June 15, 2011

One hundred major impacts: part two: the deep ocean



A few months ago, I guaranteed the readership of the Lounge that none of them would be killed by a meteorite impact. In laying out the estimates that allowed me to do this, I took an equal area map and bombarded it with one hundred 400m projectiles. Objects of this size hit the Earth about once every 100,000 years, and are locally devastating but globally insignificant, so this seemed like a good way to look at where an impactor “big enough to wipe out LA” was actually likely to land.

Of these 100 impactors, 71 landed in the ocean. 19 of these were within 1000 km of the coastline of an inhabited continent (i.e. not Antarctica), while the others were far out in the ocean basins.

For impacts more than 1000 km offshore, the impact effect calculator of Marcus, Melosh, and Collins suggests that the main effect would be a tsunami. The tsunami details are not in their linked paper, and the amplitudes vary significantly, but the maximum amplitude at 1000 km from the impact area is about 4 meters or smaller. This is broadly similar to that of a magnitude 9 earthquake such as those that struck Japan this year and Sumatra (and the Bay of Bengal) four years ago. The tsunami takes about 1.8 hours to travel 1000 km, so warning times would depend greatly on detecting the impactor in space and seeing the fireball with antiproliferation satellites (this impactor is equivalent to a 3000 megaton bomb, so the fireball would be far larger than that of a nuclear weapon). The seismic signal of a hit to the deep ocean would actually be fairly minor, as most of the energy would be absorbed by the water.

Of course, the main difference is timing. Half of the impacts in this simulation were in the deep ocean, so with an impact repeat rate of 1 every 100,000 years, we would expect one deep ocean impact every 200,000 years. In contrast, a magnitude 9 earthquake strikes about once every 25 years. So over a million year period, we would expect 40,000 tsunamis from earthquakes, and five from deep ocean impacts.
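For those keeping score at home, the recurrence arithmetic above checks out (all the intervals are the rough figures quoted in this post, not new data):

```python
# Back-of-envelope check of tsunami source frequencies over a million years.
YEARS = 1_000_000
impact_interval = 100_000    # one 400 m impactor per 100,000 years
deep_ocean_fraction = 0.5    # half of the simulated impacts hit deep ocean
quake_interval = 25          # one magnitude 9 earthquake per ~25 years

deep_ocean_impacts = YEARS / impact_interval * deep_ocean_fraction
quake_tsunamis = YEARS / quake_interval

print(deep_ocean_impacts)  # 5.0
print(quake_tsunamis)      # 40000.0
```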

Monday, June 13, 2011

How a carbon tax should work

I have been fairly critical of the current Australian Government’s approach to a range of issues, in particular, global warming. The current proposal is for a carbon tax. While my preference for dealing with carbon emissions is to let the damages get worked out in the courts, a carbon tax can work OK, if done well. I don’t have confidence that the current government can do anything well, but it is rude to criticize unconstructively, so the least I can do is propose a sensible carbon tax which the government and the special interests who run it can ignore. So here we go:

The goal of a carbon tax is to reduce climate change, preferably to the point where the Greenland and West Antarctic ice caps don’t melt and flood Australia’s world-class beaches.

A carbon tax therefore needs to reduce carbon emissions. Australia is a large per-capita emitter, and is also a large carbon exporter, in the form of coal, and to a lesser extent, natural gas. So the basic idea of the carbon tax is to guide the transition from carbon-based energy infrastructure to lower carbon forms. In order to do this, it has to be broad-based, scaled to the potential threat, and predictable over a several decade timescale.

Luckily, there is an easy way to do this. The climate scientists tell us that atmospheric CO2 concentrations greater than 350 ppm are likely to cause troublesome warming, with higher concentrations bringing more trouble faster. We are currently at 388 ppm. So, CO2 emissions are not a problem if there isn’t much CO2 in the atmosphere, but become more problematic, the farther over the safe limit we are. Thus, the sensible thing to do is to scale the carbon tax based on the atmospheric concentration.

The easiest and most transparent way to do this is to simply tax CO2 emissions at one dollar per ton, for each ppm in the atmosphere over 350. So at the current level of 388 ppm, the rate would be 388-350= 38 dollars per ton (I’m assuming we all work in tons of carbon, but if the standard value is tons CO2, please correct me). At the current rates of rise, this rate will go up by a little under 2 dollars per year.
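In code, the proposed rule is about as simple as tax policy gets. This is a sketch of my proposal, obviously, not anyone's actual legislation, and the function name is my own invention:

```python
# Proposed rule: the tax rate in dollars per ton equals the atmospheric CO2
# concentration minus the 350 ppm safe limit, floored at zero.
def carbon_tax_rate(co2_ppm, baseline=350.0):
    """Dollars per ton of carbon emitted at a given CO2 concentration."""
    return max(co2_ppm - baseline, 0.0)

print(carbon_tax_rate(388))  # 38.0, at the concentration quoted above
print(carbon_tax_rate(340))  # 0.0, if sequestration ever gets us below 350 ppm
```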

The tax rate will stop rising when CO2 emissions stabilize, as it should. If sequestration ever takes hold in a serious way, the rate could even come down. And if carbon producers manage to sequester our atmosphere back down below 350 ppm, then the tax rate would drop to zero, which would be entirely fair.

Economic modelers can project future CO2 rise rates, which gives them more confidence and planning abilities than they have right now, and this scheme would be far preferable to an unknown tax rate that will last for an unspecified period of time and be subject to God-knows what kind of increases, changes and repeals.

The only remaining challenge would be figuring out how to apply it to the world’s other six continents, and their respective economies.

What the money gets spent on is another issue, but damages and consumer compensation are easy places to start.

Sunday, June 12, 2011

3QD semifinalists announced

Three quarks daily has announced the semifinalists for its 2011 3QD science blogging prize. The Earth and planetary science blogs generally fared well. Of the entries I listed last week, the following have advanced:

Geology word of the week: O is for Ophiolite, by Evelyn Mervine at Georneys.

Ocean acidify-WHAT!? By Sheril Kirshenbaum at Convergence (originally posted at the Intersection).

The Pelican's Beak, by Brian Switek at Laelaps.

Prehistoric Clues Provide Insight into Climate’s Future Impact on Oceans, by Allie at Oh for the love of science.

Rare Earth elements aren’t rare, just playing hard to get, by Sarah Zielinski at Surprising Science.

Levees and the illusion of flood control, by Anne Jefferson at Highly Allochthonous.

Five other nominations (including mine) failed to advance.

Overall, this is a 55% success rate for Earth science, significantly better than the 23% overall, or the 18% for non-geoscience. Way to go, geoblogospheroids! The 3 Quarks Daily editorial team now whittles the 20 remaining entries down to six for the final judge. Good luck, folks.