- Editorial: Issue 22 – Michaelmas 2011
- Cover: Cultured Brains
- News: Issue 22
- Cool It – Bjorn Lomborg
- Islam’s Quantum Question – Nidhal Guessoum
- The Art of Science – Acabo Games Ltd
- Feature: A Clean Slate
- Feature: The Age of Endeavour
- Feature: Eye-popping Films
- Feature: A Bolt from the Blue
- Feature: Beyond Darwin
- Focus: Beneath the Surface
- Behind the Science: The Father of Forecasting
- History: Science in Print
- Arts & Science: Caring for Art
- Perspective: Colliding at Colossal Costs
- Science & Policy: Reactive Politics
- Away from the Bench: Skeletons and Flame Tornadoes
- Weird and Wonderful
Rationality, References and Radio
Lately, it seems that science and rationality are being mentioned in tandem more and more frequently. Scientists have always striven for rationality, but the link between the two is now regularly cropping up in mainstream debate. For example, Richard Dawkins’ latest book adds to his crusade for rational thinking, whilst Brian Cox has pioneered the catchphrase “stay rational” on his BBC radio show. Evidence-based thinking should certainly be heard over other, less sound reasoning, but we should be cautious before dismissing the opinions of others. Most differing views are held not out of spite or for personal gain, but simply stem from a poor scientific understanding—so how can this be changed?
One danger is that self-belief in rationality can lead to confidence bordering on arrogance. To be clear, most people do a fantastic job of portraying science in a respectable light, but the public’s image of a profession can easily be tainted by a minority. People are unlikely to absorb a message if it is dictated with arrogance, or if they believe a profession as a whole to be arrogant. As the scientific, evidence-based, and therefore privileged group that we are, it is our responsibility to inform and inspire others, especially if we think their view is harmful or misleading, just as it may be the job of others to reciprocate in fields of which we have little knowledge. However, even if someone is factually mistaken, or their opinion is based on false evidence, simply repeating ‘expert’ knowledge will not always make them listen—in fact, it may well turn them away from scientific reasoning in the future. As Francis Bacon said, “man prefers to believe what he prefers to be true”. But listening patiently to others’ views and why they hold them can make all the difference: the first step to influencing someone’s opinion can be to make them think about their own logic. Perhaps I am erring on the side of caution, but I do think we need to be careful; otherwise some people may stop listening altogether, and that would not be good for anyone.
Moving swiftly on, there are a few housekeeping matters worth knowing. This issue, for the first time, is being published under a Creative Commons Licence. This allows our material to be reproduced for educational purposes. In addition, another first for the magazine is the inclusion of a few references. The aim is not to become overly technical or cumbersome, but to state important sources and allow you to read up on topics if you wish to. These references can be found with the credits below.
Some other BlueSci events are also making this term a very exciting one. We have a new radio show starting on Cam FM 97.2, which will be discussing some of the articles in this issue as well as recent news stories. On top of that there is also a jam-packed schedule of talks—keep an eye on the events section of the website for details. As always, if you are interested, bemused or fanatically excited by any of the above, why not get involved too? Tom Bishop
Issue 22: Michaelmas 2011
Editor: Tom Bishop
Managing Editor: Stephanie Glaser
Business Manager: Michael Derringer
Second Editors: Harriet Allison, Aaron Barker, Gengshi Chen, Felicity Davies, Helen Gaffney, Ian Le Guillou, Jonathan Lawson, Louisa Lyon, Tim Middleton, Alexey Morgunov, Lindsey Nield, Jessica Robinson, Viktoria Stelzhammer, James Scott-Brown, Richard Thomson, Alice Young
Copy-Editors: Nick Crumpton, Felicity Davies, Stephanie Glaser, Adam Kenny, Rose Spear, Viktoria Stelzhammer
News Editor: Robert Jones
News Team: Stephanie Boardman, Emma Hatton-Ellis, Zoe Li
Reviews: Tim Middleton, Jessica Robinson, Hugo Schmidt
Focus Team: Nick Crumpton, Ian Le Guillou, Louisa Lyon, Wendy Mak
Weird and Wonderful: Gengshi Chen, Jonathan Lawson, Louisa Lyon
Pictures Team: Wing Ying Chow, Nick Crumpton, Felicity Davies, Helen Gaffney, Ian Le Guillou, Louisa Lyon, Viktoria Stelzhammer
Production Team: Wing Ying Chow, Nick Crumpton, Felicity Davies, Ian Fyfe, Stephanie Glaser, Rose Spear, Viktoria Stelzhammer
Illustrators: Dominic McKenzie, Alex Hahn
Cover Image: Jignesh Tailor
A Clean Slate – Corkin, S. (2002). What’s new with amnesic patient H.M.? Nat. Rev. Neuro. 3. 153-160.
The Age of Endeavour – Baker, D. (2011). NASA Space Shuttle Manual: An Insight Into the Design, Construction and Operation of the NASA Space Shuttle (Owner’s Workshop Manual). Haynes Publishing.
Eye-popping Films – http://www.rkm3d.com/How-3D-Works/
A Bolt from the Blue – Birch, A. (1944). Reduction by Dissolving Metals Part 1. J. Chem. Soc. 430-436.
Beyond Darwin – Whitelaw, N.C. & Whitelaw, E. (2008). Transgenerational epigenetic inheritance in health and disease. Current Opinion in Genetics & Development, Vol. 18, 273-279.
The Father of Forecasting – Gribbin, J. & Gribbin, M. (2003). Fitzroy: The Remarkable Story of Darwin’s Captain and the Invention of the Weather Forecast. Headline Review.
Science in Print – Barton, R. (1998). Just before Nature: The purposes of science and the purposes of popularization in some English popular science journals of the 1860s. Annals of Science. Vol. 55, 1-33.
Caring for Art – Making Masterpieces by Neil MacGregor with Erika Langmuir. A BBC Education Production.
Colliding at Colossal Costs – www.lhc.ac.uk
Reactive Politics – www.world-nuclear.org/info/reactors.html and www.bbc.co.uk/news/world-asia-pacific-13050228
Jonathan Lawson looks into the story behind this issue’s cover image
The discovery and isolation of stem cells has been one of the most exciting and controversial advances in the medical sciences in recent decades. Studying these amazing cells, which have the ability to become many different cell types in our body, could significantly improve our ability to repair and replace lost or damaged body parts with fully functional, healthy organs.
Much of the interest and conflict surrounding stem cell research relates to embryonic stem cells. These are some of the earliest cells to arise in a developing foetus and are capable of becoming any cell in the body. This gives them huge potential, as it would be possible to take a small group and, with the right signals, generate any part of the body. However, there are many other types of stem cells which are of great interest to biomedical research, but which are generally less highly publicised. These less well-known cells usually produce only a few cell types. Some of them exist even in fully grown adults, whereas embryonic stem cells quickly run out as they change into other cell types.
Human neuroepithelial stem cells, which produce some of the earliest cells in the human brain, are just one example and the subject of the cover for this issue. In the brain, neuroepithelial stem cells undergo differentiation, changing from stem cells to neurons (functioning brain cells). The image on the cover shows this process occurring to a small number of cells that were collected and then allowed to grow and divide on a coated plastic surface. This indicates that the process of differentiation is efficient even in a lab environment and so could be used to generate large numbers of brain cells for study or clinical applications. The advantage of starting with neuroepithelial stem cells is that half of the work is done; it would take far more effort to first convert embryonic stem cells to neuroepithelial stem cells and then to use those to generate neurons. Hence, this approach is much faster and more efficient.
The cells in the image have been treated with several coloured markers to allow different cells to be identified. Blue marks the nucleus of all cells, irrespective of cell type. Green cells are those that have differentiated to become neurons. The red colour identifies excitatory neurons: brain cells that act by causing others to increase their signalling activity. Both green and red are visible in the image, indicating that the neuroepithelial stem cells are able to produce several different types of brain cell.
This image was selected for BlueSci by a panel of judges as the winning entry in the Graduate School of Life Sciences Image Competition 2011. It was submitted by Jignesh Tailor, a PhD student working with Dr Austin Smith. Jignesh is keen to develop his work with neuroepithelial stem cells into a means of treating neurodegenerative disorders such as Parkinson’s disease.
The Graduate School of Life Sciences competition includes categories for both images and posters, allowing entrants to discuss their work with many pre-eminent members of the University and to practise their skills at explaining their research to visitors from a wide range of backgrounds. This year, competition was particularly heated and all of the entries were of a very high standard, with many enthusiastic and engaging people getting involved.
The winning entries from all aspects of the competition can be found on the Graduate School of Life Sciences website and other entrants from the image competition will appear throughout this term on our website, www.bluesci.co.uk.
Jonathan Lawson is a first year PhD student at the Gurdon Institute
Bat wing hairs act as airflow detectors
A new study has revealed that bat wings are covered in tiny sensory hairs, which detect air movement and significantly improve their ability to manoeuvre when flying.
Bats are the only mammals capable of true flight. They can perform agile, acrobatic manoeuvres in the air, but little is known about how they achieve such precise control. To investigate the function of the hairs, researchers from Vanderbilt University in the United States blew puffs of air across the wings of short-tailed fruit bats in various directions. They measured the electrophysiological response of the nerve at the base of the hairs, and found that the hairs are very sensitive to air currents—particularly to reverse airflow.
Next, the researchers shaved off hairs in different regions of the wing and examined the bats’ ability to fly through an obstacle course. When hairs were removed from the trailing edge of the wing the bats’ flight behaviour changed: they made much wider turns and the average flight speed was greater. Turbulence is a bigger problem at low flight speed or when making tight turns. Thus, these sensory hairs, while not essential for flight, may help to maintain stability under challenging conditions. Similar mechanisms are thought to exist in other flying animals. Emma Hatton-Ellis
DOI: 10.1073/pnas.1018740108
Billion-pixel camera to map the Milky Way
The European Space Agency has built the world’s largest digital camera for its 2013 Gaia mission.
The camera uses Charge-Coupled Devices (CCDs), such as those found in normal digital cameras, to sense images. CCDs capture photons and produce an electric charge proportional to the intensity of incident light. These charges are then converted into a sequence of voltages, which can be stored. While a typical small digital camera consists of one to three CCDs, Gaia’s camera will contain 106, which were painstakingly mounted into position. The resulting 0.5×1.0 metre array will be able to detect the equivalent of the diameter of a human hair 1000 kilometres away.
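The principle behind CCD readout can be sketched in a few lines of code. The gain and quantum efficiency values below are purely illustrative, and the function name is our own invention—this is a toy model of photon-to-voltage conversion, not ESA's actual Gaia pipeline.

```python
import numpy as np

# Illustrative constants (assumed values, not Gaia's real specifications)
GAIN_UV_PER_ELECTRON = 3.5   # conversion gain: microvolts per electron
QUANTUM_EFFICIENCY = 0.8     # fraction of photons converted to charge

def read_out(photon_counts):
    """Convert a 2D array of per-pixel photon counts into a serial
    voltage sequence, as a CCD readout register would."""
    # Each pixel accumulates charge proportional to captured photons
    charge = photon_counts * QUANTUM_EFFICIENCY      # electrons per pixel
    # Charges are shifted out pixel by pixel into one voltage stream
    return charge.flatten() * GAIN_UV_PER_ELECTRON   # microvolts

photons = np.array([[100, 200],
                    [  0, 400]])
voltages = read_out(photons)
print(voltages)  # brighter pixels yield proportionally higher voltages
```

The key property the sketch demonstrates is linearity: a pixel that catches twice the photons reports twice the voltage, which is what makes CCDs suitable for precise brightness measurements.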
This remarkable camera will be the key to Gaia’s mission of mapping one billion out of the estimated 100 billion stars in the Milky Way. It will measure the 3D positions of the stars; it will record their spectra, brightness and orbital velocities; and each of its stellar targets will be assessed 70 times over the five-year mission. In addition, Gaia is expected to detect quasars, nearby galaxies and minor bodies within our solar system, as well as some planets outside of it. It will also test Einstein’s theory of relativity. It is hoped that the findings will provide new insights into the formation and evolution of our galaxy. Zoe Li
When kinetics gets it wrong
Chemists at the University of Georgia have synthesised the molecule methylhydroxycarbene for the first time but found it to subsequently disappear, defying theoretical expectations.
Methylhydroxycarbene (H3C-C-OH) was generated under high-vacuum cryogenic conditions, then trapped in an unreactive argon matrix. Although surrounded by an inert gas, it soon began to react and disappear.
The group found that the carbene reaction had been controlled by quantum mechanical tunnelling, a process forbidden by classical mechanics. Tunnelling allowed it not only to react unexpectedly in the first place, but also to form acetaldehyde (CH3-CHO), a product not predicted by theory.
Generally a reaction will proceed to the product that incurs the lowest energy barrier, which in this case would be vinyl alcohol. However, because a hydrogen atom is light enough to act as a quantum mechanical object, both a particle and a wave, it can tunnel through a potential energy barrier rather than passing over the top. Acetaldehyde’s narrower potential energy barrier meant that hydrogen could tunnel through it more easily.
The group tried the same experiment but with a version of the carbene that contained deuterium—twice the weight of hydrogen—instead. In this case the carbene did not react as the deuterium was too heavy to allow tunnelling. This work emphasises the fact that thermodynamics and kinetics are not always sufficient to predict how a reaction will proceed. Stephanie Boardman
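The mass-dependence of tunnelling can be illustrated with a back-of-the-envelope WKB estimate for a square barrier. The barrier height and width below are invented round numbers, not values from the actual methylhydroxycarbene study—the point is only that doubling the particle's mass suppresses the tunnelling probability by orders of magnitude.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_H = 1.6735575e-27      # mass of a hydrogen atom, kg
M_D = 2 * M_H            # deuterium is roughly twice as heavy

def wkb_transmission(mass, barrier_height_ev=0.5, width_m=0.5e-10):
    """Approximate probability of tunnelling through a square barrier.
    Barrier parameters are illustrative, not fitted to the real reaction."""
    v_joules = barrier_height_ev * 1.602176634e-19
    # Decay constant of the wavefunction inside the barrier (1/m)
    kappa = math.sqrt(2 * mass * v_joules) / HBAR
    # WKB estimate: probability falls exponentially with mass and width
    return math.exp(-2 * kappa * width_m)

t_h = wkb_transmission(M_H)
t_d = wkb_transmission(M_D)
print(f"H: {t_h:.2e}  D: {t_d:.2e}  ratio: {t_h / t_d:.0f}")
```

Because the decay constant scales with the square root of the mass, the hydrogen-to-deuterium ratio comes out in the hundreds even for this modest barrier—consistent with the observation that the deuterated carbene simply sat still.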
DOI: 10.1126/science.1203761
Cool It – Bjorn Lomborg
Reading any discussion of man-made global warming in the popular press is a dispiriting experience. The impression is of conflicting termite nests of bad faith, demagoguery and hysteria. Lomborg’s Cool It brings clarity, making the book an important one to read.
What Lomborg does in Cool It is very straightforward. He lays out what our current best estimates on the effects of global warming are (as reported by the IPCC and in top journals), and how they differ from common perception. For instance, Al Gore claims that sea levels will rise 20 feet over the next century; the IPCC says 1 foot. Lomborg then asks what can be achieved, and how it can best be achieved, with the resources available. His analyses draw on several Nobel Prize laureates and representatives from the world’s poorest nations, to reach a consensus about which steps can save the most lives per dollar spent.
We live in a time when the obvious is controversial. There have been several harsh attacks on Lomborg’s work but what is remarkable is the paranoid and hectoring tone in which they are written. Lomborg does us all a great service in dragging this discussion back to the realms of the proven and the rational. Hugo Schmidt
Islam’s Quantum Question – Nidhal Guessoum
The relationship between Islam and science is a curious one: sometimes, peaceful symbiosis; other times, outright conflict; but predominantly a worrying indifference and a lack of understanding. In Islam’s Quantum Question, Nidhal Guessoum, an astrophysicist and a devout Muslim, attempts to explain why. Unlike most Christian thinkers, Muslims have been slow to integrate scientific findings into a theistic worldview: in Muslim countries, such as Egypt and Pakistan, only around fifteen per cent of the population believes in Darwinian evolution. Guessoum decries a lack of understanding of the basic philosophies and methodologies of science throughout the Muslim world. He also charts the alarming growth of I’jaz, the academic attempt to find miraculous scientific content within the Qur’an (such as verses which predict the speed of light or the amount of iron in the solar system). This is followed by four detailed chapters confronting some of the main topics in the science and religion arena: cosmology, design, the anthropic principle and evolution. In each, Guessoum proceeds to sketch the Muslim attitudes towards these ideas and present his proposals for a way forward. Guessoum is forthright in his criticisms and practical in his suggestions. Islam’s Quantum Question is thoroughly researched, meticulously footnoted and clearly presented. These careful preparations help him to assert his belief, as advocated by Averroes nearly a millennium before him, that Islam and science are entirely compatible. Tim Middleton
The Art of Science – Acabo Games Ltd
Do you always go for the science and nature category in Trivial Pursuit, but perhaps struggle with the art or sports questions? If so, The Art of Science is the game for you. It is a challenging science and technology based trivia game aimed at students and academics. It has a similar format to most trivia games, based on cards with questions in six categories: Biology, Chemistry, Physics, Mathematics, Technology and Miscellaneous (questions on any topic, but with a strange preference for the Olympic Games). The Art of Science sets itself apart by allowing each player to choose how many points they need to score in each topic, meaning that scientists from all disciplines can play one another on an even playing field. The points system also means you can easily vary the game length, from a quick game to an epic battle of knowledge.
The questions make the game. Not only are there loads of them, but they also cover a great range of difficulties. There are some which are a tad easy—in computer technology, what does www stand for?—but others are more challenging—what is the Runge-Kutta method an example of? In contrast, the rules can be very confusing and don’t really explain how to play the game—they try to be too clever and quirky. That said, overall, it is an excellent game for science fans. Jessica Robinson
Yinchu Wang looks back to a life unknowingly dedicated to science
Before The Dark Knight propelled Christopher Nolan to international fame, there was Memento. Only his second feature film, it was every bit as well received as his later blockbusters. The film’s mind-boggling narrative tells the story of a man who, having been assaulted in his home and suffered brain trauma, finds himself unable to form any new memories. In spite of his handicap, he resolutely sets out to find those responsible. It is an entertaining, if somewhat contrived, account of one man’s struggle to sieve truths from lies when robbed of a most basic mental faculty. In reality, there are individuals who suffer the same plight. Cases of such a mental impairment are relatively well documented and studied, none more so than that of a patient who, before his death, was known simply as H.M. What follows is a brief account of Henry Gustav Molaison, the man who suffered from anterograde amnesia.
H.M. experienced epileptic seizures from the age of 10. Though minor at first, the episodes became increasingly severe with age. By the time he was 16, his seizures began to exhibit all the classic features of what is termed ‘grand mal’, a generalised seizure affecting the entire brain. Unfortunately for H.M., his condition proved intractable to medication. It cost him his job and made it impossible for him to have a reasonable quality of life. In an attempt to control his condition, doctors resorted to drastic measures. Neurosurgeon William Scoville performed a bilateral medial temporal lobe resection, removing the inner part of the temporal lobes from both sides of his brain in an effort to eliminate his epilepsy. If the brain is visualised as two commas joined together in the middle, then the temporal lobes are simply the forward curving tails. Structures found deep in the temporal lobes include the amygdala, uncus, hippocampal gyrus, and hippocampus. Our current understanding of the roles played by these structures is still incomplete, but less so than it was in those times. We might, therefore, forgive Scoville for his seemingly reckless move, especially since the technology of that era left doctors with few other options. Unfortunately, they could not have foreseen the side effects such a resection would have on H.M.
H.M.’s epilepsy abated after the operation. However, it quickly became apparent that his memory was severely affected. He had no recollection of his operation and, on one occasion, gave the date as March 1953, when it was really September. Although he could recall many public events that occurred prior to his operation, his memory of autobiographical events for that time was severely compromised. Moreover, postoperatively H.M. could not remember the people he met or the circumstances in which he met them. He carried out conversations normally, but would quickly forget what was said and, when asked, would deny that they had ever taken place. It emerged that his problem was not simply memory loss, but an inability to form new memories. However, formal psychological tests showed that his IQ was above average. In addition, most of his other cognitive abilities, including perception, abstract thinking, reasoning and even digit span (a measure of short-term memory), were remarkably well preserved. Even more intriguing was the discovery that he was able to acquire new skills such as mirror tracing and puzzle solving. These activities are difficult for untrained individuals but, with practice, can be quickly learned. H.M. practised performing such tasks daily and, over time, became increasingly adept at them. On each occasion however, he would deny having ever done them before.
In the subsequent decades H.M. was studied extensively, first by Brenda Milner and colleagues from the Montreal Neurological Institute, then later by Suzanne Corkin and colleagues from the Massachusetts Institute of Technology. The advent of MRI technology made it possible to define more precisely which structures in his brain bore lesions. Several core findings emerged that later became landmarks in the field of neuroscience. In combination with other work, the studies of H.M. contributed to the development of a mature ‘theory of memory’. Scientists could now distinguish between episodic memory (the memory of events) and that gained from procedural learning through practice. They were also able to rule out the hippocampus as the anatomical location of long-term and short-term memory storage. Instead, it is now believed that the hippocampus is involved in consolidating short-term memories into the long-term store. Although subsequent studies have added complexity to the theory of memory and how it is formed, early studies of patients such as H.M. provided unprecedented insights into the anatomical basis of our everyday behaviour and consciousness.
Of course, H.M.’s case also made neurosurgeons acutely aware of the debilitating side effects of bilateral medial temporal lobe damage. Any such procedures carried out since have been performed unilaterally and only with extreme caution.
Anterograde amnesia is still a relatively common condition in clinical settings. Its causes are as diverse as the illness is dramatic. For example, it is seen in those stroke patients whose medial temporal lobe is damaged as a result of vascular occlusion. Trauma, infections and electroconvulsive therapy may all cause damage to the area and result in corresponding neurological defects. Chronic alcoholics are also prone to this disorder due to thiamine (vitamin B1) deficiency, in which case they are said to suffer from Korsakoff’s syndrome. However, lesions resulting from the above causes are seldom restricted to the medial temporal lobe. In most cases, patients suffer from other cognitive deficits in addition to those seen in H.M.
Advances in imaging technologies have reduced the extent to which clinicians must rely on studies of individuals with lesions in a specific brain region, to determine the particular functions of each of those regions. Still, accumulated knowledge gathered in this way over the decades has allowed many treatment and management strategies to be developed. H.M.’s case remains a stark reminder of the importance of memory to our existence. Without it, we could experience neither lasting joy nor deep pain. Our personalities and habits would be inexplicable, even to ourselves. There would be no friendship or antipathy, because neither could last for more than a fleeting moment. Corkin reported that H.M. had little idea of who she was despite their being acquainted for over 45 years. She had to introduce herself every time they met. He could read the same magazine article over and over again and laugh at the same joke every time. Sadly, he gradually became aware of his shortcomings: “every day is alone in itself,” he related, “whatever enjoyment I’ve had, and whatever sorrow.”
Henry Molaison’s contribution to science for more than fifty years concluded upon his death in December 2008, with the donation of his brain to the Massachusetts General Hospital for preservation and documentation. However, he remained unaware of his fame and the impact that his participation in research had, and would continue to have, on science and medicine over many decades.
Yinchu Wang is a 3rd year medical student in the Department of Physiology
Vicki Moignard recalls the captivating history of the Space Shuttle
It has been called the most complex machine ever built. It launches as a rocket, orbits as a spacecraft, and lands as a plane. Surviving the frigid cold of space and the furnace of atmospheric re-entry, it protects a crew of eight in one of the most hostile environments imaginable. For those too young to have experienced the moon race, NASA’s space shuttle has been synonymous with the space age. Now, after 30 years of service, the shuttle fleet has been retired following the final flight of Atlantis on the 21st July 2011.
The idea of a reusable space-plane, first conceived as a weapons system in Cold War America, predates the moon landings. However, spurred by the launch of Russia’s Sputnik 1 and Yuri Gagarin’s legendary first mission into space, NASA initially sidelined the shuttle in favour of the simpler and cheaper rockets that would bring the US space programme in line with that of its Soviet adversary.
It wasn’t until 1968 that, with the Moon in its grasp, NASA turned its attention away from the Apollo lunar program, and the idea of the space-plane resurfaced. While many questioned the necessity of manned spaceflight once the Moon had been ‘conquered’, President Nixon’s administration eventually gave the shuttle program the green light. In a time of shrinking budgets, its cost-saving fleet of reusable spacecraft would transport the components of space stations and the larger rockets into orbit—ultimately aiming to carry the US to the stars.
However, the Space Shuttle had no precedent. It was originally meant to launch in 1976, but as deadlines passed, the reality of developing a fully reusable spaceship proved much more complex than anticipated. In fact, the original plans had envisaged two vehicles—the orbiter and a lifting-body craft that would carry it into space—but this design was too heavy and too expensive. The final machine was a compromise, with the orbiter carried into space by the large external tank and two solid rocket boosters that have since become a familiar sight on the launch pads of Florida’s John F. Kennedy Space Centre. It was no longer fully reusable, with the fuel tank discarded after launch, but it was precisely this sort of compromise that allowed the shuttle to exist at all, albeit five years late. In April 1981, on the 20th anniversary of man’s first foray into the cosmos, Columbia finally made the shuttle’s long-anticipated maiden voyage beyond Earth’s atmosphere, returning safely after 54 hours in space.
It was the first of 135 missions that would see the shuttle fleet rack up an impressive half a billion miles over 20,000 Earth orbits and a combined 3.5 years in space between the five orbiters: Atlantis, Challenger, Columbia, Discovery and Endeavour. However, by aviation standards the shuttle was, to the end, an experimental system. The orbiters may have looked like the same craft that rolled out in the late 1970s, but they underwent three decades of upgrading, necessitated and directed by their own shortcomings. Most memorably, in 1986, on the shuttle’s 25th mission, Challenger was tragically destroyed 73 seconds after lift-off when a fuel leak triggered an explosion that consumed the orbiter, killing all seven crew members. It was a reminder that, despite the increasing familiarity of space flight and the decreasing column inches devoted to the shuttle, space travel may never become routine.
The remaining orbiters were grounded for two and a half years of modifications to ensure that such a mistake would never happen again. But in 2003, following damage obtained on lift-off to the heat-shield that protects the orbiter during re-entry, Columbia disintegrated over Texas, just minutes from landing. Another crew was lost to a well-known problem; similar damage had been detected on the very first mission. Thorough heat-shield inspections became mandatory on all missions after 2003, but the Columbia disaster spelled the end of the shuttle program.
Nevertheless, the Space Shuttle is a remarkable machine and its contribution to modern science should not be underestimated. Thousands of experiments have been conducted onboard the orbiting laboratories, focused predominantly on developing space technologies and studying the effects of weightlessness on the human body. Among the shuttle’s most notable payloads was the Hubble Space Telescope, which, since its deployment from Discovery in 1990, has explored the furthest reaches of the observable universe and returned stunning images of space.
The shuttle’s greatest scientific legacy, however, is almost certainly the International Space Station (ISS). Although a very different structure to the solely American-built station that inspired the shuttle program, the ISS could not have been constructed without the shuttle and will stand as a reminder of that era many years after the orbiters have ceased to visit. With construction beginning in 1998, it has dominated 37 shuttle missions and has been inhabited continuously since 2000. It was declared complete during Endeavour’s final mission in May 2011, with the installation of a $2 billion particle physics experiment, the Alpha Magnetic Spectrometer.
However, the shuttle’s achievements cannot be measured in purely scientific or economic terms. Perhaps its most important accomplishment has been the promotion of international cooperation, particularly between the US and Russia. In a partnership that began with a series of rendezvous missions between the shuttle and the Russian space station Mir in the 1990s, 16 nations and multiple space agencies are now involved with the ISS, including the US, Russia, Canada, Japan and several European countries. Thus, at times of great conflict and economic hardship, the shuttle has stood as an icon for human endeavour and achievement; an avatar for progress and international co-operation. The true value of the shuttle may therefore lie not in what it has left behind, but in what it has contributed to secure the future of space exploration.
The shuttle program has received much criticism over the years, not least for the losses of the Challenger and Columbia crews. Costs have escalated and the maximum number of flights achieved in any single year was nine, back in 1985, a fraction of the 50 missions anticipated per year. However, the shuttle was a pioneer in what were still the early days of space flight, when the ambitions of the National Aeronautics and Space Administration were grand, but many lessons remained to be learned. It was far from perfect, but with 30 years of service the shuttle was the reusable spacecraft NASA had promised and the first to claim that title. By the time the oldest remaining orbiter, NASA’s flagship Discovery, was retired in March 2011, it had spent a total of 365 days in orbit during 39 missions, more than any other manned spaceship ever launched.
Now, at the end of the first half-century of space flight, as the surviving orbiters are retired to museums across the US, a new era of space travel is beginning. The Russian Soyuz craft now bears sole responsibility for ferrying astronauts to and from the ISS, and China will begin construction of its own space station by 2013. While supporting commercial enterprises to find new means of carrying astronauts into orbit, NASA is charged with the task of developing spacecraft to transport humans further afield than we have ever been before, ensuring that a new generation of engineers and astronauts will be inspired to continue pushing back the boundaries of human space exploration.
Vicki Moignard is a PhD student in the Department of Haematology
Aaron Barker looks into the physics behind 3D cinema
Clever applications of physics and the large market envisaged by the film industry have driven this technology rapidly forwards since its resurgence in the 1980s. Many films these days are made in 3D, but the relatively simple technology behind them is often poorly understood.
The aim of 3D cinema is to replicate the experience we get when we look at the 3D world around us. Objects that are farther away look smaller to us and are also blocked from our vision by things that are closer.
Our eyes present us with two views of the same object, so that we have a better idea of where it is in 3D space—this is called stereopsis. Furthermore, eyes have to focus to see things more clearly, and since focusing depends on the distance to an object, this gives us a better idea of its position in 3D space too. Neither of these effects can be replicated using standard 2D films and, for now, only the first is actually in use in 3D cinema. However, eventually film-makers may even be able to make our eyes focus on objects on a cinema screen as though they were closer or farther away.
For a 3D film to persuade us that something is closer than the screen, the left eye must see the same object to the right of where the right eye sees it. The brain will assume that these two images are of the same object in the same position and it will interpret this as it would any other object for which this is the case—it will think that the object is closer to us. The opposite is true for objects that are supposed to be far away; if the left eye sees an image slightly to the left of where the right eye sees it, the brain will assume that it is farther away than the screen. As a result, 2D images appear to be 3D.
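This geometry can be worked out with a little similar-triangles arithmetic. The following Python snippet is purely illustrative, not from the article: the function name and the 6.5 centimetre eye separation are assumptions, and the sign of the on-screen disparity decides whether an object appears in front of or behind the screen.

```python
def apparent_depth(screen_distance, eye_separation, disparity):
    """Depth at which the brain places an object, by similar triangles.

    disparity = (right-eye image position) - (left-eye image position)
    on the screen: negative (crossed images) puts the object in front
    of the screen, positive (uncrossed) behind it, and zero on the
    screen itself. All lengths are in metres.
    """
    return screen_distance * eye_separation / (eye_separation - disparity)

# Illustrative values: screen 10 m away, eyes 6.5 cm apart.
print(apparent_depth(10.0, 0.065, 0.0))    # object sits on the screen
print(apparent_depth(10.0, 0.065, -0.01))  # crossed images: nearer than the screen
print(apparent_depth(10.0, 0.065, 0.01))   # uncrossed images: beyond the screen
```

On these assumed numbers, a disparity of just one centimetre on a screen ten metres away is enough to pull the apparent depth more than a metre towards or away from the viewer.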
The most primitive form of 3D imaging is called anaglyph. You have probably tried this—it requires you to wear glasses with one red and one blue lens. Anaglyph images have some red and some blue parts: the red parts are filtered out by the red lens, allowing that eye to see only the blue parts, whilst the blue parts are blocked by the blue lens, so the other eye sees only the red parts. This allows a film-maker to show each eye something slightly different, fooling the brain into inferring a 3D image. The drawback of this method is that the colour quality of the image is inevitably compromised by the need to view it through coloured filters.
Some glasses use a more advanced version of this idea. One of the lenses lets through only certain wavelengths of light of various colours while blocking others out, whereas the other lens does the opposite. This can improve the quality of colour, but otherwise works very much like anaglyph.
There is a type of electronic ‘shutter’ glasses that uses a completely different method. These work by blacking out one lens while the film shows what the other eye should see, then blacking out the other lens in turn. For the brain not to be bothered by this, the flickering has to be at a high frequency and completely synchronised with the film. This requires expensive glasses that are kept in time with the film either by a wire, which is cumbersome, or wirelessly, which further raises the price. For this reason, the method has not been widely used in 3D cinema.
Rather than sacrificing colour quality with anaglyphs, or forcing viewers to pay for expensive glasses, cinemas widely use the polarisation method. It allows for high colour quality but requires an expensive screen, as the polarisers let only half of the light through to each eye. This means that the colours in the film must be intensified, otherwise they would appear dimmed to viewers. However, it does allow for much cheaper glasses, reducing costs when hundreds of people attend the same screening of a film. There are two types of polarisation used in 3D cinema, both taking advantage of the fact that light waves oscillate perpendicular to the direction in which the light travels.
The simplest form of polarisation is plane polarisation. In normal light, the oscillation is in all directions at right angles to the light’s direction of travel, but if light is plane polarised by being passed through a filter, all the oscillations lie in the same plane. One lens of the viewer’s glasses can be polarised to block horizontally polarised light, letting through only the vertically polarised image; the other lens does the opposite, blocking vertically polarised light and so providing the other eye with the image carried by horizontally polarised light. The problem is that the glasses have to be kept level: if they are tilted too far one way or the other, both eyes begin to see both images, as the polarisers are no longer aligned with the polarisation of the light.
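The tilt problem follows directly from Malus’s law, which says a polariser transmits a fraction cos²θ of plane-polarised light, where θ is the angle between the filter’s axis and the light’s plane of polarisation. As a rough sketch, with the function name and numbers chosen here for illustration only:

```python
import math

def leakage_fraction(tilt_degrees):
    """Fraction of the 'wrong' eye's image leaking through a
    plane-polarising lens tilted by tilt_degrees from level.

    Malus's law: transmitted intensity = I0 * cos^2(theta), where
    theta is the angle between the lens axis and the light's plane
    of polarisation. The unwanted image starts 90 degrees away, so
    tilting the head by t leaves it at (90 - t) degrees.
    """
    theta = math.radians(90.0 - tilt_degrees)
    return math.cos(theta) ** 2

print(leakage_fraction(0))   # essentially zero: head level, no crosstalk
print(leakage_fraction(10))  # about 0.03: a 3 per cent ghost image
print(leakage_fraction(45))  # 0.5: both eyes see both images equally
```

Even a modest ten-degree head tilt lets a visible ghost of the other eye’s image through, which is why plane polarisation is awkward in practice.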
This effect can be avoided if circular polarisation is used instead. In circularly polarised light, rather than oscillating in a single plane, the tip of the light wave’s electric field traces out a circle. To create this, light is first split into horizontally and vertically polarised components, which are then shifted out of step with one another by a quarter of a wavelength. When these out-of-step components recombine at the eye, the result is a field that rotates in a circle as the wave travels.
The filter is made from an anisotropic material, in which light polarised along one direction travels more slowly than light polarised along the other. One of the two planes of light therefore falls a quarter of a wavelength behind the other, creating the circular polarisation. Whether the horizontally or the vertically polarised component is delayed determines whether the light is circularly polarised in a clockwise or an anti-clockwise direction. So that each eye sees only one of the images, the glasses must undo this polarisation: the light travels through another layer of anisotropic material, which makes it plane polarised again, and then through a polariser. Clockwise and anti-clockwise polarised light passing through the same anisotropic material emerge plane polarised at 90 degrees to each other. Each lens of the 3D glasses carries a different plane-polarising filter, so each eye sees a different image, either horizontal or vertical waves, which the brain combines into a 3D picture.
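The quarter-wavelength delay can be written down explicitly in standard notation (the symbols below are conventional physics shorthand, not taken from the article). Taking equal-amplitude horizontal and vertical components of the wave:

```latex
E_x = E_0 \cos(kz - \omega t), \qquad
E_y = E_0 \cos\!\left(kz - \omega t - \tfrac{\pi}{2}\right)
    = E_0 \sin(kz - \omega t),
```

so that $E_x^2 + E_y^2 = E_0^2$ at every instant: at a fixed point in space the tip of the electric field sweeps around a circle of radius $E_0$. Delaying the other component instead flips the sign of $E_y$, reversing the sense of rotation and giving the clockwise and anti-clockwise varieties used for the two eyes.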
This is the method most commonly used in cinemas today. If you take two pairs of these 3D glasses, you can see that they work by circular and not plane polarisation. By wearing a pair with one eye open and looking at a friend wearing another pair, one of the lenses should appear blacked out. Furthermore, it should be the same lens regardless of how much you tilt your head.
3D cinema has come a long way since its invention. Through the early anaglyph images with fairly poor colour quality, we have now moved to glasses with polarisers that are capable of showing high quality 3D images. As this technology continues to develop, we can expect a cinema experience that will become more and more convincingly 3D.
Aaron Barker is a 4th year student in the Department of Earth Sciences
Simon Page reveals the colourful side of one of chemistry’s more dangerous reactions
Aristotle once said “there was never a genius without a tincture of madness”. Indeed, some of the most brilliant chemical reactions are so hazardous that their discovery surely involved a mad burst of Aristotelian inspiration. The Birch Reduction is one such reaction.
Discovered over 60 years ago, the reaction was first performed by the Australian chemist Arthur Birch. Birch, whose auspicious career led him to be honoured as a Fellow of the Royal Society in the United Kingdom and a Companion of the Order of Australia at home, devised the reaction at the University of Oxford during the Second World War. At that time, the RAF had become suspicious that Nazi fighter pilots were taking hormones in order to improve their performance in the air. Arthur Birch was instructed to generate some suitable hormonal analogues in order to level the playing field. The compounds he generated were modified steroids, the best known of which was nandrolone (19-nortestosterone).
Nandrolone is only rarely prescribed nowadays, but in the past it has been used to treat both osteoporosis and anaemia. It also has a notorious history in athletics as a performance-enhancing drug; it was the compound that Linford Christie, the UK 100 metre Olympic gold medallist, tested positive for in August 1999, effectively ending his athletic career. Birch’s pioneering work also paved the way for the synthesis of other compounds in the same steroid family: the progestins. These included one of the two active ingredients in the first oral contraceptive pill, norethynodrel, which was approved under the name Enovid in 1960.
In order to synthesise his steroids, Birch needed to remove a double bond from certain six-membered, carbon-based rings. These so-called ‘aromatic’ rings, named for their often odoriferous nature, then form their corresponding 1,4-cyclohexadienes. This is no mean feat because, as Birch himself noted, “[it is] not possible by the standard methods of catalytic reduction, since the desired compounds are more readily reducible than the starting materials.” Essentially, Birch was observing that aromatic rings prefer to lose all their double bonds or none at all.
Unperturbed, Birch built upon the accidental discovery of Charles Wooster and Ken Godfrey: that an ammoniacal solution of sodium metal could be employed to do just the job. It isn’t clear how they happened upon this discovery, not least because the original report is a classic case of understated scientific prose. Nonetheless, it is appealing to imagine Wooster and Godfrey dreaming up the experiment in an idle moment, and deciding to carry it out in those last (otherwise unproductive) hours of the working week. Or perhaps, they had an inkling of how the chemistry might work out: a genuine spark of scientific inspiration that they were convinced would generate their desired outcome. Either way, their foray into the alkali metals inspired Birch to reach for his lecture bottle of ammonia and start investigating.
Birch’s reaction involves the combination of an alkali metal (lithium, sodium or potassium) with an alcoholic solution of the target aromatic compound in liquid ammonia. The sodium, keen to lose an electron, becomes ionised to Na+, resulting in an intensely blue solution. The colour is attributed to the so-called ‘solvated electron’, which is stabilised by a cage of ammonia molecules. This electron is then donated to the aromatic ring, which gains a negative charge and consequently nips a proton off the alcohol.
The immediate dark blue resulting from the addition of the sodium metal is a satisfying change from the mundane, yellow or colourless solutions in which chemists deal routinely. But it also delivers a profound statement about the underlying mechanics of chemical reactions: the blue shade is electrons. The colour is derived directly from the size of the ‘ammonia cage’ within which the electron is forced to rattle, prior to its attack of the aromatic ring.
With great reactions, however, come great responsibilities, and the Birch Reduction is not without its hazards. It is necessary to condense hundreds of millilitres of ammonia, usually a gas, at a temperature of less than –33˚C. Following this, solid sodium must be cut up and weighed out, all the while being kept under oil, since it will spontaneously burst into flames if it comes into contact with moisture. The dramatic change in colour also distracts from a more subtle change in the reaction: the volume of the solution increases. This change stems from the extra organisation required to arrange ammonia molecules around the free electrons in their ‘ammonia cage’ geometry. The final challenge comes in quenching the reaction. A large excess of ammonium chloride, a relatively harmless white solid similar to table salt, is added to the solution. This has roughly the same effect as adding a large excess of sugar to a rapidly stirred bottle of Coca-Cola. The difference is that instead of producing a sweet foamy mess of carbonated drink, the reaction flask has a tendency to erupt with a fountain of liquid and gaseous ammonia.
Although the reaction is still widely used to this day in research laboratories, it is hardly seen in industry, where the combination of very large volumes of liquid ammonia and sodium would raise a few too many Health and Safety eyebrows. Perhaps the most infamous industrial application of the Birch Reduction is in the synthesis of methamphetamine, better known on the streets as the drug Crystal Meth. In this case, the intention is not to remove double bonds from aromatic rings, but to chemically modify pseudoephedrine, which is the active ingredient in decongestants like Sudafed. Criminal drug labs have to be inventive in order to gather the ingredients for a large scale Birch Reduction. Usually, the ammonia is obtained from fertiliser and lithium (from non-rechargeable batteries) is used in place of sodium. It is hazardous work—indeed, an estimated 15 percent of drug busts on illicit meth labs involve police being tipped off by an explosion or a fire.
On the other hand, the Birch reduction is routinely undertaken in the research laboratory without such drama. It is not only a classic reaction, but also a genuinely useful one—it does unusual chemistry, it scales up well and, frankly, it is quite pleasing to look at too. Certainly, chemistry would be the poorer without that flash of imagination that inspired Arthur Birch to throw caution to the wind and drop sodium in his test tube.
Simon Page is a PhD student in the Department of Chemistry
Jamie Hackett examines whether it is possible to inherit the experiences of our parents
At the beginning of the 19th century, the Swedish parish of Overkalix experienced several winters of crop failure and with it widespread suffering and malnutrition. As it turns out, these unfortunate events may reveal a fascinating possibility almost two centuries later—the grandchildren of the young men who suffered a winter of starvation live considerably longer than average. Conversely, the grandchildren of individuals who were children during unusually fruitful years tend to have a much lower life expectancy. It seems that exposure to either a wealth or a paucity of food during an individual’s formative years can leave some sort of imprint that affects their descendants, impacting on the life expectancy of their grandchildren over 100 years later. Is it possible that something a parent experiences can alter the traits of their descendants?
Long before Charles Darwin’s theory of evolution by natural selection came to prominence, another theory held sway. The French naturalist Jean Baptiste de Lamarck believed in the ‘inheritance of acquired characters’: that is, that an organism’s life experiences could be passed on to its offspring. An oft-cited example is that of giraffes which, according to Lamarck, evolved lengthy necks through the very act of constantly stretching for the highest leaves. However, Lamarck’s ideas have been largely dismissed. One problem is that while the giraffe may stretch its neck, only its germ cells (sperm and egg) are passed on to offspring. How could these completely separate cells ‘know’ that the neck has been stretched? Likewise, it has long been clear that while smoking or over-eating may impair your own health, future generations will start afresh, unaffected by their parents’ past transgressions. Children of smokers, for example, are not born with charred lungs. Nevertheless, evidence in agreement with the Swedish data is now emerging, hinting at the possibility that what you choose to do now may affect your children before they are even conceived. Data from a study in Bristol has shown that, contrary to expectations, smoking before puberty can affect future offspring. The research showed that boys who smoked before the age of 11, when they were first producing sperm, had sons with significantly higher body mass indices (BMI) than expected. It is therefore possible that a decision made at age 11 to experiment with smoking can have consequences that permeate through to the next generation.
How is it possible that an experience can be transmitted to the next generation? The answer lies in epigenetics. ‘Epi’, from the Greek meaning ‘above’ or ‘on top’, in this context literally means on top of genetics. Epigenetics explains a good part of the reason why each cell-type in a human body has exactly the same DNA sequence but looks and functions very differently (think sperm and brain cells). Chemical tags, known as epigenetic marks, are grafted onto DNA or the structures supporting it and act as signposts telling a cell to either use or ignore a particular gene. In this way the approximately 25,000 genes in your DNA can be deployed in many different combinations, producing all possible cell-types. One key epigenetic mark is DNA methylation. Its importance stems from it being very difficult to remove: once deposited it tends to stay put, and it signifies that any nearby genes should be ignored. Therefore, if methylation occurs in sperm or egg DNA, it may be transmitted to the next generation and influence how genes are used across the entire body, a process known as epigenetic inheritance. Professor Randy Jirtle at Duke University demonstrated the possibility of epigenetic inheritance using mice carrying a gene called Agouti: its VY allele produces a yellow fur colour when turned ‘on’, but an agouti colour (dark brown) when turned ‘off’. When a female was fed a diet supplemented with vitamin B12 and folic acid, a coenzyme in methyl transfer and an excellent source of methyl groups respectively, her offspring were much more likely to be agouti coloured than the offspring of genetically identical females on a normal diet. It seems that the high levels of methyl donors in her food resulted in methylation marks being deposited at the Agouti gene in her eggs. These marks were transmitted to her offspring, which in turn inherited the methylated and therefore ‘off’ version of the gene, meaning they were agouti coloured.
Hence, genetically identical mothers can produce offspring with entirely different fur colours just by changing their diet. If methylation marks are also inappropriately deposited on other genes they may, if transmitted, have more profound effects on offspring, such as altering life expectancy.
The potential power of methylation marks to alter genes and therefore traits can be seen in honeybees. A worker honeybee is genetically as similar to the queen as it is to another worker. Yet queens have much greater life spans, the capacity to reproduce, and a very different physiology and anatomy. These differences arise because a chosen female larva, fated to become a worker, is fed ‘royal jelly’. This special substance alters the methylation of specific DNA sequences, which switches the larva’s development to that of a queen. This is astounding: it is like a human baby being given a different brand of food and becoming a superhuman instead of a normal adult, simply because of an epigenetic switch on top of their DNA. In short, epigenetic marks can be very important for how DNA is interpreted. In the case of a honeybee, methylation controls its entire destiny through the chance provision of ‘royal jelly’.
So, how likely is it that epigenetic marks, both good and bad, are regularly passed from parents to offspring in humans? Despite the stability of DNA methylation, the answer is unclear at the moment. This is because, before mammalian sperm and eggs mature, they go through a remarkable ‘reprogramming’ event which leaves the DNA in a transient naked state with no epigenetic marks. In effect, this wipes clean any harmful epigenetic marks that have accumulated and enables the offspring to start with a fresh set of modifications. This explains why epigenetic inheritance is not more common. However, if reprogramming were complete there would be no possibility of epigenetic inheritance whatsoever, making the Overkalix data very difficult to explain. It seems this hurdle is bypassed because some sequences of DNA, including the Agouti gene, are able to evade reprogramming. Whilst we do not know exactly which sequences are resistant, or how, it is possible that epigenetic modifications in these specialised regions may be transmitted to offspring, and may explain the observations of altered life expectancy and BMI. However, the real question is why these sequences are specifically protected, a highly debated issue in the epigenetics research field. So remember, next time you’re drinking the college bar dry or reaching for that extra cigarette, you might not just be affecting yourself.
Jamie Hackett is a postdoctoral researcher at the Gurdon Institute
BlueSci looks at an endlessly fascinating and increasingly useful world deep below
Deep below our feet lies a world mostly unexplored. It has been said that we know more about the dark side of the moon than we do about the depths of our oceans, and the same could apply deep in the Earth’s crust. Despite their isolation, underground and deep-water worlds are a hive of activity. Life has flourished in some of the most unimaginable places, adapting for survival in habitats we might have considered impossibly inhospitable. But subterranean life does not just consist of hardy bacteria: humans are also building homes underground, whether sheltering from extreme temperatures or simply being environmentally friendly. Engineers through the ages have also been tapping in: from Roman baths to modern-day electricity generation, they have been utilising the vast pool of geothermal energy that lies beneath. Even astrophysicists have begun to journey below the surface, building a telescope over a cubic kilometre in size within the Antarctic ice. There may be a huge amount that we have yet to learn about the nearest kilometres beneath our feet, but we have already made many fascinating and useful discoveries.
To us, the 400°C hydrothermal fluids exiting deepwater fissures at pressures of up to 50,000 kilopascals (when we’ve evolved to enjoy a leisurely 101 kilopascals) may sound ‘extreme’, but over the last few decades scientists have suggested that might be a tad anthropocentric. These vents are now accepted as havens for a diverse web of crabs, fish and octopuses that are sustained by some unusual bacteria. These bacteria feed off the dissolved sulphur compounds, usually toxic to other organisms, that are abundant at the mouth of ‘black smokers’. The biodiversity supported by these chemosynthetic organisms, which use heat, methane and sulphur instead of sunlight as sources of energy, is only now being fully revealed. Some single-celled organisms have even been shown to use the dim light generated by black smokers as a source for photosynthesis. This has all been discovered in the 51 years since Jacques Piccard and Don Walsh first descended 10.9 kilometres into the Mariana Trench.
Since 1993, further deep sea exploration has also discovered over 300 new species of deep sea gastropod, as well as one quite fascinating species of mollusc. This particular snail has a foot mineralised with iron sulphide armour plating, making it frustratingly magnetic for anyone attempting a dissection with metallic tools. Elsewhere in the water column, the HADEEP collaborative project captured images of the deepest recorded fish in 2008—a suction-feeding snailfish 7.7 kilometres down in the Peru-Chile trench. Two years later they went further with a new species of shrimp, detected in the Izu-Bonin Trench at 9.3 kilometres.
As with many specialisms of science, the study of these remarkable organisms has developed its own terminology. Organisms that thrive under high pressures and temperatures are termed ‘extremophiles’, an umbrella term that also incorporates organisms tolerant of high or low pH, immense altitudes, elevated radiation levels and extreme salinities. The greatest challenge to barophilic (‘pressure-loving’) organisms is the effect pressure has on the physical properties, especially the fluidity, of cellular membranes. At high pressure the phospholipids, which make up the membranes controlling the movement of molecules into and out of cells, are crushed together. In response, barophiles universally express a far greater percentage of unsaturated fatty acids within this layer, resulting in an appropriately fluid membrane. The fantastic suite of specialisations being discovered in more and more deep sea organisms means some extremophiles truly are extreme ‘lovers’: ‘obligate barophiles’ are unable to withstand pressures of less than 50,000 kilopascals.
Far from being simply zoological oddities, extremophiles are big business: industries have been using ‘extremozymes’ in the agricultural and laundry markets for decades. Taq polymerase, ubiquitous in labs around the world as the key enzyme for the polymerase chain reaction (PCR), was derived from thermophilic bacteria found in Yellowstone National Park. Now it seems medicine and biotechnology may further benefit from the study of deep sea organisms, with extremophiles being screened for novel chemotherapeutic agents. For instance, the sugar residues produced by the vent bacterium Vibrio diabolicus might be key to studying and delaying blood clotting. So as the fascinating depths of our oceans continue to amaze and provide potentially ground-breaking research, what has the ground right below us got to offer?
For over twenty years biologists have been aware that, crammed in between the pores of the consolidated sediments of the Earth’s crust, an entire ecosystem of microorganisms exists. Earlier this year, for example, in borehole water filtered from the Beatrix gold mine in South Africa, a multicellular organism was found tolerating anaerobic conditions and temperatures of up to 41°C, over 1.3 kilometres beneath the surface. The discovery of Halicephalobus mephisto, a nematode worm, not only expanded the range that multicellular life was known to inhabit, but also hinted at the resilience of life within these extreme habitats.
And we’re not going to stop being surprised anytime soon. Even at the South Pole, microbes have recently been found in the pores of ice crystals, resilient to prolonged darkness, an almost complete absence of water and exposure to ultraviolet radiation. These single-celled prokaryotes sport four copies of their genome, along with DNA repair mechanisms, to guard against the genetic damage caused by radiation. Furthermore, the discovery of Lake Vostok, lying four kilometres below the frozen surface and isolated for up to 25 million years, has brought to light a freshwater system as big as Lake Ontario in North America. The Arctic and Antarctic Research Institute, based in St. Petersburg, is currently completing a drilling exercise initially stopped in 1998 and resumed in 2004. After finally convincing the Antarctic Treaty Secretariat that there is no danger of contaminating the lake with surface organisms, they will be the first team to investigate the water of the super oxygen-saturated lake for signs of life. Drilling stopped this February just hundreds of metres short of the lake, but is due to resume this December with the beginning of the polar summer. It is unknown whether extremophiles will be found within this frontier of Earth exploration, but whether or not life is found in the lake, the technological advances developed as part of this project will be of major interest to astrobiologists: similar ecosystems are hypothesised to be present on other celestial bodies, such as the moons Ganymede and Europa.
A multitude of animal species may have adapted successfully to life below ground, but what about humans? Today, millions of people across the globe call an underground residence ‘home’, although the vast majority live far closer to the surface than their extremophile counterparts.
The most common motivation for moving underground is to avoid extreme temperatures on the surface. In Coober Pedy, the Australian outback town and self-declared opal mining capital of the world, almost half of the town’s 3,000 permanent residents live underground. Many of these homes, known as ‘dugouts’, are former mine shafts left behind by the opal industry. While the average daily temperature in Coober Pedy routinely exceeds 40°C and can drop to freezing overnight, the temperature underground is a constant and comfortable 20–25°C. Subterranean life in Coober Pedy is made possible by the local sandstone, which is easy to tunnel through, yet strong and stable. A coat of sealant on the dugout walls prevents dust, while vertical shafts extending into the rooms provide ventilation and natural light.
The same solution to the problem of staying cool is found in several desert villages around North Africa, most famously the Berber township of Matmata in Tunisia. Here residents live in caves excavated around the perimeter of large circular pits dug from the surrounding sandstone. Troglodyte (cave-dwelling) residences are thought to have been continuously present at Matmata from the 13th century onwards. The township was only ‘discovered’ in 1967, however, when heavy rains flooded the houses, forcing residents to seek help from the Tunisian authorities. Today, Matmata survives largely as a result of its international fame: its subterranean hotel, the Sidi Driss, acted as the set for Luke Skywalker’s home in the Star Wars films.
Underground extra-terrestrial accommodation may not be limited to films, however. Using THEMIS (the Thermal Emission Imaging System), NASA spacecraft have identified what appear to be caves on the surface of Mars, and researchers are now investigating whether these might be suitable locations for a future base on the red planet. The caves would offer protection from the meteoroids, ultraviolet radiation and solar flares that continuously bombard the Martian surface. Caves are also the most plausible source of the minerals, gases and possibly even ice that would be required by any future manned mission.
Back here on Earth, the world’s most populous troglodyte residences are, without a doubt, the yaodong caves of China. Stretching across six northern provinces, the ‘yaodongs’ (or ‘arched tunnels’) are thought to house around 40 million people. The caves are carved into hillsides composed of loess—a mixture of silt, sand and clay with a distinctive yellow colour. In summer, the caves are naturally cool, while in winter they are heated by a network of channels that distribute hot air from fireplaces to the rooms. Since the yaodongs are carved into hillsides that would be difficult to farm, they free up valuable arable land, making it possible to house large numbers of people with relatively little environmental impact.
At the South Pole, enforced self-sufficiency is a fact of life during the long polar winter. When the US military constructed the first South Pole Antarctic research station in 1956–57, they built part of the base underground to offer some protection against winter temperatures that could fall below -70°C. While the underground facility was successful in this respect, it had not been designed to cope with the ferocious polar winds. Each and every year, these led to snowdrifts more than one metre high accumulating on top of the building. Since this snow never thawed, the increasing weight on the roof eventually led to the base being abandoned in 1975 and a replacement constructed, this time above ground. The third, and current, South Pole station is not only above ground: it has been designed so that if snow accumulates unevenly over the top of the building, the structure can be ‘jacked up’ and re-levelled.
Looking out from such sites onto the barren landscape of Antarctica, you could be forgiven for thinking you were as isolated as you could possibly be. Indeed, it is this isolation that draws astronomers to the Amundsen-Scott South Pole station, where they can study the stars without interference from the artificial light, atmospheric pollution and noise that invade all but the furthest reaches of the Earth. Isolation is exactly what the scientists at the IceCube neutrino telescope are looking for; buried over a kilometre below the ice, it is not the traditional place for a telescope, but then this is not a traditional telescope. This team of physicists are searching for neutrinos, but instead of looking up into the sky, they are looking down.
Neutrinos are among the smallest particles known to man, so small that their mass has yet to be determined. As their name suggests, they are electrically neutral, and this lack of electrostatic charge means that they barely interact with other forms of matter. In fact, 99.99% of the time a neutrino will pass straight through the Earth. They are also incredibly abundant—trillions of them have passed through you in the time it took to read this sentence. But because it is so rare for a neutrino to interact with other matter, experiments to detect them need to be highly sensitive. However, this sensitivity also makes them susceptible to interference from cosmic rays coming down through the atmosphere. To combat this interference, the detectors are placed over a kilometre underground, facing down towards the centre of the Earth. This way the Earth acts as a filter, deflecting or absorbing the cosmic-ray particles that would otherwise swamp the neutrino telescope. Only particles that can pass straight through the Earth and come out the other side—like neutrinos—can be detected.
Its location at the South Pole allows the IceCube telescope to be larger than any other neutrino detector in the world. As neutrinos cannot be detected directly, the telescope searches for signs of another subatomic particle, known as a muon. Muons are produced when a neutrino collides with something, such as a water molecule, and by following the direction a muon came from, we can calculate the neutrino’s original trajectory. These muons carry a great deal of energy and so travel extremely fast: faster, in fact, than light itself travels through water or ice. Exceeding the local speed of light in this way produces a characteristic blue glow known as Cherenkov radiation, essentially the optical equivalent of a sonic boom. It is this Cherenkov radiation that the telescope detects, and from it the presence, direction and energy of the neutrinos can be inferred. Many neutrino telescopes use vast chambers (50,000 litres) of ultra-pure water, in which they wait for neutrinos to collide. At IceCube, however, the ice itself forms the experiment: a cubic kilometre, approximately one trillion litres, of ice.
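The Cherenkov condition can be written down compactly. The numbers below assume a standard textbook value for the optical refractive index of ice, roughly n ≈ 1.31; they are not figures from the article:

```latex
v > \frac{c}{n}
\quad\Longleftrightarrow\quad
\beta \equiv \frac{v}{c} > \frac{1}{n},
\qquad
\cos\theta_c = \frac{1}{n\beta}.
```

With $n \approx 1.31$ the threshold is $\beta > 0.76$, and a relativistic muon with $\beta \approx 1$ emits its blue Cherenkov light in a cone at $\theta_c \approx 40°$ to its direction of travel, which is what lets the detectors reconstruct the muon’s trajectory.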
Construction of IceCube took five years, finishing in December 2010. Drilling could only happen during the austral summer each year, the only time when planes can reach the South Pole. Using hot-water drills, 86 strings were lowered to 1.5–2.5 kilometres below the surface. At this depth, the immense pressure has driven out any imperfections or bubbles in the ice, leaving it almost completely pure and transparent. Each of the 86 strings carries 60 ‘digital optical modules’, totalling 5,160 detectors, each consisting of a photomultiplier to detect the blue light and a digitiser to maintain the fidelity of the signal on its 2.5-kilometre journey back to the surface. Such a large array of detectors produces an incredible amount of data (already over 400,000 gigabytes). For each muon detected below the ice, the data are cross-referenced with results from other sensors on the surface to check for coinciding events. If a muon is observed at the surface before it reaches the IceCube detectors, it could not have originated from a neutrino passing through the Earth and interacting with the ice, so it is ignored.
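The surface-coincidence veto described above can be sketched in a few lines. This is a minimal illustration of the logic only; the class, field names and timing scheme are invented for the example and are not IceCube's actual data format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MuonEvent:
    """A detected muon. Fields are illustrative, not IceCube's real schema."""
    deep_time_ns: float               # arrival time at the buried detectors
    surface_time_ns: Optional[float]  # matching hit in surface sensors, if any

def is_atmospheric(event: MuonEvent) -> bool:
    """Veto logic: a muon seen at the surface *before* it reaches the deep
    array must have come down through the atmosphere, so it cannot be the
    product of a neutrino that passed up through the Earth."""
    if event.surface_time_ns is None:
        return False  # no surface hit: keep as a neutrino candidate
    return event.surface_time_ns < event.deep_time_ns

# A downgoing cosmic-ray muon is vetoed; an event with no surface hit is kept
events = [
    MuonEvent(deep_time_ns=500.0, surface_time_ns=100.0),
    MuonEvent(deep_time_ns=500.0, surface_time_ns=None),
]
candidates = [e for e in events if not is_atmospheric(e)]
```

The real experiment compares full reconstructed tracks rather than single timestamps, but the principle is the same: anything arriving from above is discarded.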
Unlike photons or charged particles, neutrinos will not deviate from travelling in a straight line while hurtling from one galaxy to the next. This means we are able to trace the paths of neutrinos back to where they came from and produce a map of neutrino sources in the universe. This can be compared to what we have been able to observe from optical, infrared and X-ray telescopes. Combine that with measuring the energy of the incoming neutrinos and we can begin to pinpoint the locations of black holes, gamma ray bursts and supernova explosions. Understanding these events is all part of cracking the puzzle of dark energy, and buried under a billion tonnes of ice, thousands of detectors are looking for a faint blue flash of enlightenment.
In contrast to the freezing depths of the Antarctic, the ground beneath us also contains phenomenal amounts of heat. No one who has witnessed the eruption of a volcano, or the devastation of earthquakes, can doubt that there are huge amounts of energy stored inside the Earth. Some of this energy, known as geothermal energy, is released as heat. It comes from energy stored as the Earth formed, as well as from naturally occurring radioactive elements that release energy as they decay.
Of course, not all manifestations of this energy are destructive. The Yellowstone region of the USA is famous for its spectacular geysers, having one of the highest concentrations in the world. There are also hot springs and pools with beautiful coloured rocks, where minerals from deep within the Earth are carried to the surface by deep flowing water. The driving force for geysers and hot springs comes from the pool of molten rock (magma), which sits relatively close to the surface in Yellowstone National Park.
Another place with similar geothermal features is the East African Rift System. Sitting on top of some of the thinnest continental crust on Earth, it contains steam vents and hot springs that have been used by local people for thousands of years. Similarly, Japanese onsen (literally ‘hot springs’) have always been popular bathing sites, and the Romans used hot springs for indoor heating as well as for public baths.
In the last 60 years, the development of new technologies means that we are able to exploit geothermal energy more effectively than our ancestors. This has been achieved in two ways: electricity generation and the direct utilisation of heat. While electricity is a more versatile form of energy, generating it from geothermal energy is more difficult, requiring a more favourable site. Approximately 70 countries have the infrastructure in place to use the direct heat from geothermal sources, as opposed to 24 countries that are able to generate electricity.
For electricity generation there are several types of power plant, all of which are very similar to those that run on conventional fossil fuels: they base their operation on steam-driven turbines. The difference is that the steam is obtained either directly from a geothermal source, such as a very hot underground water reservoir, or by using the heat from deep underground to turn water into steam. This requires drilling a few kilometres into the ground, either to gain access to hot underground water or to inject water into hot rocks, from which it returns to the surface as steam.
The hot water that emerges from the ground can also be used directly. In some countries with rich geothermal resources, such as Iceland, the hot water coming from springs or from artificial wells is used to heat buildings. Elsewhere, ground-source heat pumps exploit the fact that the first few metres of the ground beneath our feet remain at a fairly constant 10–16°C. In winter, the ground is warmer than the air, and heat passes through a geothermal heat exchanger (a series of pipes buried underground) into the house. In summer, the heat flows the other way, from the hotter building into the cooler ground. As electricity is only required to power the pumps that drive fluid through the exchanger pipes, this is much more environmentally friendly than running an air conditioning system or conventional central heating.
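A rough way to see why the steady ground temperature matters is the ideal (Carnot) coefficient of performance of a heat pump, which grows as the gap between source and indoor temperature shrinks. The sketch below is a textbook upper bound, not a description of any system in the article, and the example temperatures are assumptions:

```python
def carnot_cop_heating(indoor_c: float, source_c: float) -> float:
    """Ideal heating COP = T_hot / (T_hot - T_cold), temperatures in kelvin.
    Real heat pumps achieve only a fraction of this theoretical limit."""
    t_hot = indoor_c + 273.15
    t_cold = source_c + 273.15
    return t_hot / (t_hot - t_cold)

# Heating a 20 °C house from 12 °C ground versus -5 °C winter air:
cop_ground = carnot_cop_heating(20.0, 12.0)  # small temperature lift
cop_air = carnot_cop_heating(20.0, -5.0)     # much larger lift
```

The smaller the temperature lift, the more heat is moved per unit of electricity, which is why a source that sits at 10–16°C all year is so attractive.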
As fossil fuel stores start to run low, and with increasing concerns about damage to our environment, using the energy under our feet is an attractive option. Not only is it available at all times, unlike solar or wind energy, but geothermal power plants can also be adapted to make the most efficient use of the resources available.
We seldom think of what lies beneath our feet as we go about our daily lives. Yet Earth’s internal energy gives us a mechanism for generating electricity; it powers life at ocean depths sunlight can never reach; and it provides a sheltered environment for relatively cheap, effective and efficient housing and laboratories. The most exciting part of all is that we have only scratched the surface: as technology advances, we can expect many more exciting discoveries from the world below.
Nick Crumpton is a PhD student in the Department of Zoology
Ian Le Guillou is a PhD student in the Department of Biochemistry
Louisa Lyon is a postdoctoral researcher in the Department of Experimental Psychology
Wendy Mak is a PhD student in the Department of Physics
Lindsey Nield reflects on the life and voyages of Admiral Robert Fitzroy
In 1854 a vote was taken in the House of Commons on funding for the new Meteorological Office. Laughter filled the chamber at the suggestion that it might one day be possible to predict the weather 24 hours in advance. The very idea of forecasting the weather seemed ridiculous to the Victorians, but today the Met Office is a leading source of information on weather and climate. In that time, we have come to rely on the daily forecasts and adverse weather warnings they provide. The man whose vision we have to thank for this invaluable service was the first director of the office, Robert Fitzroy.
Fitzroy is better remembered as Captain of the HMS Beagle, the ship Charles Darwin sailed on, but his appointment to the Meteorological Office was the culmination of a long and varied career in service. First to the Navy, then to Parliament and even as Governor of New Zealand, he repeatedly showed a dedication to duty that shaped his life.
Born on 5th July 1805, Robert Fitzroy’s naval career was settled by the time he was twelve years old, when he enrolled at the Royal Naval College at Portsmouth in February 1818. He excelled there, completing the three year course in half the time and passing his examination for Lieutenant in 1824 with full marks, the first recruit ever to do so.
Fitzroy was an officer of exceptional ability and together with the influence of his aristocratic family he progressed quickly. When Pringle Stokes, Captain of the HMS Beagle, shot himself in a fit of depression, the Admiral in command promoted the twenty-three year old Fitzroy to the rank of Commander and made him Stokes’ successor.
The mission was to survey the Straits of Magellan and Tierra del Fuego, a task Fitzroy conducted with such attention to detail that his charts were still used over a hundred years later. He quickly grew into his role as Captain, demonstrating the conviction of command and independent thinking required when communications from home were limited to sporadic seaborne dispatches.
During this first command, Fitzroy learned firsthand the importance of acting on changing weather patterns. Caught in a severe storm, the Beagle was close to being overturned, causing two sailors to fall overboard and drown. By dropping both anchors Fitzroy saved the ship, but he felt he had failed by not acting as soon as he noticed a drop in pressure that preceded the storm.
On returning to England in 1830, the Beagle carried four Fuegian natives whose fate would ultimately lead to Fitzroy’s second command. After educating them in Christian teachings he hoped to return them to Tierra del Fuego to start a mission, but was unable to find a posting for the return trip. On hearing this, Fitzroy’s uncle and his friend Captain Francis Beaufort, then Hydrographer to the Admiralty, persuaded their Lordships to appoint Fitzroy to the Beagle once again for a second survey of South America.
No expense was spared in preparing the Beagle for the voyage, much of it from Fitzroy’s own pocket, and they set sail in December 1831. In addition to the usual crew complement were the returning Fuegians, a missionary, a ship’s artist and the naturalist Charles Darwin. Darwin was there not only to examine the land they would survey, but also to be a companion to the Captain. Fitzroy had a depressive nature, was prone to mood swings, and was well acquainted with the loneliness and strain of command, both an uncle and his predecessor having committed suicide. He hoped that carrying a scientifically minded gentleman with whom he could converse would prevent him from sinking into depression.
The strategy was not wholly successful. Finding it difficult to conduct the survey in the time available and unwilling to sacrifice detail to cover more ground, Fitzroy bought or hired extra boats and crews to man them. In July 1834 he received news that the Admiralty would not cover the costs he had incurred, plunging him into debt. Together with the stress of long working hours this provoked a personal crisis. He wrote a letter of resignation but withdrew it after his officers convinced him to reconsider.
Fitzroy’s spirits returned over the remainder of the voyage, and the Beagle finished her epic journey in November 1836. Just a month later Fitzroy was married and enjoying a settled period in his life. He completed charts from the survey and wrote an account of both Beagle voyages that was published in May 1839. The following years saw a departure from naval life as he was elected to Parliament in 1841, where he worked to improve conditions in the merchant fleet. Then in 1843, he was appointed Governor of New Zealand, a post in which he quickly became unpopular for openly disagreeing with how the settlers dealt with the natives and he was recalled in 1845.
Returning to the Navy, Fitzroy was made Captain of HMS Arrogant, a new hybrid ship equipped with both sail and a screw propeller turned by steam engine, the first of its kind. However, overwork and financial problems once more threw Fitzroy into depression and without loyal officers to talk him round, this time he resigned to settle his affairs. An excess of captains in 1850 left no hope of another command, paving the way for Fitzroy to take his place in meteorological history.
Interest in the weather sciences was growing and after an international conference on the subject in Brussels, the new Meteorological Department in Britain was established. Though not a scientist in the traditional sense, the many weather observations Fitzroy had made during his years at sea made him uniquely qualified to run the new office. He took up his position as Meteorological Statist to the Board of Trade on 1st August 1854.
The initial remit of the Meteorological Office was simply to collect observations of the weather at sea. To achieve this, willing captains were found and provided with equipment. Fitzroy realised he could use the data to create a ‘synoptic chart’, a term he coined, showing the weather at a given time over a large area. With the telegraph becoming more widely used, these observations could be charted quickly. Fitzroy theorised that studying synoptic charts could pinpoint recurrent patterns, enabling him to predict hazardous conditions at sea and warn mariners. When a major storm hit the British Isles, destroying many vessels, Fitzroy was given the go-ahead to provide such warnings. He extended his observation network to include land weather stations and, with the data pouring in, the first official storm warning was issued on 6th February 1861.
Fitzroy felt more could be achieved and began to print a general daily weather ‘forecast’, a term he is also credited with, in The Times newspaper. With his forecasts sparking public interest in his new publication The Weather Book in 1862, and notification of his promotion to Vice Admiral the following year, Fitzroy had reached the pinnacle of his career.
Unfortunately, it was not to last. Criticism of the science in The Weather Book, the accuracy of his daily forecasts, and even the money ship-owners were losing from ships staying in port during storm warnings, had Fitzroy feeling attacked on all sides. The growing popularity of Charles Darwin’s Theory of Evolution by Natural Selection did little to ease his mind. Fitzroy, whose devout Christian beliefs had become more fundamental, felt responsible for what he saw as his friend’s blasphemous ideas being unleashed on the world, leading to a rift between them. Under increasing stress Fitzroy took his own life in 1865.
Robert Fitzroy was a man to whom duty meant everything. He spent his life serving the public, his friends and family, and the sailors whose safety he championed. Sadly, his achievements faded into the shadow cast by Charles Darwin, leaving him known simply as the Beagle’s Captain, but his reputation in meteorological circles has been somewhat revived in recent years. In 2002, when a large area of sea to the west of the Bay of Biscay had to be renamed, it became the sea area FitzRoy, the only area in the British system to be named after a person. He is now remembered daily as part of the shipping forecast, reminding all those listening out for the storm warnings they rely on just who they have to thank for the service.
Lindsey Nield is a PhD student in the Department of Physics
Helen Gaffney explores the rise of popular science magazines
The streets of eighteenth century London bristled with endeavour. Home to many of Europe’s self-styled ‘enlightened’ thinkers, it was here that the world’s first ‘magazine’ hit the printing presses. The Gentleman’s Magazine, founded in 1731, offered the discerning public informative articles on a wide range of topics, including politics, history, economics and new scientific advances. In its first volume a report on the “conduct of the ministry” lies only a few pages away from a piece about the “credulity of witchcraft” and “observations in gardening”. Page nineteen sported a keen defence of “Mr Cheselden’s intended operation on the drum of the ear” and in subsequent issues medical features are as common as pieces about new technological advancements.
Despite its varied content, the magazine’s intended readership was limited to a subsection of society: as its name suggests, the target audience was wealthy and male. This reflected the general philosophy of the Enlightenment in Europe which, despite encouraging civil liberty and self-improvement for the masses, was led predominantly by the educated and the economically powerful. Alongside dry academic journals, popular periodicals emerged, and among them, this newly termed ‘magazine’. Storehouses of novel and improving information, these new publications provided a way for science to enter the household of the ordinary professional, allowing individuals to rise to the challenge set by philosopher Immanuel Kant: “dare to know”.
‘Enlightening’ magazines were by no means the first attempts to disseminate scientific information. The Royal Society had been publishing its Philosophical Transactions since 1665 and even in ancient times the Greeks produced pamphlets relating to knowledge of the natural world. However, the eighteenth century marked the first time that science was presented as a source of self-improvement for the general public as well as a topic of general interest. As cheaper periodicals began to appear, this opportunity was made available to a much wider audience.
In 1826, the Society for the Diffusion of Useful Knowledge was established in Britain. This explicitly aimed to educate the masses and began to publish The Penny Magazine in 1832. Like many of the philosophical and political works of the Victorian era the articles published within it were for the most part anonymous. This helped to protect authors with unorthodox views but also enabled the magazine to act as an authority and a purveyor of fact. Readers could expect to learn about exotic creatures from all over the British colonies as well as great advancements in the age of industrial revolution. Alongside the new breed of professional scientists, professional science writers also began to emerge: the number of popular science periodicals doubled from the 1850s to the 1860s.
Across the pond, prolific inventor Rufus Porter founded Scientific American in 1845. The publication, which started out as a weekly four-page newspaper, now sells monthly editions to hundreds of thousands of readers worldwide. In its early days Scientific American focused on the latest innovations, reporting on new patents and curious inventions. Its archives also contain evidence of early scientific debate. The edition of 11th October 1862, for example, features a caution against the use of oil for fuel; a view we might be tempted to call forward-thinking. It reports advice to the American government to “stop further pumping and boring for oil” from “a gentleman who has spent some days in the region of the coal oil wells in Pennsylvania”. The warning was based on this gentleman’s own theory, which predicted dire consequences. He argued, “The oil is being drawn… from the bearings of the earth’s axis,” which “will cease to turn when the lubrication ceases.” Since this was around the time that large-scale oil drilling began to take off, we might now wish that this claim had been taken more seriously at the time, even if his reasoning was a little off. Nonetheless, we can see how, even 150 years ago, scientific magazines provided a forum for discussions about important issues; issues which continue to challenge governments, international relations and humanity as a whole.
Soon after, in 1872, a new magazine called Popular Science Monthly was put into print. Continuing the approach of the popular Enlightenment periodicals, it aimed to provide the educated layman with scientific knowledge and received contributions from a wealth of high profile scientists including Charles Darwin, Louis Pasteur and William James. Yet early issues would be judged dry by today’s standards and, with lengthy articles and few illustrations, it was more of an academic journal than a magazine. During the First World War, under the care of a new editor, it underwent a transformation, shifting towards shorter, more numerous articles with colourful pictures and thereby dramatically increasing its circulation.
In the twentieth century the development of the scientific magazine continued to reveal much about the changing position of science within society, with each new issue providing a record of collective interests. Wartime pages of Scientific American and Popular Science are filled with expositions of the latest planes and weaponry; they seem to have been as much propaganda tools as informative guides. The tensions of the Cold War are also captured in the pages of New Scientist’s back issues, which date back to 1956, and in Russian magazines, such as Nauka i Zhizn (Science and Life).
Nowadays, magazines such as New Scientist and Popular Science have millions of subscribers worldwide and are storming ahead in the growing online and portable device industries. Yet these modern magazines are starkly different from their early predecessors. Rather than dry, authoritarian pieces they play host to scientific debate with clear, attributed articles and an emphasis on evidence. The divergence between academic journals and popular media outlets has enabled scientists to assess new findings in their chosen fields as well as engaging with work from other areas. With scientists working in ever more diversified disciplines on ever more collaborative projects, the need for general as well as specialist science publications is as important as ever.
Helen Gaffney is a 3rd year student in the Department of History and Philosophy of Science
Tim Middleton uncovers the role of science in the storage and conservation of paintings
War broke out on 3rd September 1939. The paintings at Trafalgar Square’s National Gallery needed to be evacuated. Some suggested that they should be shipped to Canada for safekeeping, but such an operation would have been susceptible to German U-boat attack. Kenneth Clark, the then Director of the National Gallery, received the following telegram from Winston Churchill: “hide them in caves and cellars, but not one picture shall leave this island.” And so it was that the National Gallery’s entire collection came to be stored in a disused slate mine at Manod, in North Wales, for most of the Second World War.
High on the slopes of Manod Mawr, 500 metres above sea level, a horizontal tunnel leads into the mine. Once the site had been selected as a hiding place, explosives had to be used to enlarge the entrance so that the bigger paintings would fit down the tunnel. Small brick ‘bungalows’ with concrete roofs were built within the 60 metre high caverns to house the paintings. The National Gallery’s priceless treasures were joined by all the royal pictures from the palaces, a number of paintings from the Tate Gallery and even the Crown Jewels. The entire convoy travelled to North Wales disguised as a fleet of delivery vehicles for a chocolate company.
However, even in this subterranean hideaway there were fears for the safety of the paintings. Fragments of rock regularly fell from the roof of the mine and on one such occasion a number of paintings were destroyed. It was also known that the paintings would not cope well with changes in temperature and humidity, so a crude ventilation system was devised. Air was passed over trays of dehydrated silica gel in order to absorb any moisture before being circulated throughout the ‘bungalows’ by a series of electric fans. The trays of silica gel were changed regularly and dried in an array of domestic electric ovens near the mine entrance. The entire operation was powered by an ancient, slow-speed, two-cylinder diesel engine that had been recovered from a disused brickworks.
The rudimentary ventilation system was not only successful, but also led to valuable scientific discoveries. The humidity in the mine was checked daily, something that had never previously been done, and when the paintings returned to London they were in a better condition than when they had left. Indeed, according to former Director Neil MacGregor the “scientific advances made in the underground chambers of Manod quarry changed the Gallery forever.” When the Gallery was renovated after the end of the war, it became the first gallery in the world to include an air-conditioning system for controlling the temperature and humidity within the building. The excursion to Manod also prompted the Gallery to establish a Conservation Department alongside its pre-existing Scientific Department.
Today, the Conservation Department works with curators and scientists to try and preserve works of art for the enjoyment of future generations. The Department checks the state of the paintings, controls their storage conditions and oversees major restoration and cleaning operations. A variety of physical and chemical techniques can also be used to obtain information on the different layers of a painting and the materials that were employed. Such techniques often help to elucidate the entire history of a painting, including changes made by the original artist, and later alterations or forgeries.
One method is to remove flakes of paint from damaged areas of a picture, mount them in cold-setting resin, and polish them to reveal a cross-section of the masterpiece. When these samples are examined using reflected light microscopy, the layered structure of the painting is revealed and some pigments can be identified by their colour and optical properties.
Infrared reflectography digs a little deeper. Carbon black, a form of charcoal, was used by many of the great European painters around the seventeenth century for making their initial underdrawings on a blank canvas. Carbon black is also very absorbent in the infrared portion of the electromagnetic spectrum. Therefore, by constructing an image from reflected infrared radiation, it is possible to delineate an artist’s original sketch of their composition. Sometimes, the results are not as expected. When Leonardo da Vinci’s The Virgin of the Rocks was subjected to infrared scrutiny, the conservators discovered two different underdrawings: one for the final composition we see today and the other of a kneeling woman with her face in profile and one hand across her breast. The Virgin of the Rocks that now hangs in the National Gallery is a copy of a previous painting that Leonardo had produced for a private client. It appears that Leonardo had initially planned to use a slightly different composition in this version, which was for the Confraternity of the Immaculate Conception in Milan. However, after patching up a financial disagreement he changed his mind and reproduced the original composition.
X-ray photographs reveal other aspects of a painting’s past. Heavy metals, such as lead and mercury, are more opaque to X-rays than other elements. High concentrations of these elements in certain pigments produce white areas in the resulting X-ray image. An X-ray photograph of Renoir’s The Umbrellas reveals that the woman on the left of the composition used to sport a small hat and frilly collar, whilst the painting’s background used to consist entirely of foliage with no umbrellas at all. Art critics believe that Renoir made the original painting in 1880, but, when he returned to the picture in 1885, his attitude to his art had changed. Instead of his previous agitated and feathery style, he now preferred broader brushstrokes, geometric shapes (hence the umbrellas) and a less Impressionistic style.
Modern spectroscopic methods are also increasingly being used by conservators to identify materials and pigments. Laser microspectral analysis uses a laser to vaporise a few micrograms of a sample. The elements present are then identified by emission spectroscopy: recording the wavelengths of light emitted as atoms relax from a high-energy electronic configuration to a lower one.
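The link between a recorded wavelength and the energy of the electronic transition that produced it is Planck’s relation, E = hc/λ. A minimal sketch of the conversion (an editorial illustration; the 500 nm line is an arbitrary example, not a pigment measurement from the article):

```python
# Convert an emission wavelength to the energy of the electronic transition
# that produced it, via Planck's relation E = h*c / wavelength.
# (Editorial sketch; the 500 nm line is an arbitrary example.)

PLANCK_H = 6.62607015e-34      # Planck constant, J*s
SPEED_OF_LIGHT = 2.99792458e8  # speed of light, m/s
EV_PER_JOULE = 1 / 1.602176634e-19

def transition_energy_ev(wavelength_nm: float) -> float:
    """Energy (in electronvolts) of a photon with the given wavelength."""
    wavelength_m = wavelength_nm * 1e-9
    return PLANCK_H * SPEED_OF_LIGHT / wavelength_m * EV_PER_JOULE

# A green emission line at 500 nm corresponds to a transition of about 2.48 eV
print(f"{transition_energy_ev(500):.2f} eV")
```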
A recent BBC documentary investigated a painting called The Procuress, which is owned by the Courtauld Institute in London. Many art specialists attributed the work to the great Dutch master Johannes Vermeer, but spectroscopic analysis of the upper layers of paint revealed that the paint was mixed with phenol formaldehyde resin. Phenol formaldehyde, commonly known as ‘Bakelite’, was one of the first ever thermosetting plastics. In the early twentieth century it was used in everything from kitchenware to children’s toys.
However, to an art conservator, the presence of phenol formaldehyde means only one thing: the painting must be a fake. The Procuress is in fact by Han van Meegeren, a Dutch painter and arguably the most ingenious art forger of all time. He mixed his oil paints with phenol formaldehyde resin to make them harder, giving his finished works the appearance of seasoned seventeenth-century masterpieces.
The use of scientific technology in the world of art conservation has come a long way since the electric fans of Manod slate quarry. Moreover, as the science has progressed, conservators have learnt much more about the curious lives led by many individual works of art.
Tim Middleton is a 4th year student in the Department of Earth Sciences
Richard Thomson gives his opinion on whether the Large Hadron Collider is worth its substantial investment
The Large Hadron Collider (LHC) has a less than straightforward history. The idea for the Geneva-based experiment emerged in the 1980s, but it wasn’t until 1996 that approval for the £1.3 billion project was granted. Construction took 13 years, during which time the project went grossly over budget and had to borrow hundreds of millions of euros. Furthermore, when the collider was first brought online in 2008 it ran for just nine days before a malfunction forced it to shut down for more than a year. However, the LHC is finally up and running again, and providing some amazing insights into the birth of the Universe itself. But in a time of such economic hardship, many are wondering whether it is really worth the investment of so much time and money.
The collider is a collaborative initiative based at the European Organisation for Nuclear Research (commonly referred to as CERN, from the French acronym). It consists of a synchrotron particle accelerator housed in a 27-kilometre underground tunnel. Electric fields accelerate two beams of protons (hadrons) in opposite directions, while powerful superconducting magnets steer them around the ring; once up to speed, the beams are collided. On impact, these particles release the huge amounts of energy gained during acceleration, creating new, more massive particles. Colliding beams travelling in opposite directions gives the maximum energy for particle creation, allowing mankind to create its heaviest particles.
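To get a feel for the energies involved, consider a back-of-envelope calculation (an editorial sketch, not a figure from the article): during the 2010–11 runs each proton beam carried 3.5 TeV, and comparing that with the proton’s rest energy gives the Lorentz factor, a measure of just how relativistic the particles are.

```python
# Back-of-envelope sketch (editorial illustration): how relativistic is an
# LHC proton? The Lorentz factor gamma = E / (m_p * c^2) compares the beam
# energy with the proton's rest energy.
import math

PROTON_REST_ENERGY_MEV = 938.272   # proton rest mass energy, in MeV
beam_energy_mev = 3.5e6            # 3.5 TeV per beam (2010-11 runs), in MeV

gamma = beam_energy_mev / PROTON_REST_ENERGY_MEV
beta = math.sqrt(1.0 - 1.0 / gamma**2)   # speed as a fraction of c

print(f"Lorentz factor: {gamma:.0f}")          # roughly 3700
print(f"Fraction of light speed: {beta:.9f}")  # within ~4 parts in 10^8 of c
```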
However, there is scepticism in some circles about how the LHC will benefit science. The purpose of the LHC is to recreate the conditions and energies present immediately after the big bang, in the hope of discovering how our Universe came into existence. The latest theories in particle physics on why the Universe around us behaves the way it does will be put to the test. The idea of ‘recreating the big bang’ initially worried a portion of society who held the misconception that the LHC might create black holes capable of swallowing the Earth—an idea fuelled by uninformed scaremongering in some popular tabloid newspapers. Fortunately, this was never a possibility: the LHC aims to simulate the energies present after the big bang, not to recreate the event itself. Still, these reports went some way towards damaging the experiment’s reputation, and with an ever-increasing list of public spending cuts, calls for the UK to give up its part in the work at Geneva are at their loudest.
So how much does the LHC actually cost the UK? The total project cost is £2.6 billion, split between the 20 member states of CERN. The UK’s contribution to the LHC is approximately £95 million per year, which equates to around £1.50 per head of our population. With the total UK budget for 2012 standing at £702 billion, the UK’s contribution to CERN amounts to roughly 0.014% of it. Other leading scientific forces in Europe are also heavily invested in the LHC project, no doubt because they realise the huge potential and importance of the experiment. To keep its place among those at the pinnacle of scientific research, the UK has to remain involved.
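The arithmetic is easy to check (an editorial sketch; the UK population figure of roughly 62 million for 2011 is an assumption, not a number from the article):

```python
# Sanity-check the budget arithmetic (editorial sketch; the UK population
# of ~62 million in 2011 is an assumed figure).
uk_cern_contribution = 95e6   # pounds per year
uk_total_budget = 702e9       # pounds, 2012 UK budget
uk_population = 62e6          # approximate 2011 population (assumption)

per_head = uk_cern_contribution / uk_population
share_of_budget_pct = 100 * uk_cern_contribution / uk_total_budget

print(f"~£{per_head:.2f} per person per year")
print(f"~{share_of_budget_pct:.4f}% of the total budget")
```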
The importance of the project can only be understood when we look at the magnitude of the work being undertaken. The most high-profile work being carried out at CERN, in collaboration with colleagues at the Tevatron, a similar particle accelerator based near Chicago, is the search for the Higgs boson. This fundamental subatomic particle, named after the British physicist Peter Higgs, is predicted by the current Standard Model of particle physics. If it is proved to exist, it will provide evidence for a mechanism by which particles acquire mass, an idea that is vital to modern theories. In addition, the LHC is looking for dark matter particles. Dark matter interacts only via gravitational effects and does not emit or scatter electromagnetic radiation, such as visible light, the way normal matter does. Its existence is hypothesised to explain discrepancies between the calculated and observed masses of distant galaxies, and why the stars of these galaxies orbit ‘too quickly’ for the observed mass. Dark matter is thought to make up around 25% of the Universe, with normal matter contributing about 5% and the remainder being dark energy. If the LHC can help to identify a working theory of dark matter, it will transform our understanding of how the Universe behaves. The LHC also hopes to shed some light on supersymmetry, which states that every particle has a corresponding superpartner, and on string theory, which postulates that there are 11 dimensions rather than just the 4 we observe (height, width, depth, time). As with any science experiment, the answers will not be instantaneous, but the results that have already emerged more than validate the project’s existence.
For example, scientists at CERN have already successfully trapped antimatter and, more importantly, kept it stable long enough to study it: a breakthrough which could help us understand the origin of the Universe. Antimatter particles have the same mass as their ordinary matter counterparts but carry the opposite charge. At the beginning of the Universe, matter and antimatter should have been created in equal amounts, yet we observe far more matter. It is currently postulated that annihilation between matter and antimatter in the early Universe destroyed almost all of the antimatter. Studying the artificially created antimatter at CERN could help explain the early life of the Universe.
All of us yearn for understanding, and it is natural to want to comprehend the world on which we live and the Universe in which our planet resides. The LHC is a massive leap forward in discovering how the Universe was born, and how it continues to evolve and behave.
The questions that we have all asked are not going away, and neither is our fascination with their potential answers. What happened at the beginning of the Universe? Why is the Universe the way it is? What is the Universe made of? The LHC is trying, and slowly beginning, to answer these queries—I believe the price is a small one to pay to continue this endeavour.
Richard Thomson is a PhD student in the Department of Earth Sciences
Rose Spear analyses the varying global stances on nuclear energy
On 11th March 2011 a magnitude 9.0 earthquake triggered a tsunami which overwhelmed the seawall protecting the Fukushima Daiichi nuclear power plant in Japan. The earthquake had already caused a power failure, and the tsunami then destroyed the back-up cooling systems for the Daiichi reactors; the subsequent meltdowns released harmful levels of radioactive gases into the atmosphere. In response to the leak, the Japanese government encouraged residents within a 30-kilometre radius of the plant to evacuate or protect themselves from contact with the contaminated gases. The fires in Daiichi’s reactor four were reminiscent of those that burned for ten days at Chernobyl, the site of the only other level seven nuclear accident in history. However, such comparisons reveal more differences than similarities between the two accidents. The level of radiation released, the size of the area contaminated, and the number of people evacuated were all around ten times higher at Chernobyl than at Fukushima. In the case of Chernobyl, there were 64 confirmed deaths due to radiation and over 6,000 cases of thyroid cancer. Although the long-term effects of the Fukushima Daiichi accident cannot yet be determined, no deaths have so far been attributed to radiation poisoning.
But despite these differences between Chernobyl and Fukushima, the media’s tale of these two cities has contributed to heightened public concern and precipitated anti-nuclear protests throughout the world. National representatives have responded by reconsidering policies on nuclear energy and the future of nuclear power plants in their countries. Since Chernobyl, the number of countries with active nuclear programmes has grown rapidly: 30 countries now operate nuclear power plants. The political responses to the Fukushima accident differed significantly from country to country: some made plans to end their nuclear programmes, while others moved forward with plans to build new reactors.
In response to protesting and mounting public pressure, both Germany and Switzerland announced that they would phase out their nuclear power plants in the coming years. In a similar expression of democracy, the Italian public voiced their concern in a nuclear power referendum held in June 2011, deciding to close all nuclear power plants in the future. Similarly, plans for Venezuela to build a new nuclear power station with Russian assistance were cancelled by President Hugo Chavez following the Fukushima accident. In response to the global concerns regarding the safety of nuclear energy, these countries and others will join Australia, Austria, Denmark, Greece, Ireland, Latvia, Liechtenstein, Luxembourg, Malta, Portugal, New Zealand, Norway, and Spain in the list of countries with anti-nuclear policies. For these countries, the risks of nuclear contamination outweigh the benefits of an additional energy source.
In contrast, France and the United States remain champions of nuclear energy with 58 and 104 nuclear power plants operational in 2010 respectively. China leads the way in terms of construction plans, with 26 reactors in the pipeline. Russia is also planning 10 more whilst India, South Korea, the UK, and several other countries are actively moving ahead with plans to expand their supply of nuclear energy. In June 2011, only three months after the Fukushima accident, the United Kingdom identified eight sites for the construction of new nuclear power plants. Nuclear energy provides a significant percentage of the national energy supply in several countries, including Belgium (51%), France (74%), and Slovakia (51%).
The Fukushima accident brought national policies on nuclear energy into the limelight, with nations lining up on either side of the debate. Scientific research on topics such as the safety of nuclear power plants, the environmental and social effects of radioactive contamination, and the potential of nuclear energy as a world power source has a critical role in this ongoing debate. Efforts to reduce consumption of fossil fuels and continuing national energy demands provide powerful motivators in the search for alternative energy sources. As a result, many countries have determined that the benefits of nuclear energy outweigh the risks; others exercise more caution, but see nuclear energy as a necessary stepping stone whilst alternative technologies are developed; and some, in the wake of Fukushima, have decided that the dangers are simply too great.
Rose Spear is a PhD student in the Department of Materials Science and Metallurgy
Aaron Barker explains how certain types of CHaOS can be fun and informative
Cambridge Hands-On Science (or, to give it its rather fitting acronym, CHaOS) is a student- and alumni-run organisation which aims to bring fun, interesting and interactive science demonstrations to children all over the UK. It sets out to enlighten those who might not otherwise experience the more practical side of science.
In 1997, a group of Cambridge undergraduates organised an event during the Cambridge Science Festival featuring hands-on experiments, calling it Crash, Bang, Squelch! In 2002, the committee decided to take the experiments to six venues on the south coast, and the CHaOS Science Roadshow was born. In 2011, Crash, Bang, Squelch! welcomed over two thousand visitors on a five week tour around the country, with around fifteen demonstrators volunteering at any event.
CHaOS demonstrators range from first years to PhD students to alumni—one of whom, Dave Ansell, has a garage filled with CHaOS and other science-related odds and ends. It includes lathes, a box of light bulbs with an intensity equivalent to sunlight, and a bazooka powered by a vacuum cleaner. This year I joined the CHaOS Science Roadshow for the last three weeks of the tour.
Since CHaOS is run by volunteers and funded by sponsorship, we spent most of our nights camping, apart from a few committee members who (madly) chose to sleep outside under an open marquee. This led to a friendly, sociable atmosphere when the weather was fair, with barbecues, seemingly endless quantities of fajitas, roasted marshmallows and days off spent relaxing or even swimming. However, heavy rain occasionally forced us to huddle inside the marquee with hot chocolate or repair leaking tents in the dark, but at least we could shelter in a warm pub for dinner.
Most demonstrators, including myself, found the experiments even more fascinating than the children did—though it was understandable that most people didn’t get quite as excited as me about rocks and fossils! I also enjoyed launching home-made water rockets constructed from fizzy drink bottles, particularly after replacing the pump and bung made them so powerful that I had to repair the rockets several times. It never stopped being entertaining to watch every child in the room jump whenever the electrolysis experiment made a small explosion as hydrogen and oxygen recombined to make water. I learnt a lot by watching other demonstrators explain their experiments, including why a crisp packet sparks and shrinks when cooked in a microwave, how microwaving a Brillo pad can produce a plasma, and why placing a spinning mesh around a fire can turn it into a spectacular flame tornado.
The venues ranged from schools, to museums, to festivals, and all were very rewarding—particularly when I could tell that a group of children were enjoying the experiment that I was demonstrating. My favourite venue was CamJam, a scouting jamboree held just outside Huntingdon. For some reason many of the scouts thought it hilarious to wear a particular ‘cuddly microbe’ (check out www.giantmicrobes.com) as a moustache, until reading that it was diarrhoea! The CHaOS team also successfully prevented the jamboree’s inflatable stage from escaping during a very blustery storm, while a dozen security personnel looked on. Plus Boris the Skeleton, a new favourite, appeared on the radio, posed on the climbing wall and walked on stilts. Unsurprisingly, he is not the first CHaOS skeleton.
I enjoyed my three weeks with CHaOS so much that I’ve found myself considering joining next year’s committee. I would recommend demonstrating at CHaOS events to anyone with an interest in science and a love for children—if this sounds like you, more information can be found at www.chaosscience.org.
Aaron Barker is a 4th year student in the Department of Earth Sciences
A selection of the wackiest research in the world of science
Why men forget anniversaries
Dutch social psychologists have finally confirmed what many of us have suspected for years: men can’t think straight when faced with an attractive woman. The researchers asked 40 male undergraduates to complete a test measuring short-term memory. Midway through the test each undergraduate was interrupted by an attractive model posing as an experimenter. She then engaged them in conversation on a predefined neutral topic. After chatting for seven minutes the ‘experimenter’ left the room, instructing the student to continue with the test. Male undergraduates who had spoken to a male model performed the same before and after this interruption. Those who had chatted with an attractive female, however, found that their powers of memory suddenly deserted them. This happened irrespective of the man’s current relationship status. The researchers suggest that their findings, published in the Journal of Experimental Social Psychology, could help to explain why girls consistently outperform boys in mixed-sex schools. But, if just seven minutes with an attractive woman is truly enough to impair a man’s memory, the implications could be far more wide-ranging than that. Louisa Lyon
It’s a male-female thing
A butterfly at London’s Natural History Museum caused great excitement this summer when it emerged from its pupa with fully normal female characteristics throughout its entire right side, whilst appearing as a male on the left. The butterfly, an example of the species Papilio memnon, the Great Mormon, native to Asia, lived for several weeks before being preserved within the museum’s expansive collection. Although rare and quite impressive to see, this is a well-reported phenomenon, particularly in butterflies, and is known as gynandromorphy. Of the over 4.5 million butterflies that the Natural History Museum has acquired in its 130-year history, only 200 demonstrate this unusual feature. Whilst similar occurrences have never been seen in humans or other mammals, there have been many instances of crustaceans, such as lobsters and crabs, as well as spiders and even chickens, showing full gynandromorphy. There are also lesser versions, where only a small part of an otherwise single-sexed individual shows physical characteristics of the opposite sex. There may be many unreported cases of this phenomenon in species where it is more difficult to distinguish between males and females. Jonathan Lawson
It’s food, but amplified
Charles Spence and Massimiliano Zampini have shown that the crispness of Pringles is perceived differently depending on what your ears hear during the biting process. In their study, each volunteer sat in a soundproof booth with a pair of earphones and a microphone placed in front of their mouth, while using their front teeth to bite into Pringles of uniform shape and crispness. The sound produced by the bite was captured with the microphone and electronically modified (attenuated and/or frequency manipulated) before it was fed into earphones worn by the volunteer. A pair of foot pedals allowed the volunteer to rank the perceived crispness and freshness of the Pringle according to a scale on a computer screen. The results showed that there was a significant loss of perceived crispness and freshness when the biting sound was attenuated, whereas the contrary was true when the high frequency components of the biting sound were amplified. This study won the 2008 Ig Nobel Prize for Nutrition and has already been translated into commercial applications. According to the official homepage of the Ig Nobel Prize, Starbucks has composed a special piece called VIA Alle Undici, following recommendations from Professor Spence, to complement their new Italian Roast. So why not download this low-pitched, brass and woodwind piece to enhance your Sainsbury’s Basics instant coffee experience? Gengshi Chen