- Editorial: Issue 19 – Michaelmas 2010
- Cover: Manipulating Behaviour
- News: Issue 19
- Pavilion: Issue 19
- Feature: Forgotten Knowledge
- Feature: Ocean Acidification
- Feature: The Transformation of Archaeology
- Feature: Cherish Your Enemies
- Feature: Out of Body Experiences
- Focus: Gene Therapy
- The Beginnings of an Idea
- Gene Therapy: Success and Triumphs
- Gene Therapy in Practice
- Behind the Science: The Man Who Weighed The Earth
- Away from the Bench: Are You Receiving Me?
- Perspective: Saving Species
- Arts & Reviews: Modern World, Modern Art
- History: Boosting Your Defence
- Technology: Ready to Go Paperless?
- Book Reviews
- The Cambridge Companion to Science and Religion
- Blood and Guts: A History of Surgery
- The Price of Altruism
- Weird and Wonderful
- Good news for knuckle crackers
- Naming cows increases milk production
- Beer bottle brutality
Alex Hyatt, Editor
In the FOCUS section of this issue of BlueSci, we check up on the progress being made in gene therapy, a technology that has gone from being overly hyped to overly criticised, and is finally delivering on some of its promise. In Technology, we examine electronic lab books, the latest gadget aimed at making research a little easier (or at least a little more organised). And in Arts and Reviews, we explore how science has shaped the development of art and why scientists are ultimately responsible for the existence of ‘modern art’.
You’ll also find a new light-hearted regular called Weird and Wonderful, where we ponder some odd but interesting scientific questions: does cracking your knuckles really give you arthritis? Does naming a dairy cow increase its milk production? For the answers to these and other fascinating questions, find a quiet spot and continue reading. Alex Hyatt
Stephanie Glaser, Managing Editor
As the new term begins, fresh enthusiasm is obvious throughout the University. Most of us have been able to recharge our batteries over the summer, regaining some energy to weather the last few months of the year. Riding this new wave of excitement, we present to you the 19th issue of BlueSci. As always, you can expect articles about the latest scientific research: keep reading and you will find out why our oceans are in danger, how King Tut might have died and how vaccines were developed.
The publication of this term’s issue of BlueSci also marks the launch of our improved website (www.bluesci.co.uk). As well as an upgraded layout, brand new features have been added, so it is definitely worth checking out. In addition to our termly magazine, we will also be publishing regular articles online. From now on we will be able to accept more student articles and publish many of those on our homepage. So if you are keen to start writing about science and join the fascinating world of science communication, contact us! Stephanie Glaser
Issue 19: Michaelmas 2010. Editor: Alex Hyatt Managing Editor: Stephanie Glaser Business Manager: Michael Derringer Second Editors: Rachel Berkowitz, Diana Deca, Ian Fyfe, Nicholas Gibbons, Heather Hillenbrand, Tim Middleton, Catherine Moir, Lindsey Nield, Praful Ravi, Anders Aufderhorst-Roberts, Nicola Stead, Katherine Thomas, Djuke Veldhuis Sub-Editors: Diana Deca, Jai Grover News Editor: Katherine Thomas News Team: Taylor Burns, Nicholas Gibbons, Ayesha Sengupta Book Reviews: Alex Jenkin, Tim Middleton, Djuke Veldhuis Focus Editor: Jessica Robinson Focus Team: Maja Choma, Jack Green, Wendy Mak Weird and Wonderful: Xia Chen, Nicola Stead, Richard Thomson Pictures Editor: Wendy Mak Pictures Team: Heather Blackmore, Wing Ying Chow, Lydia Hunter, Nicola Stead Production Manager: Ian Fyfe Production Team: Wing Ying Chow, Wendy Mak, Nicola Stead Cartoonist: Alex Hahn Cover Image: Ivan Minev
Lindsey Nield looks into the story behind this issue’s cover image
To a large extent, the behaviour of each cell in your body is determined by the environment that surrounds it. That environment influences whether a cell migrates, proliferates, becomes more specialised, or implements programmed cell death. The stiffness of the surfaces with which the cell interacts plays a key role in determining how a cell behaves. The biomechanical properties of cells are the focus of research being done by Ivan Minev, a second-year PhD student in the Department of Engineering.
Ivan studies the responses of living cells to artificial surfaces. These surfaces consist of millions of rubber pillars, each with a diameter of approximately two micrometres (one micrometre is one thousandth of a millimetre).
Cells can exert small mechanical forces on the surfaces in contact with them; this means that cells sitting on Ivan’s artificial surfaces are able to deflect the rubber pillars. The amount of deflection gives the cells feedback about the stiffness of the surface, which in turn dictates how the cells behave. By altering the stiffness of the pillars, Ivan is able to study how the cells respond to different surfaces; he can also more accurately model how cells function within the body. For example, to study astrocytes (a cell type that mediates inflammation in the brain) it is much more appropriate to use surfaces with similar stiffness to those found in the brain. Ivan is able to control the stiffness of his surfaces by varying the heights and widths of the pillars, as well as by changing the material they are made from.
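The article notes that pillar stiffness depends on height, width and material. A rough way to see why comes from the standard cantilever-beam formula of elementary mechanics — a sketch under assumed values, not Ivan's actual model or numbers:

```python
import math

def pillar_stiffness(diameter_m, height_m, youngs_modulus_pa):
    """Lateral spring constant of a cylindrical cantilever pillar,
    k = 3*E*I / L^3, with second moment of area I = pi*d^4 / 64."""
    second_moment = math.pi * diameter_m**4 / 64
    return 3 * youngs_modulus_pa * second_moment / height_m**3

# Illustrative (assumed) values: a 2-micrometre-wide silicone pillar,
# 6 micrometres tall, with a Young's modulus of about 2 MPa
# (a typical figure for silicone rubber, not taken from the article).
k = pillar_stiffness(2e-6, 6e-6, 2e6)
print(f"{k * 1e3:.1f} nN per micrometre of deflection")  # ≈ 21.8
```

The d⁴/L³ scaling shows why modest changes in pillar width or height give wide control over surface stiffness, and why the deflection of such pillars can report the nanonewton-scale forces that cells exert.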
A process called photolithography is used to fabricate the surfaces. A mask consisting of transparent dots on an opaque background is placed over a layer of photosensitive epoxy resin that hardens upon exposure to light. Only the material beneath the transparent part of the mask is affected, creating columns of hardened material. A developing fluid is then used to wash away the unaffected material, leaving the pillars behind.
In the next step, the epoxy pillars are used as a template in a double casting procedure. Silicone polydimethylsiloxane, which is initially the consistency of honey, is poured onto the epoxy columns. After a night in an oven, the silicone becomes rubbery and is peeled away from the template. This creates a reverse mould, so that when the process is repeated, a silicone replica of the original pillars is produced.
This issue’s front cover shows a colour-enhanced scanning electron microscope image of one of these pillared surfaces, taken by Ivan and his colleague Rami Louca. A defect appears in the centre of the image, where the pillars have spontaneously collapsed onto one another. Different patterns of collapse can indicate that the sample has been accidentally scratched, or that after immersion in fluid, the receding edges of drying droplets have pulled the columns over. The collapse does not tend to be a problem, however, since each square centimetre is seeded with thousands of cells and only a small proportion of the area has any defects.
In addition to helping researchers better understand cellular behaviour, artificial surfaces are potentially useful for a number of medical applications. For example, these surfaces could be used to minimise the ‘foreign body reaction’. An implant would be wrapped in an artificially soft surface, tailored to influence the response of immune cells, tricking them into accepting the implant as part of the body. Other possible applications include using the surfaces to promote nerve regeneration within a damaged spinal cord or to stimulate certain cells within the brain, in such a way that they help to alleviate the symptoms of Parkinson’s disease. If any of these endeavours are successful, artificial arrays of tiny pillars could lead to some novel medical materials and treatments.
Lindsey Nield is a PhD student in the Department of Physics
Meaning in motions
The first non-human, non-verbal dictionary has been created at the University of St Andrews. Erica Cartmill and Richard Byrne spent nine months observing orangutans and trying to discern a lexicon of gestures and signals.
The duo used an approach they dubbed ‘goal-outcome matching’. It focused around the apparent aims of the gesturer (considering context, visual attention and social status) and whether or not the reaction of the recipient satisfied these aims. Where the two matched consistently, a meaning was attributed to the signal.
Twenty-eight orangutans from three European zoos – in the UK, the Netherlands and Jersey – were video recorded over a period of three months. After studying the footage, researchers were able to identify 64 gestures, 40 of which were used consistently enough to determine a meaning.
This dictionary is a milestone in the continuing move towards a more cognitive approach to non-human communication. Although studies of non-verbal language have been carried out before, particularly in great apes, this is the first to emphasise specific, intentional meanings of observable gestures. The study also hints at relative universality in the orangutan’s language, with the authors arguing that “an orangutan in Singapore gestures in pretty much the same way as an orangutan… in Philadelphia or Wales.” Taylor Burns
Filling in the gaps
The formation of seromas – ‘dead spaces’ within the body – can be a painful and dangerous consequence of surgery, often occurring after extensive tissue removal. These spaces can fill with fluid and form what are essentially internal blisters; in the worst cases an additional operation is required to remove them. Researchers at Cornell University have reported a novel solution to this problem, in which they use an injectable, biodegradable polymer to fill in the empty space.
The standard treatment for seromas is to insert drains and remove the fluid. However, this can cause pain and discomfort, particularly upon removal from the body. Other techniques, such as surgical collapse of the seroma, are often not effective. Utilising a synthetic biomaterial to simply fill the cavity provides a safe, efficient, and cheap alternative to current treatments.
David Putnam and his team have developed an MPEG-pDHA-based hydrogel with easily adjustable physicochemical properties. Crucially, the polymer is ‘thixotropic’ and behaves as a ‘non-Newtonian’ fluid: solid in its natural state, but when subjected to stress or agitation it develops liquid properties, allowing it to be easily injected into the body. Once it has filled the seroma, it re-solidifies, preventing liquid from entering. Testing was carried out on rats and has yielded promising results. The biomaterial was seen to be highly compatible with surrounding tissue and effectively prevented the formation of seromas. It was also shown to degrade into non-toxic components. Nicholas Gibbons
Making good use of viruses
An international group of researchers at Radboud University in the Netherlands has found a way to use viruses as delivery vehicles for drugs, growth factors, and even metallic particles. To do this, they removed all the harmful components of the virus, leaving just the empty permeable viral packaging behind. They were then able to fill the packaging with their desired molecule and join the viral particles together using positively charged polymers. These long, branched chains of molecules can bind to the negatively charged viral particles at multiple locations. However, the binding is so strong that separating the particles again – in order to allow them to release their contents – poses a challenge. The researchers solved this problem by using light-sensitive polymers: high-frequency (ultraviolet) light can be used to break them apart. After injecting the polymer-virus complexes into the body, focusing a light source on the area where the contents are wanted should allow for a highly controlled and localised release. Ayesha Sengupta
Check out www.bluesci.co.uk, or BlueSci on Twitter http://twitter.com/BlueSci for regular science news updates
Fish vertebrae – unlike mammalian bones – grow in a layered manner, much like tree rings. Isotopic analysis of the layers can tell us about the environment, growth patterns and behaviour of the fish across their entire lifespan. This vertebra is from a cod that was 95 centimetres in length, hence the size and clarity of the rings, and has been used for carbon and nitrogen isotopic analysis.
Tessa De Roo
Department of Archaeology
Andrew Holding looks at the discovery and loss of a cure for scurvy
Scurvy is rarely seen in the Western world, but this has not always been the case. It was once a prevalent disease with no treatment, and its cause was a mystery. We now know that it is caused by a deficiency of vitamin C and can be prevented by a good diet. But in the course of history, this knowledge had to be discovered twice.
Scurvy can be easily treated by simply reintroducing vitamin C into the diet. Left untreated, however, scurvy is inevitably fatal. Before the discovery of a cure, scurvy played a massive role in naval history. It was particularly prevalent in the age of sailing ships, when there were limitations on carrying fresh supplies of fruit and vegetables and long periods were spent on board ship. Ships rarely travelled far from port out of fear of the deadly disease. It was not unheard of for ships to return to port with 90 per cent of the crew having succumbed to scurvy.
In 1747, James Lind conducted what is probably one of the first examples of a formal clinical trial into the prevention of scurvy aboard ships. His work was based on that of Johann Bachstrom, who had noted in 1734 that scurvy was solely due to “a total abstinence from fresh vegetable food and greens”. Lind conducted his work whilst on board the British naval ship HMS Salisbury, where many of the crew were suffering from the effects of scurvy. He carried out his studies on 12 of the afflicted crew, dividing them into six pairs for the experiment. Isolating these pairs from the rest of the crew, he provided them with various treatments alongside their regular rations: these included cider, acid, seawater and lemons. At the end of the six-day trial, Lind had used the entire supply of fruit on board the ship, but his findings changed naval history. The pair of sailors who received the lemon supplement made a staggering recovery, while the health of all the other sailors in the trial deteriorated.
This study clearly showed that scurvy could be prevented by the addition of citrus fruit to the sailors’ diets. These findings were eventually adopted by the Royal Navy in 1790, 40 years after Lind’s discovery. The ability to cure scurvy gave the Royal Navy a massive tactical advantage during the Napoleonic wars. Ships were able to travel further from port for longer periods and hold blockades for years at a time. Unsurprisingly, other navies soon adopted a similar solution.
It seems shocking then that during Robert Scott’s 1911 expedition to the South Pole, one of the Royal Navy surgeons is recorded as saying: “There was little scurvy in Nelson’s days; but the reason is not clear, since, according to modern research, lime juice only helps to prevent it”. How was it that the crew, who were on an expedition at the beginning of the 20th century, did not know how to treat an ailment that had been successfully cured over 100 years earlier?
The loss of knowledge has been attributed to several factors. Firstly, Lind showed in his work that there was no connection between the acidity of the citrus fruit and its effectiveness at curing scurvy. In particular, he noted that acids alone (sulphuric acid or vinegar) would not suffice. Despite this, it remained a popular theory that any acid could be used in place of citrus fruit. This misconception had significant consequences.
When the Royal Navy changed from using Sicilian lemons to West Indian limes, cases of scurvy reappeared. The limes were thought to be more acidic and it was therefore assumed that they would be more effective at treating scurvy. However, limes actually contain much less vitamin C and were consequently much less effective. Furthermore, fresh fruit was substituted with lime juice that had often been exposed to either air or copper piping. This resulted in at least a partial removal of vitamin C from the juice, thus reducing its effectiveness.
The discovery that fresh meat was able to cure scurvy was another reason why people no longer treated the condition with fresh fruit. This discovery led to the belief that perhaps scurvy was not caused by a dietary problem at all. Instead, it was thought to be the result of a bacterial infection from tainted meat. In fact, the healing properties of fresh meat come from the high levels of vitamin C it contains.
Finally, the arrival of steam shipping substantially reduced the amount of time people spent at sea, so the difficulty of carrying enough fresh produce diminished. This decreased the risk of scurvy so that less effective treatments, such as lime juice, proved effective enough to deal with the condition most of the time. Unfortunately, this meant that knowledge of the most effective treatment for scurvy was gradually lost.
It was not until 1907 that Axel Holst, a professor of hygiene and bacteriology at the University of Oslo, and a paediatrician named Theodor Frølich, rediscovered the lost cure for scurvy. They became interested in a disease called beriberi, which is now known to be caused by a thiamine (vitamin B1) deficiency. They used guinea pigs to test their hypothesis that beriberi was the result of a nutritional deficiency. Their decision to use guinea pigs was crucial; apart from humans and other primates, most animals are able to synthesise vitamin C themselves. Guinea pigs – by chance – cannot, and although they did not develop beriberi, they did develop the symptoms of scurvy. Had Holst and Frølich chosen almost any other animal, they would not have discovered that guinea pigs develop scurvy when fed on a diet of just grain.
Holst and Frølich went on to show that they could prevent scurvy by simply feeding the guinea pigs lemon juice, something that Lind had shown a century and a half earlier. While their original publication on these results was not well received (since the idea of nutritional deficiencies was seen as something of a novelty at the time), the model they had developed with guinea pigs was vital to subsequent work on scurvy and vitamin C.
The work of James Lind on board the HMS Salisbury will no doubt forever be remembered in the history books as a great turning point in science, while the loss of that knowledge continues to be overlooked. The cost of those mistakes to human lives may be firmly in the past, but the tale still holds relevance within the modern world. Time and again during the history of scurvy, individuals put their own agendas and beliefs ahead of scientific results, the consequences of which should not be forgotten.
Andrew N Holding is a postdoc at the MRC Laboratory of Molecular Biology
Matthew Humphreys examines the impact of carbon dioxide on the oceans
Life in the oceans is governed by a fascinating complexity of ecosystems and biogeochemical cycles that are at risk from a little-discussed consequence of increasing atmospheric carbon dioxide: the acidification of the oceans. Approximately half of the carbon dioxide generated by humans since pre-industrial times has dissolved into the sea. The effects on marine life are still poorly understood, despite more than a decade of research, but if this acidification continues, it could cause damage that takes millennia to repair.
Carbon dioxide is soluble in water, and when it dissolves it can react to produce carbonic acid and ions of bicarbonate and carbonate. The proton concentration in the solution rises, causing an increase in acidity. The dissolving of atmospheric carbon dioxide in this way is the origin of ocean acidification. Measurements in a wide range of locations over the last few decades have picked up increases in ocean acidity that correspond to increases in carbon dioxide concentrations in the atmosphere.
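The reactions involved are the standard carbonate equilibria of seawater — a textbook summary rather than detail from the article:

```latex
\mathrm{CO_2(aq) + H_2O \;\rightleftharpoons\; H_2CO_3
\;\rightleftharpoons\; H^+ + HCO_3^-
\;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}}
```

Each dissociation step releases protons, raising the acidity; the extra protons in turn push the final equilibrium back towards bicarbonate, reducing the carbonate available to shell-building organisms.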
One of the most important consequences of acidification is its effect on calcifiers, marine organisms that construct shells and skeletal features from calcium carbonate. This involves the precipitation of carbonate ions through a range of mechanisms that are not understood in detail. However, the process commonly involves a step in which the organism isolates and de-acidifies seawater, to ease controlled crystallisation of carbonate. If the seawater is more acidic to begin with, this process requires more energy. Furthermore, calcium carbonate dissolves more easily at higher acidity, meaning that once structures have been formed, they will re-dissolve more easily and will be more difficult to maintain. Many small-scale studies of specific species, including corals, have shown a reduction in calcification rates and shell weights in response to increased carbon dioxide.
Corals can form vast yet intricate reef structures from carbonate and so are potentially threatened by acidification. Mainly formed in warm, shallow waters, reefs are incredibly important environments for many different reasons. Ecologically, they are hotspots of diversity, supporting countless species with their provision of food and shelter. Economically, they are worth tens of billions of pounds per year globally in income from tourism and fisheries, often for poorly developed tropical countries with few natural resources. Physically, they can act as important barriers to coastal erosion.
We do not fully understand reef ecosystems or their relationship with the wider ocean, so the implications of damage to reefs cannot be reliably predicted. In places where reefs have died naturally (for reasons unrelated to acidification) they are often replaced by thriving communities of algae and herbivorous fish. So life still goes on – but there is a dramatic decrease in diversity, and the variety of species present is different, with many commercially important species unable to survive.
Other calcifiers that are adversely affected by ocean acidification include a range of planktonic species that photosynthesise and are therefore primary producers at the base of the marine food web. While land plants tend to benefit from elevated carbon dioxide levels, these marine producers already process seawater to concentrate the gas, so they do not benefit from higher concentrations. However, they do suffer adverse effects from acidification: their growth is slowed and so their net productivity decreases, with knock-on effects throughout the oceanic ecosystem.
The effects on planktonic productivity are amplified by the fact that the majority of dissolved carbon dioxide is concentrated in the surface layer of the ocean. This layer – less than 200 metres deep – is well mixed and in close contact with the atmosphere, and therefore takes up carbon dioxide easily. It takes less than a decade for dissolved carbon dioxide to spread throughout the surface layer, but it takes hundreds to thousands of years for this layer to equilibrate with the deeper ocean. This property means that rapid human emissions have caused carbon dioxide to build up in the surface layer. Warming of the planet may also decrease vertical mixing, thus affecting the surface layer even more.
Plankton that photosynthesise live exclusively in the surface layer, since sunlight quickly diminishes with depth. They are therefore in the most acidic layer of the ocean, causing maximum impact on their productivity. The slow mixing of the surface layer with the deeper ocean means that significant damage could be done very rapidly, with a much longer time needed for recovery.
Acidification varies not only with depth; it is also unevenly distributed across the globe. Carbon dioxide is more soluble at lower temperatures, so parts of the oceans at higher latitudes are affected more than warmer, more equatorial waters. The Southern Ocean surrounding Antarctica, which has a high concentration of photosynthesising primary producers, is of particular concern since its low temperatures make it more susceptible to acidification.
Elevated levels of carbon dioxide in the oceans may be harmful to non-calcifying organisms too, especially those with higher metabolic rates. Some experiments have shown detrimental effects on the health and reproductive ability of specific marine animals. However, the levels of carbon dioxide used in these experiments far exceed the most pessimistic predictions for human output, and it is unlikely that non-calcifiers will be directly harmed. Indirect consequences from the effects on plankton and other calcifiers could, however, be serious.
Fluctuations in the acidity of the ocean can be buffered to some extent by natural mechanisms: the dissolution of carbonate minerals in the sea decreases its acidity, and these are in plentiful supply. Ecosystems and species can also adapt and evolve to survive in the new conditions. But our rate of carbon dioxide output is unprecedented in the recent past and potentially too fast for these mechanisms to keep up. The carbonate buffer reactions certainly progress too slowly to significantly reduce the initial spike of acidity, and the physiological and chemical adaptations required of marine organisms may be too great to occur quickly enough.
The Palaeocene-Eocene Thermal Maximum that occurred 55 million years ago is a well-studied event that has several parallels to the current situation, and demonstrates the potential consequences of significant acidification. It was characterised by rapid output of carbon (probably as methane, which quickly oxidises to carbon dioxide in the atmosphere), comparable to the current human output. Sediment records from this time show a sharp warming event, the extinction of several sea floor calcifiers, and large-scale dissolution of sea floor carbonates; all of which are predicted outcomes of acidification. The climate took around 1,000 years to become perturbed, and some 20,000 years to recover.
Ocean acidification caused by humans may not independently kill off marine organisms and ecosystems, but it will add significant stress that makes them more vulnerable to other factors such as climate change, pollution and large-scale fishing. Although this applies more to calcifiers than non-calcifiers, both will be affected.
Regardless of your opinion on global warming, the threat of ocean acidification is an independent and powerful motive to immediately reduce human carbon dioxide emissions. Though a lack of detailed understanding of the immensely complicated and inter-related systems involved makes exact predictions of the consequences impossible, the inevitable changes to the marine environment will surely impact heavily on Earth’s ecological stability, its biodiversity, and its nations’ bank balances.
Matthew Humphreys is a 4th year undergraduate in the Department of Earth Sciences
Maggie Jack digs into the techniques of archaeology, how they are evolving and how this may help us to answer questions like “who killed King Tut?”
Archaeology is undergoing dramatic changes in its research methodology. Historically, it has been an individualistic and humanities-based discipline, but scientific methods are being increasingly used. Projects are becoming collaborative, and publications now resemble the multi-author articles common in more traditional scientific subjects. One example is a study published in the February 2010 edition of the Journal of the American Medical Association (JAMA), in which a team of researchers based in Cairo shed light on some of the mysteries surrounding King Tutankhamun. The authors’ extensive use of biomedical techniques highlights the changes taking place in the field.
King ‘Tut’ was a pharaoh of the New Kingdom, an era of relative peace and prosperity within Egypt that lasted from the mid-16th to 11th century BC. Previous researchers had used a limited amount of evidence to hypothesise that King Tut died of a genetic disease such as Marfan’s syndrome, which weakens the connective tissue in the body. A radiological scan of his foot showed that he had a malformed arch, suggesting a disease of this type. Furthermore, the depictions of King Tut that decorate his tomb portray him with elongated features, characteristic of Marfan’s. Canes that were found in his tomb, intended to help him walk in the afterlife, also support this theory. However, no firm conclusions could be drawn without more rigorous biomedical experimentation.
The Cairo research team undertook this challenge by analysing the DNA from the bone tissue of eleven royal mummies of the New Kingdom, including King Tut. They tested Tut for Marfan’s syndrome and discovered that he did not have Marfan’s, but did suffer from avascular bone necrosis, a disease characterised by a breakdown of bone tissue that results from a prolonged lack of blood circulation. This would explain the canes in Tut’s tomb, but it is unlikely that this disease resulted in his death. By analysing the other mummies, the researchers were able to identify the parents of King Tut, discovering in the process that they were also siblings. This inbreeding may have contributed to Tut’s early demise by predisposing him to genetic defects that affected his health.
Papyri from the era of the New Kingdom also suggest that Tut may have suffered from malaria. Four of the mummies analysed by the Cairo team, including Tut, tested positive for AMA1, a protein found on the malarial parasite. AMA1 is responsible for the binding of the parasite to human cells, and its presence in the body is a sure sign of infection. Many of the initial media reports describing the study misquoted it and claimed that the team found definitive proof that Tut died of malaria. However, although this is another possible explanation for his death, the presence of AMA1 does not necessarily mean that Tut died of malaria; he could have been infected with the parasite without actually succumbing to it.
Another mystery addressed by the study is the feminised appearance of King Tut in the murals and sculptures in his tomb; researchers have often queried whether he might have been intersex. Genetic testing by the Cairo team determined that he was biologically male, having an XY set of sex chromosomes. The researchers therefore ascribe the feminine depictions of King Tut more to the artistic style of the time rather than his actual appearance.
Using genetic testing and other biomedical techniques to answer some of the questions surrounding the life and death of King Tut is not only a huge technical achievement, it also represents a change in archaeological tradition. Archaeologists and scientists are now forming teams to solve problems together. This practice has become common here in Cambridge according to Kate Spence, an archaeologist at the University. A number of labs are now dedicated to applying scientific methods to archaeology. For example, Cambridge runs a geoarchaeology lab that specialises in micromorphology. This technique is used to determine the composition of materials such as the floor of an ancient room or the surface of an ancient cooking vessel. In an isotope lab, the bones of ancient people are examined to determine their diet and land of origin. Palaeobotany is another example of a technique now commonly used, in which samples of ancient plants are analysed in an attempt to reconstruct the environments of ancient peoples.
Even with the integration of scientific methods, the core of archaeology remains unchanged. “There will always be a place for individual work and people thinking through questions synthetically,” says Spence; “Good archaeologists have always worked with all of the evidence that is available.” These techniques are simply providing archaeologists with more evidence to consider. However, the addition of scientific methods does present some challenges to archaeological culture. Spence notes that it is easier to fund archaeological projects that take advantage of scientific methods. Also, articles published in medical journals such as JAMA gain more attention and ‘impact’ than articles published in archaeological journals. This means that more and more archaeologists are being pushed to use the latest cutting-edge techniques.
Although the application of scientific methods to archaeology has become common in other countries, it has thus far been limited in Egypt due to post-colonial politics. Since the 1970s, the Egyptian government has banned the removal of any antiquities from the country. This policy was instituted in order to curb the extensive exporting of objects from Egypt into Western museums and private collections. Previously, archaeologists based in the UK would bring samples from the excavation site to machinery located at their home institution (as they still do from excavation sites in other countries). But due to Egypt’s antiquities policy, any scientific research performed at new excavations must be conducted within Egypt. This makes applying new techniques to the field of Egyptology a very slow process, since most of archaeology’s latest scientific tools are developed and located in universities outside of Egypt.
The introduction of scientific techniques has opened up new avenues of investigation to archaeologists. By using some of the most advanced biomedical technology available, Egyptian researchers were able to properly address some of the centuries-old questions surrounding King Tut and his family. In doing so, they highlighted the innovations and disciplinary changes that are transforming archaeology and helping to solve some of the ancient world’s greatest mysteries.
Maggie Jack is an MPhil student in the Department of History and Philosophy of Science
Olivier Restif explains why we should learn to live with our pathogens
Why do we get sick? The 2009 pandemic of swine flu was just another demonstration of how vulnerable we are to those minute particles known as viruses. Fortunately, the cost in human lives of this new version of the influenza virus was much lower than initially feared. However, looking back at recent human history, there has been no shortage of mass killer diseases, from the 1918 Spanish Flu to the ongoing HIV pandemic; new pathogens keep sprouting up like weeds in a garden. But why can’t we just develop an impervious immune system? Part of the reason may be that it isn’t in our best interest to do so; sometimes we can actually use viruses to our advantage.
Every time we get infected with a new pathogen, our immune system not only fights it but also keeps a record of it in the form of antibodies, ensuring a swift and efficient response the next time we encounter the same microbe. In addition, because we have all inherited a slightly different immune toolkit from our parents, individuals who are better equipped to resist a new deadly disease will stand a better chance of passing on those good genes to the next generation.
Thus our species, like all living organisms on this planet, keeps evolving and adapting to an ever-changing environment. This coevolution is like an eternal arms race between host organisms and their pathogens.
American biologist Leigh Van Valen likened this struggle to Lewis Carroll’s Red Queen’s race, borrowing her famous phrase: “it takes all the running you can do, to keep in the same place.” Although this antagonistic vision of the relationship between a pathogen and its host makes intuitive sense, it is only part of the story. Natural selection may unexpectedly favour imperfect immune systems, enabling a host to maintain a pathogen and use it as a biological weapon against competitors.
Most people in England are familiar with the grey squirrels that dwell in many parks and wooded areas. A species native to North America, the grey squirrel was introduced to Britain only a century ago by travellers returning from the New World. It rapidly replaced the indigenous red squirrel, which still survives in parts of Wales, Scotland and most of continental Europe. While it had been thought for decades that the newcomers were simply better adapted to colonise our environment than their ginger-haired cousins, scientists recently discovered that grey squirrels had a microscopic ally: a virus.
A large number of red squirrels at the fringe of the red-grey divide in Northern England were found to have died of a squirrel pox infection. Meanwhile, the grey squirrels were often found to possess antibodies against that same virus, suggesting they may be healthy carriers. The current hypothesis is that the invaders have been unwittingly using the virus as a biological weapon against the red squirrels. The irony of this story is that the squirrel pox virus is a distant relative of the human smallpox virus that was introduced to the Americas by European settlers. Smallpox, too, was introduced mostly unwittingly, though at times deliberately, and it had the more horrifying effect of decimating Native American populations.
This tale of two squirrels is just one illustration of a widespread phenomenon termed ‘pathogen-mediated competition’. It affects a wide range of animals, plants and even microbes. Many of the bacteria that make us sick by colonising our intestinal or respiratory tracts can themselves become infected and killed by specialised viruses known as bacteriophages. However, some bacteria are capable of harbouring a dormant version of the bacteriophage that only gets activated if the host bacterium gets stressed. When that happens, copies of the virus are rapidly produced and start infecting the surrounding bacteria – except those that carry the dormant bacteriophage, which confers immunity on them. Experiments have shown that this strategy (of carrying what is effectively a time bomb) can enable bacteria to invade an environment previously occupied by a competitor that has no protection against the bacteriophage.
This teaches us an important lesson about the evolution of immune systems: instead of continuously trying to wipe out pathogens, it can be more efficient to tolerate their presence and use them as weapons against competitors. As far as natural selection is concerned, it does not matter if a pathogen makes you sick as long as your competitors get even sicker. Furthermore, all organisms have to allocate a finite amount of resources into a wide range of vital functions, so increasing the investment in immune defences may have to be at the expense of another necessity. The optimal allocation of resources (i.e. the one that achieves the best reproductive success) depends on the benefits and costs associated with each vital function.
In the case of immunity, the benefits depend on the risk of getting infected and the virulence of that infection. Most infections are spread by some form of contact between conspecifics (individuals that are likely to compete among themselves for access to food, habitat or sexual partners). As a result, the risk of infection for an individual is in part dependent upon the immune defences of its competitors. If all conspecifics are well protected, they are unlikely to harbour and transmit infectious agents. So, even if you do not invest much in your immune system, you may still be protected by your neighbours, as long as they are ready to pay the cost. This indirect protection is known as ‘herd immunity’ and is an essential factor in the spread of disease.
Herd immunity presents an evolutionary conundrum: if the cost of immune defences is paid by the individual but its benefits are shared with neighbours through herd immunity, no one is expected to evolve a strong immune system. Natural selection appears to favour investing a minimum amount of resources in the immune system while taking as much advantage as possible of the investments of others. So does that mean we cannot hope to win the war against pathogens? Actually, if those pathogens can be used as biological weapons against competitors who are more susceptible to infection, we may not want to kill all our microbial enemies. Instead, it can pay to learn to live with them.
Olivier Restif is a postdoc in the Department of Veterinary Medicine
Celia St John-Green discusses the science behind out of body experiences
Out of body experiences are generally considered to be simple illusions, created in the mind. But is there a scientific explanation for this phenomenon? Why do so many different individuals have such similar experiences?
Out of body experiences crop up in highly diverse contexts, from religious gatherings to hallucinogenic trips. There are numerous reports of out of body experiences from people undergoing a general anaesthetic, and over five per cent of people with epilepsy report, at some point in their lives, a perception of leaving their body.
The occurrence of out of body experiences might appear to support the view that mind and body are separate. It seems intuitive that the ‘I’ that causes your knees to bend is in control of, yet distinct from, the body of which those knees are a part. This division of mind and body is an old Cartesian throwback, in which the mind is seen as a ‘ghost’ in the physical machine. If the mind is indeed separate, then it is conceivable (even if conceptually difficult to imagine) that out of body experiences could be due to the mind’s dissociation from the physical body.
This view has, however, been challenged over the past few decades as we have started to understand the neuronal basis of our sensory perceptions and thought processes. It seems that the mind is in fact equivalent to the brain. If this is so, then the mind cannot by definition become displaced from its physical location. Instead, we have come to recognise out of body experiences as derangements in perception, in which the location of ‘self’ has been mapped or processed incorrectly. So what mechanism underlies our perception of self? Furthermore, what causes it to malfunction?
With any sensory perception, we are used to considering the inputs, the neuronal interactions and then finally the resulting sensory read-out. One example of sensory perception is taste: the input is the activation of taste receptors, neuronal interactions process these inputs, and the output is our perception of flavour. We can view perception of self in a similar manner. Faults at any location in this information flow could be responsible for out of body experiences.
To demonstrate that neuronal interactions underlie our localisation of self, we can start by attempting to identify the region of the brain involved. To this end, various neurological and psychiatric patients have been studied. In these patients, specific brain lesions result in distinct clinical presentations that relate directly to the area of brain that is damaged. Therefore, identifying the damaged area in patients who have had out of body experiences should lead us to the region necessary for this perception. Although the temporal lobe has long been implicated in the perception of out of body experiences, modern imaging techniques have identified the right temporo-parietal junction as the critical locus for self-perception. This region, found just behind the ear on the right-hand side, is important for integrating various senses.
In order to demonstrate that the temporo-parietal junction is indeed responsible, some researchers have set out to artificially induce out of body experiences by directly stimulating the region. A Belgian research team was one of the first to succeed. Using electrodes originally inserted for the treatment of tinnitus, they were able to cause their 60-year-old patient to feel as though he were “floating outside his body”. Many successes using subdural electrodes to stimulate the same region have followed. The fact that stimulation of this specific area of the brain consistently results in similar perceptions provides strong evidence that this is the vital area that malfunctions during out of body experiences.
In addition to epileptics and those with brain lesions, out of body experiences are known to occur occasionally in people with sleep paralysis and Guillain-Barré syndrome (a peripheral nerve disorder). Also, people under anaesthesia and chronic users of cannabis have reported out of body experiences. It is possible that, alongside their diagnosed condition, these patients have damage to the temporo-parietal region that causes their out of body experiences, although it seems more likely that the necessary inputs to the right temporo-parietal junction are just being interrupted.
The similarity of people’s experiences is another line of evidence that supports the existence of a distinct and localised mechanism of self-perception. Reports of out of body experiences are remarkably similar, with people predominantly describing floating above their bodies at a distance of one or two metres. This commonality of experience seems logical if we are dealing with similar interruptions to a specific pathway.
So we can show that there is probably a distinct brain region involved in out of body experiences, but how is it that an incorrect input or a fault in processing can result in such a precise relocation of the self to an alternate location? An answer to this question may be found by drawing a parallel between out of body experiences and phantom limbs. Phantom limbs are a well-documented phenomenon in which amputees perceive their lost limbs as still being present. In many cases, they also feel that the limb still moves or causes them pain. It has been suggested that these cases are due to damaged nerves generating abnormally strong or accidentally cohesive inputs, so that the brain misinterprets the limb as being present. This concept is transferable to out of body experiences, with the generation of an alternate self being considered equivalent to that of the phantom limb. However, it does not entirely explain the phenomenon, as phantom limbs are perceived in appropriate locations while the alternate self is perceived as being displaced to a new location. It has been suggested by some groups that this is due to the involvement of the vestibular system, on which our body normally depends for balance.
There are still many gaps in our understanding, such as why so many people report feeling ‘a presence’ while disembodied, or intuitively associate the occurrence with religion. Not to mention what quirk in our psychology means that we intuitively accept disembodiment as plausible. There is, however, an ever-increasing body of evidence that seems to confirm that out of body experiences are errors in perception, not true dissociations of mind from matter.
Celia St John-Green is a medical student in the School of Clinical Medicine
Wendy Mak, Maja Choma and Jack Green take you on a journey through the past, present and future of gene therapy, the hurdles this technology has faced and why it is becoming accepted in mainstream medicine.
The Beginnings of an Idea
The concept that DNA can be shuffled between cells to change their behaviour is extraordinary, but this feat is achieved naturally by the simplest of organisms and can be easily replicated within modern molecular biology laboratories. A revolution in the understanding and manipulation of DNA has led the way into a new world of scientific and medical capabilities, with gene therapy at the cutting edge. Yet the humble origins of this technology lie with a cautious British medical officer and a troublesome little bacterium.
In the 1920s, Frederick Griffith was working on classifying strains of Streptococcus pneumoniae, the bacterium responsible for pneumonia, when he made a startling discovery. He found that when a normally harmless strain of the bacterium was injected into a mouse along with dead cells of a pathogenic strain, the mouse became ill and died. This suggested that the harmless strain could transform into the pathogenic one when simply accompanied by dead cells that had been pathogenic. Researchers later realised that the non-virulent form of Streptococcus received and integrated a segment of DNA from the pathogenic form. The DNA was physically transferred from one cell to another and contained information that led to the transformation. Little did Griffith realise that his discovery was to begin a journey into the understanding of the genetic code.
Following this breakthrough in bacteria, work quickly moved on to transforming mammalian cells using DNA transfer. Szybalska and Szybalski demonstrated in 1962 that human cells lacking an enzyme of purine metabolism were able to take up DNA from wild-type cells and subsequently produce the functional enzyme. With this experiment, the conceptual groundwork of gene therapy was laid: functional versions of genes could be used to replace defective versions in mutant cells.
The early methods of cell-to-cell gene transfer were inefficient, but in the 1970s new techniques led to dramatic improvements. Newly discovered enzymes allowed sections of DNA to be cut from chromosomes with incredible precision and stuck into other segments of DNA from completely different organisms; the result was a recombinant DNA molecule. With this technology in place, scientists began to contemplate using gene therapy on people.
The earliest applications of gene therapy in humans proved to be a bitterly contentious affair. It was 1980 when an American lab led by Martin Cline made the first, ill-fated attempt at human gene therapy using recombinant DNA. Cline was a rare breed of scientist who combined clinical practice with an expert knowledge of cutting-edge molecular biology. The human beta-globin gene had recently become one of the first human genes to be isolated, and with an eye on the brimming therapeutic potential of gene therapy, Cline was in a hurry to put this knowledge to good use.
Cline wanted to use gene therapy to treat beta-thalassaemia, a genetic blood disorder characterised by reduced production of haemoglobin (the oxygen-carrying molecule of the blood). People with the disorder often suffer from anaemia, fatigue and other more serious symptoms. It is caused by a mutation in beta-globin, which is one of the two protein chains that make up haemoglobin. Cline’s lab established that the human beta-globin gene could be introduced into mouse bone marrow cells in vitro, and that these genetically modified cells could repopulate niches cleared by irradiation in the marrow of a recipient mouse. With the same experimental framework, he set out to repeat this work in human patients with beta-thalassaemia. Would it be possible to remove these patients’ bone marrow cells, replace their defective globin gene with a functional copy, and reintroduce the corrected cells to cure their disorder?
Unfortunately it was not to be; not only were the results inconclusive, but the experiment sparked much public criticism and anger. Cline obtained permission from ethical committees to inject into patient cells the naked form of the human globin gene, but not the gene tacked onto the bacterial scaffold. Cline argued that the bacterial section of DNA contained another important gene that would give the treated cells a growth advantage over uncorrected cells and make success more likely. Despite the lack of permission, Cline went ahead and injected both human and bacterial DNA into the patients’ cells. This left many with doubts over whether scientists could be trusted to behave responsibly with these new and potentially dangerous technologies. In the end, Cline lost his funding and resigned from his university position.
Although it was a controversial start, enthusiasm for gene therapy was not dampened. A consensus emerged that, had Cline waited for more animal research to confirm the safety of the recombinant DNA, he would have undoubtedly obtained ethical approval; it was his haste that was his downfall. Gradually, interest in the approach matured into a confident field. Now, a new chapter of gene therapy has begun. The first successes in humans have been reported and new treatments are starting to become medical realities.
Gene Therapy: Success and Triumphs
The field of gene therapy is still developing, and most diseases cannot be treated with current technology; however, there have been some significant successes, where gene therapy has cured or helped to treat some devastating conditions.
Imagine living your life without an immune system: you would be prey to any pathogen that comes your way. A common cold could kill you. This is what patients with severe combined immunodeficiency disorder (SCID) have to deal with. They have a faulty gene that prevents their bodies from producing T lymphocytes, a major component of the immune system. This means that their immune system is incredibly weak and many patients must deal with major infections from early on in their lives.
Until recently, the only effective treatment was a bone marrow transplant. However, this is not an ideal remedy, as a matching donor must be found, and the transplant can be rejected. Gene therapy provides a more elegant solution; simply correcting the problem gene allows the body to produce T lymphocytes normally. Unfortunately, the first clinical trials using gene therapy to treat SCID ran into major problems, as the treatment triggered cancer in some patients. Nonetheless, a recently completed 10-year trial proved to be much more successful, allowing several patients with SCID to live essentially normal lives. While there are certain side effects, all of the patients are still alive, and none of them have had problems with cancer.
Leber’s congenital amaurosis (LCA) is an inherited eye disorder in which patients suffer from vision deterioration and eventual blindness. Since the condition results from a single abnormal gene, it is an ideal candidate for gene therapy, especially considering there are no other treatments available.
In 2008, researchers at UCL and Moorfields Eye Hospital in London completed a clinical trial of a novel gene therapy treatment for LCA. The correct version of the gene was injected directly into the retinal cells of patients. Tests after the treatment showed no evidence of side effects, and one patient gained significant improvement in his night vision.
As the trial was conducted on adult patients whose disease was already highly advanced, the results likely understate the true potential of the treatment. Trials have now begun on younger patients, who will hopefully benefit significantly more.
Cystic fibrosis is one of the most common life-threatening inherited diseases, especially among people of European descent. Sufferers have no working copy of the CFTR gene, which produces a protein that regulates the salt balance in the body. As a result, internal organs become clogged with a thick mucus, making sufferers prone to infection, especially in the lungs, and causing other problems such as difficulty in digesting food. The hope is that gene therapy could be used to insert a functional copy of the CFTR gene and thus allow patients to live normal lives.
Advances in medical knowledge have increased the life expectancy of cystic fibrosis patients, but it is still short (approximately 48 years) and at present there is no cure. Although all cells of the body are affected, it is incredibly challenging to insert a new gene into every cell. Thus, current efforts are concentrated on inserting the gene specifically into the cells of the lungs, as some of the most debilitating symptoms of cystic fibrosis affect this organ.
A pilot study involving 16 patients was completed in 2009. The therapy was found to have no major side effects, and several patients produced levels of working protein comparable to those of healthy people. Because treated cells eventually die, repeated treatments will almost certainly be necessary. A full-scale trial is now underway with a multi-dose treatment regime.
Cancer is a complex disease resulting from many genetic alterations acquired over many years. However, gene therapy does offer a feasible treatment. Cancer cells can be targeted directly by introducing genes that drive them to commit suicide or slow their replication. A current study is using this approach in patients with advanced lung cancer.
Patients’ immune cells can also be modified to make them attack cancer cells, an approach that has already been used to treat melanoma, an aggressive form of skin cancer. Lymphocytes were removed from the patient and new genes were inserted into them. Upon reintroduction to the body, the lymphocytes were able to identify cancer cells and attack them. In a study completed in 2006, 17 patients with advanced melanoma were treated using this method; most of their cancers were reduced and there were no significant side effects reported.
The most exciting aspect of this area of research is that the techniques used in these studies should be applicable to a wide range of cancers, offering the hope of a universal cure for cancer.
Gene Therapy in Practice
Delivery of a gene, although relatively straightforward in cultured cells, presents many problems in the context of patients. The safety of a strategy is always the principal concern, but there are also many other considerations. How do you deliver the gene into the cells? How do you get the gene into the right cells? How do you get enough of the cells to express the gene? All of these questions present considerable technical challenges.
The ideal approach for gene therapy would be to remove the cells of interest and transfect them with the new gene outside of the patient’s body. This would allow specific cell types to be easily targeted, and treated cells could be screened for unintended or dangerous changes before reintroducing them into the patient. Unfortunately, this is only possible with a few tissue types, such as blood and skin. Often it is impossible, for example with the brain and heart, and gene therapy must be applied inside the patient. This requires a vector that can carry DNA past the immune system, reach the target cells, and overcome intracellular mechanisms that protect against foreign DNA. To find a suitable vector, the field of gene therapy has turned to viruses, which make their ‘living’ by efficiently introducing large amounts of DNA into cells. Using viruses, however, is not without risk.
There are many viral candidates to choose from, and many factors must be considered when selecting one. Firstly, the virus must target the correct cells for the genetic disease to be treated. For example, some diseases require a virus that affects only dividing cells, whereas in others it must target a particular tissue. However, no single candidate meets all of the criteria for a safe, efficient and effective vector, so the choice must be made carefully.
Most importantly, a potential viral vector must be well-known and thoroughly studied. It is therefore no surprise that many viruses used in gene therapy are closely related to pathogenic viruses. For example, adenoviruses, which cause the common cold, can also be used in gene therapy to target cells within the lungs and respiratory tract. The herpes simplex virus, which causes cold sores, can be used to target cells in the brain. And retroviruses, which include HIV, are useful because they are incredibly good at avoiding the immune system. But because these viruses are related to those that cause disease, there are dangers. An ideal vector would be completely safe and unable to revert to its virulent form, but it is impossible to eliminate all risk.
To ensure that a vector is as safe as possible, the virus is engineered to prevent replication and screened for its ability to cause cancer and elicit an immune response. Despite these precautions, vectors can still cause serious harm. Adenoviruses have been used for gene therapy but they can also elicit strong immune responses; in one trial, the immune response induced by the adenovirus actually killed a patient. Retroviruses insert their DNA directly into the patient’s DNA and therefore there is a risk that this will disrupt a vital gene or activate a cancer-causing oncogene. This was the case for some patients in the first human gene therapy trial to treat SCID. One quarter of the patients developed leukaemia, which was fatal in two instances.
Safety is of paramount importance, but it must also be balanced with the efficiency of treatment. This is particularly relevant in the case of retroviruses. While the insertion of a new gene into the patient’s DNA runs the risk of disrupting normal genes or causing cancer, it usually results in higher expression and more efficient transmission during cell replication, which ensures that the gene is retained over time. The risks can also be reduced; a recently developed HIV-derived virus inserts its DNA randomly throughout the genome, decreasing the chance that an important gene will be disrupted in a significant number of cells. A virus that does not integrate, such as an adenovirus, carries less risk but requires repeated treatment and may not be as effective.
There are further difficulties in choosing a viral vector, caused by the large size of mammalian genes. A vector must be able to carry large amounts of DNA, but most viruses are small and struggle to hold the genetic information that codes for most proteins. Herpes simplex virus can transport almost 20 times as much DNA as other viruses but still cannot carry the dystrophin gene, admittedly the largest known mammalian gene, but also one that is targeted in the treatment of muscular dystrophy.
Our limited understanding of what causes many diseases also limits the application of gene therapy. When there is a defect in a single gene that codes for a protein with a known function, it is relatively easy to design a remedy using gene therapy. However, for more complex diseases such as cancers or neurological conditions, it is unclear which genes to target because we don’t fully understand how these diseases work. We are also helpless when faced with disorders where the damage occurs early on in development, such as Down’s syndrome. Furthermore, we are at a loss for how to target common conditions such as obesity or high blood pressure, because the genes identified so far are linked to only small increases in risk.
Gene therapy is still far from a perfect treatment and it is often only used as a last resort for the most serious diseases, where the risks associated with treatment become acceptable. Many challenges still remain and if we are to make the technology safer and more widely applicable, we not only need better tools for gene delivery but also a more complete understanding of the diseases we wish to treat. Gene therapy has come a long way since its early days and it is finally delivering some real treatments for serious diseases. The success of recent trials highlights the promise of this amazing technology.
Wendy Mak is a PhD student in the Department of Physics
Maja Choma is a PhD student at the Cambridge Institute for Medical Research
Jack Green is a PhD student in the Department of Zoology
Anders Aufderhorst-Roberts describes the life of the eccentric genius Henry Cavendish
The Cavendish family can be traced back through eight centuries and fifteen generations of history. The Cavendishes have consistently produced a large number of prominent men and women including statesmen, patrons of the arts, sponsors of education, intellectuals and socialites, as well as one prime minister. It is unsurprising then that the ranks of the family should also have included a number of successful scientists, the most distinguished being the eccentric genius, Henry Cavendish.
Henry Cavendish’s name would probably not appear on most people’s lists of great scientists. The reasons for this are bound up in his complex personality, his acute shyness and his unwillingness, or more properly his inability, to conform to the social norms of his time.
Cavendish was born in 1731. His mother, Lady Anne Grey, was the daughter of the Duke of Kent, and his father, Lord Charles Cavendish, was the son of the Duke of Devonshire. This meant that the young Henry entered the world as a member of two highly wealthy and influential families. His mother died young and he had a sheltered childhood, living with his father in London. As a result, very little is known of his early years.
His father was a prominent politician who later went on to be a successful scientist. Science funding at that time was sparse and so the subject was often disproportionately dominated by those who could afford to build and maintain their own laboratory. Being a wealthy man, Lord Charles was able to conduct a number of important experiments on meteorology, as well as invent a new type of thermometer – for which he was awarded a medal by the Royal Society. As a child, the young Henry was often seen in the garden helping his father with experiments.
At age 18, Cavendish was accepted to the University of Cambridge, but he left three years later without taking his exams. He returned to live at home, and remained there until the death of his father, more than 30 years later.
It was during this period of his life that he first became renowned for his unusual behaviour. He spoke with a shrill, high-pitched voice, but mostly avoided conversation altogether, and was embarrassed in the presence of strangers, especially women. A self-confessed solitary man, he left notes for his servants on a daily basis in order to avoid meeting them. Any female servant seen by him was dismissed immediately. He had little interest in his appearance, and was known for always wearing the same faded violet coat and a three-cornered hat, a style that had not been fashionable since the previous century. When these clothes became worn out, he simply ordered identical copies to be made.
His only social outlet was the regular meetings of the Royal Society, for he was in great demand by other scientists, who sought his knowledge and advice. Surprisingly, when it came to speaking on matters of science, he was said to lose his inhibitions. The chemist Humphry Davy even described him as “luminous and profound”.
Cavendish made major contributions to a wide range of scientific disciplines. In 1766 he became the first to isolate hydrogen gas and discover that water was made from hydrogen and oxygen. Soon after, he demonstrated that air is composed of oxygen, nitrogen and a tiny percentage of other obscure gases. His research also extended to astronomy, meteorology, thermodynamics and the nature of electricity. In one experiment, where he was studying the electrical impulses given off by the torpedo fish, he measured the strength of the electricity by shocking himself and recording the level of pain he felt.
It is a testament to the weight of his research that Cavendish was so admired by his peers, since much of his behaviour bordered on the socially unacceptable. Cavendish had no interest in promoting or applying his research to practical uses at a time when scientists were expected to be pioneers in improving the human condition. A number of academics, including the neurologist Oliver Sacks, have noted that he almost certainly had Asperger’s syndrome; however, the lack of information about his childhood has led others to be more sceptical, and thus the debate continues even today.
Whether he suffered from Asperger’s or not, Cavendish was, at the very least, an eccentric. The term ‘eccentric’ was often reserved for those with wealth, as they alone could afford to behave so strangely and avoid being condemned to the asylum. In any case, the link between eccentricity and wealth certainly applied to Cavendish. Around the time of his father’s death, Cavendish inherited vast sums of money from a number of different sources, quickly making him the richest man in England. His wealth and eccentricity appeared to grow together as he used the money to build himself a new house, which he fitted out as a laboratory, complete with a second staircase that allowed him to avoid his housekeeper. He also invested in a library which was open to all serious scholars, but had it built four miles from his residence so that he could avoid meeting anyone who used it.
Towards the end of his life, Cavendish carried out the experiment for which he would become best known. Using a torsion balance, he measured the tiny gravitational attraction between pairs of lead spheres. This measurement allowed him to work out the strength of the gravitational force and, from there, to extrapolate and calculate the density of the earth. Cavendish had a better name for it: he called it “weighing the world”.
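The arithmetic behind this extrapolation can be sketched in a few lines. The snippet below is a modern reconstruction using present-day values for the constants, not Cavendish's own figures (he expressed his result as a density relative to water rather than via the gravitational constant):

```python
import math

# Modern values (assumed here for illustration)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (measurable with a torsion balance)
g = 9.81        # gravitational acceleration at the Earth's surface, m s^-2
R = 6.371e6     # mean radius of the Earth, m

# Surface gravity: g = G * M / R^2, and mass M = density * (4/3) * pi * R^3,
# so the mean density is: density = 3 * g / (4 * pi * G * R)
density = 3 * g / (4 * math.pi * G * R)
print(round(density))  # roughly 5500 kg per cubic metre, about 5.5 times the density of water
```

Cavendish's own answer, 5.448 times the density of water, is remarkably close to this modern figure.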
Cavendish died in 1810 at the age of 79. He left behind a large estate and a huge collection of unpublished papers. Both eventually made their way, almost a century later, to one of his descendants, the 7th Duke of Devonshire, who personally funded a new laboratory of physics at the University of Cambridge. When the Duke asked for the unpublished papers to be examined, it was found that – without telling a soul – Cavendish had made many seminal discoveries. These included Ohm’s Law, Dalton’s Law of Partial Pressures, Coulomb’s Law, and Charles’s Law of Gases. The credit for all of these was given to other scientists who made the discoveries much later. Following these revelations, Cambridge’s newly founded physics department was named the Cavendish Laboratory in his honour.
As an intellectual, Cavendish changed the landscape of science. At the beginning of his career, scientific publications consisted largely of short and obscure reports without any underlying themes or relevance. Almost as if to make a point, Cavendish’s last major publication on “weighing the world” took up 58 pages in the Royal Society’s Philosophical Transactions. This anecdote sums up his instrumental role in building the brave new scientific civilisation that would go on to define the Victorian era of discovery.
The Cavendish biographer John Pearson perhaps puts it best when he describes Henry Cavendish as “a scientific genius, the most original, wide-ranging British man of science since Isaac Newton”.
Anders Aufderhorst-Roberts is a PhD student in the Department of Physics
Sarah Leigh-Brown travels to London to see how the BBC produces the science radio programme Material World
As a PhD student in molecular biology, expeditions to track orangutans or study Icelandic volcanoes are just not going to happen. Nevertheless, I recently got away from the bench to visit Bush House, nerve centre of the BBC World Service, and learn how they produce a science radio show. It may not be the rainforest or Iceland, but I would not exchange it for either.
On entering Bush House, I was struck by the charged atmosphere. Every team, every producer, every presenter is working on something that a sizeable chunk of the world tunes into every week; the subject matter is as diverse as the listeners of the World Service. I was there, along with fellow science communication enthusiast Harry Harris, to see how they produce the BBC science programme Material World.
The production process is as energetic as the surroundings. Each week, scientific press releases are filtered for impact and interest. The resulting shortlist is then refined by telephone interviews with the researchers behind each story. The most engaging are selected to be interviewed on the programme and a final set of notes is passed to the presenter. They use this to write the script with the producer, allowing them to introduce their own flair for engaging with the audience. Harry and I arrived in the final stages of the redrafting process. Once the words were in place, we joined the production team in the studio as the countdown began.
The show must start and end precisely on time – you cannot hold up the day’s broadcasting while you finish your sentence. Since the scientists who are being interviewed are not reading from a script, the production team has to calculate the exact time remaining after each feature; the presenter must constantly adjust the script accordingly. The atmosphere is charged throughout, and as the show comes to an end – on exactly the right second – the studio buzzes with the excitement of a job well done.
It was an extraordinary experience to see the production of a live radio show, and we look forward to putting what we learned to good use at the community radio station, Cambridge105. Through our own project, GetSET, we plan to broadcast Cambridge science news to the community, with the goal of opening up the science, engineering and technology research of Cambridge to everyone who lives here.
Despite the differences in studio and audience size, Material World and GetSET have one key feature in common: an enthusiasm for sharing scientific discoveries in a way that everyone can enjoy. After all, you shouldn’t need a degree in art to enjoy a Picasso, nor a qualification in theatre studies to be moved by Shakespeare – neither should you need a PhD to appreciate the best that science has to offer.
Material World is broadcast every Thursday at 4.30pm on BBC Radio 4.
For further information about GetSET, email: email@example.com.
Sarah Leigh-Brown is a PhD student in the Department of Oncology
Imogen Ogilvie gives her perspective on conservation and asks whether it is worth trying to conserve species at all
We are in the middle of a mass extinction event. Conservationists spend enormous amounts of time and resources trying to minimise the number of species lost. Extinctions, both human-caused and otherwise, are by no means unique to this period in time, but the pace and scale of current losses set today apart. Most writing on the subject assumes that everyone agrees biodiversity should be conserved. But is this really the case? And at what cost? Why should we bother to conserve biodiversity?
A huge number of species are indispensable to us, and for these, the benefits of conservation are clear. For example, 35 per cent of global food production depends on pollinators. Studies on coffee – one of the most valuable exports of developing countries – have shown the importance of both a diverse range of bee species and proximity of the plantation to natural forest for successful pollination and crop yield. Therefore, there is an indisputable case for conserving targeted areas and species. But surely if a species is economically valuable, self-interested groups will ensure its survival and there is no need to worry about conservation. History suggests otherwise.
The plant ‘silphion’ was an extremely important herbal contraceptive and medicine with a critical role in the ancient Cyrenean economy. It was so important that it appeared on coins of the time. Despite this, silphion mysteriously became extinct; in this case, self-interest alone could not protect even the most important of species. Whole groups of people have died out because they took too much from their environment; the famous collapse of the entire Easter Island civilisation is one example. It is clear then that efforts to conserve economically important species should still be a priority, but what about species with no clear use or economic value?
Even efforts to conserve species that are not indispensable to humans still shape and change the environment for our own purposes. For example, charity funding for conservation often focuses not on species of any material value to humans, but on those that we are simply fond of. This targeted conservation just skews biodiversity in favour of organisms that we like. While the loss of such species is saddening, does it warrant the resources required to save them?
For me, conservation should be put in its global context. The most concerning aspect is not the intrinsic loss of biodiversity, which will always bounce back, but the consequences of this loss for humans. The complexity and interdependence of ecosystems make it difficult to determine the value of each species; the loss of apparently insignificant species can have drastic consequences for others, which may be more valuable to us. For this reason, the value of any organism should not be underestimated. But sometimes this value is less than the cost of conservation; in these cases, conservation of biodiversity should be balanced with other concerns, and resources should be directed towards projects that more clearly benefit mankind.
Imogen Ogilvie is a 3rd year undergraduate in the Department of Zoology
Ian Fyfe explores the way in which science and technology have revolutionised fine art
The latest of the weird and wonderful exhibits at the Tate Modern may not appear to have any connection to science. Neither may the masterpieces of Warhol, Dali, Picasso or Monet. But without scientific innovation, we would have had none of these. The 19th century saw the birth of modern science, with a surge of technological progress, revolutions in thinking and the founding of the scientific method. It is no coincidence that the same period saw the birth of modern art.
Prior to the mid-1800s, art was largely devoted to realistic depiction. Subjects were almost always religious or mythical scenes, historical events or portraits of eminent people. Artwork was usually commissioned, and artists painted what their wealthy customers wanted. But within the last 150 years, modern science has changed the place of art in the world forever.
Photography is undoubtedly the technology that had the most obvious impact on art. By 1840, glass lenses, photosensitive silver compounds and fixing solutions had been combined to produce the first photographic negatives. It was not long before photography was a cheaper and more accurate means than painting of producing realistic pictures of people, places and events. One of the purposes of art had been undermined.
At a similar time, in 1851, The Great Exhibition at Crystal Palace was the first of many international expositions to bring the latest industrial and technological advances to the public. They created enthusiasm for machinery, industry and the future; it was a new world, a new technological era. With photography threatening the value of art and the public being swept away with science and technology, it would take a revolution to prevent art from being left behind.
The first step in this revolution was the emergence of Impressionism in the 1860s. The Impressionists, exemplified by Claude Monet, departed from conventional subject matter and – inspired by photography – captured moments from the new technologically-driven life: city street scenes, train stations, bridges and boats. But they did not set out to reproduce the scene accurately. Instead, they aimed to recreate the experience of a passing moment. The representation of the light was more important than the subject itself, a major departure from artistic convention and one which was triggered by the integration of technology into daily life. The Impressionists’ techniques were equally unconventional and also relied on recent scientific progress.
The use of colour in Impressionism was influenced by the colour theories of Michel Eugène Chevreul. As professor of chemistry at the Lycée Charlemagne, with expertise in dye compounds, Chevreul became director of the Gobelins tapestry works in Paris. During his time there, he noticed that the colour of a particular yarn appeared to change according to the colour it was immediately next to. He realised that this was due to an alteration in our perception of the first colour caused by the second, and published his theory of simultaneous contrast in 1839. The Impressionists incorporated his theories into their work to achieve the desired effects of light and shadow. Chevreul had discovered a perceptual oddity that forever changed the use of colour in art.
Also key to the Impressionists’ success were discoveries and new manufacturing techniques that changed their materials. Science was applied to the development of paints. New pigments based on the recently discovered elements of chromium, cadmium, zinc and cobalt provided brighter colours, while the manufacture of synthetic pigments added completely new colours.
More significant than the paints themselves was the collapsible paint tube. Before the 1840s, artists purchased pigments to grind and mix themselves and stored them in pigs’ bladders in their studios. But new manufacturing techniques allowed tin to be rolled thinly and pressed, leading to the invention of the squeezable tube by John Goffe Rand in 1841. The tube was refined to incorporate a screw cap, allowing paint to be stored without drying.
The paint tube liberated the Impressionists and allowed them to work outside, since the paint was contained and easily transported; their choice of subject was unlimited. The new paints also contained paraffin wax and animal fat, resulting in a consistency that allowed thicker application. Because paints in tubes could be stored without drying, artists could afford a greater range of colours, and the new vibrant pigments helped them to recreate light effects. Pierre-Auguste Renoir, an eminent Impressionist, said that “without tubes of paint, there would have been no Impressionism”. And without Impressionism, there may have been none of what followed in the art world; the next generation of artists built on the revolution of the Impressionists and moved art forward with science.
By the early 20th century, science was changing the way people viewed the physical world, both literally and conceptually. Passenger steam trains were in common use and mass production of cars began in the early 1900s. Motorised transport carried people through their daily lives at speed; the world flashed by in flickers of light, familiar forms blurred together. Meanwhile, Einstein was changing the way we thought of space and time, raising new questions about the nature of the world and our experience of it. The impacts on art were profound.
No longer restricted by artistic conventions of subject and technique, the artists of the early 20th century explored this new world with radical approaches. Traditional perspective, form and colour were discarded entirely and rather than depicting scenes at all, paintings were used to convey concepts.
The Futurists attempted to represent the movement and dynamism of the modern world, rejecting every artistic convention and embracing the triumph of science and technology as their subject. Picasso and the Cubists explored the experience of seeing, and how our perceptions of objects are constructed from continually changing perspectives. They captured this by including multiple views of the subject in one picture. Their work developed to include no recognisable subject at all, but instead became metaphors for relativity and our visual experience of the world. Abstract art had been born.
The early 20th century also saw the first attempts to explain human behaviour scientifically. Sigmund Freud in particular was highly influential to the Surrealists. These artists, including Salvador Dali, created dream-like scenes with strange motifs, objects that merged into one another, and often sexual undertones in line with much of Freud’s work. They explored concepts of the mind, often turning to episodes or fears from their own lives for inspiration; this was completely new ground for art.
In the 1950s, the introduction of the television and expansion of print media and advertising – together with mass production of consumer goods – created popular culture; this then spawned ‘Pop Art’. Personified by Andy Warhol, pop art took mass produced symbols of popular culture and presented them as fine art. Mechanical techniques produced several identical pieces of artwork, challenging the concept of art itself.
In the modern digital era, art is still changing. Having provided an important trigger for the development of modern art, photography is now a major art form itself. Digital cameras and sophisticated editing software allow the creation of almost any visual effect. Combination of traditional materials and techniques with digital editing further widens the scope in art.
In a similar way to photography, the recent explosion in mass media and the internet may well have provided a new trigger for changes in art. They provide a continual bombardment of images, meaning that fine art can show us little new on a visual level. Instead, works such as Tracey Emin’s My Bed, and Doris Salcedo’s Shibboleth (the crack in the floor of the Tate Modern) have come to the fore. This kind of conceptual art does not try to impress visually, but instead presents familiar images in unfamiliar ways, hoping to affect how we think.
The art of today is often scorned; it seems wacky, obscure and overpriced. But this could have been said of art at any stage of the last century-and-a-half. Looking back, we can see that modern art is an attempt to represent and understand a rapidly changing, technological world. Modern science has provided new material, both physically and conceptually, to drive a gradual progression of art that has brought us to the modern day. There is no doubt that future science will continue to drive the evolution of art.
Ian Fyfe is a PhD student in the Department of Pharmacology
Stephanie Glaser travels back through the history of vaccinations
It has a diameter of only 30 nanometres and you don’t even notice as it travels quietly through your gut and into your bloodstream. Poliovirus then infects your central nervous system and slowly destroys your neurons. Your muscles weaken rapidly and you will be paralysed for the rest of your life.
Or would you rather choose the following scenario? When the needle pricks your skin you try to avoid looking. It hurts and you can feel the nurse press the liquid slowly into your tissue. Afterwards, your arm feels slightly numb where the injection was made, but you know that this short moment of discomfort will give you lifelong protection against polio.
Every day, disease-causing pathogens attack people all over the world; the human immune system is constantly challenged by viruses and bacteria. Today, vaccinations are available for a wide range of diseases, and most people take them for granted. But this hasn’t always been true. Vaccination is probably the greatest medical invention in human history.
Vaccine development began about 3000 years ago, when smallpox, a disease caused by variola virus, started to terrorise mankind. With a mortality rate of 30 per cent, smallpox was a serious threat to survival. The symptoms include fever, skin lesions and blindness, and there has never been an effective treatment. However, people began to look for a cure long before any of the cellular or molecular mechanisms were known.
Early on, it was discovered that people who survived a smallpox infection were immune to the disease for the rest of their lives. First described in 1022BC, a procedure called ‘variolation’ was developed, which protected against smallpox: material from a dried pustule of a person who had survived smallpox was deliberately introduced into the skin or nose of a healthy person. This usually led to a mild infection followed by complete immunity to the virus. The procedure was not perfect, however: two to three per cent of all those variolated did not survive the treatment. Diseases like syphilis and tuberculosis were also routinely transferred to patients. Health and safety regulations as we know them today would almost certainly not approve such a procedure, but at the time it saved the lives of many.
In the late 1700s, variolation was used regularly in China, India and Turkey. And so it came to the attention of Lady Mary Wortley Montagu, the wife of the British Ambassador to Turkey, who then introduced the method to England. During that time, people were desperate for help; smallpox was killing around 400,000 people each year in Europe. Variolation spread to the New World by 1721, and it became general practice to variolate soldiers before new military operations.
In 1757, the orphan child Edward Jenner was variolated in England. Jenner had always shown a great interest in medicine and biology. Not only did he study cuckoo hatchlings and their behaviour, he also carefully observed the people surrounding him. In doing so, he discovered that milkmaids who became infected with cowpox, from touching the udders of infected cows, would never develop the symptoms of smallpox. Although at that time it was not known that cowpox is caused by a virus that is closely related to variola virus, Jenner concluded that cowpox infection resulted in immunity to smallpox. To prove this, he used matter from a fresh cowpox lesion to intentionally infect an eight-year-old boy, who subsequently developed slight symptoms. Two months later he treated the boy with matter from a smallpox lesion. The boy didn’t develop any symptoms, and Jenner concluded that he was immune to smallpox. From the Latin word for cow, vacca, Jenner named his procedure ‘vaccination’ and published his results in 1798.
Although biology textbooks state that Jenner was the first person to carry out a vaccination, he had in fact been beaten to it. More than 20 years earlier, in 1774, the English farmer Benjamin Jesty carried out the same experiment. Jesty and at least five others who made the same discovery never published their results, so Jenner was given full credit for the first vaccination against smallpox. After its success was proven, vaccination spread quickly amongst European countries. Variolation was banned in the UK in 1842, and compulsory vaccination against smallpox was introduced in 1853.
After intensive vaccination programmes were established by the World Health Organisation in the 20th century, the number of smallpox infections declined rapidly. By 1979, smallpox had been successfully eradicated, demonstrating just how powerful vaccination can be – especially in the case of diseases that are transmitted exclusively by human hosts.
Many other scientists were able to build on the achievements of Jenner and his colleagues to further develop vaccination and immunology. One of these was Louis Pasteur, a French chemist and microbiologist. While he was studying the reasons for beer and milk going sour, he developed the idea that microorganisms could also cause disease in humans and animals. In 1880, coincidence helped him to understand one of the fundamentals of vaccination. He was working on chicken cholera, a fatal disease transmitted by bacteria. When some of his chickens were infected with an old culture of bacteria, they only developed minor symptoms and recovered fully. Pasteur suspected that these chickens would now be immune to chicken cholera; he was right. After infecting them with a fresh culture of bacteria, the pre-immunised chickens did not develop any symptoms.
Pasteur had proven that artificially weakened pathogens could be used as vaccines. He applied the same principle to generate a rabies treatment in 1884. Pasteur expanded the definition of a vaccine to include all administered solutions that contain attenuated or inactivated pathogens (or parts of them) that induce immunity in the vaccinated individual.
In the following century, the improvement and development of new vaccines was mainly shaped by one person: the American microbiologist Maurice Hilleman. Unknown to many people, he developed more than 40 vaccines during his career, including vaccines against measles, mumps, Hepatitis A and B, chickenpox, rubella and pandemic flu. His achievements are estimated to save the lives of nearly eight million people each year. Fourteen of his vaccines are still part of current vaccine schedules. Hilleman’s ability to defeat pathogens was unrivalled. When his five-year-old daughter Jeryl Lynn fell ill with mumps, he used the opportunity to combat the virus. By collecting samples from her, he was able to culture the virus in the cells of chicken embryos. From there, he proceeded to make an attenuated form of the virus – which served as the world’s first live vaccine against mumps – still known as the Jeryl Lynn strain. Hilleman also produced the well-known MMR vaccine, which combines three attenuated viruses and gives protection against measles, mumps and rubella.
The examples of Jenner, Pasteur and Hilleman show that coincidences and observations outside of the lab played a critical role in the history of vaccine development. Today, the UK immunisation schedule encompasses vaccines against 11 diseases. However, vaccine development is not without challenges or controversy. There are many diseases for which researchers have not succeeded in developing an effective vaccine. Two of the most devastating are HIV and malaria, which together cause three to five million fatalities each year. Criticism from the general public is also common; the necessity and side effects of vaccinations are commonly debated. Like any other drug or medication, vaccination can have side effects. But if we consider how many people have been saved from lifelong illness and death, and if we think of diseases like smallpox (which was successfully eradicated) or polio (which is close to being eradicated), vaccination can surely be considered the greatest medical discovery ever made.
Stephanie Glaser is a PhD student in the Department of Biochemistry
Wing Ying Chow investigates the advantages of electronic lab notebooks
Standing on the shoulders of giants is a phrase often used to describe the progression of science, with each generation of researchers building on the results of their predecessors. Successful experiments find their way into published papers, but what about the dead ends, the unsuccessful attempts? Often these are not published and become lost in laboratory notebooks. In the digital age, this may change as the recording of research moves from paper to computer.
A lab notebook is the place to sketch out ideas and record experimental procedures, results and conclusions. It is a valuable record of a particular scientific investigation for both the researcher who carried out the work and colleagues who may want to revisit and build upon it.
Yet not all researchers keep equally good lab notebooks, and repetition of work due to badly kept records is not uncommon. James Collip, the biochemist who first purified insulin, lost track of the variables during the initial successful purification. It took another two months for him to re-discover a working method. Such cases are not restricted to biochemistry in the 1920s. Bioinformaticians, whose research is born of the digital age, also sometimes find it “easier to run an experiment again instead of trying to find the data”. They rarely use paper, but they must still keep track of their investigations.
An electronic lab notebook (ELN) may help to address some of the shortcomings of the traditional paper one. The Department of Chemistry in Cambridge is the first chemistry department in the UK to adopt an ELN system, which is currently in the pilot phase.
The ELN has three key features: a central database, templates, and digital searching. A centralised database that is professionally maintained and regularly backed up means that data is much less likely to be lost. The ELN offers templates that carry out routine calculations automatically. These templates speed up the planning process and encourage the recording of experimental details in a format that other researchers can understand. As a digital system, the ELN can be searched using text or even chemical structures: very handy when writing a paper or thesis. Many types of files, from annotated images and spectra to journal papers in PDF format, can be dropped into the ELN and searched in the same way.
The main challenge with the ELN is getting academics to switch from their current method of recording experiments. The pilot scheme was targeted at first-year PhD students and new post-docs so that they could establish a paperless routine right from the start of their projects. Nine months into the pilot scheme, there are 45 users, with 6 being particularly active. Most users indicate that they still keep some of their lab records on paper.
In contrast to academia, ELN systems are becoming the standard in industry. GlaxoSmithKline, a major pharmaceutical company, has rolled out an ELN system to over 3000 employees. They switched from paper to electronic lab notebooks in only nine months, and most of their users prefer it over paper notebooks.
Unlike in industry, ELNs will not be mandatory for academic scientists in the short term, yet the eventual use of electronic notebooks in universities is “inevitable” according to Dr Tim Dickens, who is responsible for the computing systems that drive the current ELN in the Department of Chemistry. “An increasing amount of funding is for large, multidisciplinary projects, and the ability to search and share data is becoming particularly important.” Moreover, as a digitised database, the notebooks can eventually be released to the general public, who as taxpayers have a right to access the work that they funded.
Wing Ying Chow is a PhD student in the Department of Chemistry.
The Cambridge Companion to Science and Religion
Are science and religion necessarily in conflict? Was the development of intelligent life on our planet an evolutionary inevitability? Will it be possible to maintain religious faith as astronomers and physicists discover more and more details about the early universe and how it formed? These are the sorts of questions addressed in The Cambridge Companion to Science and Religion. It isn’t a light read by any stretch of the imagination, but for those interested in some of the deepest questions, it is compelling.
Fourteen separate contributions, each from a different author, cover a diverse range of issues. The first five chapters chart the historical interactions between science and religion, and are refreshingly objective in their analysis, if at times a little dry. The central five chapters focus on contemporary issues related to the two subjects, and are much more opinionated. The final chapters explore some of the philosophical aspects raised in the preceding chapters. There is a lot of emphasis on Darwinian evolution throughout and at times this starts to feel repetitive; whilst this is an unfortunate consequence of having numerous authors, it does give the reader a chance to reflect on the arguments.
I would thoroughly recommend this book to anyone who wants to fully examine the questions that science raises about religion. Be aware, however, this isn’t an easy-reading, popular science book. Tim Middleton
Blood and Guts: A History of Surgery
Blood and Guts is a fascinating account of pioneering surgery and the people behind it. Hollingham illustrates the successes and failures of surgery in vivid detail using examples from ancient history through to modern times. Surgeons patching up Roman gladiators, high-speed amputations in the 17th century, astonishing facial reconstructions of the present day: all are used to describe moments where surgical breakthroughs occurred. The development of anaesthesia and the control of infections were two particularly important discoveries. Hollingham examines the changing perceptions of surgery within society by looking at the use of brain surgery to “cure” mental health problems in 1960s America, and the social value of reconstructive surgery, particularly to wounded soldiers.
Although the book accompanies a television series, it stands alone well and gives those less familiar with medicine an insight into surgery and its origins. Readers with more knowledge of the subject may find it a little slow, but the personal stories are worth reading. The author does not succumb to the temptation of filling the book with gory tales of mad scientists; instead, he credits each surgeon, even those who may seem misguided, with playing a part in the operating theatre of today. Alex Jenkin
The Price of Altruism
Opening with a colourful description of George Price’s funeral, attended by a motley collection of beggars and scientists, Harman proceeds to take the reader on a whirlwind tour through the life of this eccentric thinker. Ultimately, Price sought to answer a fundamental conundrum: if survival of the fittest is all that matters, “how could behaviour that lowered fitness be selected?…Why do vampire bats share blood? Why do sentry gazelles jump up and down when a lion is spotted, putting themselves precariously between the herd and the hungry hunter?” and “What do all of these have to do with morality in humans? Survival of the fittest or survival of the nicest?”
The author seamlessly intertwines the life of Price with some of the great minds of the 19th and 20th centuries, from Charles Darwin to William Hamilton. Despite frequent references to notable biological problems and complex mathematical concepts such as game theory, the book reads effortlessly, making a scientific background wholly unnecessary. In fact, as Harman transports the reader from the Siberian steppes to the slums of London, from the Russian Revolution to Nazi Germany and from scientific laboratories to humid jungles, this brilliantly researched book offers more thrills than many novels. Djuke Veldhuis
A selection of the wackiest research in the world of science
Good news for knuckle crackers
A fifty-year study has finally reached its climax and come to the conclusion that knuckle cracking does not cause arthritis. Finally, you can quieten all those who love to smugly tell you that you are slowly damaging your joints. Just point them in the direction of Donald L Unger, the 2009 Ig Nobel Prize winner for medicine. He came up with an ingenious experiment: for 50 years, he cracked the knuckles on his left hand no less than twice a day, whilst leaving his right knuckles untouched. This means that the knuckles on his left hand were cracked at least 36,500 times. At the end of the experiment, both hands were examined for the presence of arthritis, and not only was none found, but there was no apparent medical difference between his hands at all. Kudos to Mr Unger and his left hand. Richard Thomson
Naming cows increases milk production
We are all used to calling our pets by names, but what about cows? Scientists at Newcastle University surveyed 560 farms in the UK and found that farmers who called their cows by name saw a 258-litre increase in milk yield (per cow, per year) compared to farmers who didn’t. Catherine Bertenshaw, one of the scientists involved in the study, believes that naming the cows resulted in more positive human interactions with the animals, and that this is what made them so much more productive. The naming itself is not so important; rather, it is how the naming changes the interactions between the farmers and the animals. Fearful cows produce more cortisol, which interferes with milk production; the same thing is known to happen to human mothers. Positive interactions with the cows reduce their fearfulness towards humans, so they are in a more relaxed state when it comes to milking. Although this study first made people laugh, it is now making farmers think seriously about what to name their cows. Xia Chen
Beer bottle brutality
In a study appropriate for an episode of CSI, forensic pathologist Stephan Bolliger and his colleagues looked into whether empty or full bottles of beer make better weapons, and whether they are able to fracture a human skull. After being asked these questions in court, and perhaps after a few beers, the researchers decided to measure the minimum energy required to break both full and empty bottles of beer. They attached the bottles to a pinewood board using modelling clay and then dropped a steel ball from various heights onto the bottles. The modelling clay was meant to represent the soft tissue surrounding the skull, and the pinewood board was meant to act like the bony skull and distribute the impact of the ball. They found that empty beer bottles required significantly more energy to break (40 J) than full beer bottles (30 J); with the breaking threshold of the human skull ranging from 14.1 J to 68.5 J, it’s hard to say which will break first. So it may be better to get smashed over the head with a full bottle of beer than an empty one, but either way you could end up with a fractured skull. Nicola Stead
If you have a wacky research story, let us know and we may include it in the next issue of BlueSci