- Editorial: Issue 24 – Easter 2012
- Cover: Deducing Diffractions
- News: Issue 24
- Reviews: Issue 24
- Feature: Staying Alive
- Feature: Symmetry in Science
- Feature: Big Ideas, Small Beginnings
- Feature: Type ‘L’ for Love
- Feature: Turbocharged Thinking
- Focus: Higher, Faster, Stronger
- History: From Herbs to Hormones
- Arts & Science: Dreaming up Science
- Behind the Science: The Grand Question
- Perspective: The Genome Generation
- Science & Policy: Preparing for the Unknown
- Away from the Bench: Science on Ice
- Weird & Wonderful: Issue 24
Left-Brain vs Right-Brain
It’s a popular notion that the left half of the brain is responsible for logic and the right half for creativity. While there is some truth to this, the extent of this separation is often greatly exaggerated. The idea persists, however much it annoys neuroscientists, because it provides a reminder of the need for balance in the way we think. Just because science prides itself on its logical foundations does not mean that some ‘right-brained’ thinking isn’t also needed.
Many of the people involved with BlueSci are balanced individuals with a healthy regard for both logic and creativity, and so find writing about science to be a natural path. Others, like me, have been ‘left-brained’ all their lives until finding their ‘right-brain’ in need of creative output. Communicating about science, and hopefully making it interesting, provides the opportunity to engage with both ‘halves’ of the brain.
So why doesn’t science already provide this stimulation? One of the most important aspects of science is the communication of results, yet this is often paid little attention. Science is a global community and it is heavily dependent on people travelling around the world to conferences in order to communicate their results and listen to others’. It is far from the most environment-friendly or budget-friendly activity, so why are these conferences necessary? Are we not communicating effectively enough through journals?
Publishing results in an academic journal, where much of the language and structure is heavily prescribed, leaves little room for creativity. However, in a presentation or a poster an author is free to explain their work in any manner they wish. Often the best speakers are those who can weave a story from their work. No matter your background or which side of your brain you use, everyone loves a story. A narrative keeps the audience following the train of thought and paying attention. This isn’t always easy and engaging your creative side can make all the difference.
The ‘right-brain’ is even more crucial when trying to communicate with a wider audience. The ability to talk about a specialised area of research without using equally specialised terminology can require a lot of practice, which is greatly lacking in most scientists’ lives. Yet communicating with the public is vital as science continues to impact upon society. In this issue of BlueSci, we cover debates over cognition-enhancing drugs, money spent on genome research, and preparation for potential disasters. All of these matters require scientists to have their say but they’ll need both halves of their brains to make it heard. Ian Le Guillou
Issue 24: Easter 2012
Editor: Ian Le Guillou
Managing Editor: Tom Bishop
Business Manager: Michael Derringer
Second Editors: Lizzie Bateman, Wing Ying Chow, Camilla d’Angelo, Matt Dunstan, Leila Haghighat, Nicola Hodson, Anand Jagatia, Sarah Jurmeister, Jonathan Lawson, Nicola Love, Louisa Lyon, Wendy Mak, Laura Mears, Vicki Moignard, Alexey Morgunov, Lindsey Nield, Betheney Pennycook, Hugo Schmidt, Nicola Stead, Hinal Tanna, Divya Venkatesh, Beth Venus, Madushi Wanaguru
Sub-Editors: Emma Bornebroek, Helen Gaffney, Leila Haghighat, Nicola Hodson, Vicki Moignard, Hugo Schmidt
News Editor: Louisa Lyon
News Team: Joanna-Marie Howes, Catherine Moir, Ayesha Sengupta
Reviews: Matthew Dunstan, Leila Haghighat, Jordan Ramsey
Focus Team: Maja Choma, Ruth Gilligan, Leila Haghighat, Anand Jagatia
Weird and Wonderful: Emma Bornebroek, Yvonne Collins, Martha Stokes
Pictures Team: Emma Bornebroek, Yvonne Collins, Nele Dieckmann, Helen Gaffney, Jonathan Lawson
Production Team: Emma Bornebroek, Jenny Crowhurst, Nele Dieckmann, Helen Gaffney, Anna Goulding, Leila Haghighat, Jonathan Lawson, Louisa Lyon, Wendy Mak, Jordan Ramsey
Illustrators: Alex Hahn, Dominic McKenzie, Claudia Stocker
Cover Image: Joseph Paddison & Andrew Goodwin
Staying Alive – HP Liss, A history of resuscitation, Annals of Emergency Medicine, 15:65–72
Symmetry in Science – Dwight E Neuenschwander, Emmy Noether’s Wonderful Theorem
Big Ideas, Small Beginnings – The view from the top, IEEE Spectrum (2004)
Type ‘L’ For Love – P Todd, F Billari and J Simao, Aggregate Age-at-Marriage Patterns from Individual Mate-Search Heuristics, Demography, 42(3):559–574
Turbocharged Thinking – BJ Sahakian and S Morein-Zamir, Neuroethical issues in cognitive enhancement, Journal of Psychopharmacology, 25(2):197–204
From Herbs to Hormones – CA Quarini, History of Contraception, Women’s Health Medicine, 2(5):28–30
Dreaming up Science – www.plato.stanford.edu/entries/thought-experiment/
The Grand Question – Simon Winchester, Bomb, Book and Compass: Joseph Needham and the Great Secrets of China
The Genome Generation – Francis Collins, Has the Revolution Arrived?, Nature, 464:674–675
Preparing for the Unknown – www.bis.gov.uk/assets/goscience/docs/b/12-519-blackett-review-high-impact-low-probability-risks
Lindsey Nield explains the science behind this issue’s front cover.
The magnetic properties of materials can be thought of as arising from microscopic bar magnets located on each atom of the material. At high temperatures, these magnets are arranged randomly, like atoms or molecules in a gas. By contrast, at low temperatures the magnets usually order themselves, analogous to the atoms in a crystal. The way these magnets align depends on the atomic structure of the material, and determines its magnetic properties.
The arrangement of atoms in a material can be probed by using a technique known as neutron scattering, whereby a sample is bombarded with a beam of neutrons. When the beam comes into contact with the regular atomic structure of a crystalline sample it is scattered, and the resulting pattern can be analysed to deduce the original structure of the material. This allows us to describe structures that, even with the best microscopes available, are too small to observe directly. Since neutrons are uncharged, they are able to penetrate matter farther than charged particles, so are sensitive to the bulk structure of the material, not just the surface. They also probe the atomic nuclei rather than the surrounding electrons and interact directly with the magnetic fields in the sample, so that measurements of the magnetic structure can be obtained along with the atomic arrangement.
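The condition for scattered neutrons to produce a diffraction peak can be sketched with Bragg’s law, nλ = 2d sin θ, which relates the neutron wavelength λ, the spacing d between planes of atoms, and the angle θ at which the scattered waves reinforce one another. The snippet below is an illustrative aside rather than part of the experiment described here, and the wavelength and spacing values are hypothetical, chosen only to be of realistic scale:

```python
import math

def bragg_angle(wavelength_nm, d_spacing_nm, order=1):
    """Return the angle theta (in degrees) satisfying n*lambda = 2*d*sin(theta),
    or None if no angle can satisfy the condition for this diffraction order."""
    s = order * wavelength_nm / (2 * d_spacing_nm)
    if s > 1:
        return None  # sin(theta) cannot exceed 1: no peak at this order
    return math.degrees(math.asin(s))

# Thermal neutrons (~0.18 nm) scattering off atomic planes ~0.25 nm apart:
theta = bragg_angle(0.18, 0.25)  # roughly 21 degrees
```

The key point is that thermal neutron wavelengths are comparable to interatomic spacings, which is precisely why neutron diffraction can resolve atomic-scale structure.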
Although most magnetic materials behave in the way described above—showing magnetic order at sufficiently low temperatures—there are a small number which do not. In these ‘frustrated magnets’, the geometry of the crystal structure can prevent a regular pattern from occurring. Take a simple triangle as an example: if magnets are placed at each corner and two point in opposite directions, there is no direction that the third can face in order to oppose both of the others. The frustration of these magnetic moments can lead to some interesting properties. For example, spin ice, a form of frustrated magnet, has been found to behave like a gas of magnetic monopoles—hypothetical magnetic north poles without opposing south poles. Limited evidence has also shown that some frustrated magnets may act as high-temperature superconductors, which are much sought after for their potential to enable electric power transmission with greatly reduced energy loss. Unlike the sharp peaks seen for most magnets, neutron scattering experiments performed on a large single crystal of a frustrated magnet would result in patterns resembling that of our cover image, which still shows structure and symmetry but is quite diffuse.
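The triangle example can be checked by brute force: place an ‘up’ or ‘down’ magnet (an Ising spin) at each corner and count, over every possible arrangement, how many of the three neighbouring pairs manage to point in opposite directions. This toy enumeration is a sketch of my own, not taken from the research described here:

```python
from itertools import product

# Each corner of the triangle holds a spin: +1 (up) or -1 (down).
# A pair of neighbours is 'satisfied' if its two spins point opposite ways.
edges = [(0, 1), (1, 2), (0, 2)]  # the three sides of the triangle

best = 0
for spins in product([1, -1], repeat=3):
    satisfied = sum(1 for i, j in edges if spins[i] != spins[j])
    best = max(best, satisfied)

# best == 2: no arrangement satisfies all three pairs at once.
```

However the spins are arranged, at most two of the three bonds can be satisfied, so one bond is always ‘frustrated’; the resulting multitude of equally good arrangements is the root of the unusual behaviour of frustrated magnets.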
However, the cover image for this issue was not actually obtained from a single crystal experiment. Instead, it was produced by Joseph Paddison and Andrew Goodwin of Oxford University from powder scattering data, using a computer modelling technique. In neutron scattering experiments using powder samples, the small crystallites that make up the sample are randomly oriented, and measurements take an average over all of these orientations. This means that the diffraction pattern is not an intricate two-dimensional image but is collapsed into a simple graph of intensities. Paddison and Goodwin simulated such powder data for systems whose magnetic structures were already well understood, and were then able to find a structural model that best fitted the data. This allowed the single crystal image to be reconstructed, in strong agreement with experimental data actually collected from crystals.
The cover image is a proof-of-principle example of a reconstructed neutron diffraction pattern based only on data from powder neutron diffraction. Since many materials cannot be successfully crystallised, this approach offers many new possibilities for the study of frustrated magnets.
Lindsey Nield is a 3rd year PhD student in the Department of Physics
Promising Alzheimer’s Treatment
As the population ages, the prevalence of Alzheimer’s disease is increasing. The impairments seen in learning and memory are thought to partly reflect build-up of a protein called amyloid beta. However, researchers at Case Western Reserve University, USA, have recently found a drug that can reduce levels of amyloid beta and rapidly reverse cognitive impairments in mouse models of the disease. The drug is called bexarotene and it is already used to treat cancer. Whereas current Alzheimer’s disease medications take months to reduce amyloid plaques, bexarotene took just 72 hours to reduce plaque load by 50 per cent in the mouse brain. It also decreased soluble amyloid by 25 per cent within the same time period. Encouragingly, the mice regained skills in nest-building and odour-learning following drug treatment. Bexarotene works by acting on retinoid X receptors to increase production of Apolipoprotein E, a protein involved in amyloid clearance. It also stimulates the immune system to degrade amyloid aggregates in the brain. If bexarotene is as successful in human clinical trials as it has been in mouse models, it could be a promising new therapy for Alzheimer’s disease. Ayesha Sengupta
Death by DNA Degradation
Scientists in Copenhagen have developed an objective method for determining how healthy you are at the molecular level. By analysing blood samples from 20,000 Danes as part of the Copenhagen General Population Study, the researchers found that a person’s telomere length correlates with their risk of a heart attack or early death. Telomeres are located at the ends of chromosomes, where they protect genetic information from deterioration. They are known to shorten over time, a process that has long been thought to play a role in ageing and that can be accelerated by poor lifestyle choices such as smoking or obesity. Now, for the first time, a clear correlation has been shown between short telomeres and a 50 per cent increased risk of heart attack and a 25 per cent increased risk of early death. “Smoking and obesity ages the body on a cellular level, just as surely as the passing of time,” said Borge Nordestgaard, chief physician on the study. The researchers now hope to develop a simple blood test that GPs could use to identify people with a poor health status and thus a higher risk of heart attack and early death. Catherine Moir
Magnetic Soap: New Hope for Oil Spills
Researchers at the University of Bristol have developed a liquid soap that can be controlled by a magnet. Because the soap and the pollutants that it removes can be isolated using a magnetic field, the team, led by Professor Julian Eastoe, hope that with further refinement it could help to clean up oil spills and waste water. Soap is a surfactant, composed of amphiphilic molecules in which a water-loving, anionic head is joined to an insoluble, oil-loving tail; these molecules self-assemble into spherical ‘micelles’. The new ‘magneto-responsive’ soap incorporates iron into its anionic head, resulting in soap particles with a metallic centre. While the individual iron-containing molecules are not magnetic, the researchers have shown that the iron compounds within the soap are able to cluster and respond to solid magnets. With current cleaning agents often causing significant environmental damage, the potential applications of magnetic surfactants are huge. As well as having properties which make their isolation and removal during an environmental clean-up far easier, the soap’s electrical conductivity, melting point and solubility could be altered by a simple magnetic on-and-off switch. Joanna-Marie Howes
Radioactive: Marie and Pierre Curie – Lauren Redniss
Marie Curie once said, “There is no connection between my scientific work and the facts of private life.” However, Lauren Redniss’ new pictorialised biography, Radioactive, makes the link between Marie and Pierre Curie’s private and professional lives visibly clear. By pairing the narrative with sketches, Redniss presents a new take on the history of the husband-and-wife team who pioneered the study of radioactivity and discovered the chemical elements radium and polonium. Much to the chagrin of readers searching for deep insight into the science behind the Curies’ achievements, the content of Radioactive reads much like a Wikipedia entry—although it does take salacious turns, venturing into the couple’s less well-known love scandals. Quotes from personal letters lighten the historical anecdotes, alongside accounts of individuals directly affected by events born of the Curies’ legacy, such as the Chernobyl disaster and the bombing of Hiroshima. Redniss also uses her own innovative techniques to tell their story. For example, her hand-coloured, blueprint-style illustrations echo the glow of spontaneously luminescent radium. However tempting, it would be a disservice to call Radioactive a picture book. It reveals the emotional backdrop to the Curies’ work in science, whilst also placing their discoveries in the context of a new nuclear age. Leila Haghighat
Reinventing Discovery – Michael Nielsen
According to Michael Nielsen’s Reinventing Discovery, science is entering a new era that will be characterised by massive online collaborations, the general public contributing to research breakthroughs, and publications and data being open to all. Even if we can’t recognise it yet, Nielsen insists that this world of open and networked science has an incredible capacity to enhance our collective intelligence and make otherwise impossible discoveries. Nielsen draws upon a wealth of examples, from citizen science initiatives such as Galaxy Zoo and Foldit to collaborative efforts in mathematics and genetics, to illustrate how online tools can change who participates in scientific research. This exploration ends in an appraisal of why the current mantra of the research community, ‘publish or perish’, needs to change if we are to reap the benefits of this scientific revolution. Nielsen’s style is approachable and informative, and he presents a clear and insightful vision of the future of science. His careful analysis of the difficulties of persuading all parties to move towards a system of open science is desperately needed, and the book should be read by anyone with even a casual interest in the future of scientific research. Matthew Dunstan
Science Ink – Carl Zimmer
Carl Zimmer’s fascination with science tattoos began after he spotted the familiar DNA double helix where he least expected it: at the swimming pool, ‘inked’ on a scientist friend. Curiosity grabbed hold of Zimmer and, in response to a post about the tattoo on his blog, he began receiving photos of science enthusiasts’ geeky body art. Zimmer has carefully chosen a selection for his book, Science Ink: Tattoos of the Science Obsessed. Alongside full-sized pictures of the tattoos, carefully crafted tales incorporate both the deeply personal reasons behind each permanent mark and the scientific phenomena that they depict. Not to be missed: the nine axioms of set theory on the arm of a software designer in Colorado, fulvic acid on the back of a graduate student at Cornell, a eukaryotic cell on the shoulder of a postdoctoral researcher in Pennsylvania, and a noise circuit on the neck of an electrical engineer. Each page brings science to life on the skin of its tattooed subjects: from graduate students and researchers to self-proclaimed geeks. In this unique way, Science Ink reminds us that science comes from us and can be a source of passion and pleasure for those who pursue it. Jordan Ramsey
Beth Richardson looks at new recommendations for performing CPR.
Staring into the camera, ex-footballer and celebrity Vinnie Jones growls, “You only kiss your missus on the lips”. This scene—appearing on a TV or 4oD screen near you—is not from EastEnders or a reality show. It is part of an advertising campaign by the British Heart Foundation, promoting a new ‘hard and fast’ technique: hands-only CPR (cardiopulmonary resuscitation). Survival rates after cardiac arrest are woeful; just 10 per cent of those who suffer one in public recover and leave hospital. The days of the ‘kiss of life’ being recommended for the general population are gone. Now, the only instruction for performing this lifesaving technique is to lock your hands over the chest and push hard and fast, to the tune of the Bee Gees’ classic Stayin’ Alive.
This is not the first time that the official CPR advice has been revised, though most changes do not come with a catchphrase. The history of resuscitation is enormously long and varied. Although adjustments in recent decades tend to be minor changes in positioning the victim or the ratio of breaths to compressions, this was not always the case. Historically, CPR was based on best-guesses and whatever tools were available at the time. Most of these methods would definitely not come under doctor’s orders today, but the central question of how to keep a non-breathing patient alive is no less crucial now than it was centuries ago.
Early attempts at resuscitation, before the anatomy of the circulatory system was fully understood, ranged from the ineffective to the downright bizarre. Resuscitation in the Middle Ages was based on the correlation of life with body heat, and involved warming the patient with blankets, hot water, or even heated excrement placed directly onto the skin. Survival was predictably poor. In the 1530s the ‘bellows method’ was devised as a way of introducing air into a non-breathing casualty, but poor understanding of anatomy meant that the tongue often blocked the airway and little air was able to enter the lungs. Despite the problems presented by this method—and the fact that few people had a set of fireplace bellows to hand when out and about—variations of this technique persisted for the next two hundred years. Emphasis continued to be placed on maintaining the victim’s body heat, as warmth was considered to be one of the most important signs of life in an unconscious person.
The first serious advocacy group for resuscitation was the Society for the Recovery of Drowned Persons. This formed in Amsterdam in 1767 to address the leading cause of death in the city at that time. Their recommendations included the traditional warming of the body and the bellows method, as well as some innovations. Inversion of the victim to help the fluid drain from their lungs was common, as was ‘fumigation’: blowing tobacco smoke into the mouth and rectum. Although some of their treatments were still aimed at stimulating life-like signs in the casualty, other methods such as ventilation of the lungs, using a bellows or mouth-to-mouth, and applying manual pressure to the chest, are strikingly similar to resuscitation methods used today. The society’s ideas are evidence of a significant shift towards modern CPR methods, guided by a better understanding of anatomy and circulation.
The principle of forcing air into and out of the lungs, by any means necessary, carried over into the next century. Life-saving aids in the 1800s included barrels, which the victim was rolled over to compress the chest, and even horses, with many American lifeguarding stations keeping their own horse to take drowning victims for a quick trot down the beach. The movement of the horse did sometimes succeed in forcing air into the chest cavity and squashing it out again, but the technique was abandoned after complaints from the ‘Citizens for Clean Beaches’ group and replaced in 1859 by the more subdued ‘roll method’, where the victim was rolled repeatedly back and forth to alter the volume of the chest.
After several centuries of trial-and-error methods of resuscitation, the two major techniques currently recognised as effective, mouth-to-mouth ventilation and chest compressions, were developed in the mid-twentieth century. Mouth-to-mouth came first in the 1950s, with the initial scientific articles describing its use and efficacy appearing in 1954. While the importance of getting air into a non-breathing casualty was self-evident, cardiac massage—now considered to be possibly the most important feature of effective resuscitation—was not formally described until 1960. It was previously thought that this was only effective if the heart itself was massaged, but new research showed that even external compressions on the chest were enough to stimulate blood flow. The primary advocates of these new techniques, anaesthesiologists James Elam and Peter Safar, are widely credited with pioneering the modern ‘ABC’ method. This combines all of the components necessary for successful resuscitation: an open airway and the two elements of CPR, breathing and compressions. This formed the first demonstrably successful resuscitation method, which is still broadly in use today.
Rescue breaths are a proven and important feature of CPR, which makes the sudden, well-publicised shift towards compression-only CPR seem a little odd. However, there is another factor in CPR that had not previously been considered: the person performing it. If the casualty has suffered a cardiac arrest in public, there is a high chance that the person performing CPR on them will be a stranger. Recent guidelines, outlined by the British Heart Foundation, note that many bystanders are put off from performing CPR by the thought of giving mouth-to-mouth. The harm done by omitting the breaths, however, is outweighed by the risk of no one attempting CPR at all. The focus has shifted from everyone knowing the full technique of CPR to simply making sure that everyone feels able to perform it.
This is not the first time that first aid advice has been altered due to the public’s attitude, though in a far less life-threatening situation—St John Ambulance had to change their guidelines for dealing with strains and sprains, given by the acronym RICE: Rest, Ice, Compression, Elevation. Overzealous bystanders took ‘Compression’ to mean ‘wrap as tightly as possible’, which restricts blood flow to the limb and can cause nerve damage. ‘Compression’ is now instead described as ‘comfortable support’. It is hoped that a similar change to a simpler, more direct set of instructions for CPR will mean more people will feel confident enough to administer the first aid which could be life-saving.
The move towards hands-only CPR has been widely supported. Organisations such as St John Ambulance, London Ambulance Service and American Heart Association have all endorsed the change, and studies suggest that the new guidelines will increase the number of cardiac arrest patients reaching hospital alive. However, most of these groups have also stated that they will continue training first-aiders in full CPR, and those who have up-to-date training in performing rescue breaths should continue to do so. It is not yet certain that considering the psychology of the first-aider will improve the poor survival rate of cardiac arrest patients in public, but compression-only CPR marks yet another change in our understanding and treatment of emergency incidents.
Beth Richardson is a 1st year undergraduate studying Natural Sciences
Jack Williams discusses symmetry in nature and its fundamental place in the Universe.
Symmetry has long fascinated the great minds in all disciplines from art to theoretical physics. We are drawn to symmetrical objects such as snowflakes, planets and even the human body. But how can we define symmetry itself? The mathematician Professor Hermann Weyl gave an excellent definition of this elusive concept: an object is symmetrical if it looks the same after it is transformed in some way. For instance, if a snowflake is rotated by 60 degrees, it is indistinguishable from the original so it has a particular kind of symmetry.
Among everything found in nature, living things exhibit the greatest variety and most fascinating range of symmetries. Symmetry in form and behaviour is a key factor in the survival of all organisms, from the leaping of a lion to the capturing of light by photosynthetic plants. A lion’s need for symmetrical movement and balance is clear, but the advantages of symmetry for plants and nature’s tiniest creatures are far more subtle.
Honeybees are experts in the use of symmetry for efficiency. Worker bees construct hexagonal cells to house their supply of honey. But there is more to the hexagonal grid than mere beauty. Only recently, in 1999, the mathematician Thomas Hales proved that this arrangement minimises the amount of wax required. Ever since Pappus of Alexandria discussed it around 300 AD, this ‘honeycomb conjecture’ had been an open problem in mathematics.
Even the simplest of organisms display extraordinary self-similarity. For instance, the chambered nautilus builds its shell using ever-larger compartments, each of the same shape, creating a logarithmic spiral popularly identified with the Fibonacci spiral. Apart from providing protection, the shell controls the buoyancy of the animal. Older compartments are sealed and contain gas, allowing the animal to float freely and move easily in water, improving its chances of finding food and escaping from predators.
Such Fibonacci spirals occur in plants as well, which use the spiral to arrange leaves and petals to maximise exposure to light and to attract insects. With each successive leaf projecting outwards at roughly 137.5 degrees to the previous leaf, this arrangement gives minimal overlap between neighbouring leaves. Indeed, the number of petals, leaves or seeds in each spiral is often one of a special series of numbers known as Fibonacci numbers. Evolution, it seems, has converged upon this theoretical maximum to maximise growth, survival and reproduction.
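The 137.5-degree figure quoted above is the so-called golden angle, 360°(1 − 1/φ), where φ is the golden ratio; ratios of consecutive Fibonacci numbers converge to φ, which is why Fibonacci numbers keep turning up in the spiral counts. A quick numerical check (an illustrative aside, not from the article):

```python
import math

phi = (1 + math.sqrt(5)) / 2          # the golden ratio, ~1.618
golden_angle = 360 * (1 - 1 / phi)    # ~137.5 degrees between successive leaves

# Ratios of consecutive Fibonacci numbers approach phi:
fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])
ratio = fib[-1] / fib[-2]             # already very close to phi
```

Because the golden angle is, in a precise sense, the ‘most irrational’ fraction of a full turn, successive leaves never stack directly above one another, which is what minimises the overlap.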
Although symmetry underpins some of the greatest triumphs of evolution, asymmetry also has an important role in living systems. Many biological molecules, such as lactose, can exist in two different forms that are mirror images of each other, known as enantiomers. Although they have the same structural formula, they are not identical. For an organism to process both enantiomers, two different enzymes would be required. To conserve energy, many living systems therefore produce only one, and pass this ability to handle a single enantiomer on to their offspring. Over evolutionary time, organisms have come to produce only the enzyme required to digest the version of the compound that is found in their food. Enantiomers can have quite different properties: the artificial sweetener aspartame has one enantiomer that tastes sweet and another that tastes bitter. This indicates that our taste receptors have a complementary asymmetry, allowing them to bind the two aspartame enantiomers differently. In answer to Lewis Carroll’s Alice, who wonders whether looking-glass milk would taste good: it would probably not taste like milk and would be impossible to digest.
Non-living objects display an equally striking range of symmetries. It is a wonder that we are surrounded by such a wide variety of symmetrical objects, but this is not a coincidence. Not only are the objects we encounter symmetrical, but so are the physical laws that describe their behaviour. For example, the snowflake is symmetrical simply because the physical laws which governed its creation are symmetrical.
But how can a physical law be symmetrical? A law is symmetrical in space if it is unchanged by moving from one point to another. If everything in the universe could be translated (shifted in one direction), preserving the relative positions of planets, stars and galaxies, the motions and interactions between them would proceed uninterrupted—the physical laws that describe the motion are unchanged by a translation.
One of the startling observations about the laws of physics is that they are not only symmetrical in space, but also in time. This idea that an experiment will give the same result even if performed at two different points in space or in time has enormous implications for physics. In particular, this invariance of the laws of physics is the principle on which Einstein’s theory of relativity is based. Since no experiment can distinguish one point in space-time from another, Einstein concluded that there is no such thing as absolute time or space, and that different observers may disagree on the duration of time or even the order in which events took place.
Almost every physical law known today is symmetric in time, and postponing an experiment has no effect on its result. Indeed, these laws are unchanged even if the direction of time is reversed. However, this is contrary to our experience: time appears to flow in one direction, with the past inaccessible from the present, and the universe itself began at a single, fixed point in time. The only known physical law that is not time-symmetric is the second law of thermodynamics, which states that entropy (a measure of disorder) cannot decrease with time. Our sense of time seems incompatible with our current description of the universe, yet it is a fundamental part of reality: we all know a tower of bricks can fall over but will never reconstruct itself, even though this is permissible by the laws of motion. Time appears to flow in the direction of increasing entropy. The nature of this arrow of time is one of the great mysteries of modern physics and must be resolved if science is to achieve its ultimate goal of discovering a ‘Theory of Everything’.
It is therefore no surprise that symmetry is a standard tool in theoretical physics; in 1918 Emmy Noether published a theorem that explained the deep connection between symmetry and physical laws. Noether’s theorem allowed physicists to take a symmetry property of the universe, such as spatial symmetry, and find a corresponding conservation law, such as the conservation of momentum. Similarly, rotational symmetry leads to the conservation of angular momentum and time translational symmetry leads to the conservation of energy. Such principles allow physicists to solve a wide range of problems. Noether’s theorem guarantees that any new physical laws we discover, as long as they are symmetric in space and time, will conserve momentum and energy.
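The correspondences described above are conventionally summarised as:

```latex
\begin{array}{ll}
\text{Symmetry of the physical laws} & \text{Conserved quantity} \\
\hline
\text{Translation in space} & \text{Linear momentum} \\
\text{Rotation in space} & \text{Angular momentum} \\
\text{Translation in time} & \text{Energy}
\end{array}
```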
Symmetry is the most enduring aspect of all scientific theories. While refinements are made and outdated theories refuted, the underlying symmetries remain. Indeed, Einstein’s theory of relativity is a consequence of a few basic principles of symmetry. It seems symmetry is an inherent part of our Universe and it continues to guide the discovery of new physical laws.
Jack Williams is a 1st year undergraduate in the Department of Mathematics
M Fernando Gonzalez investigates the microelectronic revolution and the role of transistors.
Television, radio, internet, smartphones, laptops, MP3 players, DVD players—could you imagine your life without these things now? While we increasingly take them for granted, all of these inventions owe their existence to what is arguably the most important technological breakthrough in the past 50 years: the development of microelectronics.
This global revolution has its origins in a single milestone: the invention of the transistor by three physicists at Bell Telephone Laboratories, USA, on 23rd December 1947. John Bardeen, Walter Brattain and William Shockley became known as ‘the transistor three’. Their invention has been so important for the evolution of human communication that historians compare it to the development of written alphabets in 2000 BC and the printing press in the mid-15th century. The transistor marked the birth of the ‘Information Age’, permitting inventions that would change forever the way in which people communicate.
But what are transistors, and why are they so important? When Bardeen, Brattain and Shockley were working at Bell Labs in 1945 under the direction of Mervin J Kelly, mechanical switches were used to operate telephone relays. However, these switches were notoriously slow and unreliable, and Kelly quickly realised that an electronic switch could have enormous commercial potential. He told his employees simply, “Replace the relays out of telephone exchanges and make the connections electronically. I do not care how you do it, but find me an answer and do it fast”. After two years of intensive research, the transistor—a solid-state on/off switch controlled by means of electrical signals—was born. Transistors had the additional advantage that they could be used to amplify electrical signals over a range of frequencies, making them perfect for long-distance TV and radio transmission.
Bell Labs’ transistor easily met the three criteria required for phenomenal success in the technology world: it outperformed its competitors, it was more reliable and it was cheaper. The potential for technological revolution was foreseen by Fortune magazine, which declared 1953 “the year of the transistor”. Bardeen, Shockley and Brattain’s achievement even earned them the Nobel Prize in Physics in 1956.
A key limitation, however, was that transistors were made and packaged one at a time and then assembled individually into circuits. As the number of transistors per circuit increased from a handful to dozens, the complexity of the wiring required to connect them increased exponentially—a problem known as ‘the tyranny of numbers’. The solution to this problem, and the greatest leap forward in the microelectronics revolution, came in 1958, courtesy of Jack Kilby, a physicist working at Texas Instruments, and Robert Noyce, from Fairchild Semiconductor. Working entirely independently, both men managed to design and manufacture an integrated circuit (IC): a complete circuit with transistors as well as additional elements such as resistors and capacitors, all contained within a single piece of semiconductor. While both companies filed for patents for the invention they eventually decided, after several years of legal dispute, to cross-license the technology. Such was the importance of the IC that, in 2000, Kilby received the Nobel Prize in Physics.
But the key to the transistor’s enormous success and ubiquity is arguably its compatibility with digital technology. The language of computing is binary code—a stream of ‘1’s and ‘0’s. This is perfectly suited to transistors, which can encode ‘0’ as ‘off’ and ‘1’ as ‘on’. This creates streams of digital bits that can flow between electronic systems. The ‘Age of Information’ speaks binary.
The final boost for the microelectronics industry and the integrated circuit market came unexpectedly (and inadvertently) from John F Kennedy in 1961, when he spoke of his vision to put a man on the Moon. ICs were compact, lightweight and cheap to manufacture, making them ideal components for the computers onboard NASA’s spacecraft. The race to build ever more powerful ICs was on. A full IC contained 10 components in 1963, 24 in 1964 and 50 by 1965. It was Gordon Moore, then at Fairchild and later a co-founder and CEO of Intel, who first noted this trend in 1965 and summarised it in a simple statement that would later become known as Moore’s Law: “The complexity for minimum component cost has increased at a rate of roughly a factor of two per year…certainly this rate is expected to continue.” However, it was not until 1971 that Intel launched its first commercially available microprocessor IC, the Intel 4004, a device containing some 2,300 transistors within a single silicon chip. Accepting digital data as an input, the microprocessor would transform it according to instructions stored in memory and provide the result as a digital output.
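Moore’s doubling-per-year estimate is easy to check against the component counts above with a few lines of Python (an illustrative sketch; the function and its name are our own):

```python
def moores_law(components, start_year, year, doubling_period=1.0):
    """Components per IC, assuming a doubling every `doubling_period` years."""
    return components * 2 ** ((year - start_year) / doubling_period)

# Starting from the 10 components of a 1963 IC:
for year in (1963, 1964, 1965):
    print(year, round(moores_law(10, 1963, year)))
```

Doubling from 10 components in 1963 predicts 20 and 40 for the next two years; the actual figures of 24 and 50 grew slightly faster still than Moore’s factor of two.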
The first destination of Intel’s 4-bit microprocessor was the central processing unit (CPU) of a high-end desktop calculator for the Japanese manufacturer Busicom. The success of this first implementation did not go unnoticed. Soon a number of companies, such as Motorola, Zilog and MOS Technology, were investing in microprocessor ICs. In 1974 the intense competition for market share led to the development of more advanced 8-bit processors, as well as a substantial reduction in price from hundreds of dollars to just a few.
These advanced modules found applications in embedded systems, such as engine control modules for reducing exhaust emissions, but were mainly used in microcomputers. It was a perfect time for ambitious electronic amateurs. Perhaps the most notable were Steve Wozniak and Steve Jobs, who in 1976 gathered 62 individual ICs to assemble the first practical personal computer, the Apple I, which sold for $666.66, almost three times the manufacturing cost.
After the enormous success of Apple and Commodore computers, IBM launched the IBM PC in 1981. This was the origin of the ubiquitous PC, as IBM’s design was free for anyone to copy. Rapidly, companies such as Intel, AMD, Toshiba, Texas Instruments and Samsung came to provide CPUs with an increasing number of transistors, not only for PCs but also for the multitude of devices and gadgets that fill our homes and lives.
In 1947, only a single transistor existed in the world. It was bulky, slow and expensive. Now there are more than 10 million billion and counting. Nearly 1.5 billion transistors can fit into a single 1 cm² piece of silicon crystal, capable of performing almost 5 billion operations per second. Never before in human history has any industry, at any time, come close to matching the growth rate enjoyed by the microelectronics industry.
M Fernando Gonzalez is a 4th year PhD student in the Department of Physics
Jordan Ramsey reveals how computers are being used to simulate love and investigate our choice of life partners.
From the first kiss to that last vow, each person’s relationship trajectory is unique. At least, that’s what we tend to think after swearing off men, women, or both for the sixth or seventh time. Demographic age-at-first-marriage studies, however, show a similar overall pattern in relationships across many countries and recent years. This trend shows that your chance of getting married for the first time increases from a minimum age to a peak—somewhere in your twenties or thirties for most developed countries—then drops off as you age. What kind of behaviour do we exhibit in our relationships on an individual level to produce such consistent widespread phenomena at the population level? And if we can find this typical behaviour of individuals in a population, can we somehow exploit this knowledge to put ourselves in a better position to find ‘the one’ (and potentially to optimise who ‘the one’ is)?
Researchers Peter Todd, Francesco Billari and Jorge Simão used a unique method to model population marriage trends with plausible psychological phenomena. Todd and his team used computer simulations to model a group of males and females assigned a rank from 0 to 100 rating his or her desirability as a mate. Each mate had incomplete knowledge of the pool of possible candidates, meaning he or she had to select a partner with limited rationality. In addition, each individual searched sequentially through potential partners, without knowing the quality to expect in the next mate and without being able to return to previous partners—a concept that is all too familiar to many of us. Time constraints on biological reproductive potential limited the number of partners and time spent with each. Can you feel that biological clock ticking now?
Under these circumstances, research into the psychology of decision making has shown that people generally stick to simple heuristics. This means that instead of coming up with a complex set of rules and probabilities to determine whether to stay in a relationship or move on and find someone better, we stick to simple rules based on experience. Todd’s team chose to simulate a ‘satisficing’ rule (a combination of satisfy and suffice), in which a person chooses to settle down with anyone ranked above his or her threshold of acceptability, or aspiration level.
In the simulation, the aspiration level was set during a learning phase. For this, the team drew on previous studies in which researchers found that people of similar attractiveness tend to pair up. This could be extended to similarity in general; for example, couples often form through shared hobbies and interests. The trick is to learn roughly how you rank in a population and choose a partner based on this. The researchers ‘taught’ their population how they ranked during the learning phase by simulating a series of opposite-sex encounters in which each person was either accepted or rejected by a potential mate. With every ‘date’, a simulated person adjusted the estimate of his or her own ranking in the population and set an aspiration level for a future partner.
The researchers found that with fewer than 20 encounters, simulated males and females were able to roughly assess their own value in a population, set an appropriate aspiration level and find a mate with relative ease. More than 20 encounters in the learning phase, however, significantly reduced the number of mating pairs. In essence, individuals became too picky and were unable to find anyone to match their standards. Todd and his colleagues found that a reasonable number of encounters before entering the mating phase was 12—this has been dubbed the ‘12 bonk rule’. The researchers were able to reproduce the characteristic age-at-first-marriage trends after adjusting for individual variation in the number of ‘bonks’ in the learning phase.
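The learning phase can be sketched in a few lines of Python. This is a minimal illustration of the idea only, not Todd and colleagues’ actual model: the acceptance rule, the fixed adjustment step and the uniform population of partners are all simplifying assumptions of ours.

```python
import random

def learn_aspiration(true_rank, n_dates, step=5.0, seed=None):
    """Set an aspiration level from a series of practice 'dates'.

    Each date is with a random partner whose rank is uniform on [0, 100];
    the partner accepts us only if our true rank beats theirs (an assumed
    stand-in for the published acceptance rule). Acceptance nudges our
    self-estimate up; rejection nudges it down.
    """
    rng = random.Random(seed)
    estimate = 50.0                      # start by assuming we are average
    for _ in range(n_dates):
        partner = rng.uniform(0, 100)
        if true_rank > partner:          # accepted on this date
            estimate = min(100.0, estimate + step)
        else:                            # rejected
            estimate = max(0.0, estimate - step)
    return estimate                      # used as the aspiration level
```

After a dozen ‘bonks’ the estimate has drifted toward the individual’s true rank; in the published model a richer adjustment rule is what makes much longer learning phases counter-productive.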
As an alternative to this highly entertaining model, Todd and his team formulated a more realistic model in which the learning phase was replaced with a courtship period. During the courtship period the couple continued to meet new people, at which point they decided either to stay in the current relationship or switch to a better partner. Very romantic indeed. Individual aspiration levels began low but were raised depending on how their partner ranked and lowered depending on the length of time the person waited for a better partner to come along. After a pre-determined courtship period, the couple mated permanently and were removed from the pool of ‘singles’. Entirely too realistic. Varying the courtship period also produced realistic age-at-first-marriage graphs.
These models give us interesting—and potentially alarming—insights into how the dynamics of individual relationships produce the patterns seen in populations. But in these models every individual uses the same ‘satisficing’ rule to choose a partner. What happens in a real population, where individuals use competing strategies to find a mate? Researchers in Indiana and Berlin simulated a population using three different mating strategies, based on speed, quality or harmony, each with its own strengths and weaknesses. In the speed strategy, an individual proposed to each person he or she ‘dated’, regardless of rank. In the quality strategy, individuals only made offers to potential partners ranked above a certain threshold. Finally, in the harmony strategy, individuals proposed only to those within a specific range of their own ranking. The strategies were then evaluated on their ability to achieve the goals of speed, quality and harmony in a competitive environment.
Simulations showed that, though the harmony strategy won out when these competing groups were less choosy (that is, the aspiration level was set low in the quality strategy and the range was wide in the harmony strategy), speed quickly became the best strategy in a more discriminating population. In this case, an individual who proposes on every ‘date’ wins in terms of the speed necessary to find a mate and the quality and harmony of that chosen mate. The picky population ensures the best possible outcome for someone looking to settle down with the next person who comes along. In other words, it would seem that when everyone else is worrying about the rank of his or her potential mate, the individual employing speed as a strategy does not need to—the rest of the population takes care of that for him. On the other hand, quality was rarely the best strategy to employ. Take note, gentlemen. The hot girl in the club is sick of your unwelcome advances.
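The three proposal rules are simple enough to write down directly; the `threshold` and `band` defaults below are illustrative choices of ours, not parameters from the study.

```python
def speed(my_rank, their_rank):
    """Propose to everyone you date, regardless of rank."""
    return True

def quality(my_rank, their_rank, threshold=60):
    """Propose only to partners ranked above a fixed cutoff."""
    return their_rank >= threshold

def harmony(my_rank, their_rank, band=15):
    """Propose only to partners close to your own rank."""
    return abs(their_rank - my_rank) <= band
```

Raising `threshold` or narrowing `band` makes the population choosier, which is exactly the regime in which the undiscriminating `speed` rule starts to win on all three criteria.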
Simulating the romantic relationships of a population based on psychological models can yield surprising insights into our own behaviour. It can be comforting to know that the process of finding a mate—an ordeal that can feel isolating and painful on an individual basis—is one that everyone goes through in a similar way. So next time your most recent boy or girlfriend breaks up with you, try not to be heartbroken. Instead, try to think of it as setting an aspiration level for your next, more suitable, mate. Now there’s a silver lining.
Jordan Ramsey is a 1st year PhD student in the Department of Chemical Engineering and Biotechnology
Camilla d’Angelo asks whether society will become dependent on brain-enhancing drugs to function.
In the last 20 years, medicine and neuroscience have made great strides towards a better understanding of the brain and improved treatment of mental health disorders. These achievements have been accompanied by a host of socio-economic effects. We have witnessed, for example, a reduced stigma associated with mental health disorders and their treatment. However, we have also seen an increase in the non-medical use of drugs, in particular the advent of cognition enhancers. These ‘smart drugs’ can help us to focus, learn and think faster. They are altering the way we perceive the medicinal purposes of drugs and may one day revolutionise the way we behave as a society.
Cognition enhancers are most commonly used to treat cognitive impairments. These are associated with age-related neurological disorders, such as Alzheimer’s disease, and certain psychiatric conditions, including schizophrenia and attention deficit hyperactivity disorder (ADHD). Common enhancers include the stimulants methylphenidate (Ritalin) and amphetamine (Adderall) used to treat ADHD, and modafinil, a newer drug currently prescribed for narcolepsy, a disorder leading to excessive sleepiness.
The known pharmacology of these drugs is relatively limited but they are widely thought to affect the so-called monoamine neurotransmitter system, which is important in regulating mood, cognition and reward. By modulating levels of dopamine and noradrenaline in the brain, these drugs act to improve attention and working memory, the aspects of cognition mediated by an area of the brain called the prefrontal cortex.
Drugs that enhance brain performance were originally developed to improve patients’ quality of life, as well as reducing long-term healthcare costs associated with treating an ageing population. However, it seems that cognition enhancers such as modafinil are increasingly being used by the healthy. In particular, university students have reported using stimulants like amphetamine and methylphenidate to help them work better and stay awake longer, in the hopes of giving them a competitive edge over their peers. Similarly, business people are believed to be using them to achieve improved performance in the boardroom.
Drugs such as modafinil that enhance brain performance with only a few side effects will have significant implications for society and may one day be used in a range of professions, such as the military, air traffic control and surgery. In such roles, fatigue is known to impair human performance and pharmacological enhancement could help. A 2011 study led by Barbara Sahakian investigated the effects of 200 milligrams of modafinil on the cognitive performance of sleep-deprived doctors and found that the drug significantly improved cognitive flexibility and reduced impulsivity. The result is promising and suggests that in the future doctors may benefit from modafinil enhancement, perhaps replacing the current drug of choice, caffeine, which can have side effects including anxiety, tremor and nausea.
Studies such as this one lead to the question of whether the future should see drugs like modafinil become available to any healthy individual wishing to enhance their cognitive capacities. The controversy surrounding the use of such ‘mental cosmetics’ to optimise brain performance in the healthy is that they constitute enhancement rather than therapy. Our current normative medical paradigm advocates therapy, which aims to fix people who are unwell by curing injuries and diseases. Enhancement, on the other hand, can be defined as improving people beyond their normal, healthy state. The problem is that defining whether an intervention constitutes therapy or enhancement can sometimes be difficult and even arbitrary.
For instance, a better understanding of the brain and the destigmatisation of mental health problems have been accompanied by the diagnosis of novel disorders and increased use of drugs by people who would not have been considered ill twenty years ago. Statistics suggest that the antidepressant Prozac has become a part of life for many, not only as a treatment for depression but also as a mood enhancer, suggesting that the drug is able to improve aspects of people’s personalities not classed as part of their illness. Excessively active children are diagnosed with ADHD and novel drugs are available for people who suffer from extreme shyness (social anxiety disorder). These examples illustrate the fine and sometimes arbitrary line between illness and extreme traits. So rather than being solely devoted to curing illness, medicine could help us live better and achieve our goals by enhancing our health, cognition and emotional well-being beyond today’s norm.
The use of pharmacological cognitive enhancers has led to moral concerns that it is unnatural, that it is cheating and that it is equivalent to drug abuse. Fears have been raised that enhancing drugs could undermine the cultural emphasis on the value of hard work. However, caffeine is an example of a drug that is considered a legitimate enhancer of alertness and performance, and drugs with similar effects could become established in society in the same way as the caffeine in coffee. Considerable debate therefore exists over whether enhancing drugs could become as acceptable as non-pharmacological ways of improving cognitive performance.
As we witness a blurring between treatments and lifestyle drugs, concerns about the impacts of the increased medicalisation of society are rife. Those who prefer not to enhance their brain fear future coercion, both direct and indirect, into taking drugs merely to fit the social norm. Others worry that if cognition enhancers do become widely accepted we may become a ‘24/7’ society, with increased pressure to perform better by employers who expect pharmacologically-enhanced employees, akin to having the right qualifications today. There are also concerns that if enhancement is dependent on wealth this may lead to greater inequality and so fair access must be ensured.
As with all medicines, an evidence-based approach to assess the harms and risks of cognitive enhancers should be taken. Society has much to benefit from cognitive enhancement if the risks are managed correctly. If society accepts future enhancement, adequate policies should be put in place to protect individuals from coercion and promote fairness, such as ensuring universal access in schools. If we are to embrace this new technology, assessment of long-term health risks and the development of safe cognition enhancers will be imperative, including addressing their potential for abuse.
Modafinil use by the healthy appears set to increase, and there is already considerable research into other types of cognition enhancers. For example, drugs known as ampakines, originally developed to treat schizophrenia, have also been shown to improve attention and memory in healthy individuals. The development of such enhancers matters because studies of their effects show great potential in the treatment of mental health disorders. A key challenge for use in the healthy will be to ensure responsible use and careful evaluation of their safety.
However, status quo bias, a cognitive bias that makes people averse to change, affects social attitudes to human enhancement, and so the stigma associated with developing mental cosmetics to improve the lives of healthy individuals remains strong. Yet a future in which cognition enhancers are bought over the counter, much like high-caffeine drinks in supermarkets today, does not seem so far-fetched. Changing social landscapes and public perception may well pave the way for enhancement of the human condition through science, although the question of whether such enhancement is for the better of mankind still remains.
Camilla d’Angelo is a 1st year PhD student in the Department of Experimental Psychology
BlueSci explores the role of science in pushing the boundaries of human physical ability.
The revival of the Olympic Games in 1896 marked a new era for modern sports, with hundreds of athletes coming together to compete in Athens. One hundred and sixteen years later, the games have evolved, with thousands of athletes representing hundreds of nations. Each Olympics marks the setting of new sporting records—at the Beijing Olympics alone, 132 records were broken. With the passing of time, Olympians are going higher, faster, stronger.
Scientific progress plays a major role in the ongoing advancement of our sporting heroes; whether it is in designing better materials, understanding how best to achieve optimum performance both physically and mentally, or applying physics to determine the ‘perfect’ technique, science is behind the scenes, driving improvement.
Sports equipment, for example, has seen tremendous development since the modern Olympics began—once influenced purely by safety and comfort, an athlete’s equipment can make the difference between winning and losing. Cycling helmets are a key example of how an item, implemented as a result of safety concerns, can enhance performance. The improved aerodynamic profile obtained by wearing a helmet can enable cyclists to complete time trials seconds faster. Similarly, while running shoes featuring air-cushioned, shock-absorbent soles are designed with comfort and injury-prevention in mind, athletes who wear them can take advantage of increased bounce and lengthened stride. These advances are made possible by breakthroughs in science and the use of modern materials, such as plastics and composites.
Polyurethane is one such material—a polymer consisting of a chain of repeating units called carbamates. These carbamates are judiciously selected to allow the flexibility, firmness and density of the resulting polyurethane to be tailored specifically for its use. Seventy-five years ago, this versatile material was discovered in a German laboratory; now, it is revolutionising contemporary sportswear. Polyurethane was used in the design of swimsuits where, in conjunction with other materials, it was an asset to an athlete in a number of ways. A flexible polyurethane foam contained microscopic pockets of air, which increased buoyancy, while the water-repellent surface reduced the drag experienced by the swimmer. Not only did the polyurethane material aid the swimmer, but the style in which it was shaped was also critical. Suits were designed to compress the body and make the swimmer more hydrodynamic. However, the improvements in buoyancy and hydrodynamics offered by these swimsuits were so effective that they distorted races and have been banned since 2010.
Polymer materials like polyurethane can be enhanced further by reinforcement with filaments of carbon. These filaments are created from a polyacrylonitrile polymer—made up of carbon and nitrogen atoms—which is heated in air to create a stable hexagonal structure, and finally heated at over 1000 degrees Celsius in order to expel all non-carbon atoms. This process results in crystalline carbon fibres of approximately 5 micrometres across. These fibres can be woven into a flexible fabric ready for incorporation with a polymer. Carbon-fibre-reinforced polymers (CFRP) are lightweight yet strong and durable, and have found use in many areas including aerospace engineering and medical devices.
The late 1980s heralded the arrival of sports prostheses constructed from flexible carbon-fibre, which have since transformed the Paralympics. The flexibility of CFRP makes it an excellent material for lower limb sports prostheses, in which propulsion and impact-minimisation are critical. On each downward step, the body weight of the athlete compresses the curved carbon-fibre shank of the prosthesis and that energy is stored within the material. When the body weight transfers off the prosthesis, its original shape is restored and this energy is returned, propelling the athlete forward. CFRP is also used in a variety of sporting equipment, including golf clubs, bicycle frames and tennis racquets, which benefit from its lightweight and durable properties.
The evolution of sports materials is fuelled by scientific discovery; these materials are enabling athletes to optimise their ability and improve their performance. Nevertheless, at the end of the day it is still the athletes’ bodies that have to push them forward. However easy these Olympians make their athletic feats seem, their achievements actually represent one of the most strenuous challenges to the body’s homeostasis: exercise. With an understanding of the basic science behind the exercise of any given sport, coaches can optimise the training and dietary regimes of Olympians under their care.
The common denominator of all events in the Olympic games—from stretching the bow in archery to leaping across a balance beam in gymnastics—is the need for energy in the form of adenosine triphosphate (ATP). Muscle fibres are primed with ready-to-use ATP, which can be replenished by a compound called phosphocreatine. However, the combination of phosphocreatine and muscle ATP is enough for merely 15 seconds of intense exercise. For longer intervals, muscle fibres must scavenge ATP from elsewhere.
The liver and fat tissue are the reserves for the body’s next sources of energy: carbohydrates and fat. How much energy comes from each source depends on the intensity of exercise. While the body is able to supply sufficient oxygen, aerobic metabolism churns energy out of glucose and fatty acids. This type of metabolism is common in endurance sports, like running, cycling and swimming, where slow-twitch muscles are most utilised.
Typically, the threshold for aerobic metabolism is passed once athletes exceed 70 per cent of their maximal exercise capacity. At that point, the body shifts to anaerobic metabolism in the fast-twitch muscles, which cannot use fat as fuel. Because anaerobic metabolism produces ATP 2.5 times faster than aerobic metabolism, it comes into play whilst training for high-intensity sports, like weightlifting.
By knowing what kind of metabolism a particular sport elicits, Olympians can consume a diet rich in the nutrients they most need. In other words, there is a reason behind the Olympic diets that are notoriously high in calories. For example, Michael Phelps reportedly consumed 12,000 calories a day during the 2008 Olympics.
In 1980, Dave Costill at Ball State University in Indiana showed that endurance athletes benefit from increasing the carbohydrate portion of their calorie intake from 50 to 70 per cent during the three days preceding a big race. Though often disputed, this notion of ‘carbohydrate loading’ has remained a popular practice. In 1987, John Ivy at the University of Texas showed that there is a two-hour window following strenuous exercise when muscle fibres actively withdraw carbohydrates from the bloodstream for future use. Athletes are encouraged to stock up on carbohydrates during this short interval in order to build up energy reserves in their muscles.
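Put in numbers, the shift Costill recommended is substantial. Assuming a 3,000-kilocalorie daily training diet and the standard 4 kilocalories per gram of carbohydrate (round textbook figures of ours, not values from the study):

```python
daily_kcal = 3000           # assumed endurance-training intake
kcal_per_g_carb = 4         # standard energy content of carbohydrate

normal_g = 0.50 * daily_kcal / kcal_per_g_carb    # 50% of calories as carbs
loading_g = 0.70 * daily_kcal / kcal_per_g_carb   # 70% while 'loading'

print(normal_g, loading_g)  # 375.0 and 525.0 grams per day
```

On these assumptions, loading means eating an extra 150 grams of carbohydrate a day for three days.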
Although multiple factors affect exercise tolerance, the maximum volume of oxygen athletes can take up and use, measured clinically as VO2max, best predicts whether or not they can reach Olympic status. Oxygen supply needs to match the increased blood flow athletes experience during exercise, when the volume of blood pumped by the heart goes up 6- to 8-fold. Initially, this change in cardiac output is due to an increased heart rate, but activation of the sympathetic nervous system also allows the heart to squeeze out more blood per stroke and leads to slightly elevated blood pressure. Interestingly, the distribution of blood flow also changes. At rest, 21 per cent of cardiac output goes to muscle, but during heavy exercise this increases to 88 per cent, with blood diverted away from major organs such as the gastrointestinal tract, the kidneys and the skin.
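Those percentages imply a striking rise in absolute muscle blood flow. Taking a typical resting cardiac output of 5 litres per minute and the middle of the 6- to 8-fold range (both figures assumed for illustration):

```python
resting_co = 5.0                       # L/min at rest (assumed textbook value)
exercise_co = resting_co * 7           # mid-range of the 6- to 8-fold increase

muscle_rest = 0.21 * resting_co        # ~1.05 L/min to muscle at rest
muscle_exercise = 0.88 * exercise_co   # ~30.8 L/min during heavy exercise

print(round(muscle_exercise / muscle_rest, 1))  # roughly a 29-fold rise
```

The redistribution multiplies the effect of the raised output: muscle blood flow rises nearly 30-fold even though cardiac output itself rises only 7-fold.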
Priming your body for competing is clearly important, but what about your mind? Knowing how to ski jump is one thing, but jumping at a national competition requires a lot more resilience. One has to have the mental strength to withstand the pressure of competing, the risk of physical injury and the demand to perform in front of a large, expectant audience. Nowadays, professional athletes will often have not only a sports coach but also a dedicated sports psychologist to help them achieve the right mental state during training and competition.
Confidence, the ability to keep composure under stress and internal motivation are just three of the personality traits that help achieve success in sport. Individuals who possess these traits make excellent athletes, but the question is: are these innate qualities or can they be acquired?
Self-confidence can be encouraged using goal setting. As long as the targets are specific, measurable and difficult, but still attainable, achieving them should create a feeling of ability and empowerment. For example, in archery, competitors can set their goal at a particular score and then increase it by 10 or more points from one competition to the next, so that even without winning they can feel a sense of accomplishment. Sportsmen and women are also advised to perform the same routine during training and competition to help them control their nerves and maintain composure. For example, runners will have a set warm-up, and repeating it before a major competition can put them in the mindset they had when training in private.
With a calm mind and enough training, athletes can use techniques to relax and control their physiology, including slowing down their heartbeat, by sheer will power. This has been claimed to be particularly useful for aiming true in shooting sports, although science is at a loss to explain the exact process.
Another widely used technique is imagery, in which an athlete will, for example, imagine the run up, the precise movement of a jump and the landing. Research has shown that thinking about a physical action helps strengthen the connections between neurons in the brain, improving its eventual execution.
Psychological resilience can also be used in a more offensive way: intimidation of competitors. While this is seen as a normal part of some sports, such as professional boxing, it can be a problem in others. This has been particularly pronounced in running, where certain competitors would cause repeated false starts in order to put off their opponents. In 2010 the International Association of Athletics Federations introduced a zero-tolerance policy, which means that regardless of circumstances any false start results in disqualification. The rule has come under scrutiny and became particularly controversial when multiple Olympic gold medallist Usain Bolt was disqualified during the 100 metres final at the World Championships in Daegu last year. It has been suggested that the rule should be changed to allow each athlete at least one mistake during an event, to account for simple human error caused by excessive pressure.
It is clear that the mind is just as important as the body in order to excel and push the limits of human performance. Often the key to unlocking physical capacity is in the mind, and sport psychologists are working hard to find and use this key.
So, you have eaten all the right things, visualised yourself winning and bought state-of-the-art equipment—now what do you do? Natural ability or raw power are useless without proper technique, and in the high jump technique is everything.
The high jump was originally attempted from a standing start and the introduction of a run-up to the event led to widespread use of the ‘scissors technique’. Jumpers would drive the inside leg up and over the bar, forcing it downwards on the other side to help lift the trailing leg over—producing a movement of the legs like the blades on a pair of scissors. This technique is actually rather inefficient, but to understand why we must look at some of the physics behind an athlete’s jump.
Modelling the athlete as a projectile making an arc over the bar shows that the height of the peak is proportional to the initial upwards velocity squared. This height represents the position of the athlete’s centre of mass—the single point at which the distribution of mass in every direction is balanced. For something like a snooker ball, this is fixed at the dimensional centre. For a moving, pulsating object like a person things are not so simple—as you contort your body by moving your arms, legs and neck, the position of your centre of mass changes.
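The proportionality described above can be written down directly. Modelling the centre of mass as a projectile launched with vertical velocity $v_0$ under gravity $g$, and ignoring air resistance, the extra height gained is:

```latex
% Vertical rise of the centre of mass after take-off (no air resistance)
h = \frac{v_0^2}{2g}
% Doubling the take-off velocity therefore quadruples the rise.
% Illustrative numbers (not from the article): v_0 = 4\,\mathrm{m\,s^{-1}}
% and g = 9.8\,\mathrm{m\,s^{-2}} give h = 16/19.6 \approx 0.8\,\mathrm{m}.
```

This is why technique matters so much: for a fixed take-off velocity the centre of mass can only rise by $h$, so the remaining gains come from where the centre of mass starts and how closely it passes the bar.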
During a scissors-jump the body is upright, and its centre of mass is at a point considerably above the bar—around 30 centimetres. Lying back while airborne brings the centre of mass closer to the bar, so less energy is needed to lift the body and the jump is more efficient. This technique, the ‘Eastern cut-off’, was used by Michael Sweeney in 1895 to set a new world record of 1.97 metres.
High jumpers began refining their technique by flattening out their bodies and lowering the height of their centre of mass. The ‘straddle jump’ is a very elaborate procedure in which take-off is made from the inside leg. The outer leg leads the movement and swings over the bar. The athlete jumps with their chest, rather than their back, facing the bar throughout as they roll over it. The straddle dominated for about 40 years, with the Soviet jumper Valeriy Brumel using it to raise the world record to 2.28 metres in 1963.
It was four years later, however, that the technique now used across the world gained international attention. American athlete Dick Fosbury developed a technique that put the soft landing mattresses to good use. The ‘Fosbury flop’ enables athletes to jump over the bar while their centre of mass is actually underneath it, making optimal use of the energy used to explode into takeoff. Whilst previous techniques required the whole body to clear the bar at about the same time, modern high jumpers curl themselves over the top of the bar, with the majority of their body being below it at any one moment. The head, torso, hips and legs clear the bar in succession and jumpers finish on their backs—something which was not possible when athletes were required to land on sand.
Cuban Javier Sotomayor used the flop in 1993 to jump 2.45 metres—the current world record. Few other athletic events have exhibited such a significant change in technique over their history, allowing athletes to set new records and reach new heights.
In the stadium, at the laboratory bench and along the vast distance between, science is enabling athletes to achieve their Olympic dreams.
Ability and confidence are optimised through preparation made possible by an enhanced understanding of physiology and psychology. Performance comes that bit closer to perfection when the mechanics of sport are modelled and analysed, while modern advances in sports materials augment the ability of the athlete beyond that which was dreamt possible when the Olympic Games were revived over a hundred years ago. In these and numerous other ways, it is possible to see the rewards of scientific achievement reflected in the triumph of our Olympians.
This summer sees London host the Olympic Games for the third time, giving us the opportunity to witness sporting history first-hand and to celebrate some of the science that makes it all possible.
Ruth E Gilligan is a 4th year PhD student in the Department of Chemistry
Leila Haghighat is an MPhil student in the Department of Medicine
Maja Choma is a 3rd year PhD student at the Cambridge Institute for Medical Research
Anand Jagatia is a 3rd year undergraduate in the Department of Neuroscience
Vicki Moignard explores how approaches to contraception have evolved over time.
It’s impossible to know when we made the connection between sex and conception. It has been suggested that the domestication of animals during the 10,000 years since the last ice age alerted us to our own biology, while the Bible speaks of Eve’s pregnancy after lying with Adam, indicating that it was well established by the time of the Old Testament.
One thing seems certain: for as long as humans have known the secret of pregnancy we have tried to prevent it. While the cellular basis of conception, the sperm and egg, would not be understood until the Renaissance, history is rife with our attempts to prevent their union.
There is circumstantial evidence that population control occurred even in ancient times, with family sizes being lower than expected given the level of medicine at the time, but the Kahun Gynaecological Papyrus provides the first written proof. Dating to around 1800 BC, the Egyptian document is the oldest known medical text and describes several pessaries—barrier methods to prevent the entry of semen into the uterus. These concoctions frequently, and rather alarmingly, contained crocodile or elephant dung, but may actually have been quite effective by altering the pH of the vagina and killing sperm.
Although pessaries have been described by many cultures through the ages, they did not become commercially available until the 1880s, when London chemist Walter Rendell marketed a mixture of cocoa butter and quinine as a contraceptive. Although effective as an anti-malarial, quinine has since been shown to have no effect on pregnancy, and at high doses may actually cause renal damage in women. However, simply blocking the passage of sperm may have been sufficient to prevent many pregnancies, and early pessaries paved the way for modern spermicides and barrier methods, such as the diaphragm.
Despite these attempts to control conception, it may initially have been easier to terminate a pregnancy than to prevent it. In his Historia Naturalis, the Roman chronicler Pliny the Elder wrote of the herb Silphion, which grew along a narrow coastline in what is now Libya. Though described as a contraceptive, it was taken monthly and so more likely acted as an abortifacient, or drug that induces abortion. Other plants of the Silphion family have since been shown to be high in the female hormone oestrogen, which is an active ingredient in some modern emergency contraceptives. There are occasional reports of herbs still being used where abortions are prohibited or difficult to obtain, including in the USA, despite often being highly toxic. Many are now known to act by encouraging pelvic blood flow and promoting menstruation, or to contain oxytocin, the hormone that induces contractions and labour. Although abortion is a contentious issue today, Silphion was literally worth its weight in silver in Roman times, and is believed to have been harvested to extinction.
Herbal knowledge was lost in Medieval Europe, with suggestions that governments suppressed information in order to accelerate population growth in the wake of the plague epidemics. However, condoms were already in use by that time, though the first generation were intended to prevent the spread of syphilis, which had recently emerged as a sexually transmitted disease (STD), rather than pregnancy.
Things changed in 1677 with the discovery of sperm by the microscopist Antonie van Leeuwenhoek, and condoms became popular contraceptives. The infamous Venetian lover Giacomo Casanova discussed their use as birth control in his memoir, Histoire De Ma Vie. He favoured those made of lamb intestine over earlier linen varieties, but also made use of lemons as a barrier method for women, the acidity of which probably made them remarkably effective.
Early condoms were expensive and unreliable, but a breakthrough came with the development of vulcanisation, the cross-linking of rubber to make it elastic. Unfortunately these rubber condoms were still prohibitively expensive for the lower classes, and had to be kept and reused. However, the introduction of latex in the 1920s made manufacture easier and the quality better, and latex is still the main material used today. This centuries-old technology remains among the best methods for preventing STDs.
With an increase in sexual education and demand for birth control there also came a rise in opposition. Condoms were blamed for a decrease in the birth rate in Britain during the 17th century, which led to their condemnation by the Catholic Church and a witch-hunt against midwives who promoted birth control. In the USA, the Comstock Act of 1873 placed a complete ban on contraception, and many other countries followed suit.
However, by this time conception was much better understood, following the discovery of the human egg in 1827, and its fertilisation by sperm in 1843. Later came the realisation that one egg was released during each menstrual cycle, corresponding to a woman’s most fertile period. This formed the basis of the rhythm method, in which the sperm and egg are prevented from meeting by only allowing sex during the least fertile period of the menstrual cycle. Though an imperfect science, it remains the only form of contraception accepted by the Catholic Church.
Despite this new information, the rhythm method was not deemed effective enough for a woman named Margaret Sanger. A nurse by training, she had watched her mother die from the exhaustion of 18 pregnancies in the wake of the Comstock Act, and dreamt of a pill to prevent other women from suffering her mother’s fate. In the first half of the 20th century she founded, and was arrested for, the first birth control clinics in the US.
Sanger recruited scientist Gregory Pincus to work on her pill, and a Catholic physician, John Rock, who was working on hormonal treatments for infertility. Together they established that progesterone, a recently discovered female hormone, suppressed the release of an egg and, ironically for Rock, consequently prevented conception. As birth control was still illegal, their drug was marketed for ‘female disorders’ until the Food and Drug Administration approved it as a contraceptive, named Enovid, in 1960. Although initially limited by law to married women wishing to control family sizes, the Pill’s usage has since exploded. In the 2000–2010 period, World Health Organisation statistics indicated that it was used by 62 per cent of women in sexual relationships worldwide. In addition many alternative methods have arisen from the original hormonal research, including contraceptive injections and implants, and the morning after pill.
Now the focus has turned towards men, as researchers work on developing a male pill to block the hormone testosterone from stimulating sperm production. Chemicals have also been developed to physically block the release of sperm, while in work published this year, ultrasound was shown to reduce sperm counts in rats. Further research is required to establish the long-term safety and efficacy of these techniques, but new contraceptives are on the horizon.
Contraception is among mankind’s most important, but controversial, inventions. It has enabled us to manage population sizes and has been instrumental in reforming gender roles and promoting women’s rights. It has been blamed for the breakdown of marriage and societal values, and even now, people in some nations still have to fight for what the West sees as a basic human right. But controversy aside, we have found a way to overcome our most basic biological purpose, to reproduce, and for that scientific achievement alone it is worth recognition.
Vicki Moignard is a 1st year PhD student in the Department of Haematology
Beth Venus looks at how thought experiments have explained scientific phenomena.
It is a misconception that the poet is more of a dreamer than the scientist. Yet a huge range of crucial and inspired thought experiments—the exquisite dreams of scientists—have signposted scientific progress in almost every field. In particular, insights gleaned from mental laboratories have had world-changing consequences in physics and are helping to provide an understanding of our own minds.
Galileo, one of the first modern scientists, unveiled a number of pivotal thought experiments fundamental to classical physics. Prior to Galileo, it was argued that Earth must be stationary. According to proponents of this argument, if Earth rotated to the east, a ball dropped from a tower would land to the west. In reality, though, we never see this happen, so Earth must be stationary. Galileo countered the argument for a still Earth by considering a man below decks on a ship moving with uniform velocity. The man can pace around his compartment and be completely unaware of the movement of the ship. From this thought experiment came the principle of relativity, which states that uniform motion cannot be distinguished from rest. Galileo’s ship informs us that it is too hasty to conclude that Earth is stationary—it could be rotating uniformly and the ball would plummet to the base of the tower nonetheless.
In his most legendary thought experiment, Galileo dropped a heavy cannonball and a lighter musket ball from the Leaning Tower of Pisa. Galileo’s contemporaries believed that the heavier cannonball would fall faster. He refuted this by considering the case of a musket ball attached to a cannonball: based on accepted belief, since the combined mass of these two balls is greater than that of the cannonball alone, the compound object should fall to the ground faster than the cannonball alone. Yet, as the musket ball supposedly falls more slowly, attaching it should also slow the cannonball down. This implies that the compound object must fall faster than the cannonball yet also more slowly. The only way to avoid this contradiction is if both balls fall at the same speed. Through this thought experiment, Galileo revealed a stark truth about reality that is not obvious from our day-to-day experience—that is, the acceleration of bodies falling to Earth is the same regardless of whether they are heavy or light. Like any good thought experiment, Galileo’s contemplations brought about a re-conceptualisation of reality, allowing science to switch tracks and divert from serious misconceptions.
Enlightening thought experiments are not confined to classical mechanics. In the field of quantum mechanics, Albert Einstein, Boris Podolsky and Nathan Rosen (EPR) constructed a thought experiment that suggests reality is much stranger than our physical expectations. The three were unable to accept the intrinsic uncertainty in the states of elementary particles and attempted to present it as an absurdity. This thought experiment is known as the EPR paradox and is based on a property of particles called spin. The EPR paradox describes a particle with zero spin decaying into two particles with opposing spins. When the particles are produced they would not have a spin in one direction or the other, but both at the same time. It is not until one of the particles is measured that this ‘superposition’ collapses and both particles have definite and opposing spins. Say the particles move off in opposite directions and travel for light years before we measure their spins. Upon measurement, their spins become fixed. One is up and one is down. According to one interpretation of quantum mechanics, the particles must communicate instantaneously across space so that each ends up in the spin state opposite to the other. Believing that such instantaneous interactions are impossible, Einstein concluded that the particles must have possessed the spin states all along. Therefore, quantum mechanics cannot completely describe reality.
It was down to John Bell to prove Einstein wrong. He set up an inequality which, if violated in nature, would show that the ‘action at a distance’ Einstein rejected must in fact occur. Following this, Alain Aspect showed experimentally that Bell’s inequality is violated. In the end, what was intended as a thought experiment to question quantum theory paved the way for establishing a fundamental, unsettling fact about the nature of reality—non-local interactions between particles can and do happen.
Einstein was a master and great advocate of the art of thought experiments. At 16, he lighted upon his famed thought experiment of chasing a light beam. When the observer, in superhero style, reaches the speed of light, the beam would appear to be stationary. The young Einstein, however, realised that such a phenomenon is never seen. After a period of gestation, this insight gave rise to special relativity, which states that the speed of light is the same for all uniformly moving observers. If you pursue a light beam, it will still streak away from you at the same speed it would have if you were standing still relative to it. This demonstrates the way in which thought experiments can identify loose threads in our thinking and tug out the most apt models of our universe.
For the mind to comprehend itself would be as astounding as comprehending the fundamental nature of reality. In attempting to grapple scientifically with consciousness and whether mind is more than matter, philosophers have also proposed many thought experiments. One puzzling kind is where one experiment is accompanied with another that undermines it. Consider, for example, the thought experiment about colour vision dreamt up by the philosopher Frank Jackson. Mary, a neuroscientist, is locked away in her laboratory and can only see in black and white, yet she knows every physical fact about colour, including how the brain processes it. Upon escaping the lab for a walk one spring day with the flowers all in bloom, Mary sees colour for the first time. Jackson proposed that she would exclaim, “Now I know what it feels like to see colours!” She appears to have learnt facts about colours that are not physical, suggesting that an explanation for consciousness cannot be coaxed out by science. In response, philosopher Daniel Dennett proclaimed that Mary would remark, “Colour perception is exactly as I thought!” suggesting that science does explain how the mind works. Now we have a dilemma: what, in truth, would Mary say? Only real experimentation could show us whether consciousness is amenable to a physical, scientific explanation or not.
The subtle power of thought experiments to shed light upon tough problems has itself been the subject of research. It is believed that a scientist constructs a narrative mental model of a physical situation and applies logical reasoning to follow the situation through to its end point. As thought experiments are based on our experiences of the world, the inferred outcome is the one we expect based on the physical information defining the mental model. Subsequently, this outcome can be used to theorise and experiment in the real world, making thought experiments an incredible aid to comprehending the physical world. Thus, our quintessential image of the scientist should be of one catching up with a streak of light.
Beth Venus is a 1st year undergraduate studying Natural Sciences
Helen Gaffney explores the many-sided life of Cambridge scientist Joseph Needham.
In 1952 Joseph Needham, along with a team of five other internationally respected scientists, was commissioned by the World Peace Council to investigate the allegation that the US was using biological weapons in China and Korea. Whilst the majority of the Western world put it down to nothing more than Chinese whispers, the commission gathered evidence from doctors and local citizens as well as American prisoners of war. Its final report concluded that the American military were indeed experimenting with biological weapons, although the US continues to deny this. In peace time, as in war, the relationship between the Eastern and Western worlds came to consume Needham’s work and he is now widely regarded as the greatest sinologist the West has produced.
After a somewhat turbulent childhood, Needham secured a place to study Chemistry at the University of Cambridge. When he arrived at Gonville and Caius College in 1918 he intended to follow his father into the medical profession. However, under the guidance of Frederick Hopkins, he became ensnared by the chemistry of biological processes. The recruitment of Needham and other promising young scientists was part of Hopkins’s attempt to establish biochemistry as a field distinct from either medical physiology or organic chemistry. In 1924 the Dunn Institute of Biochemistry (now renamed after Hopkins) was opened, and Biochemistry became its own department with Hopkins at the helm. In the same year, Needham married fellow biochemist Dorothy Moyle, now acclaimed for her work on muscle contraction. Needham focussed on embryo development, searching for the chemical agents that enable a single cell to develop into a complex and differentiated organism. Needham was so interested in the field that he wrote a million word survey, entitled Chemical Embryology, detailing its historical and latest findings.
Cambridge’s scientific community nurtured much more than Needham’s academic development; it also provided him with political allies. Against a backdrop of mounting international tensions, the Cambridge Anti-War Council was set up. An adamant pacifist, Needham took the chair for its first public meeting in 1932, in the company of physical scientist and socialist JD Bernal and chemist Dorothy Hodgkin. The council organised a demonstration on Armistice Day in 1933 to protest against the militarisation of research and in favour of peace. The demonstrators soon found themselves the targets of projectile attack, after being ambushed by the Cambridge University Conservative Association. As they continued along their planned route towards the town war memorial, the remnants of eggs and rotten tomatoes seeped into their clothing. The Evening Standard reported the day’s events under the headline ‘Hooligans in Cambridge’, but the protesters were surprised to read that the headline was intended to describe them rather than their Conservative opponents.
In 1937, with a return to war looking ever more likely, the Cambridge Department of Biochemistry prepared to welcome three new visitors from abroad, the Chinese scientists Lu Gwei-djen, Wang Ying-lai and Chen Shi-zhang. Lu became Needham’s assistant, and he developed a strong attachment to her and her interest in the history of Chinese civilization. In 1942, Needham jumped at the chance to visit Lu’s homeland when he was tasked by the British Council to establish a Sino-British Scientific Cooperation Bureau in Chongqing; this aimed to facilitate the provision of scientific equipment and literature to universities and laboratories across Western China. Needham spent four years in China, practising the language that Lu had taught him, and was always keen to discuss the history of Chinese science.
On returning to Cambridge, Needham resolved to continue delving into China’s scientific past. He discovered evidence of historically neglected advancements; the Chinese were the first to have knowledge of magnetic polarity, the earliest to manufacture cast iron, and should have been credited with the discovery of gunpowder. Needham began work on an extensive record of his findings. The first volume of Science and Civilisation in China was published in 1954 and a further seventeen had been added by the time Needham died in 1995 at the age of 94. Work on the project continues today, based at the Needham Research Institute in Cambridge.
One issue troubled Needham particularly—why, after achieving so much in ancient times, did scientific advancement fizzle out in China while in Europe it began to accelerate, laying what we identify as the foundations of ‘modern science’? This has become known as Needham’s Grand Question.
In developing a grand answer, Needham emphasised the impact of Confucianism and Taoism on the pace of Chinese scientific discovery, and contrasted what he called the Chinese ‘diffusionist’ approach with the ‘inventive’ approach favoured in the West. Scientific progress did continue in China, but it simply could not keep up with the exponential growth of science sparked by the European Renaissance. However, it seems that the tables are on the verge of turning once more, as China is rapidly becoming a scientific powerhouse; while scientific activity in many nations stagnates, China’s share in scientific publishing has more than doubled over the past decade, now second only to the USA’s. The rich tradition of Chinese science highlighted by Needham looks set to continue.
Needham was a polymath academically, and his personal life reveals an equally multifaceted identity. He sustained a lifelong attachment to religion, attending church in the Essex town of Thaxted, where the revolutionary socialist priest Conrad Noel presided, though Needham converted late in life to Taoism. He was also a keen Morris dancer and joined the Cambridge Morris Men soon after arriving at University—he was reportedly a skilled performer, light on his feet and a renowned accordionist.
Known throughout China as Li Yuese, Needham was a truly remarkable person and his achievements are many and varied. He was Master of Gonville and Caius from 1966 to 1976, bestowed the title of Companion of Honour, and elected a fellow both to the Royal Society and the British Academy. However, his lasting legacy will be his work in the field of the history of science; Needham’s love of China started with an attempt to bring knowledge of Western science to the East, but ultimately inspired him to bring knowledge of Eastern science to the West.
Helen Gaffney is a 3rd year undergraduate in the Department of History and Philosophy of Science
Nicola Stead reveals what we have learnt from a decade of the human genome.
The decade since the publication of the human genome sequence has seen an explosion in the sequencing of genomes. Prior to its release in 2001, only 42 other genomes, mostly of low complexity, had been sequenced, and of these only four were non-bacterial. As of 2011 this number is over 60 times higher, with over 250 non-bacterial genomes now available, including those of the dog, mouse and chimpanzee.
The publicly funded Human Genome Project (HGP) took 10 years to complete, at a cost of $400 million. In 2000, US President Clinton announced that the publication of the human genome “will revolutionize the diagnosis, prevention, and treatment of most, if not all, human diseases.” A year prior to that Dr Francis Collins, who led the public effort, predicted that within a decade patients would be able to undertake prophylactic drug regimes based on predictive genetic tests. Understandably, such bold claims generated great anticipation and heralded the dawning of a new medical era.
Ten years on, there is a definite feeling of public disappointment; Matt Ridley, of the Wall Street Journal, wrote disparagingly that “genomics has always been sold as a medical story, yet it keeps underdelivering useful medical knowledge.” Collins has also conceded that the genome has not yet yielded as many clinical successes as predicted. A group of researchers in Switzerland even argue that in hindsight, the HGP could be described as an economic ‘social bubble’ where investment far outstrips any rational expectation of financial return.
With severe austerity measures in place globally and subsequent science funding cuts, we might ask if continued genome sequencing is of value—or are we merely embarking on more ‘bubbles’? 2011 saw the publication of both the wild strawberry and naked mole rat genomes, whilst the genomes of the woolly mammoth and South American opossum were published in 2008 and 2007 respectively. How do such obscure genomes benefit society or science?
Continued sequencing has driven revolutions in sequencing technology. When the HGP started in 1990, traditional techniques could read up to 25,000 DNA bases—the subunits that make up our DNA—in a week; at its conclusion this had increased to a dramatic 5 million bases. Current next-generation sequencing techniques, which were used to assemble the wild strawberry genome, can now sequence an astounding 250 billion bases per week with a simultaneous 100,000-fold decrease in cost. However, these developments have brought other hurdles with them. Assembling an unknown genome is like putting together a jigsaw puzzle without being able to see the picture on the box. If the HGP was a 100-piece puzzle then the new projects are 1000-piece puzzles—the pieces are more numerous and a lot smaller, making it harder to put them together. Naturally, assembly will become easier as more genomes are published and similar ‘reference genomes’ become available. The wild strawberry provides a basis for the genomes of commercially important crops of the same family, including peaches and cherries. It will also aid sequencing of commercial strawberries, which, like many crops, have been severely inbred and possess several copies of their genome—making sequence assembly even harder.
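The scale of that change is easy to check. The weekly throughput figures below are the ones quoted above; the fold changes are simple ratios:

```python
# Weekly sequencing throughput figures quoted in the text, in DNA bases.
hgp_start = 25_000            # traditional techniques, 1990
hgp_end = 5_000_000           # by the HGP's conclusion
next_gen = 250_000_000_000    # current next-generation platforms

print(hgp_end // hgp_start)   # 200-fold gain over the HGP's lifetime
print(next_gen // hgp_end)    # a further 50,000-fold from next-gen methods
print(next_gen // hgp_start)  # 10,000,000-fold overall since 1990
```

Note that the 100,000-fold figure in the text refers to cost per base, a separate measure from raw throughput.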
Despite many readily available plant genomes, crop breeding has not been able to take advantage of the genomic revolution. Traditionally, traits that improve crops are difficult to breed in from wild varieties, and it can take up to 10 years of crossing to gain desired traits and breed out unwanted ‘wild’ characteristics. Fortunately, this is set to change with a recent study published in Nature Biotechnology. To speed up the process, researchers made use of both the rice reference genome, published in 2005, and the new, quicker sequencing techniques. They took a natural rice strain and mutated it, causing random changes within the genome that altered the plant’s characteristics. They were able to find exactly where the mutations occurred by re-sequencing the new plants and comparing the results to the reference genome. From this they created the ‘MutMap’, which associates mutations with different plant features. This aids the tracking of characteristics during breeding and thus reduces breeding times to one year. It is already being used to breed more salt-resistant rice strains for growth in Japanese paddy fields affected by salt water from the 2011 tsunami.
Plant genomes are not alone in benefiting science and society. Methods for handling ancient DNA, developed while sequencing the prehistoric woolly mammoth, helped with the sequencing of the Neanderthal genome, which could give us an idea of what it is that makes us human. Other non-human genomes also proved useful in the two to three years following the human genome’s publication. At the time it was only 90 per cent complete and filling in the gaps was difficult; comparing sequences with other mammals, however, significantly helped complete it. Today, comparative genomics can suggest a gene’s function. For example, researchers at the Babraham Institute have compared genomes from marsupials, which have primitive placentas, with the human genome to identify factors important in the growth of a developing placenta. Moreover, as many mammals suffer from the same diseases as humans, having multiple genomes to compare will be invaluable. In this light, it is hoped that the naked mole rat will help us understand ageing.
Despite claims that the HGP ‘under-delivered’, it has still fuelled the discovery of more than 1,800 disease genes, and over 2,000 genetic tests are now available. A map similar to the ‘MutMap’, called the ‘HapMap’, has also been created; it charts common human genetic variation and is indispensable in identifying disease-causing genes. In the late 1980s, finding the gene and mutation associated with cystic fibrosis took many years and $50 million; nowadays, with the genome and ‘HapMap’, it could take mere months.
The first law of technology states that we invariably overestimate the short-term impacts of new technologies and underestimate their longer-term effects. This is undoubtedly true for genomics. The last decade has perhaps seen more genomes published than medical cures, but the technology and reference genomes gained through international collaboration will certainly yield huge social and academic benefits in the coming decades.
Nicola Stead is a 4th year PhD student at the Babraham Institute
Tim Middleton examines risk and uncertainty in policy-making.
“There are known knowns…there are known unknowns…but there are also unknown unknowns—there are things we do not know we don’t know.”
— Donald Rumsfeld
Donald Rumsfeld was talking about weapons of mass destruction, but his remarks are just as pertinent in other spheres of policy-making. In 2009, the swine flu pandemic killed at least 18,000 people; in 2010, the eruption of the Icelandic volcano Eyjafjallajökull severely disrupted air traffic in northwest Europe; and in 2011, the tsunami that followed the Japanese Tohoku earthquake killed tens of thousands and precipitated a nuclear crisis at the Fukushima power plant. Were these known unknowns or unknown unknowns? Should we have been able to predict these disasters? Or could we have been better prepared for the unpredictable?
Risk and uncertainty regularly crop up in the field of science and policy. Risk is the product of the likelihood of a certain event and the severity of its consequences should it occur. Last year, the Government Office for Science published the “Blackett Review of High Impact Low Probability Risks”. The review presents a number of ways in which such risks can be assessed and quantified. Unfortunately, though, it is not always possible to assess the relevant probabilities and consequences; what you’re left with is uncertainty. So what can we do in the face of such uncertainty?
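The definition can be made concrete with some toy arithmetic. The events and figures below are invented for illustration and are not drawn from the Blackett Review:

```python
# Risk = likelihood of an event x severity of its consequences.
# Both the events and the numbers here are purely hypothetical.
events = {
    "frequent, mild disruption": (0.5, 2),            # (annual probability, cost)
    "high-impact, low-probability event": (0.001, 5000),
}

for name, (probability, cost) in events.items():
    risk = probability * cost  # expected annual loss, in arbitrary cost units
    print(f"{name}: expected loss = {risk}")
```

On these made-up numbers the rare disaster carries the larger expected loss (5 units versus 1), which is exactly why high-impact, low-probability risks deserve attention despite their rarity.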
One proposed solution is the precautionary principle, namely that in the absence of scientific consensus, the burden of proof that an action is not harmful falls on those taking the action. The principle has proved increasingly popular and is enshrined in much of international law, but it remains a slippery concept. As many as 14 different definitions of the precautionary principle have been found in the legal literature; as a result, different parties interpret and apply the principle in different ways. A nagging problem also remains: the precautionary principle does not allow for the risk of doing nothing. For example, the side-effects of a vaccine may not be understood well enough to justify its use, but if it is not employed the disease remains a threat.
Another approach is to ‘ask the experts’. Again, though, this is not a perfect solution: if the science is inherently uncertain and there is little evidence to call on, then what use is a scientist’s gut feeling? Worse still, the public will be angry if the scientists prove to be wrong. Italian scientists are currently on trial, charged with manslaughter, for failing to communicate the risk before the 2009 L’Aquila earthquake. If scientists are asked to make pronouncements in cases where evidence is scant and the cost of getting it wrong is so high, then they are unlikely to be forthcoming.
A third possibility is to build so-called ‘resilient systems’. A resilient system can maintain operations despite suffering from unpredictable faults. The internet is a good example: computer scientists have been pretty adept at constructing a system that does not break too often and is readily fixed. But how does one go about developing resilience in natural systems—how can we protect against unpredictable outbreaks of disease?
The only real way to proceed is with humility, transparency and through open discussion; but admitting that you simply do not know is rarely politically simple. The way in which risk and uncertainty is perceived and communicated is therefore vital. Managing the public’s fears is, in many senses, as important as tackling the disaster in hand.
As Niels Bohr is alleged to have said, “Prediction is very difficult, especially about the future.” The rather frustrating challenge that remains for policy-makers is what to do about it.
Tim Middleton is a 4th year undergraduate in the Department of Earth Sciences
Hugo Schmidt talks to Pierre Dutrieux and Paul Holland about science at the South Pole.
Although the British Antarctic Survey (BAS) no longer has to worry about Nazi raiders, life there is still tough. From its inception as a World War II survey post to its present-day study of melting caused by global warming, the BAS has called for unusual researchers. Two such scientists—Pierre Dutrieux, an observational oceanographer, and Paul Holland, a computer modeller—spoke to BlueSci about the unique nature of their work.
Finding out what happens beneath five hundred metres of ice in pitch-black darkness is not easy. Dutrieux describes his work with Autosub 3, a submarine designed to travel beneath the ice. The little submarine observes everything from ice thickness to water type by sonar, and is largely autonomous. Not that it is infallible—the two men reminisce about a near disaster:
“It’s a bit like sending a robot to the moon.”
“It got stuck in a crevasse.”
“Sixty metres into the ice, away from any form of human life […] for two minutes, it was crawling along this wall.”
Such instances are not unusual. Many researchers ‘go south’ only to find that weather conditions make science impossible, and spend the whole trip in their tents.
Observation time is strictly limited, as the Antarctic is only accessible for a few months a year. Britain lacks icebreaker ships, and the pair’s object of study, the Amundsen Sea, is one of the most inaccessible regions of all—it takes two weeks on a ship just to get there. The journey is rough on the nerves.
“In general you have sixty people cooped up on the boat, including only something like twenty-two to thirty scientists, and the rest just make the boat run. Most crews I’ve been with have been able to forget their egos for two months and get on with the job. Though there are some people who find it very, very difficult to adjust,” Dutrieux notes.
Holland agrees, “It was really, really long hours, no days off, and a night shift every day. I found it really hard to be stuck in a small ship with everyone every day. You learn to have to hide your emotions, but you get to see some amazing things. Seals, icebergs…”
That the modeller has also been to the Antarctic is surprising. “There’s a strong feeling that the observations should not just be treated as ‘the truth’. The people here felt that I needed to go South to understand the difficulties involved in what they were doing every year. And to learn how far you could trust the measurements.”
By some accounts, these experiences are still mild compared with those of other BAS staff. Rothera Ice Station “supports SCUBA diving through the entire winter period”. The inhabitants of Sky-Blu base refer to it as a penal colony. The non-scientific staff at the bases stay for 6–18 months at a stretch. “One Antarctic winter and two Antarctic summers is the traditional [amount of] time before people become completely insane,” Dutrieux notes wryly.
So how does one get this ultimate ‘away from the bench’ experience? Ironically, it is through the most in-the-library and at-the-blackboard skills. “For the job that we do, the critical skills are maths and physics. This is something many people suffer from: people who have done degrees in meteorology and geography will often not get a job in preference to someone who has no experience [but] has a background in physics, because people take the view that physics is hard to teach people while oceanography is easy to teach people. So, study maths.”
Hugo Schmidt is a 4th year PhD student in the Department of Biochemistry
Scientists from Oregon State University have found that placing oestrogen capsules in male snakes makes them attractive to other males and even preferred over the smaller females. This gives an important insight into how the male snakes seek out a partner. Every spring, red-sided garter snakes emerge from limestone caves to form their unique ‘mating balls’, in which one female becomes swarmed by several males during mating. Oestrogen is important for producing the sex pheromones released into the air by females. By flickering their tongues to sense the pheromones, males can assess the species, sex, size, age and reproductive condition of the female, helping them choose their mate. Surprisingly, the oestrogen capsules were able to fool the snakes into believing they had found a suitable partner. This link between oestrogen and mating helps explain the phenomenon of ‘she-males’—males found to produce female sex pheromones after exposure to oestrogen-mimicking pollutants in the environment. Martha Stokes
Studying the consequences of weightlessness is no longer confined to experiments in space. Scientists from Nottingham University have successfully studied the effects of weightlessness on fruit flies without leaving their lab. The researchers created their own microgravity environment using an extremely powerful superconducting magnet. Fruit flies and other organisms are diamagnetic, which means they are repelled by magnetic fields. Normally this effect is too weak to be noticed, but inside the hollow core of the scientists’ magnet the magnetic field was just strong enough to balance out gravity, making the flies essentially weightless. The potential to use this ‘diamagnetic levitation’ for studying microgravity was first shown in 2000, when Dutch researchers levitated several small animals, including a live frog. The fruit fly study now shows that this technique effectively mimics conditions in space, as the flies’ responses inside the magnet corresponded perfectly with those of flies living on the International Space Station. Understanding the consequences of weightlessness is very important for enabling long-term space stays, but investigating it has been very expensive. The new method greatly reduces the research costs, so these levitating flies may well represent an important step towards deep space exploration. Emma Bornebroek
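The ‘balancing out’ can be written as a force balance: levitation occurs when the upward magnetic force per unit volume on a diamagnetic object matches its weight. In standard notation (not taken from the article itself):

```latex
% Diamagnetic levitation condition: magnetic force density balances gravity
\frac{|\chi|}{\mu_0}\, B \left|\frac{dB}{dz}\right| = \rho g
```

where $\chi$ is the object’s (small, negative) magnetic susceptibility, $\mu_0$ the permeability of free space, $B$ the magnetic field strength, $\rho$ the object’s density and $g$ the gravitational acceleration. For water-rich material the required field–gradient product is of order a thousand tesla-squared per metre, which is why such a powerful superconducting magnet is needed.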
That’s a Rap
Researchers from Purdue University have designed a medical implant for monitoring bladder and blood pressure. However, this is no ordinary device; instead of using batteries, it is powered by the acoustic energy of rap music. The mini device contains a small lever capable of converting vibrations into electrical power. While the music plays at the low frequencies usually found in rap, the lever vibrates and charges a capacitor. When the lever stops vibrating, the stored energy triggers a pressure sensor to take a reading, and the data is transmitted back to a receiver via a radio signal. If hip-hop is not your thing, slight tweaks to the lever’s length or thickness would allow it to respond to a range of musical genres. Previous devices suffered from the need for precise alignment between sensor and receiver, short transmission ranges and complicated circuitry. This novel device overcomes these challenges and has the bonus of being powered by your favourite tunes. Yvonne Collins