- Sloths and the Apparent Imperfections of Natural Selection
Jasmine Gunton

Natural selection is a concept often misinterpreted as the product of some divine destiny: a process that guarantees the best adaptations for survival in a given environment. Some animals, however, defy this perception and show that evolution often functions simply as a mechanism that lets a few lineages persist, in ways that can look almost random. Sloths are perceived as slow, non-threatening, and generally quite useless at survival. Compared with other organisms that inhabit the sloth's environment, this impression is not too far from the truth. This is not to say that sloths are unimportant - they are essential to the functioning of the ecosystems they inhabit, and are objectively adorable. Impractical animals such as the sloth are in fact my favourite group of all, as their populations seemingly should not be able to thrive, yet they have persisted for millions of years.

Image by Selina Bubendorfer from Unsplash

Evolutionary History
One thing should first be made clear when discussing the sloth's biology. There are two distinct genera of sloth: the two-toed sloths (Choloepus) and the three-toed sloths (Bradypus). This may seem a trivial distinction, but the two-toed and three-toed sloths are in fact likely less closely related to each other than humans are to chimpanzees [1, 2]. The two-toed sloth is more closely related to a family of extinct ground sloths (Scelidotheriidae), and is mainly nocturnal. Three-toed sloths are mostly diurnal, and are more closely related to the giant extinct Megatheriidae sloth family [3]. Additionally, the three-toed sloth has nine neck vertebrae, while the two-toed sloth has only six [4]. However, the fact that the two extant genera of sloth evolved to appear so similar to each other suggests that the sloth body plan is beneficial to survival in a tropical rainforest biome. For simplicity, I will focus most of the following discussion of sloth biology on the two-toed sloth.

Heat Regulation
Although naturally abundant, sloths are typically found only in the tropical rainforests of Central and South America [5]. This is because, although sloths are endotherms [6], they cannot afford to spend much metabolic energy on heat generation, so they require a relatively hot environment. Instead of regulating body temperature through mechanisms such as shivering, as other mammals do, the sloth relies largely on absorbing radiant heat from the sun. When the sloth gets too cold, it slows its metabolism and enters a state of torpor [7]. Torpor describes a metabolic state in which the sloth decreases its physiological activity and energy expenditure, a process similar to hibernation in other mammals. However, having a very slow metabolism comes with significant costs.

Photo by Alexander Schimmeck on Unsplash

Diet
The primary diet of the two-toed sloth consists of leaves, buds, and twigs. This plant material is quite low in nutrients - a diet the sloth can tolerate only because its low metabolism demands so little energy. In any case, the sloth could not catch any substantial animal prey, given its extremely slow movements. Additionally, the sloth is almost blind and has poor hearing, relying mostly on its senses of smell and touch [8]. Digestion is so slow, however, that a sloth can die of starvation because it is unable to extract sufficient nutrients from the food it has already eaten [9]. To me, an animal that can die from starvation even with a full stomach is one that should not be able to exist. Yet, the sloth perseveres.
Locomotion
So just how slow is the sloth? Sloths tend to move only when necessary and, on land, travel at speeds of around 4-4.5 metres per minute [10]. They cannot walk, and instead move by dragging themselves across the ground. In comparison, the herbivorous capybara, with which the sloth shares its habitat, can run at speeds of up to 35 miles per hour, or about 938 metres per minute [11]. However, the sloth does have one locomotor ability that gives it something of an advantage. Surprisingly, the sloth is a capable swimmer, reaching speeds of up to 13.5 metres per minute in the water [12]. This ability could be useful for escaping predators, but is useless for finding food, as the majority of the sloth's diet comes from arboreal browsing.

Behaviour
The sloth's swimming ability appears even more puzzling considering that sloths are often quite reluctant to leave their trees. In fact, in some cases, infant sloths will die from falls because the mother is unwilling to leave the canopy to retrieve her young [13]. One of the main reasons sloths avoid the ground is that there they are vulnerable to predators such as jaguars and ocelots [14]. Typically, the only time a sloth will leave its tree is once a week to defecate, which is apparently the most dangerous thing a sloth can do [15]. Afterwards, the sloth will (very slowly) bury its waste and then climb back up the tree. Sloths are thought to bury their waste beneath the tree they occupy as part of a complex symbiotic relationship with native moths. The moths lay their eggs in the sloth's faeces; the larvae mature there, and the adult moths fly up into the canopy, where they live in the sloth's fur and promote the growth of algae, which the sloth then feeds on [15].

Reproduction
Partly because of their reluctance to move, sloths live mostly solitary lives, meeting only to mate. Although able to reproduce once a year, a female sloth may take longer than a year to find a fertile male, despite their abundance in equatorial rainforests [16]. The sloth is an example of a K-selected species: it has a relatively long gestation period and produces only one offspring at a time [17]. This seems inefficient compared with producing multiple offspring, of which at least one would be likely to survive. However, with such a slow metabolism, the sloth could not provide sufficient care and nutrients for more than one baby. With such a low reproductive rate, it is difficult to imagine how sloths have survived for around 60 million years [18].

Conclusion
The contradictory ecology and tenacity of the sloth show us that natural selection can work in ways that are far from obvious, and that we still have a limited understanding of what allows a population to survive. Sloths just like to hang out for most of their lives, and it obviously works very well for them. Only two of the six species of sloth are categorised as threatened by the IUCN (mostly as a result of habitat loss), with the others classified as 'least concern' [19]. So who are we to judge their puzzling adaptive strategies?

References
[1] S. R. Pant, A. Goswami, and J. A. Finarelli, “Complex body size trends in the evolution of sloths (Xenarthra: Pilosa),” BMC Ecology and Evolution, vol. 14, p. 184, Sep. 2014. [Online]. Available: https://doi.org/10.1186/s12862-014-0184-1
[2] S. Kumar, A. Filipski, V. Swarna, A. Walker, and S. B. Hedges, “Placing confidence limits on the molecular age of the human-chimpanzee divergence,” PNAS, vol. 102, no. 52, pp. 18842-18847, Dec. 2005, doi: 10.1073/pnas.0509585102.
[3] S. Presslee et al., “Palaeoproteomics resolves sloth relationships,” Nature Ecol. Evol., vol. 3, pp. 1121-1130, Jun. 2019. [Online]. Available: https://doi.org/10.1038/s41559-019-0909-z
[4] J. E. Mendoza, M. Z. Peery, G. A. A. H. Heteren, and J. A. Nyakatura, “Homeotic transformations reflect departure from the mammalian ‘rule of seven’ cervical vertebrae in sloths: inferences on the Hox code and morphological modularity of the mammalian neck,” BMC Evolutionary Biology, vol. 18, p. 84, Jun. 2018. [Online]. Available: https://doi.org/10.1186/s12862-018-1202-5
[5] J. E. Mendoza, M. Z. Peery, G. A. Gutiérrez, G. Herrera, and J. N. Pauli, “Resource use by the two-toed sloth (Choloepus hoffmanni) and the three-toed sloth (Bradypus variegatus) differs in a shade-grown agro-ecosystem,” Journal of Tropical Ecology, vol. 31, no. 1, pp. 49-55, Oct. 2014, doi: 10.1017/S0266467414000583.
[6] B. K. McNab, “Energetics of arboreal folivores: physiological problems and ecological consequences of feeding on a ubiquitous food supply,” presented at The Ecology of Arboreal Folivores Symposium, May 1975. [Online]. Available: https://www.biodiversitylibrary.org/item/111333#page/165/mode/1up
[7] A. L. Gardner, “Sloth,” Britannica.com. https://www.britannica.com/animal/sloth (accessed Aug. 30, 2021).
[8] A. P. Sánchez-Chavez, “Diet of Hoffmann’s two-toed sloth (Choloepus hoffmanni) in Andean forest,” Mammalia, to be published. [Online]. Available: https://www.degruyter.com/document/doi/10.1515/mammalia-2021-0016/html
[9] D. Challinor, “Canopy of tropical rain forest and vertebrate leaf eaters: sloths (three-toed and two-toed), folivorous birds (hoatzin), green iguana,” Smithsonian. [Archived]. [Online]. Available: https://repository.si.edu/handle/10088/1818
[10] J. A. Nyakatura and E. Andrada, “A mechanical link model of two-toed sloths: no pendular mechanics during suspensory locomotion,” Acta Theriologica, vol. 58, pp. 83-93, Jan. 2013. [Online]. Available: https://doi.org/10.1007/s13364-012-0099-4
[11] D. Attenborough, “The Life of Mammals,” BBC One, Nov. 2002. [Documentary].
[12] M. Goffart, Function and Form in the Sloth. Oxford, New York, USA: Pergamon Press, 1971.
[13] C. A. Soares and R. S. Carneiro, “Social behavior between mothers × young of sloths Bradypus variegatus SCHINZ, 1825 (Xenarthra: Bradypodidae),” Brazilian Journal of Biology, vol. 62, no. 2, May 2002. [Online]. Available: https://www.scielo.br/j/bjb/a/7jYdyhtVSTtFsyBhh5GZ5Zt/?lang=en
[14] R. S. Moreno, R. W. Kays, and R. S. Samudio, “Competitive Release in Diets of Ocelot (Leopardus pardalis) and Puma (Puma concolor) after Jaguar (Panthera onca) Decline,” Journal of Mammalogy, vol. 87, no. 4, pp. 808-816, Aug. 2006. [Online]. Available: https://doi.org/10.1644/05-MAMM-A-360R2.1
[15] J. N. Pauli, J. E. Mendoza, S. A. Steffan, C. C. Carey, P. J. Weimer, and M. Z. Peery, “A syndrome of mutualism reinforces the lifestyle of a sloth,” Proc. R. Soc. B, vol. 281, no. 1778, Mar. 2014. [Online]. Available: https://royalsocietypublishing.org/doi/10.1098/rspb.2013.3006
[16] J. N. Pauli, J. E. Mendoza, S. A. Steffan, C. C. Carey, P. J. Weimer, and M. Z. Peery, “A syndrome of mutualism reinforces the lifestyle of a sloth.” [Online]. Available: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0051389
[17] M. F. Garcés-Restrepo, M. Z. Peery, B. Reid, and J. N. Pauli, “Individual reproductive strategies shape the mating system of tree sloths,” Journal of Mammalogy, vol. 98, no. 5, pp. 1417-1425, Oct. 2017. [Online]. Available: https://academic.oup.com/jmammal/article/98/5/1417/4091355?login=true
[18] M. A. O’Leary et al., “The Placental Mammal Ancestor and the Post–K-Pg Radiation of Placentals,” Science, vol. 339, no. 6120, pp. 662-667, Feb. 2013, doi: 10.1126/science.1229237.
[19] M. Superina, T. Plese, N. Moraes-Barros, and A. M. Abba, “The 2010 Sloth Red List Assessment,” Edentata, vol. 11, no. 2, pp. 115-134, Dec. 2010. [Online]. Available: https://doi.org/10.5537/020.011.0202
- Modern Random Number Generation
Liam Quinn

Image by Robert Stump on Unsplash

Introduction
Random number generators existed long before computers. Civilizations throughout history have needed to produce random selections for a variety of applications. The Romans had a name for the process of flipping a coin to decide between two outcomes: 'navia aut caput', or "boat or heads" [1]. Dice are older still, with examples as much as 5,000 years old found in the Middle East. Random number generation has since advanced well beyond the rolling of dice or the shuffling of cards, and it is a more critical part of our society than ever before. This article will describe random numbers, how to generate them, and how we can test whether sequences are genuinely random.

What are Random Numbers, and Why are they Important?
Random number generation describes a system that produces a sequence of numbers for which the next value cannot be predicted more reliably than by guesswork. The output sequence may appear to contain patterns in hindsight, but it must not contain patterns that make any future value more or less likely. Observing heads on a coin flip 10 times in a row, for example, does not mean that the next toss is any more likely to be heads. A good random number generator produces values that are entirely independent of all previous numbers. Rolling a die would be a poor generator, as minor imperfections and the initial position of the die in your hand affect the final value - and rolling dice is also far too slow for practical use.

The need for random numbers arises in many applications today. One application is the simulation of physical models, such as in condensed matter or solid-state physics [2, 3]. Monte Carlo simulation methods for studying lattices and fluid mechanics are one example of this [4]. Possibly the most widespread use is in the field of cryptography, which underpins security in modern communications [3, 5, 6]. Other areas of interest include banking, gambling, and statistical sampling [7, 8]. Cryptography requires a secure random number generator (RNG) for cryptographic keys and communications, and the use of imperfect RNGs can have severe consequences. Any degree of predictability in a random number generator can act as a backdoor that an attacker can use to break the encryption [9]. For example, the blockchain-based platform Poly Network was attacked in early August of this year, with over $600 million in various cryptocurrencies stolen - highlighting the need for secure network processes.

Random numbers generated using computer algorithms, as shown above, can never be regarded as truly random. Image by Markus Spiske on Unsplash.

'True' Random Numbers
There are two main methods for generating chains of random numbers. The first uses a physical phenomenon that is known to be random. The most common example of these hardware random number generators uses thermal noise [10]. By measuring random fluctuations in temperature, we can generate a string of numbers that is, in theory, random. The problem with these methods is that the measurement devices themselves often have asymmetries and biases that compromise the randomness [11].

Random number generators based on quantum effects can, to some extent, bypass these limitations and are a leading technique for RNGs [12, 13, 14, 15, 16]. In quantum mechanics, a system can be prepared in a superposition of two states - like Schrödinger's cat being both alive and dead at the same time. According to Born's rule, the measurement outcome of such a prepared state is intrinsically random. Therefore, in theory, quantum measurements can be used to generate genuinely random numbers [18, 19].

The second and more common method uses computer algorithms to generate long strings of numbers that appear to be random. This type of generator is called a pseudo-random number generator (PRNG) and is what your calculator or computer uses to simulate randomness [5, 17]. A much shorter series of numbers, known as a key, is fed into the generator and determines the values of the entire output chain. The problem with this method is that it is deterministic: the whole sequence can be reconstructed if the initial key is known.
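To make this determinism concrete, here is a minimal sketch in Python (the language's built-in generator is used purely for illustration; it is not a cryptographic PRNG, and the key value below is arbitrary): two generators seeded with the same key produce exactly the same stream.

```python
import random

key = 42                      # the "key" (seed) that determines the entire sequence
gen_a = random.Random(key)    # two separate generator objects...
gen_b = random.Random(key)    # ...initialised with the same key

print([gen_a.randint(0, 9) for _ in range(10)])
print([gen_b.randint(0, 9) for _ in range(10)])   # identical output: anyone who knows the
                                                  # key can reconstruct the whole stream
```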
PRNGs cannot be regarded as 'true' random number generators. However, they are sufficient for most applications. For example, computer games use pseudo-random number generators, as they are fast and easy to implement. More sophisticated PRNGs are built into programming languages such as Python, and these generators are often well suited to numerical modelling. Due to the innate predictability of PRNG data streams, they are potentially vulnerable to cryptanalysis - the process of attempting to breach cryptographic security systems even without the initial key. For this reason, hardware random number generators can be a powerful tool, as the ideal system will be genuinely random. This lack of predictability is the main reason hardware random number generation remains such a large area of research, even though PRNGs are faster and simpler to implement.

Testing Randomness
The big question now is this: if we have a stream of supposedly random numbers, how can we test whether they really are random? The answer is to perform a series of statistical tests that tell us how unusual the observed result is. For the following discussion, we will consider a stream of numbers that take only the values '1' or '0'. Each of these values is, in theory, independent of the others, and future outputs are unpredictable. The NIST Statistical Test Suite for Random Number Generation lays out in great detail 15 tests that can determine whether a stream of numbers appears to be random [17]. Each test returns the probability that the observed result could arise under the assumption that the data are random (the null hypothesis). This probability is known as a p-value. A low p-value implies that it is unlikely that the source of the data is truly random.

The lowest-hanging fruit among tests of randomness is the monobit test, which simply compares the total number of '1's and '0's observed. If you were tossing a coin to make decisions and landed on heads 100 times and tails 20, you would probably want to get a new coin. We choose a threshold of 1/100, which means any event less likely than 1 in 100 is considered to provide evidence against randomness. To put numbers on this, the bit chain 1011010101 has a p-value of 0.527 under this test, so there is no evidence that the chain is non-random. On the other hand, the chain 1111101111 has a p-value of roughly 0.011: a chain this lopsided would arise only around once in every 90 trials from a genuinely random source - right at our 1-in-100 threshold - so we would begin to suspect that the source is not random.
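As a minimal sketch of that calculation (my own illustrative implementation of the frequency formula described in the NIST suite [17], not code taken from it):

```python
import math

def monobit_p_value(bits: str) -> float:
    """Frequency (monobit) test: p-value for a string of '0' and '1' characters."""
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)   # +1 for each '1', -1 for each '0'
    return math.erfc(abs(s) / math.sqrt(2 * n))    # a large imbalance gives a small p-value

print(monobit_p_value("1011010101"))   # ~0.53: no evidence against randomness
print(monobit_p_value("1111101111"))   # ~0.011: a suspicious excess of ones
```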
The monobit test is the most straightforward test of randomness, and there are many others in use. They measure everything from the expected runs (the number of times the same value occurs in a row) to whether specific patterns come up more or less often than expected. To truly test the randomness of an RNG source, however, it is best to use millions of data points so that even very weak patterns can be spotted. If '0' appears one extra time in every 1000 numbers, it will generally take at least several thousand tested numbers to uncover this weak source of non-randomness.

Using Polarised Laser Light to Generate Random Numbers
There are many exciting future technologies for hardware random number generation. Methods that rely on quantum effects as their source of randomness remain the most exciting area of research due to the 'purity' of the quantum process - a purity that is, in practice, limited by the quality of the experimental setup. One potential source of quantum randomness is polarised light, which is the basis of the work I have completed with my colleague Gang Xu under the supervision of Miro Erkintalo and Stephane Coen: we generate millions of random numbers every second using pulses of laser light in a resonator.

The figure above shows the results from several hundred numerical simulations, highlighting how, once we cross a threshold, two possible states emerge for the laser light to exist in. These states are chosen at random.

By pumping light into a loop of optical fibre, we can achieve high intensities inside the cavity. At these high powers, nonlinear effects start to come into play, whereby the effect of the fibre on the light depends on the intensity of the light itself. These nonlinear effects are what make the random number generation possible [20]. The crucial part of our specific design is that light within the cavity can exist in one of two states, called polarisation modes. At low enough pump strengths, these two polarisation modes have identical intensities and are perfectly balanced. However, as we slowly increase the power in the system, the two polarisation modes can no longer have the same intensity, and the system undergoes a bifurcation [21, 22]. One of the polarisations must then take on a high intensity while the other takes a low intensity. Because the two modes are initially identical, which one ends up high and which ends up low is entirely random - and this is what generates our random numbers. We can randomly generate a 0 (low intensity) or a 1 (high intensity) by measuring just one polarisation mode as the bifurcation occurs, and we can generate a new set of random bits by resetting the system and increasing the power again. As light moves at literally a billion kilometres an hour, we can repeat this process very rapidly, allowing us to generate several million bits per second. One nice aspect of this setup is that the dynamics of our system have a two-round-trip periodicity.
What this means in practical terms is that any imperfections in our system get averaged out. If the x-polarisation is favoured in one round trip of light, in the next the y-polarisation will be favoured instead. Therefore, asymmetries or biases are automatically averaged out in our setup, and the random selection of a high-intensity and a low-intensity polarisation mode is genuinely random.

Figure 2: Experimental data obtained by measuring the intensity of one polarisation mode. Each spike represents a single light pulse in a high- or low-intensity state.

Conclusion
Random number generation is an integral part of our society, whether we are aware of it or not. This process is essential to global security, and the need for secure random number generation will only increase with the developments in modern computing and cryptanalysis. Modern hardware random number generators will continue to have great importance in security and numerical modelling as sources of 'true' randomness in years to come.

References
[1] Pierre L'Ecuyer, “History of uniform random number generation,” in 2017 Winter Simulation Conference (WSC), Las Vegas, NV, December 2017, pp. 202-230. IEEE.
[2] Alan M. Ferrenberg, Jiahao Xu, and David P. Landau, “Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model,” Physical Review E, 97(4):043301, 2018.
[3] Toni Stojanovski and Ljupco Kocarev, “Chaos-based random number generators - part I: Analysis [cryptography],” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 48(3):281-288, 2001.
[4] A. Baumgartner et al., “The Monte Carlo method in condensed matter physics,” (71), 2012.
[5] Melissa O'Neill, “PCG: A Family of Simple Fast Space-Efficient Statistically Good Algorithms for Random Number Generation,” p. 58.
[6] Gwangmin Kim, Jae Hyun In, Young Seok Kim, Hakseung Rhee, Woojoon Park, Hanchan Song, Juseong Park, and Kyung Min Kim, “Self-clocking fast and variation tolerant true random number generator based on a stochastic Mott memristor,” Nature Communications, 12(1):2906, May 2021.
[7] Adam Levinthal and Michael Barnett, “The Silicon Gaming Odyssey Slot Machine,” in Proceedings IEEE COMPCON 97. Digest of Papers, pages 296-301. IEEE, 1997.
[8] Ralf Korn, Elke Korn, and Gerald Kroisandt, Monte Carlo Methods and Models in Finance and Insurance. CRC Press, 2010.
[9] Matthew Green, “The Many Flaws of Dual EC DRBG,” September 2013.
[10] Cheng Wu, Bing Bai, Yang Liu, Xiaoming Zhang, Meng Yang, Yuan Cao, Jianfeng Wang, Shaohua Zhang, Hongyan Zhou, Xiheng Shi, Xiongfeng Ma, Ji-Gang Ren, Jun Zhang, Cheng-Zhi Peng, Jingyun Fan, Qiang Zhang, and Jian-Wei Pan, “Random Number Generation with Cosmic Photons,” Physical Review Letters, 118(14):140402, April 2017.
[11] Pierre L'Ecuyer, “Efficient and portable combined random number generators,” Communications of the ACM, 31(6):742-751, 1988.
[12] Christian Gabriel, Christofer Wittmann, Denis Sych, Ruifang Dong, Wolfgang Mauerer, Ulrik L. Andersen, Christoph Marquardt, and Gerd Leuchs, “A generator for unique quantum random numbers based on vacuum states,” Nature Photonics, 4(10):711-715, October 2010.
[13] Tommaso Lunghi, Jonatan Bohr Brask, Charles Ci Wen Lim, Quentin Lavigne, Joseph Bowles, Anthony Martin, Hugo Zbinden, and Nicolas Brunner, “Self-Testing Quantum Random Number Generator,” Physical Review Letters, 114(15):150501, April 2015.
[14] Yoshitomo Okawachi, Mengjie Yu, Kevin Luke, Daniel O. Carvalho, Michal Lipson, and Alexander L. Gaeta, “Quantum random number generator using a microresonator-based Kerr oscillator,” Optics Letters, 41(18):4194, September 2016.
[15] Bruno Sanguinetti, Anthony Martin, Hugo Zbinden, and Nicolas Gisin, “Quantum Random Number Generation on a Mobile Phone,” Physical Review X, 4(3):031056, September 2014.
[16] Yanbao Zhang, Hsin-Pin Lo, Alan Mink, Takuya Ikuta, Toshimori Honjo, Hiroki Takesue, and William J. Munro, “A simple low-latency real-time certifiable quantum random number generator,” Nature Communications, 12(1):1056, December 2021.
[17] Andrew Rukhin, Juan Soto, James Nechvatal, Elaine Barker, Stefan Leigh, Mark Levenson, David Banks, Alan Heckert, and James Dray, “A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications,” p. 131.
[18] J. E. Jacak, W. A. Jacak, W. A. Donderowicz, and L. Jacak, “Quantum random number generators with entanglement for public randomness testing,” Scientific Reports, vol. 10, no. 1, p. 164, Jan. 2020.
[19] X. Ma, X. Yuan, Z. Cao, B. Qi, and Z. Zhang, “Quantum random number generation,” npj Quantum Information, vol. 2, no. 1, p. 19, Jun. 2016.
[20] S. Coen and M. Erkintalo, “Temporal Cavity Solitons in Kerr Media,” in Nonlinear Optical Cavity Dynamics, P. Grelu, Ed. Weinheim, Germany: Wiley-VCH Verlag GmbH & Co. KGaA, Dec. 2015, pp. 11-40.
[21] J. Fatome, G. Xu, B. Garbin, N. Berti, G.-L. Oppo, S. G. Murdoch, M. Erkintalo, and S. Coen, “Self-symmetrization of symmetry-breaking dynamics in passive Kerr resonators,” p. 13.
[22] B. Garbin, J. Fatome, G.-L. Oppo, M. Erkintalo, S. G. Murdoch, and S. Coen, “Asymmetric balance in symmetry breaking,” Physical Review Research, vol. 2, no. 2, p. 023244, May 2020.
- Mysteriously Massive Muon Magnetic Moment Might Mean Missing Maths
Kevin Stitely

Image from Wallpaperbetter

In 2013, a huge effort was undertaken to transport a giant, 15-metre-wide superconducting electromagnet over 5,000 kilometres from Brookhaven National Laboratory in New York to Fermilab National Accelerator Laboratory in Illinois, in the United States. The trip took 35 days of painstaking care, as bending the extremely sensitive equipment by even a few degrees would have caused irreparable damage. The magnet was being moved in preparation for a series of experiments aimed at probing a fundamental particle called the muon, the electron's more massive cousin. The magnet is used to maintain an extremely uniform magnetic field inside which a beam of muons travels in a circle at nearly the speed of light. The experiments aim to measure a tiny wiggling motion that the particles undergo when subjected to a magnetic field, which might tell us something about the fundamental structure of matter.

Fig. 1. The Muon g-2 storage-ring superconducting electromagnet arriving at Fermilab National Accelerator Laboratory. Photo from Fermilab Creative Services.

According to the theory of quantum electrodynamics (QED), particles with a peculiar property called "spin" behave as tiny, tiny bar magnets, with a strength described by their so-called magnetic moment, which is often quantified in terms of their "g-factor." Particles that possess spin undergo a sort of spinning-top-like motion, called precession, when placed in a magnetic field, and the magnetic moment of the particle dictates how fast it precesses. The prediction and subsequent measurement of the g-factor of the electron became one of the first precision tests of the theory of QED. The first theoretical calculation was performed by Paul Dirac nearly a century ago, yielding a value of exactly two [1].

Figure 2. Schematic of the Muon g-2 experiments.

The calculation depends on how electrons interact with photons, the quantum particles of light. Dirac used the simplest (relatively speaking) possible interaction, but as quantum theory was further developed it became clear to physicists that there was more to the story. As well as the particles already being considered, namely an electron and a photon, there are also "virtual" particles that can, briefly, appear out of the fabric of space itself to provide additional interaction pathways. Interactions with the virtual particles of the so-called vacuum field cause the g-factor of particles to be ever so slightly larger than two. The part of the g-factor that results from these virtual particles is then g-2, which quantifies the extent of the particle's interaction with the vacuum field. The corresponding modification to the magnetic moment is called the anomalous magnetic moment. The first theoretical calculation of the anomalous magnetic moment of the electron was carried out by Julian Schwinger in 1948 [2]; its modern refinement stands as a contender for the most accurate prediction ever made in all of science, with over ten significant figures of agreement with experimental results. Schwinger's result is engraved on his tombstone.

The story is similar for the muon. The bare calculation without the interactions of virtual particles again yields g = 2, and the inclusion of virtual particle effects slightly increases the value. However, the muon is much more massive than the electron, and therefore interacts with virtual particles in the vacuum more strongly.
This is the key reason behind the interest in the magnetic moment of the muon specifically. Whilst the virtual particles that the electron interacts with are primarily photons, the muon also interacts appreciably with the particles that carry the weak nuclear force: neutrinos and the W and Z bosons, the particles responsible for radioactive beta decay in atomic nuclei. Interactions of the muon with virtual W and Z bosons cause a further increase in the anomalous magnetic moment, and make the theoretical calculation of the muon's g-factor much trickier.

Figure 3. Example Feynman diagrams of muon interactions via QED (a), weak interactions (b) and (c), and interaction with virtual hadrons (d).

Illustrations of some of the most basic types of particle interactions that the muon can experience are shown in Fig. 3. The diagrams, called Feynman diagrams, show particles as incoming lines, with interactions occurring where lines meet. The simplest virtual-particle interaction in QED is shown in Fig. 3(a). Here a muon μ and an antimuon μ̄ (the muon's antiparticle) exchange a virtual photon γ before colliding and annihilating, creating another photon γ. These are the types of interactions that cause the anomalous magnetic moment of the electron. The muon, on the other hand, is also more strongly affected by interactions with the particles of the weak nuclear force, the W and Z bosons; two of these possible interaction pathways are shown in Figs. 3(b) and 3(c). As well as interactions mediated by the weak nuclear force, there are also effects brought about by another fundamental force - the strong nuclear force. This force is associated with composite particles called hadrons, such as protons and neutrons. As shown in Fig. 3(d), the strong nuclear force contributes to the anomalous magnetic moment of the muon via the creation of virtual hadrons.

The first results of the experiments were published this April [3]. The measured muon g-factor, along with the currently accepted theoretical value [4], are:

g (theory) = 2.00233183620 ± 0.00000000086
g (experiment) = 2.00233184121 ± 0.00000000082

While the theoretical prediction and the experimental result may seem extremely close, the experiments are so incredibly precise that the observed difference is significant. The important question that scientists are asking now is: is it significant enough? The difference observed here is 4.2 standard deviations, meaning that the probability of observing such an extreme result by chance is roughly 1 in 40,000. However, the currently accepted standard in particle physics to mark the discovery of new physics is a difference of 5 standard deviations, which corresponds to a probability of about 1 in 2 million. The currently observed discrepancy between theory and experiment, which is in agreement with previously obtained experimental values from Brookhaven National Laboratory [5], is very exciting for physicists hoping that this could be an early indication of physics beyond the Standard Model. The Standard Model of particle physics is our current de facto understanding of the quantum world. It includes the quantum theory of electromagnetism (QED), the weak nuclear force, and the strong nuclear force.
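As a rough cross-check of the figures quoted above - a minimal sketch that simply treats the two uncertainties as independent and Gaussian, which is a simplification of how the collaborations combine their errors:

```python
import math

# g-factor values as quoted above
g_th, sig_th = 2.00233183620, 0.00000000086    # theory
g_ex, sig_ex = 2.00233184121, 0.00000000082    # experiment

sigma = math.hypot(sig_th, sig_ex)             # combined uncertainty
z = (g_ex - g_th) / sigma                      # tension in standard deviations (~4.2)
p = math.erfc(z / math.sqrt(2))                # two-sided Gaussian tail probability

print(f"{z:.1f} sigma, p = {p:.1e}, about 1 in {1 / p:,.0f}")
```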
The hope is that the theoretical calculation, which includes all known interactions given by the Standard Model, is missing either some interactions between particles already known to exist, or contributions from particles as yet unknown. Either case would give rise to a wealth of new physics and would yield new insight into the fundamental constituents of matter.

There is, however, another camp of theorists who contend that the experimental results of Brookhaven and Fermilab can indeed be accounted for by the Standard Model. The issue here comes down primarily to the muon's interactions with virtual hadrons, as illustrated in Fig. 3(d). In any theoretical calculation of the interactions of fundamental particles with the vacuum field, not every interaction pathway can be accounted for - there are infinitely many of them. Instead, only the most important types of interactions are considered. This approach, called perturbation theory, works excellently in the case of the electron because the more complicated the interaction pathway, the smaller its contribution. As a result, only finitely many pathways need to be calculated, and the rest can be discarded because the role they play is too small to be detectable anyway. The situation is more complicated for the muon, however, because the strong nuclear force, which governs how hadrons interact, cannot be treated perturbatively. Instead, theorists resort to data-driven approaches that use results gathered from previous experiments to estimate the contributions of hadronic interactions [4]. This is the main source of uncertainty in theoretical predictions of the muon magnetic moment.

In a paper published the same day the Fermilab experimental results were unveiled [6], a team of theorists known as the BMW collaboration (so-called because of the cities most of the physicists are from - Budapest, Marseille, and Wuppertal) revealed a new theoretical calculation of the anomalous muon magnetic moment based on the Standard Model using lattice quantum field theory. Usually, quantum field theories treat spacetime as a continuum that is infinitely divisible - at least down to wherever our current theories fail. Lattice quantum field theory instead treats spacetime as an extremely fine mesh of grid points, with particles allowed to exist only on the points, not the spaces in between. This allows forces such as the strong nuclear force to be simulated on a (very large) computer in a brute-force fashion. Using this procedure, the BMW team calculated a value for the muon magnetic moment that appears to be much closer to the experimental results than the currently accepted theoretical prediction based on the data-driven approach. This suggests that the currently observed discrepancy between theory and experiment could be reduced with Standard Model physics alone.

In conclusion, the results as they currently stand are inconclusive. The recently unveiled measurements of the muon magnetic moment at Fermilab align remarkably well with the previously established results from the experiments at Brookhaven. The results indicate a discrepancy between the experiments and the theoretical predictions offered by the Standard Model, but not to an extent that can definitively be called statistically significant by the standard set in particle physics. Nonetheless, there are high hopes that there is physics outside the Standard Model to be found in the muon.
The situation is further muddled by suggestions that with the different computational methods offered by lattice quantum field theory, the current experimental observations can be adequately explained by the Standard Model. As of right now, the scientific community is yet to reach a consensus until theoretical and experimental techniques are honed over the coming decades. In either case, the stage is set for a wealth of new physics to be discovered as a result of the curiously large muon magnetic moment. References [1] P. A. M. Dirac, “The quantum theory of the electron. Part II,” Proc. R. Soc. A, vol. 118, no. 351, 1928. [2] J. S. Schwinger, “On quantum electrodynamics and the magnetic moment of the electron,” Phys. Rev., vol. 73, no. 416, 1948. [3] Muon g-2 Collaboration, “Measurement of the Positive Muon Anomalous Magnetic Moment to 0.46 ppm,” Phys. Rev. Lett., vol. 126, no. 141801, 2021. [4] T. Aoyama et al., “The anomalous magnetic moment of the muon in the Standard Model,” J. Phys. Rep., vol. 887, pp. 1-166, 2020. [5] Muon g-2 Collaboration, “Final report of the E821 muon anomalous magnetic moment measurement at BNL,” Phys. Rev. D, vol. 73, no. 072003, 2006. [6] S. Borsanyi, et al., “Leading hadronic contribution to the muon magnetic moment from lattice QCD,” Nature, vol. 593, pp. 51-55, 2021.
- Janus Nanoparticles and Self Assembly
Caleb Todd Scientific and technological progress is not unrestrained. There are bottlenecks that slow or even prevent progress in certain fields of research, like funding or the number of researchers working in the field. One constraint which I think we are less aware of is the materials we have available. Edna Mode in The Incredibles makes super suits that allow the protagonists to utilise their powers without inhibition. Elastigirl’s can stretch, Violet’s turns invisible, and so on. In the same way, real-world scientists couldn’t realise their ideas without suitable materials to work with. An example close to all of our hearts is the internet, which uses laser light travelling through fibre optic cables to transmit information. Fibre-optic cables are glass, but you can’t just heat up sand and expect to get something usable. Impurities have to be removed, new impurities have to be added (called dopants), and stretching and layering to give cables the desired internal structure are all essential to keeping the light going where you want it to with as little loss as possible [1]. Lasers are even more remarkable. Though their principle of operation was theorised by Einstein in 1916, the first laser was only operational in 1960 [2]. Getting a laser with a specific colour requires tuning properties at an atomic level. The internet is completely dependent on the production of materials with the exact properties we need, and it is not the only such example. Some materials we have developed seem nearly miraculous in their utility, and more are being invented all the time. Living at the intersection between physics, chemistry, and engineering, materials scientists generate these new and interesting materials and study their properties. However, some material properties simply cannot be achieved with our current understanding. For example, one of the biggest roadblocks on our route to quantum computing is the absence of feasible room temperature superconductors. Superconductors are materials that can carry electricity without energy loss [3], and they are central to many quantum computer architectures [4]. However, materials generally only exhibit superconductivity at very low temperatures which are expensive to create and maintain. Those that don’t will require incredibly large pressures that are equally impractical. There are many other examples of the material properties we can’t access, limiting our technology, which is why materials science is so pivotal to scientific advancement. An exciting paradigm in material development is so-called nanoparticle self-assembly. A nanoparticle is an object on the scale of one billionth of a metre — made of only tens of atoms. If you construct nanoparticles with the right properties and under the right conditions, their interactions with each other can cause them to spontaneously order themselves [5]. Two might join to become a pair, then the pairs will join together, and so on, forming larger and larger structures. A ubiquitous example would be lipid molecules with one hydrophobic (water-repelling) end and one hydrophilic (water-attracting) end. When many lipids are immersed in water, they will clump together to hide the hydrophobic ends from the surrounding fluid [6]. The exact configuration depends sensitively on the specifics of the inter-particle interactions and the conditions to which they are exposed. Figure 1: Different patterns of formation into which lipids can spontaneously develop. Image taken from [6]. 
If the nanoparticles are designed well the self-assembly process can result in macroscopic materials. An attractive part of this approach to materials science is sustainability. Since the materials are built up through a spontaneous process, they are easily recyclable. Should one product come to the end of its use, its material can be broken down and sent through the self-assembly process again to build an entirely new product. Even better, the properties of these materials (like stiffness or density) can be customised by tweaking the nanoscale structures. In this way, nanoparticle self-assembly offers an exciting route towards developing materials with new and desirable properties, unlocking new regimes for technology and scientific experimentation. The question, then, is this: how do we predict the macroscopic properties of a self-assembled material, and how do we align them with our desires? The answer, usually, is to return to those nanoscopic interactions that gave rise to the self-assembly in the first place. We study the bonds which form between two nanoparticles and build up from there. Last year, I took part in a research project which analysed Janus nanoparticles in exactly that manner. Janus is the Greco-Roman god with two faces, so Janus nanoparticles are nano-scale objects with two distinct sides. For example, you might have one side positively charged and the other negatively charged. I was studying an amphipathic Janus nanosphere: a spherical nanoparticle that is half hydrophilic and half hydrophobic. Just like lipids, if there is a pair of amphipathic Janus nanospheres in a fluid, their hydrophobic parts will make contact to hide each other from the surrounding water (as shown in Figure 2). Such a pairing is called a Janus dimer, and it forms the first building blocks in a self-assembly process [7]. Figure 2: Molecular dynamics simulations of two Janus spheres immersed in a fluid. The blue atoms are hydrophobic, red are hydrophilic, and turquoise are the fluid. Image taken from [7]. In my project, I analysed these Janus dimers, studying the orientations in which they preferentially configure. We might expect the dimer to most frequently sit end-on-end. This would seem, in some sense, to hide the hydrophobic surfaces the most. Another valid guess may be for any orientation to be equally probable, as long as the contact point was between hydrophobic sides. As we know, however, reality is often disappointing; the answer is more complex. There are two competing effects dictating what configurations Janus dimers will prefer to reside in: energy and entropy. Energy is linked to my colloquial description of hydrophobic surfaces ‘wanting’ to hide from the water. In more rigorous terms, configurations where the hydrophobic sides are exposed to fluid have higher energies, which is less favourable. Entropy can be thought of as the number of ways a given configuration can form. To illustrate this, consider two coin flips. The possible outcomes are HH, HT, TH, and TT, and all are equally probable. However, a configuration with one head and one tail can be reached in two different ways, HT and TH, and is consequently more probable than the configuration with two heads, or that with two tails. Janus nanoparticles (in fact, all physical systems) exhibit similar properties. Some configurations can be formed in more than one way — so-called ‘degenerate’ states — and are consequently more probable. The energetic factor alone would, in fact, prefer the end-on-end configuration. 
That is the state where the hydrophobic surfaces are furthest from the fluid on average, which is energetically desirable. However, entropy ruins this heuristic. Contact points nearer the equator of the Janus sphere (where the hydrophobic and hydrophilic hemispheres meet) have a larger degeneracy because there is more available surface area than at the poles. Indeed, there is only one point at which the spheres could make contact at the poles: when they are perfectly end-on-end; but if the spheres meet at the equators, they have an entire circle of points available to them. So, energy makes the Janus spheres want to make contact end-on-end, and entropy wants them to meet at 90 degrees away at their equators. As you can imagine, these two competing influences balance out somewhere in the middle. Figure 3 shows that the preferred state is about 45 degrees turned out from the poles. Figure 3: The configuration probabilities as the two spheres (i and j) are rotated out from their connecting axis. The bottom left corner (ii) is perfectly end-on, and the top right corner would be side-on. The most probable configuration is at (iii), in-between these two extremes. Image taken from [7]. But let’s move back to the big picture. Once you understand these dimer interactions in a deep and fundamental way, you can scale them up. Thousands or millions of Janus particles’ interactions can be built as the aggregate of all pairwise interactions. Using this information, we can run simulations that predict what properties self-assembled Janus materials might have, and how those properties can be controlled. And that isn’t unique to Janus spheres; the same process can be replicated for other types of nanoparticles. With new materials comes new experimental capacity, new technologies, and a more sustainable future. Whether Janus spheres will unlock new regimes in physics or biology is yet to be seen, but the underlying principles certainly have that potential. References [1] T. Li, Optical fiber communications: fiber fabrication. Elsevier, 2012. [2] A. J. Gross and T. R. Herrmann, “History of lasers,” World journal of urology, vol. 25, no. 3, pp. 217–220, 2007. [3] M. Cyrot, “Ginzburg-landau theory for superconductors,” Reports on Progress in Physics, vol. 36, no. 2, p. 103, 1973. [4] H.-L. Huang, D. Wu, D. Fan, and X. Zhu, “Superconducting quantum computing: a review,” Science China Information Sciences, vol. 63, no. 8, pp. 1–32, 2020. [5] M. Grzelczak, J. Vermant, E. M. Furst, and L. M. Liz-Marźan, “Directed self-assembly of nanoparticles,” ACS nano, vol. 4, no. 7, pp. 3591–3605,2010. [6] S. Yang, Y. Yan, J. Huang, A. V. Petukhov, L. M. Kroon-Batenburg,M. Drechsler, C. Zhou, M. Tu, S. Granick, and L. Jiang, “Giant capsids from lattice self-assembly of cyclodextrin complexes,” Nature communications, vol. 8, no. 1, pp. 1–7, 2017. [7] S. Safaei, C. Todd, J. Yarndley, S. Hendy, and G. R. Willmott, “Asymmetric assembly of Lennard-Jones Janus dimers,” Phys. Rev. E, vol. 104, p. 024602,Aug 2021.
- Explained: Space Debris
Celina Turner

Space debris in orbit can become a dangerous obstacle for satellites and astronauts. Artist's rendition from Dotted Yeti, Shutterstock.

As depicted in the 2013 thriller Gravity, space debris is a growing problem that can and does put active missions in danger. Space debris is defined by NASA as "any man-made object in orbit about Earth which no longer serves a useful purpose" [1]. This category contains just about everything ever launched off Earth that is still in orbit and defunct. Since the very first satellite, Sputnik, was launched off our planet in 1957, we have been leaving behind a scatter of parts no longer needed for their missions - with, unsurprisingly, no intention of cleaning up after ourselves. While some pieces of space debris have been left far enough away that they won't find their way back to us, most debris is still in orbit at varying altitudes. Although everything in orbit will eventually succumb to Earth's gravity and return to the ground (or, more likely, burn up on the way there), this can take anywhere from months to decades, or even centuries, depending on the altitude, the mass of the object, and other factors. The only way to ensure that debris orbiting our planet does not damage spacecraft, active or inactive, is to implement Active Debris Removal (ADR) technologies to remove it. As our society moves closer to becoming a space-faring civilisation, addressing the issue of space debris and developing ways to remove it will pave the way for a more successful future in space exploration.

According to the most recent data released by the European Space Agency (ESA), almost ten thousand tonnes of rocket casings, paint chips, decommissioned satellites, and other objects deemed unneeded have been left in various orbits [2]. The agency counts upward of 570 "break-ups, explosions, collisions, or anomalous events resulting in fragmentation" [2]. Most debris orbits at velocities that lead to an average impact speed of 10 km/s [1]; to put this in perspective, debris travels about seven times faster than a bullet. Given these speeds, the size of any specific piece of debris becomes almost unimportant, as every piece poses a risk - a paint fleck only 0.2 mm in size was once able to crack the windshield of a space shuttle [3].

Image showing the current space debris around Earth. Credit: European Space Agency.

In 1978, Donald J. Kessler of NASA published an article in the Journal of Geophysical Research describing "a self-sustaining cascading collision of space debris in LEO (low Earth orbit)" [4]. Now known as the Kessler Syndrome, it describes how a positive feedback loop forms as colliding debris shatters into ever smaller pieces, with each new fragment further increasing the risk of another collision. Kessler warned that, without intervention, these objects would go on to create a debris belt around the Earth sooner rather than later: "Under certain conditions, the belt could begin to form within this century and could be a significant problem during the next century. The possibility that numerous unobserved fragments already exist from spacecraft explosions would decrease this time interval" [5]. It took nearly twenty years after Kessler's article before the first collision with a still-functional satellite occurred.
In July 1996, a by-then 10-year-old fragment of a European Ariane rocket struck the French spy satellite Cerise [6]. Despite the damage, Cerise remained operational. The first collision to fully destroy a satellite didn't occur until 2009, when the inactive Russian military satellite Kosmos 2251 ran into the American communications satellite Iridium 33. The US Space Surveillance Network (SSN) was able to track 217 pieces belonging to Iridium 33 and 457 pieces from Kosmos 2251 in order to monitor them for the foreseeable future, but it is likely that more are scattered about [7].

Although it is difficult to estimate just how many collisions there have been to date, what is considered the worst debris-generating event so far was actually an intentional one. In 2007, as a political tactic, China's military launched an anti-satellite (ASAT) missile, destroying a non-operational weather satellite, Fengyun-1C. "The destruction created a cloud of more than 3,000 pieces of space debris, the largest ever tracked, and much of it will remain in orbit for decades, posing a significant collision threat to other space objects in Low Earth Orbit" [8]; the event accounts for over 20% of all space debris [9]. Despite the repercussions, ASAT tests have still been carried out by several countries as a show of force. What Cerise, Kosmos 2251, and Fengyun-1C illustrate is the impact that failing to intervene, deploying methods that have not been safely tested, and the absence of clear global regulations for LEO can have on the long-term future of space exploration.

Concept image of RemoveDEBRIS from Aerospace Testing International

Over the years, there have been many proposed ADR projects to remove defunct satellites and collision shards. One approach is capture technology: sending a satellite into orbit armed with a net or a harpoon that deploys, captures the debris, and pulls it in, so that it can be brought back down along with the satellite. RemoveDEBRIS, designed by the University of Surrey, was launched aboard a SpaceX rocket in 2018 to test both methods, using a CubeSat (a miniaturised satellite used for research purposes) as a practice target [10]. The spacecraft ejected the CubeSat target and then fired a net at it from a distance of 7 m. "Once the net hits the target, deployment masses at the end of the net wrap and entangles the target and motor-driven winches reel in the neck of the net preventing re-opening of the net" [10]. It did so with ease, despite the fact that "the target was spinning like you would expect an uncooperative piece of junk to behave" [11]. In its harpoon experiment, the spacecraft fired a harpoon at a 10 cm by 10 cm target held 1.5 m away. This was also easily accomplished: "the harpoon was fired on 8th February 2018 at a speed of 20 metres per second and penetrated a target made of satellite panel material" [12]. The RemoveDEBRIS mission proved successful, but its future - whether a fleet of such spacecraft is released into orbit, or the design is scaled up to capture more or larger debris - is still to be decided.

Another method under discussion uses magnets. In March of 2021, a CubeSat designed by Astroscale that is able to attach itself to debris magnetically was carried into space on a Soyuz rocket. "Using a series of maneuvers, Astroscale will test the CubeSat's ability to snatch debris and bring it down toward the Earth's atmosphere, where both servicer and debris will burn up" [13].
However, this method limits the options of what can be retrieved to satellites that have the compatible magnetic plate for the two to dock. The CubeSat was sent up with a “test dummy” satellite for it to practice catching in orbit; if the mission is successful, it may warrant a universal magnetic plate for future satellites to have so that they can be retrieved if they end up in orbit longer than expected. Astroscale’s mission is an example of how future debris could be avoided, but there would still be a need for another method to clean up the debris that it cannot recover. A final method worth mentioning has been proposed by the Australian National University’s Space Environment Research Centre: to destroy space junk with a laser. The idea stemmed from the original theory that lasers could be used to push debris into another trajectory if it were going to collide with something else, when it was realized that “if they wanted to actually destroy the space junk, they could push it into a lower orbit until it fell into the atmosphere and burned up like a meteor”[14]. Although trajectory changes with a laser can be made from the ground, in order to push debris enough for it to fall would likely require a laser in orbit due to the physics behind the method. Lasers are able to move debris “using photon pressure — the ability of light to exert force” [15]. This pressure is not very strong, but it is enough to push debris out of its destruction path if implemented early enough. This would be a very low-cost method as the correct equipment is already available in many places, but as previously stated, in order to actually destroy debris a laser would need to be placed in orbit. As the laser makes its way through the different layers of the atmosphere, the light is distorted and ends up becoming unfocused, and thereby, similar to cutting with a blunt knife rather than a sharp one, not quite as effective. This would still arguably cost less than the other ADR projects presented as the laser would not be carrying the debris, and therefore would have a longer life cycle as it does not need to carry the debris into the atmosphere. This concept has not had its own mission as of yet, and will require some testing to see how effective it is, but in theory, laser ADR seems the most sustainable practice. It is imperative that we create and implement efficient ADR technologies and sustainable practices for space travel. In the long run, these will lead to a much safer future while saving money for companies whose spacecraft would be lost to debris. Low Earth Orbit is already resembling a sci-fi battleground, with parts of spacecrafts scattered throughout, but this does not need to be an omen for every satellite or rocket that passes through. Ultimately, it is the responsibility of every nation or company that has launched into space to help in the effort to clean up Earth’s orbit again, so that future generations will not need to face unnecessary dangers. References [1] B. Dunbar, “Frequently asked Questions: Orbital debris,” NASA, 02-Sep-2011. [Online]. Available: https://www.nasa.gov/news/debris_faq.html. [Accessed: 29-Aug-2021]. [2] “Space debris by the numbers,” ESA, 12-Aug-2021. [Online]. Available: https://www.esa.int/Safety_Security/Space_Debris/Space_debris_by_the_numbers. [Accessed: 28-Aug-2021]. [3] W. Ailor, “Two Space Debris Issues,” United Nations Office for Outer Space Affairs, Feb-2011. [Online]. Available: https://www.unoosa.org/pdf/pres/stsc2011/tech-43.pdf. [Accessed: 31-Aug-2021]. [4] L. 
de Gouyon Matignon, “The Kessler Syndrome and Space Debris,” Space Legal Issues, 27-Mar-2019. [Online]. Available: https://www.spacelegalissues.com/space-law-the-kessler-syndrome/. [Accessed: 28-Aug-2021]. [5] D. J. Kessler and B. G. Cour-Palais, “Collision frequency of artificial satellites: The creation of a debris belt,” Journal of Geophysical Research: Space Physics, vol. 83, no. A6, pp. 2637–2646, Jun. 1978. [6] M. Ward, “Satellite injured in Space Wreck,” New Scientist, 23-Aug-1996. [Online]. Available: https://www.newscientist.com/article/mg15120440-400-satellite-injured-in-space-wreck/. [Accessed: 29-Aug-2021]. [7] T. S. Kelso, “Iridium 33/ Cosmos 2251 Collision,” CelesTrak, 11-Mar-2009. [Online]. Available: https://web.archive.org/web/20090317043727/http://celestrak.com/events/collision.asp. [Accessed: 29-Aug-2021]. [8] B. Weeden, “2007 Chinese Anti-Satellite Test Fact Sheet.” Secure World Foundation, 2012. [9] E. Gregersen, “space debris,” Encyclopædia Britannica, 30-Jul-2021. [Online]. Available: https://www.britannica.com/technology/space-debris. [Accessed: 29-Aug-2021]. [10] “RemoveDEBRIS,” Surrey Space Centre. [Online]. Available: https://www.surrey.ac.uk/surrey-space-centre/missions/removedebris. [Accessed: 31-Aug-2021]. [11] J. Amos, “RemoveDebris: UK Satellite Nets 'Space Junk',” BBC News, 19-Sep-2018. [Online]. Available: https://www.bbc.com/news/science-environment-45565815. [Accessed: 31-Aug-2021]. [12] J. Amos, “Space harpoon skewers 'orbital debris',” BBC News, 15-Feb-2019. [Online]. Available: https://www.bbc.com/news/science-environment-47252304. [Accessed: 31-Aug-2021]. [13] S. Mathewson, “Tiny Astroscale satellite will test space junk cleanup tech with magnets,” Space.com, 08-Apr-2021. [Online]. Available: https://www.space.com/astroscale-launches-space-junk-cleanup-mission. [Accessed: 31-Aug-2021]. [14] N. Savage, “Lasers Could Clear Space Junk From Orbit,” IEEE Spectrum, 14-May-2021. [Online]. Available: https://spectrum.ieee.org/laser-could-clear-space-junk-from-orbit. [Accessed: 31-Aug-2021]. [15] “Shoving space junk out of the way. With lasers.,” Curious, 27-Oct-2017. [Online]. Available: https://www.science.org.au/curious/space-time/shoving-space-junk-out-way-lasers. [Accessed: 01-Sep-2021].
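To give a rough sense of why photon pressure is "not very strong", here is a back-of-the-envelope sketch relating to the laser method described above. It is not from the article: the 10 kW beam power, the 1 kg debris mass, and the assumption that all of the power reaches the target are illustrative, and the only physics used is the standard radiation-pressure relation F = P/c (or 2P/c for a perfectly reflecting surface).

```python
# Back-of-the-envelope photon-pressure estimate (illustrative numbers only).
# Radiation force on a target: F = P/c (absorbing) or 2P/c (perfectly reflecting).
c = 299_792_458.0          # speed of light, m/s

def photon_pressure_force(beam_power_w: float, reflectivity: float = 0.0) -> float:
    """Force in newtons exerted by a beam of the given power hitting the target."""
    return (1.0 + reflectivity) * beam_power_w / c

P = 10_000.0               # assumed 10 kW of laser power reaching the debris
m = 1.0                    # assumed 1 kg piece of debris
F = photon_pressure_force(P, reflectivity=0.0)
a = F / m                  # resulting acceleration, m/s^2

print(f"Force: {F * 1e6:.1f} micronewtons, acceleration: {a:.2e} m/s^2")
# Roughly 33 uN on 1 kg, i.e. about 3e-5 m/s^2: a tiny nudge, which is why the
# push must start early for collision avoidance, and why de-orbiting debris
# would need sustained, well-focused power (hence the case for a laser in orbit).
```

The numbers make the article's point concrete: the force is real but minuscule, so it is best suited to gentle, early trajectory corrections unless the beam can be kept focused on the target for a long time.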
- Photon Statistics of an Open Quantum System with a Quantum Feedback Loop
Alex Chapple Individual particles of light, known as photons, are instrumental in developing quantum information technologies and quantum computers, the latter of which I'm sure you've heard of. Quantum technologies promise a lot (which you can read all about in the article I wrote in our first edition, "The state of quantum computing today"), but they are not without their challenges. In particular, photons are fragile and hard to confine, which is a problem because many of these applications rely on confined photons (e.g. between near-perfectly reflecting mirrors). Furthermore, we still don't have a reliable way of emitting a single photon on demand, which is crucial for these applications [1]. To do information processing with a small number of photons, we would like to control when and how photons are emitted. Being able to manipulate quantum systems predictably is a huge challenge we face today. In my research with Professor Howard Carmichael, we're studying a particular driven, two-level open quantum system with a quantum feedback loop, to understand its behaviour and its possible applications to predictable photon pair production. A two-level system describes the energy levels of an atom, where the lower level is its ground state and the higher level is its excited state. The driven part describes a laser that can excite the atom; this laser takes the atom, initially in its ground state, to its excited state. The atom can spontaneously emit a photon to bring itself back down to the ground state, so we have a situation where the atom continuously gets excited by the laser, emits a photon, and is re-excited again. This is known as resonance fluorescence of a driven two-level atom and has been studied extensively and realised experimentally. This type of resonance fluorescence is seen when the atom is in free space, with no interaction other than the laser. Figure 1 We're adding a little more complexity to this system by implementing a quantum feedback loop in our research. The photons the atom has previously emitted are reflected by a mirror and interact with the atom once again. The general setup is confined in a waveguide (Fig. 1). You can think of the waveguide as simply a chamber that encloses the system, with an output channel. The photon is emitted by the two-level atom, travels to the right, gets reflected by the mirror and travels back to interact with the atom once again before leaving the waveguide to be detected. The entire system is enclosed inside this waveguide. Even in a relatively simple setup like this, we can observe fascinating behaviours. This is the beauty of physics, where simple systems can give rise to very elegant results. One of the most notable behaviours is that we can trap photons in our waveguide and create photon pairs that are emitted together. However, before we can discuss its nature, we first need a way to simulate this quantum system. Due to the inherently probabilistic nature of quantum systems, we cannot observe the atom directly and draw conclusions. To study the behaviour of this open quantum system, we employ a few methods. Firstly, we use Quantum Trajectory Theory [2] to simulate these photon emissions. This theory uses a Monte-Carlo type approach to characterising the open quantum system, and we average over many simulations to build up a statistical picture of the system's inner workings.
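To give a flavour of this Monte-Carlo idea, here is a minimal quantum-jump sketch for plain resonance fluorescence of a resonantly driven two-level atom in free space. It deliberately leaves out the mirror, the feedback loop, and the space-discretised waveguide used in the actual research, and the Rabi frequency, decay rate, time step, and trajectory count below are illustrative choices, not values from the project.

```python
import numpy as np

# Minimal quantum-jump (Monte-Carlo wavefunction) sketch: resonance fluorescence
# of a resonantly driven two-level atom in free space. Illustrative parameters only.
rng = np.random.default_rng(0)

gamma = 1.0            # spontaneous emission rate (sets the unit of time)
omega = 2.0            # Rabi frequency of the driving laser (assumed)
dt, t_max = 0.002, 20.0
n_traj = 500           # real studies average over tens of thousands of trajectories

def run_trajectory():
    """Return the photon emission times recorded along one stochastic trajectory."""
    cg, ce = 1.0 + 0j, 0.0 + 0j        # amplitudes of the ground and excited states
    emissions = []
    for step in range(int(t_max / dt)):
        # A jump (photon detection) occurs with probability gamma * |ce|^2 * dt ...
        if rng.random() < gamma * abs(ce) ** 2 * dt:
            emissions.append(step * dt)
            cg, ce = 1.0 + 0j, 0.0 + 0j          # the atom resets to its ground state
        else:
            # ... otherwise evolve under the non-Hermitian effective Hamiltonian
            # H_eff = (omega/2)(|e><g| + |g><e|) - i(gamma/2)|e><e|  (hbar = 1).
            dcg = -1j * (omega / 2) * ce
            dce = -1j * (omega / 2) * cg - (gamma / 2) * ce
            cg, ce = cg + dcg * dt, ce + dce * dt
            norm = np.sqrt(abs(cg) ** 2 + abs(ce) ** 2)
            cg, ce = cg / norm, ce / norm         # renormalise the conditioned state
    return emissions

all_emissions = [run_trajectory() for _ in range(n_traj)]

# Photon counting statistics: number of photons detected in each trajectory.
counts = np.array([len(e) for e in all_emissions])
# Waiting-time statistics: delays between successive detections, pooled over runs.
waits = np.concatenate([np.diff(e) for e in all_emissions if len(e) > 1])

print("mean photons per trajectory:", counts.mean())
print("mean waiting time between photons:", waits.mean())
```

Averaging photon-count and waiting-time histograms over many such trajectories is, in spirit, what the simulations of the feedback system do; the mirror and the space-discretised waveguide described next add the extra bookkeeping of tracking where each photon is on its round trip.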
Quantum trajectory theory is a powerful tool because it builds this statistical picture by only looking at the system's output, i.e., its environment¹ [3, 4]. We also assume that there are no more than two photons in the waveguide at any given moment. This dramatically reduces the complexity of the simulation to something feasible to compute in a reasonable amount of time. Finally, we use a space-discretised waveguide model (SDWM) [5], where we split the space between the atom and the mirror into small boxes and require that no box holds two photons simultaneously. The basic principle of this simulation is that we evolve the system and, at every time step, shift the boxes along by one and look at what is inside the Nth box, as seen in Figure 1. The condition for the next time step depends on whether or not a photon was found in the Nth box in the previous time step. We typically average over 50,000 to 100,000 simulations. If the photons emitted have a pi phase shift relative to the two-level atom, they destructively interfere with the atom on their round trip back. This creates a system in which no photon can leave the waveguide, i.e., photon trapping. However, this is only the case when there is a single photon in the waveguide. The story becomes more interesting when two photons are in the waveguide at the same time. In this situation, both photons are let out. In a sense, the second photon in the waveguide lets out the first, trapped photon. Both photons leave, and because they were in the waveguide at the same time, they are separated by at most the round trip time. Hence, we now have a system that emits photon pairs, released a very short time apart. Figure 2 These behaviours can be summarised mainly in photon counting distributions and photon waiting time distributions. Photon counting distributions show a histogram of the number of photons detected in each of the simulations. In the pi phase destructive interference regime, the photon counting distributions mainly have peaks at even numbers of photons, due to the photon pair production. An example of this is shown in Figure 2. There are some peaks at odd numbers, which are mostly due to a rare triple photon emission, where three photons are emitted together within the round trip time. The physics behind this is yet to be studied. The waiting time distributions show the time distributions between photon emissions, i.e., when a photon is detected, how long do you have to wait until you see another photon? Again, in this pi phase shift regime, there are strong peaks below the round trip time and a decaying tail for the rest of the time. Figure 3 shows an example of this waiting time distribution, where there’s a clear peak for waiting times below the round trip time. However, we have yet to properly understand the shape of this distribution and the physics behind it. Figure 3 Photons are the universe's fundamental information carriers. We are now transitioning (slowly) to a world where quantum technologies become more ubiquitous. The manipulation and control of individual or few photons will be key to utilising the unique properties that the quantum realm brings to the table, such as quantum entanglement and quantum tunnelling. By studying this system, we hope to find ways to use this kind of setup for future quantum information processing applications. Footnote Quantum trajectory theory was recently experimentally realised for the first time by a group at Yale.
For the last 100 years or so, physicists have thought that the “quantum jump” particles make when moving to the next energy level is instantaneous. Quantum trajectory theory shows that it is in fact possible to predict the path the atom takes through phase space. In this experiment, the group was able to detect an atom changing energy levels, catch the transition mid-flight, and push it back to its original state. If that’s not cool, you’re not cool. References [1] A. Lvovsky, B. Sanders, and W. Tittel, “Optical quantum memory,” Nature Photonics, vol. 3, no. 12, pp. 706–714, 2009. [2] H. Carmichael, An Open Systems Approach to Quantum Optics. Berlin Heidelberg: Springer-Verlag, 1993. [3] P. Ball, “How a new twist on quantum theory could solve its biggest mystery,” New Scientist, 25-Mar-2020. [Online]. Available: https://www.newscientist.com/article/mg24532750-700-how-a-new-twist-on-quantum-theory-could-solve-its-biggest-mystery/. [4] Z. K. Minev, S. O. Mundhada, S. Shankar, P. Reinhold, R. Gutiérrez-Jáuregui, R. J. Schoelkopf, M. Mirrahimi, H. J. Carmichael, and M. H. Devoret, “To catch and reverse a quantum jump mid-flight,” Nature, vol. 570, no. 7760, pp. 200–204, 2019. [5] S. Arranz Regidor, G. Crowder, H. Carmichael, and S. Hughes, “Modeling quantum light-matter interactions in waveguide QED with retardation, nonlinear interactions, and a time-delayed feedback: Matrix product states versus a space-discretized waveguide model,” Physical Review Research, vol. 3, no. 2, 2021. Captions Figure 1. A schematic of the open quantum system we are simulating. Figure 2. Photon counting distribution of our system. The high peaks at even numbers of photons suggest photon pair production. Figure 3. Waiting time distribution of our system. The large peaks seen at times below the round trip time suggest photons are coming out in pairs.
- Seabird stress as a conservation tool
Maira Fessardi Fish (blue mackerel) work-up with shearwaters and prions. Photo: Edin Whitehead. Seabirds and climate change We often hear about climate change and how its consequences might affect our lives. Ocean temperatures are rapidly rising, causing a range of problems for the incredible biodiversity that inhabits that ecosystem. Seabirds are one group of animals that rely on a healthy ocean to thrive, and they represent the most threatened group of birds in the world [1, 2]. With approximately half of seabird species experiencing population declines, it becomes important to understand what is driving this trend and how it relates to the drastic changes in the marine environment [1]. New Zealand, and particularly the Hauraki Gulf, has been identified as a hotspot of international interest for seabird diversity [1]. New Zealand has the highest number of unique seabird species in the world, with over 70 species visiting the Gulf. However, 78% of those are either threatened or at risk of extinction [3]. That is particularly problematic because seabirds play a very important role in the ecology of our terrestrial ecosystems. They are classified as “Ecosystem Engineers”, meaning their behaviour and ecology play a crucial role in the health of the terrestrial environment they inhabit [4]. Even though these birds spend most of their lives feeding in the ocean, they come to land (or “breeding sites”) to find their mates and breed. Their nesting habits include digging through soil and building a safe burrow for courtship, incubation, and chick-rearing. This physical disturbance brings crucial nutrients from the ocean to our forests, carried in their faeces, dead tissues, and eggs [4]. The disappearance of seabird populations is often associated with a loss of biodiversity, causing a cascade of ecological loss in terrestrial communities [2, 4]. Additionally, they occupy the top of the marine food chain, being carnivores that feed on fish and invertebrates. That means their population success is highly dependent on variations in ocean conditions and biodiversity [6]. Higher ocean temperatures, for instance, may cause a decline in seabirds’ food availability and quality, which impacts their ability to survive and fulfil their biological duties [6, 7]. Thus, observing their lives and breeding outcomes can provide incredible amounts of information on the health of their environment, making them powerful environmental indicators [5]. A better understanding of how climate shifts affect ocean conditions allows decision-makers to put more well-informed plans in place. Monitoring ocean health, however, can be expensive and logistically challenging. The good news is, we may be able to investigate those changes by watching and quantifying their effects on seabirds’ population processes (i.e. breeding and physiology). Their habit of coming to land to breed provides an opportunity to observe their condition, constituting an accessible indicator of environmental health. Seabird population monitoring may give us powerful insights into the ocean dynamics of remote marine environments that the birds use as feeding grounds [7, 8]. Black Petrel nesting in burrow - Aotea/Great Barrier Island. Photo: Maira Fessardi Out of all seabird species breeding in the Hauraki Gulf, one stands out: the Grey-faced petrel (Pterodroma macroptera). This species can only be found breeding in New Zealand and, unlike its seabird relatives, remains widespread to this day, with many successful colonies around the Gulf [9].
They are long-lived, with high fidelity to their breeding sites and partners, and they feed close to the coast [10, 11]. That combination of factors facilitates access to the birds and allows for long-term monitoring efforts, earning them the status of a key indicator species [12]. Monitoring Grey-faced petrels may reveal important information about what is happening where they feed in the ocean, which could help us predict future population declines [10]. Although seabirds as environmental indicators may sound like an ideal solution for some of our climate conundrums, there is still a long way to go in using their population processes as a reliable and precise monitoring tool. Black Petrel chick in what can be a successful breeding event. The eggshells will decompose and integrate nutrients into the soil. Photo: Maira Fessardi How Does it Work? It is understood that the relationship between environmental conditions and stress physiology is stronger than their relationship with ecological processes, such as breeding [13, 14]. Chicks raised in poor environments may still survive stressful periods as youngsters and go on to become adults. That would be classified as a successful breeding event for monitoring purposes [7, 13]. However, an entire generation of seabirds facing stress may suffer lifelong impacts, affecting their ability to become good parents and fulfil their duty to sustain their population legacy [7, 13]. In this case, breeding data alone can be a misleading measure of success, masking populations that might eventually start declining. Like humans, seabirds undergoing stressful events will ultimately exhibit a physiological response to these stressors [8, 10]. Stressful events may be, for example, situations where food supply is limited by poor ocean conditions, either inducing nutritional stress or requiring longer travel distances and greater energy expenditure to find a good meal [7, 8]. Animals that are more stressed will produce higher levels of stress hormones, which can indicate that something is not right in their environment. Very stressed parents can also transmit some of their stress hormones to their eggs, and be forced to reduce feeding trips to chicks. That means chicks growing up in stressful scenarios often produce higher levels of stress hormones than adults, imposed both by stress in their environment (such as predation) and by their stressed parents [15]. Long-term stress and high amounts of stress hormones circulating in their body may cause health issues, lower immunity, and result in lower fitness [7, 13]. Thus, stress hormones in seabird populations may have the power to connect events in their lives across space and time, and record that information [7, 8]. Looking at a bird population’s stress response could therefore represent an exciting and integrative new monitoring tool. In birds, the predominant stress hormone is called corticosterone (or CORT, for short), and their physiological responses to environmental stressors often result in high levels of CORT circulating in their bodies [16]. What are the Research Gaps? Even though stress hormones in seabirds sound like an excellent new tool for conservation, some gaps still need to be filled. Traditionally, research in environmental stress relies heavily on blood or faecal samples, which can be challenging to collect and are invasive for the bird [17, 18].
They also reflect only short-term stress undergone by seabirds (one day for blood hormones, one week for faecal hormones). If we are to face the fast-changing consequences of climate change, we will be better equipped with long-term, integrative information on how variations in ocean conditions are affecting seabird populations [17, 18]. The good news is that there is a potential new tool that ticks all the boxes: stress hormones found in feathers. CORT produced by seabirds during stress events is continuously deposited in growing feathers [7]. More stressed birds are expected to have higher levels of CORT in their developing feathers, which can be extracted in a quick and non-invasive procedure, with no risk of degradation over time [16, 17]. Using feather stress to monitor ocean conditions and population health has great potential and deserves more attention. The patterns of variation in feather CORT, and how stress hormones affect population quality, however, vary among species and their environments. To use feather CORT as a reliable conservation tool, it is necessary to advance our knowledge of the specific physiological stress response that different species have to changing conditions in their respective oceanic feeding grounds [6, 7, 16]. Changes in the environment affect seabird chick fitness, and this has consequences for seabird populations in both the short term and the long term. Monitoring feather CORT can therefore help address both the future of seabird populations and the health of oceanic ecosystems. Next Steps My novel research looks at the stress hormone deposited in Grey-faced petrel feathers to help with one piece of a complex puzzle and validate feather CORT as a conservation tool. Specifically, it is important to understand the pattern of deposition of feather CORT in Grey-faced petrel chicks, and how that compares to variables describing changes in ocean conditions. I selected variables that influence food quality, such as temperature, to statistically compare with stress hormone levels (a toy sketch of this kind of comparison appears after the reference list below). My research also aims to unravel whether feather CORT can predict breeding success and, therefore, population success over time. That more detailed knowledge will help to improve this method as a monitoring tool and test its applicability to conservation and environmental management. Where is the Information Coming From? This validation study focuses on a population of Grey-faced petrels that inhabits Te Henga (Bethells Beach), with a substantial colony nesting on the intertidal island Ihumoana. This population is ideal, as it has been the subject of previous monitoring projects, and it is stable and thoroughly understood [19]. Feathers and oceanic data have been collected over four years, with adult birds captured when arriving at or leaving the colony to feed, while chicks are extracted from burrows and held inside a bag for measurements of mass and length. Morphological information provides an understanding of the birds’ condition and health. Feathers are processed in the laboratory to extract CORT for analysis. The analysis focuses on differences in stress hormone levels between years with different ocean conditions, and on the relationship between feather CORT and breeding success in those years. Plots showing the predicted pattern expected for the study results. They only show the expected relationship between feather CORT and environmental variables, with no real data involved.
Designed by Maira Fessardi Expected Results Ideally, the results would show evidence of a significant relationship between feather CORT and ocean temperature/breeding success. That means a population would show higher detectable levels of feather CORT in chicks in years of increased environmental stress, with poorer oceanic foraging conditions (higher temperatures). It is also predicted that higher CORT levels in adult feathers will result in lower-quality offspring and relate to lower population breeding success. More detailed knowledge will help us improve the sensitivity of this method as a conservation tool, allowing for more accurate conclusions and predictions [20]. Validating this technique across different species may allow for the development of future models to alert us to dangerous variations in the ocean and in population oscillations. Novel technologies increase our chances of fighting climate change and implementing changes that will hopefully give seabirds, and all the biodiversity in their surrounding environment, a more realistic fighting chance. References [1] J. P. Croxall et al., “Seabird conservation status, threats and priority actions: A global assessment,” Bird Conservation International, vol. 22, no. 1, pp. 1–34, 2012, doi: 10.1017/S0959270912000020. [2] Ç. H. Şekercioğlu, G. C. Daily, and P. R. Ehrlich, “Ecosystem consequences of bird declines,” Proceedings of the National Academy of Sciences, vol. 101, no. 52, pp. 18042–18047, 2004. [3] Hauraki Gulf Forum, State of our Gulf 2020 - full report. 2020. [Online]. Available: https://www.aucklandcouncil.govt.nz/about-auckland-council/how-auckland-council-works/harbour-forums/docsstateofgulf/state-gulf-full-report.pdf [4] J. L. Smith, C. P. H. Mulder, and J. C. Ellis, “Seabirds as Ecosystem Engineers: Nutrient Inputs and Physical Disturbance,” Seabird Islands: Ecology, Invasion, and Restoration, 2011, doi: 10.1093/acprof:osobl/9780199735693.003.0002. [5] D. K. Cairns, “Seabirds as indicators of marine food supplies,” 1986. [6] J. F. Piatt et al., “Seabirds as indicators of marine food supplies: Cairns revisited,” Marine Ecology Progress Series, vol. 352, pp. 221–234, 2007, doi: 10.3354/meps07078. [7] A. Will et al., “Feather corticosterone reveals stress associated with dietary changes in a breeding seabird,” Ecology and Evolution, vol. 5, no. 19, pp. 4221–4232, 2015, doi: 10.1002/ece3.1694. [8] A. Harding et al., “Does location really matter? An inter-colony comparison of seabirds breeding at varying distances from productive oceanographic features in the Bering Sea,” Deep-Sea Research Part II: Topical Studies in Oceanography, vol. 94, pp. 178–191, 2013, doi: 10.1016/j.dsr2.2013.03.013. [9] C. P. Gaskin and M. J. Rayner, “Seabirds of the Hauraki Gulf,” p. 143, 2013. [10] J. R. Welch, “Variations in the breeding biology of the grey-faced petrel Pterodroma macroptera gouldi,” 2014. [11] M. J. Imber, “Breeding biology of the grey-faced petrel,” pp. 51–64, 1976. [12] J. C. Russell et al., “Developing a national framework for monitoring the grey-faced petrel (Pterodroma gouldi) as an indicator species,” DOC Research and Development Series 350, p. 19, 2017. [13] W. H. Satterthwaite, A. S. Kitaysky, and M. Mangel, “Linking climate variability, productivity and stress to demography in a long-lived seabird,” Marine Ecology Progress Series, vol. 454, pp. 221–235, 2012, doi: 10.3354/meps09539. [14] A. S.
Kitaysky et al., “Food availability and population processes: severity of nutritional stress during reproduction predicts survival of long-lived seabirds,” Functional Ecology, vol. 24, no. 3, pp. 625–637, 2010, doi: 10.1111/j.1365-2435.2009.01679.x. [15] J. J. Fontaine, E. Arriero, H. Schwabl, and T. E. Martin, “Nest predation and circulating corticosterone levels within and among species,” Condor, vol. 113, no. 4, pp. 825–833, 2011, doi: 10.1525/cond.2011.110027. [16] C. P. Fischer, R. Rao, and L. M. Romero, “Exogenous and endogenous corticosterone in feathers,” Journal of Avian Biology, vol. 48, no. 10, pp. 1301–1309, 2017, doi: 10.1111/jav.01274. [17] G. R. Bortolotti, T. Marchant, J. Blas, and S. Cabezas, “Tracking stress: localisation, deposition and stability of corticosterone in feathers,” pp. 1477–1482, 2009, doi: 10.1242/jeb.022152. [18] G. D. Fairhurst, T. A. Marchant, C. Soos, K. L. Machin, and R. G. Clark, “Experimental relationships between levels of corticosterone in plasma and feathers in a free-living bird,” Journal of Experimental Biology, vol. 216, no. 21, pp. 4071–4081, 2013, doi: 10.1242/jeb.091280. [19] T. J. Landers, Muriwai Beach to Te Henga (Bethells) 2016 grey-faced petrel and little penguin survey, 2017. [20] G. H. Sorenson, C. J. Dey, C. L. Madliger, and O. P. Love, “Effectiveness of baseline corticosterone as a monitoring tool for fitness: a meta-analysis in seabirds,” Oecologia, vol. 183, no. 2, pp. 353–365, 2017, doi: 10.1007/s00442-016-3774-3.
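For readers curious what the statistical comparison described above might look like in practice, here is a minimal sketch using a simple linear regression. Everything in it is an illustrative assumption: the variable names, the made-up placeholder numbers, and the choice of model have nothing to do with the project's actual data or analysis.

```python
import numpy as np
from scipy import stats

# Illustrative sketch only: does chick feather CORT track sea-surface temperature?
# The numbers below are made-up placeholders, not real measurements.
sst_anomaly = np.array([-0.4, -0.1, 0.3, 0.6])   # deg C, one value per breeding season
feather_cort = np.array([                         # hypothetical CORT per mm of feather, per chick
    [4.1, 3.8, 4.5, 3.9],                         # season 1
    [4.4, 4.0, 4.6, 4.2],                         # season 2
    [5.0, 5.3, 4.8, 5.1],                         # season 3
    [5.6, 5.9, 5.4, 6.0],                         # season 4
])

# Regress season-mean chick CORT on the temperature anomaly.
mean_cort = feather_cort.mean(axis=1)
result = stats.linregress(sst_anomaly, mean_cort)
print(f"slope = {result.slope:.2f} per deg C, r^2 = {result.rvalue**2:.2f}, p = {result.pvalue:.3f}")

# A positive, significant slope would match the predicted pattern: warmer, poorer
# foraging years -> more nutritional stress -> more CORT deposited in growing feathers.
```

A real analysis would of course use many more chicks and seasons, and would likely model breeding success and other covariates as well; the sketch only shows the shape of the CORT-versus-temperature comparison.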
- Synesthesia: The Direct Cross-Activation Hypothesis
Stella Huggins Our world boasts a plethora of stimuli to engage with. Without getting too deep down the hole of whether an objective external reality exists, I think it’s fairly uncontroversial to say that we do not all perceive the world in the same way. An internal world, just as rich, exists within us and mediates how we interact with our environment. Higher cognitive processes and brain structures shape how we feel about the information we take in each day. Our limbic system is involved in behavioural and emotional responses, while the prefrontal cortex, basal ganglia and thalamus (among other things) regulate our capacity for executive function [1]. It’s easy to accept that individual differences exist at these levels. What is harder to conceptualise is individual differences in sensory experiences. Nevertheless, multiple fields of psychology have done so, beginning with Francis Galton’s observations in the 19th century. Galton dubbed the experience of sensory blending ‘synesthesia’ [2]. A fascinating and broad condition, synesthesia has captured the curiosity of neuroscience in particular, as researchers map and locate brain regions where activity crosses over. Numerous contemporary artists have the condition — Lorde, Beyonce, Kanye West and Billie Eilish, to name a few. It’s understandable that synesthesia would be a condition conducive to a life dedicated to art. However, it’s not just famous creatives that have the condition — some studies report the prevalence of grapheme-colour synesthesia, for example, to be between 0.8% and 2.8% of the population [3]. Information that ‘should’ affect just one of your sensory organs (if sensory experiences can even be considered so isolated) evokes, in synesthetes, a joint experience [4]. An enormous variety of synesthesia sub-categories exists, in which several sensory modalities can be combined. For example, someone may see the number two as always being pink. They may taste caramel when they hear the word ‘town’. They may smell freshly cut grass when they hear a violin being played. The associations between senses and specific sensory inputs aren’t thought to be pathological, nor is the occurrence of the associations in the first place. The structures that build an individual’s perceived reality are two-fold. Primary sense organs (the eyes, ears, tongue, nose and skin) exist purely to receive information about what’s going on outside ourselves. It is the brain that interprets this information, in its multiple forms, into meaningful signals off which we function [5]. The process of information travelling from the exterior world, through sensory circuits from the primary sense organs, is just the beginning of perception. It is important to note that synesthesia is not considered to be a condition of bodily dysfunction. It is not an issue with the ability of the eyes to take in information, or the capacity of your taste buds to taste accurately. The condition is postulated to be a cognitive quirk that is involuntary, with stable associations across time [6]. As is often the case for psychological conditions, an enormous number of hypotheses, each with its own nuances and levels of evidence, exist to try to explain the cause of synesthesia. Here, I will discuss the direct cross-activation hypothesis, as well as broader hypotheses about brain function and development that tie into the cross-activation hypothesis [7].
Direct cross-activation refers to the idea that messages intended for one section of the brain activate tissue in neighbouring regions. Cell firing in cortical tissue produces electrical output, and this output can travel. Excitation can be simply described as a signal [8]. This hypothesis relies on the idea that electrical signals moving from one cortical area affect another in such a way that it produces a consistent mixing of sensory experiences for the individual. It is true that the human brain exhibits a large amount of cortical folding [9]. This essentially means that brain regions are very close to one another, and neuropsychologists, in general, are hesitant to ascribe one function to one brain region. This makes finding the cause of synesthesia even more complex. Brain function is already considered to be highly collaborative, rather than a delegation of tasks to specific tissue — so how do we isolate function and ascribe meaning to it? In some researchers' eyes, the proximity of brain regions, particularly in grapheme-colour synesthesia, holds significance. An experience of some synesthetes with this subtype of the condition is associating numbers with colours; for example, ‘2’ may always take on a pink hue. Despite the hesitance to categorically ascribe tasks to tissue, there are particular brain areas that are known to be correlated with specific processing tasks. This is significant when considering the cross-activation hypothesis. Intriguingly, the visual word form area (VWFA) or ‘grapheme area’ is directly adjacent to the V4 colour processing area [10]. V4 describes the third cortical area in the ventral stream, and, according to the widely accepted hypothesis of two-stream visual processing, this stream carries information about object forms from the primary visual cortex to the temporal lobe [11]. The cross-activation hypothesis ties neatly into other postulated causes at this point: for example, the widely accepted concept of neuroplasticity hosts many nooks and crannies for cross-activation to tuck nicely into [12]. Neuroplasticity is something of a buzzword for the field, and there is considerable evidence that we possess huge numbers of connections and neurons at birth. Think of tasks or behaviours as like strengthening a muscle or a connection in the brain — the more I do one thing, the stronger it becomes. According to this school of thought, we grow and strengthen connections for some activities, and lose the strength of others. This is referred to as neural pruning [13] — the process of getting rid of connections that aren’t being used (yes, if you don’t use it, you really do lose it). It has been suggested that abnormalities in the pruning process, where excess synapses aren’t eliminated as per usual, could lead to hyper-connectivity between regions [14]. In the case of grapheme-colour synesthesia, this hyper-connectivity occurs in the fusiform gyrus, an area responsible for object identification [15]. The cross-activation hypothesis comes in when you consider the evidence that synesthesia is genetic [16]. This suggests that synesthesia could have something to do with pruning in infancy: perhaps these individuals simply have more connections in their brains. With more connections comes more potential for cross-activation.
Another take on the neuroplasticity route is the neonatal theory [17], which asserts that we are all synesthetes at birth, and that the experience of synesthesia is actually a failure in modality separation. Modularity theory explains that the mind separates cognitive processes into ‘modules’, each with its own distinct properties and abilities [18]. All of this seems like an enormous amount of background information for a simple claim: that excitation in one region of the brain, due to its proximity to another, causes excitation in that neighbouring region, and thus regions intermingle, causing a dual-sensory experience. A somewhat unsatisfying conclusion lies amongst the competing theories: we aren’t sure what causes synesthesia. What we do know is that a wide variety of experiences with the condition exists: from mildly bothersome, to not troublesome at all, to advantageous. Synesthesia represents what we already knew about the world — everybody’s experience is unbelievably varied. It’s the way these differences are framed that dictates how positively or negatively somebody relates to their perceived internal and external world. References [1] Isaacson, R. L. (1980). A perspective for the interpretation of limbic system function. Physiological Psychology, 8(2), 183–188. https://doi.org/10.3758/bf03332849 [2] Ramachandran, V., & Brang, D. (2008). Synesthesia. Scholarpedia, 3(6), 3981. https://doi.org/10.4249/scholarpedia.3981 [3] Chiou, R., & Rich, A. N. (2014). The role of conceptual knowledge in understanding synaesthesia: Evaluating contemporary findings from a “hub-and-spokes” perspective. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.00105 [4] Cytowic, R. E. (2002). Synesthesia: A union of the senses. MIT Press. [5] Raichle, M. E. (2010). Two views of brain function. Trends in Cognitive Sciences, 14(4), 180–190. https://doi.org/10.1016/j.tics.2010.01.008 [6] Simner, J., & Bain, A. E. (2013). A longitudinal study of grapheme-color synesthesia in childhood: 6/7 years to 10/11 years. Frontiers in Human Neuroscience, 7. https://doi.org/10.3389/fnhum.2013.00603 [7] Hubbard, E. M., Brang, D., & Ramachandran, V. S. (2011). The cross-activation theory at 10. Journal of Neuropsychology, 5(2), 152–177. https://doi.org/10.1111/j.1748-6653.2011.02014.x [8] Dichter, M. A. (1978). Rat cortical neurons in cell culture: Culture methods, cell morphology, electrophysiology, and synapse formation. Brain Research, 149(2), 279–293. https://doi.org/10.1016/0006-8993(78)90476-6 [9] Garcia, K. E., Kroenke, C. D., & Bayly, P. V. (2018). Mechanics of cortical folding: Stress, growth and stability. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1759), 20170321. https://doi.org/10.1098/rstb.2017.0321 [10] Jäncke, L., Beeli, G., Eulig, C., & Hänggi, J. (2009). The neuroanatomy of grapheme-color synesthesia. European Journal of Neuroscience, 29(6), 1287–1293. https://doi.org/10.1111/j.1460-9568.2009.06673.x [11] Kar, K., Kubilius, J., Schmidt, K., Issa, E. B., & DiCarlo, J. J. (2019). Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior. Nature Neuroscience, 22(6), 974–983. https://doi.org/10.1038/s41593-019-0392-5 [12] Grafman, J. (2000). Conceptualizing functional neuroplasticity. Journal of Communication Disorders, 33(4), 345–356. https://doi.org/10.1016/s0021-9924(00)00030-7 [13] Chechik, G., Meilijson, I., & Ruppin, E. (1998). Synaptic pruning in development: A novel account in neural terms.
Computational Neuroscience, 149–154. https://doi.org/10.1007/978-1-4615-4831-7_25 [14] Rouw, R. (2013). Synesthesia, hyper-connectivity, and diffusion tensor imaging. Oxford Handbooks Online. https://doi.org/10.1093/oxfordhb/9780199603329.013.0025 [15] Bar, M., Tootell, R. B. H., Schacter, D. L., Greve, D. N., Fischl, B., Mendola, J. D., Rosen, B. R., & Dale, A. M. (2001). Cortical mechanisms specific to explicit visual object recognition. Neuron, 29(2), 529–535. https://doi.org/10.1016/s0896-6273(01)00224-0 [16] Brang, D., & Ramachandran, V. S. (2011). Survival of the synesthesia gene: Why do people hear colors and taste words? PLoS Biology, 9(11). https://doi.org/10.1371/journal.pbio.1001205 [17] Simner, J., Maurer, D., Gibson, L. C., & Spector, F. (2013). Chapter 3: Synesthesia in infants and very young children. In Oxford Handbook of Synesthesia (pp. 46–58). Oxford University Press. [18] Segal, G. (1996). The modularity of theory of mind. Theories of Theories of Mind, 141–157. https://doi.org/10.1017/cbo9780511597985.010
- Unravelling the Evolution of Us
By Emelina Glavaš Image by Colin Lloyd on Unsplash As biologist Kenneth R. Miller once said, "our own genomes carry the story of evolution, written in DNA, the language of molecular genetics, and the narrative is unmistakable" [1]. The emergence of DNA is arguably one of the most significant evolutionary events in history, leading us all to where we are now, right in this moment. Biologists are continually striving to understand our evolutionary pathway through molecular evolution, biochemistry, molecular genetics, and microbial experimental evolution. It is suggested that the story of DNA began with its molecular cousin, a genetic material known as RNA. But what changes were involved in its evolution, and what selective advantage did these provide over other genetic systems? RNA- and U-DNA-based relics are present in the modern world in the form of viruses. If these organisms coexist with us, can we really say DNA has provided a better basis for life? What is DNA? DNA, standing for deoxyribonucleic acid, has become somewhat of an icon for modern biology. It is defined as a molecule allowing for the storage and maintenance of genetic information, present in almost all living organisms. Our understanding of DNA has increased rapidly since its discovery, beginning with its initial observation by biochemist Friedrich Miescher in 1869, when early biochemical methods were used to isolate the molecule from sperm and white blood cell samples [2]. This sparked interest from many scientists, including Phoebus Levene and Erwin Chargaff, who carried out experiments to reveal more about this mystery molecule [3]. Years later, chemist Rosalind Franklin further advanced the knowledge of DNA, discovering what is known as the B form, determining that there were two states of the DNA molecule, and providing a basis for understanding the structure of the molecule [4]. These contributions led to Watson and Crick’s biological breakthrough, concluding that the structure of DNA is a three-dimensional double helix, composed of a series of nucleotides containing a phosphate group, a deoxyribose sugar, and a nitrogen-containing base, known cumulatively as deoxyribonucleotides [4]. There are four bases in DNA, known as adenine (A), thymine (T), cytosine (C) and guanine (G), held together in corresponding pairs by hydrogen bonds [5]. Where did DNA Come From? The deoxyribonucleotide synthesis pathways within all cells allude to the emergence of DNA through a two-step evolutionary event, beginning with its molecular cousin RNA [5]. There are two structural differences between DNA and RNA. First, DNA contains deoxyribose sugars, which carry a hydrogen where the sugars in RNA have a hydroxyl group, resulting in ribose. Secondly, RNA contains no T, but instead a different nucleotide base known as uracil (U). Thus, DNA comprises A, T, C, and G, while RNA contains the bases A, U, C, and G. Before RNA fully evolved into DNA, it is hypothesised that an intermediate form known as U-DNA existed. Like both RNA and DNA, this molecule successfully stored genetic information, and therefore could provide a basis for life. Similar to a fossil, U-DNA represents a transitional form between RNA and DNA, containing the U nucleotides seen in RNA, but the deoxyribose of DNA [6]. This thinking is formally referred to as the RNA world hypothesis [7]. How did it Evolve?
Modern DNA arose through steps involving multiple key enzymes, with the functions of ribonucleotide reductase and thymidylate synthase being most notable. Ribonucleotide reductase replaced the ribose sugars of RNA with deoxyribose, creating the intermediate form, U-DNA. DNA then evolved through subsequent steps, after the replacement of U with T nucleotides via thymidylate synthase [6]. For DNA to be so widely distributed among modern organisms today, the common thought is that it must have provided some form of benefit. Despite few biochemical differences, DNA is often considered superior to both RNA and U-DNA, as it allowed more genetic information to be stored and maintained than the genetic systems already present. Another advantage of DNA over RNA is that the latter can carry out a function known as autolytic self-cleavage [8], simply meaning it chops itself up more readily. For these reasons, DNA could provide a backbone for larger, more complex genomes, without a detrimental number of errors or mutations [9]. Is DNA Really the Best Basis for Life? Extant relics of RNA-based genetic systems exist in the form of RNA viruses [10]. These include the now widely recognised coronaviruses, with relatively large RNA-based genomes. Extant U-DNA viruses are also present in the modern world [11]. These relics may be an example of a ‘frozen accident’ [12], where a genetic feature has been retained not because it is an optimal state, but because other biological systems rely on it remaining present [13]. Alternatively, these relics may not be evolutionary throwbacks, but rather represent organisms in which non-DNA-based genetic systems have re-evolved. Or perhaps modern RNA and U-DNA viruses were just successful in evolving alongside us. If this were the case, DNA — and all the organisms that have evolved to use it — should not be considered so revolutionary after all. References [1] Miller, K. (2009). Seals, evolution, and the real 'missing link'. [2] Dahm, R. (2005). Friedrich Miescher and the discovery of DNA. Developmental Biology, 278(2), 274–288. [3] Pray, L. (2008). Discovery of DNA structure and function: Watson and Crick. Nature Education, 1(1), 100. [4] Klug, A. (1968). Rosalind Franklin and the discovery of the structure of DNA. Nature, 219(5156), 808–810. [5] Watson, J. D., & Crick, F. H. (1953, January). The structure of DNA. Cold Spring Harbor Symposia on Quantitative Biology (Vol. 18, pp. 123–131). Cold Spring Harbor Laboratory Press. [6] Poole, A., Penny, D., & Sjöberg, B. M. (2001). Confounded cytosine! Tinkering and the evolution of DNA. Nature Reviews Molecular Cell Biology, 2(2), 147–151. [7] Gilbert, W. (1986). Origin of life: The RNA world. Nature, 319, 618. [8] Sharmeen, L., Kuo, M. Y., Dinter-Gottlieb, G., & Taylor, J. (1988). Antigenomic RNA of human hepatitis delta virus can undergo self-cleavage. Journal of Virology, 62(8), 2674. [9] Lazcano, A., Guerrero, R., Margulis, L., & Oro, J. (1988). The evolutionary transition from RNA to DNA in early cells. Journal of Molecular Evolution, 27(4), 283–290. [10] Wintersberger, U., & Wintersberger, E. (1987). RNA makes DNA: A speculative view of the evolution of DNA replication mechanisms. Trends in Genetics, 3, 198–202. [11] Takahashi, I., & Marmur, J. (1963). Replacement of thymidylic acid by deoxyuridylic acid in the deoxyribonucleic acid of a transducing phage for Bacillus subtilis. Nature, 197(4869), 794–795. [12] Crick, F. H. C. (1968). The origin of the genetic code.
Journal of Molecular Biology, 38, 367–379. [13] Jeffares, D. C., Poole, A. M., & Penny, D. (1998). Relics from the RNA world. Journal of Molecular Evolution, 46(1), 18–36.
- Is the Gulf Stream Slowing Down? A Mathematical Perspective
By John Bailie The ocean’s global currents are constantly circulating energy and nutrients worldwide. Photo by Andreas Lindgren on Unsplash. The Atlantic Meridional Overturning Circulation (AMOC) is a large conveyor belt of water responsible for the Gulf Stream, generating a warmer Europe through northward heat transport. Warm surface currents transport water northward to the Labrador and Nordic seas. There, the water becomes denser and sinks in deep water formation regions. Mixing occurs in these regions between surface and deep waters. A deep cold current transports deepwater southward via the North Atlantic Deep Water (NADW) current. Southern upwelling closes the circulation; for more information, see Fig. 1(a) and [6]. An unprecedented slowdown of the AMOC has been observed over the last century [11]. The slowdown has been linked to North Atlantic freshening in the late 1900s in an event called the Great Salinity Anomaly (GSA) [3]. Moreover, the increased freshwater influx resulted in the Labrador Sea convection shutting down from 1969 to 1971, weakening the AMOC. Freshwater in the North Atlantic could become comparable to the GSA if the current freshwater trend from the melting of the Greenland Ice Sheet continues [1]. As a result, a weaker AMOC and another shutdown of deepwater formation in the Labrador Sea are possible [11]. A slower AMOC could weaken the Gulf Stream and result in temperature drops in Europe, which would be more severe during winter. Precipitation in Europe could decrease and, possibly due to drier conditions, vegetation could also decline [5]. Moreover, the North Atlantic marine ecosystem could decline, and global plankton production could decrease significantly [12]. My Master’s project with Prof. Bernd Krauskopf in the Mathematics Department aims to understand the effect of freshwater entering the AMOC, as represented by a much simplified conceptual climate model. The project is in collaboration with Prof. Henk A. Dijkstra at Utrecht University. Modelling Climate systems are large, with many variables that determine the overall state. Modelling approaches vary and consider different inner processes. General Circulation Models (GCMs) are large and discretise the full climate system on a fine scale, but they are black boxes that are very hard to study. Zonally Averaged Models use far fewer variables than GCMs but still account for wind stress and the Earth’s rotation. While more tractable, it is still hard to understand the system’s underlying mechanisms [4]. Figure 1: (a) The North Atlantic component of the AMOC with deep water formation sites at L and N in the Labrador and Nordic seas. The background is generated with the software from [8] and ocean currents are illustrated following [10]. (b) Two-box model for temperature and salinity in the surface and deepwater layers at sites L and N. Box models consider only a few variables in a relatively small number of boxes. They are generally not used for prediction, but they are relatively simple and can be readily analysed. Box models are useful for understanding the underlying mechanisms of a physical process in isolation, even though that process is still part of the larger climate system. Welander’s model [13] is a box model for temperature and salinity in only two boxes: a surface box and a deep water box that interact through mixing. A surrounding basin also interacts with the surface box; see Fig. 1(b). For certain ranges of freshwater intake, there are oscillations between temperature and salinity.
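As a concrete illustration of what a Welander-type two-box model looks like, here is a minimal sketch. It follows the usual textbook form of the model: surface temperature T and salinity S relax toward atmospheric values and mix with the deep box at a rate that switches with the density anomaly of the surface water. The parameter values, the switching threshold, and the simple Euler integration are all illustrative assumptions, not the formulation or numerics used in the project.

```python
import numpy as np

# Minimal Welander-type heat-salt oscillator sketch (illustrative parameters only).
# Surface-box temperature T and salinity S relax toward atmospheric values and mix
# with a deep box (held at T = S = 0) at a rate k that depends on the density
# anomaly rho = -alpha*T + beta*S of the surface water relative to the deep box.
alpha, beta = 1.0, 1.0         # thermal / haline expansion coefficients (assumed)
k_T, k_S = 1.0, 0.1            # relaxation rates toward atmospheric T_A, S_A (assumed)
T_A, S_A = 1.0, 1.0            # atmospheric forcing values (assumed)
k_weak, k_strong = 0.01, 5.0   # mixing rates in the weak and strong regimes (assumed)
eps = -0.1                     # density threshold at which the mixing switches

def mixing(T, S):
    """Piecewise-constant mixing rate: strong when the surface water is dense enough."""
    rho = -alpha * T + beta * S
    return k_strong if rho > eps else k_weak

def step(T, S, dt):
    """One explicit Euler step of the two-box model."""
    k = mixing(T, S)
    dT = k_T * (T_A - T) - k * T
    dS = k_S * (S_A - S) - k * S
    return T + dT * dt, S + dS * dt

# Integrate and record the trajectory. With these illustrative values the run
# switches repeatedly between the two mixing states (the heat-salt flip-flop);
# other parameter choices settle onto a steady state instead.
dt, n_steps = 1e-3, 200_000
T, S = 0.0, 0.0
trajectory = np.empty((n_steps, 2))
for i in range(n_steps):
    T, S = step(T, S, dt)
    trajectory[i] = T, S

switches = np.sum(np.diff([mixing(*ts) for ts in trajectory]) != 0)
print("final (T, S):", trajectory[-1], "| mixing-state switches:", switches)
```

Because the mixing rate jumps discontinuously at the threshold, the model is piecewise smooth; replacing the hard if/else with a steep but smooth function of the density anomaly gives, roughly, the smoothed case discussed later in the article.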
Welander’s model was re-examined in [2], with the surrounding basin replaced by a stochastic freshwater influx. More recent work was carried out in [7] to formalise Welander’s result using a modern approach to piecewise smooth dynamical systems. However, a comprehensive classification of the dynamics with respect to freshwater influx is still missing, and this is the subject of my project. As a starting point, we look at a limiting case of two decoupled differential equations, one for weak and one for strong mixing between the two boxes. Because transitions between mixing states are assumed to be instantaneous, the overall model is piecewise smooth. As a result, we use numerical methods adapted from [9] to study its bifurcations and dynamics. Results So far, we have obtained a full description of all possible dynamics, organised by changes in the freshwater influx and the density difference between the boxes. We describe all possible states the AMOC can take in our model, and the mechanisms by which the AMOC transitions between them. When the density of the surface box is much larger than the density of the bottom box, strong mixing occurs between the layers, resulting in a stronger AMOC. Conversely, due to freshwater influx, the density of the surface box may become small enough for weak mixing to occur between the layers, resulting in a weaker AMOC. The oscillations between temperature and salinity predicted in the literature [13, 2, 7] exist for certain ranges of freshwater influx and density; see Fig. 2 for an illustration. During these oscillations, temperature and salinity both rapidly increase to a maximum value, then switch instantaneously to a slower relaxation; the process then repeats. The long-term behaviour of temperature and salinity from any initial condition approaches this type of oscillation. The green curve in Fig. 2(b) illustrates the oscillations in temperature and salinity. Figure 2: Oscillations of temperature T and salinity S, (a) as a time series and (b) in the (T,S) plane. Conclusion and Outlook The classification of the limiting case opens up the door to the study of the related smooth model. This smoothed case involves transitioning between strong and weak mixing in a slow-fast way rather than instantaneously. In particular, we expect the periodic behaviour of temperature and salinity to persist. The question is how these oscillations arise and disappear as parameters are varied in this more realistic context. Future research will focus on expanding the current box model to make it more realistic. A first step will be adding a seasonal freshwater influx that changes periodically to account for seasonal fluctuations. The AMOC also displays delayed feedback loops of temperature and salinity. Incorporating these is an interesting challenge because it leads to a model in the class of delay differential equations, the analysis of which is more involved but results in a closer representation of the AMOC. References [1] Jonathan Bamber et al. “Recent large increases in freshwater fluxes from Greenland into the North Atlantic”. In: Geophysical Research Letters 39.19 (2012). [2] Paola Cessi. “Convective adjustment and thermohaline excitability”. In: Journal of Physical Oceanography 26.4 (1996), pp. 481–491. [3] Robert R Dickson et al. “The “great salinity anomaly” in the northern North Atlantic 1968–1982”. In: Progress in Oceanography 20.2 (1988), pp. 103–151. [4] Henk A Dijkstra.
Nonlinear physical oceanography: a dynamical systems approach to the large scale ocean circulation and El Nino. Vol. 28. Springer Science & Business Media, 2005. [5] LC Jackson et al. “Global and European climate impacts of a slowdown of the AMOC in a high resolution GCM”. In: Climate dynamics 45.11 (2015), pp. 3299–3316. [6] Till Kuhlbrodt et al. “On the driving processes of the Atlantic meridional overturning circulation”. In: Reviews of Geophysics 45.2 (2007). [7] Julie Leifeld. Nonsmooth Homoclinic Bifurcation in a Conceptual Climate Model. 2016. arXiv: 1601.07936 [math.DS]. [8] Met Office. Cartopy: a cartographic python library with a matplotlib interface. Exeter, Devon, 2010 - 2015. URL: http://scitools.org.uk/cartopy. [9] Petri T Piiroinen and Yuri A Kuznetsov. “An event-driven method to simulate Filippov systems with accurate computing of sliding motions”. In: ACM Transactions on Mathematical Software (TOMS) 34.3 (2008), pp. 1–24. [10] Stefan Rahmstorf. “Risk of sea-change in the Atlantic”. In: Nature 388.6645 (1997), pp. 825–826. [11] Stefan Rahmstorf et al. “Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation”. In: Nature climate change 5.5 (2015), pp. 475–480. [12] Andreas Schmittner. “Decline of the marine ecosystem caused by a reduction in the Atlantic overturning circulation”. In: Nature 434.7033 (2005), pp. 628–633. [13] Pierre Welander. “A simple heat-salt oscillator”. In: Dynamics of Atmospheres and Oceans 6.4 (July 1982), pp. 233–242. DOI: 10.1016/0377-0265(82)90030-6.