- Why Does Biological Ageing Occur?
By Jasmine Gunton

Although morbid to consider, the fact of death is preordained and a constant across all forms of life on Earth. Much has been discussed about the subject, including what is likely to kill us, and what exists beyond death. However, biologists are still unsure about one of the most basic aspects of our life cycle: why does biological ageing occur, and what is its purpose?

An image of cancerous human colorectal cells. Photograph by the National Cancer Institute on Unsplash.

Scientists already know why death is important in our current ecosystems. The death and subsequent decomposition of organisms allow nutrients to be efficiently recycled within an ecosystem, increasing its overall net productivity [1]. Nevertheless, one must wonder why organisms would evolve to slowly decrease in fitness over their lifespan. Additionally, why do some organisms live far longer than others? As is common with many questions in science, several theories have been proposed to explain this process. What makes the study of ageing more complicated is that ageing rates vary across both species and related individuals. In addition, different parts of the body can age at different rates depending on several environmental and genetic factors. Much of the research on this subject has focused on eukaryotes, of which humans are a part.

Natural Immortality

In eukaryotes, there are two broad theories as to how we age: the programming of ageing within our genome, and the accumulation of damage to our cells [2]. Both of these theories can be grouped as sources of the phenomenon known as senescence, which refers to the inevitable decay of all eukaryotic organisms. However, a partial exception to this biological law exists in species with negligible senescence. Species in this category are seemingly able to avoid degeneration and are, therefore, potentially immortal.
An example of negligible senescence is displayed by Turritopsis dohrnii, also known as the immortal jellyfish. The life cycle of Turritopsis dohrnii includes four distinct stages, or morphoses, which the jellyfish can cycle through several times. In this way, Turritopsis dohrnii can be considered biologically immortal. However, this phenomenon has only been observed under laboratory conditions, as the process of morphosis occurs very quickly. Additionally, many medusae in a natural environment will be killed by predators. Under laboratory conditions, it was found that only 20-40% of mature medusae transformed back into polyps [3]. It is not yet understood why only a small percentage of Turritopsis dohrnii display this phenomenon, or why similar organisms have not evolved this mechanism.

Telomeres

Turritopsis dohrnii is not the only organism thought to exhibit immortality. Previous research has suggested that lobsters may not weaken in strength or lose fertility with age like most other organisms. However, this does not mean that lobsters are truly biologically immortal, as lobsters are increasingly likely to die during shell moulting as they age. Although not immortal, lobsters do not senesce in the typical sense, most likely due to the presence of the enzyme telomerase. Telomerase is capable of repairing the DNA sequences at the ends of chromosomes, known as telomeres. Unlike in vertebrates, where telomerase expression largely ceases after the embryonic stage, in lobsters telomerase remains expressed into the adult stage [4]. Lobsters are therefore able to avoid most consequences of DNA damage and live to an estimated 45-50 years in the wild [5]. When considering lobsters’ longevity, one may wonder whether we can somehow adopt the lobster’s technique.

Biological Issues

To understand biological ageing, it is essential to link the discussion back to the species we understand best: humans.
Research has shown that as we age, the body loses its ability to repair DNA damage. It is also known that telomeres shorten with age as a result of this damage, leading to senescence and death [6]. What if we were able to develop a technique to prevent this process, lengthening our lifespans indefinitely? Unfortunately, human immortality is almost impossible due to the nature of our cells and their biological processes. In cellular biology there are two main certainties: the function of cells shuts down with age, and cells become more likely to turn cancerous with age. Attempting to alter either of these processes enhances the other, meaning that humans are certain to die of either organ failure or cancerous growths [7].

Philosophy of Death

We now know why and how we age, but this leaves an important question: why has natural selection not selected for immortality in all eukaryotes? The answer lies in how natural selection affects the macroevolution of a species. Natural selection favours not only traits that increase the survival of the individual, but also those that increase the survival of the species it belongs to. This means that for a species to survive, generations of individuals must continually produce offspring, age, and then die so that the next generation can continue the cycle. This explanation is relatively simple, and also deeply unsatisfying. From a philosophical point of view, though, we can view death as essential to our existence: it allows us to appreciate life more, and it allows many more individuals to experience the joys that life can bring than would be possible if we were immortal. Nevertheless, with the development of new technology, we may be able to extend the period in which we experience life and the consciousness to value it.

Fibres and microtubules in human breast cancer cells. Photograph by the National Cancer Institute on Unsplash.

References

[1] E. Benbow, J. P. Receveur, and G. A. Lamberti, “Death and Decomposition in Aquatic Ecosystems,” Frontiers in Ecology and Evolution, vol. 8, p. 17, Feb. 2020. [Online]. URL: https://www.frontiersin.org/articles/10.3389/fevo.2020.00017/full
[2] J. Vina, C. Borras, and J. Miquel, “Theories of Aging,” IUBMB Life, vol. 59, no. 4-5, pp. 249-254, Jan. 2008. [Online]. URL: https://iubmb.onlinelibrary.wiley.com/doi/abs/10.1080/15216540601178067
[3] S. Piraino, F. Boero, B. Aeschbach, and V. Schmid, “Reversing the Life Cycle: Medusae Transforming into Polyps and Cell Transdifferentiation in Turritopsis nutricula (Cnidaria, Hydrozoa),” The Biological Bulletin, vol. 190, no. 3, pp. 302-312, Jun. 1996. [Online]. URL: https://www.journals.uchicago.edu/doi/10.2307/1543022
[4] W. Klapper, K. Kuhne, K. K. Singh, K. Heirdon, R. Parwaresch, and G. Krupp, “Longevity of lobsters is linked to ubiquitous telomerase expression,” FEBS Letters, vol. 439, no. 1-2, pp. 143-146, Dec. 1998. [Online]. URL: https://febs.onlinelibrary.wiley.com/doi/full/10.1016/S0014-5793%2898%2901357-X
[5] T. Wolff, “Maximum Size of Lobsters (Homarus) (Decapoda, Nephropidae),” Crustaceana, vol. 34, no. 1, pp. 1-14, Jan. 1978. doi: https://doi.org/10.1163/156854078X00510
[6] M. Blasco, “Telomere length, stem cells and aging,” Nature Chemical Biology, vol. 3, pp. 640-649, Sept. 2007. [Online]. URL: https://www.nature.com/articles/nchembio.2007.38
[7] J. Mitteldorf, “Telomere biology: Cancer firewall or aging clock?,” Biochemistry (Moscow), vol. 78, no. 13, pp. 1054-1060, Sept. 2013. [Online]. URL: https://link.springer.com/article/10.1134/S0006297913090125
- Opinion: Science and Religion
By Caleb Todd

Preface

The interface between science and culture is a contentious topic. Debates about the position of science in society — its role, its generality, and what it is in the first place — span diverse fields and connect in complex ways. One aspect of this garnered attention recently when seven University of Auckland academics published an open letter in the Listener magazine that dismissed mātauranga Māori, suggesting that it, as a knowledge system, is less valid or valuable than what they called science. Discussions (both constructive and otherwise) have flared up, the powers that be have shifted, and many a departmental email chain has ensued.

I had finished this article only days before the letter’s publication, which I couldn’t help but find funny. What I discuss here is more than tangentially related to mātauranga Māori’s relationship with science; indeed, I mention it directly (though briefly). Nonetheless, mātauranga Māori is not the principal focus of my discussion, and I was worried that this article could be taken as a poorly veiled commentary in a way that I had not intended. Rest assured that I had no intention of doing such a disservice to an important topic. I wanted to write this preface to clarify the relationship between my article and the recent controversy, since it was not originally written with that specific debate in mind.

The case I have tried to make is that we cannot treat science as being divorced from our humanity. Science as the West knows it is not all-encompassing; it cannot answer every question that matters. We have to recognise what different approaches have to bring to the table if we are to be good people, and indeed good scientists. I centred my discussion on religion and its partnership with science. Still, much of what I say carries over to indigenous ways of knowing (although the phraseology would be too imprecise for that topic).
A picture of science that dismisses out of hand the knowledge systems built up by Māori (and others) is an incomplete one. I am becoming increasingly convinced that ‘science’ is a meaningless word. To place physics, biology, and psychology (and to some, even economics or sociology) under the same umbrella while excluding mātauranga Māori is patently ridiculous. Each discipline has utterly different methods, systems, and “validities”, and they certainly do not all follow the same ‘scientific method’. None is the same as another, and they are all necessary to a complete picture of our world — mātauranga Māori included.

A few weeks ago, my mind was wandering during a lecture, as a student’s mind is wont to do. In my distraction, a question flitted into my brain: ‘Who else here believes in God?’ The question itself is, perhaps, not all that interesting — just a matter of statistics. What is more interesting to me is the natural reaction I had to the question. My first inclination was to assume that the answer is virtually no one.

Image by Markus Baumeler from Pixabay.

Consciously, I know that quite a large proportion of New Zealanders are religious. Indeed, in the 2018 census, 37% of the population identified as Christian and 1.3% as Muslim [1]. Even among younger generations, the proportion is substantial: 28% of New Zealanders aged 15-29 are Christian [1]. ‘No religion’ was the largest category in that census at 48.5%, but even if you chuck on atheism (0.15%), agnosticism (0.14%), and (heck, why not) flying spaghetti monster-adoring Pastafarianism1 (0.09%), you still haven’t cracked the halfway mark [2]. All this is to say that religious people make up around half of New Zealanders, yet I sat in a lecture with 200 people and my socialised reaction was to assume that I was in the vast minority. Why is that? One answer may be that I am a science student, and religiosity among scientists is lower than in the general public.
In a UK survey, 47% of the general population were religious, but only 27% of scientists2,3 [3]. Nonetheless, 27% of a room of 200 people is still 54 — a sizeable number. So my knee-jerk assumption wasn’t really defensible. The question remains: why was it my assumption?

My proposal is that our society frames religion and science as being, in some sense, opposed to each other. It is unusual to see them as coexisting or to conceptualise them in the same context. Some people, like Richard Dawkins, take that opposition to the extreme, while others just see the two spheres as being ultimately and utterly distinct. I want to dispute both of these stances. Religion and science are not enemies, nor are they unacquainted; rather, they are old friends, and we ignore one or the other to our detriment.

Our two protagonists have a long history together. In ancient Greece, astronomers studied the celestial spheres. The motions of the Sun, Moon, planets, and stars were embedded on vast spheres rotating about the Earth and each other. Circles were perfect shapes; immutable and, therefore, divine. The Greeks’ study of the skies was deeply connected to their religion. Indeed, the Greco-Roman deities were directly tied to the forces of nature, and our long history of naming astronomical bodies after these gods is no mistake. To the Greeks, the study of nature was the study of the divine, and all the more so when studying the immutable heavens. The same pattern is found all over the world, where the forces of nature are promoted to godhood. Western science’s growing recognition of indigenous ways of knowing — mātauranga Māori in New Zealand — is demonstrating that deep scientific truths are found in ancient mythologies. Separating Māori science from spirituality and culture is impossible.

I can go on. The Islamic golden age was a period of remarkable mathematical and scientific advancement.
Spherical trigonometry was developed to help Muslims face Mecca when they prayed, wherever they might be on our spherical Earth.4 Widespread literacy, too, owes itself to religion in many ways. The entire Cyrillic alphabet (used in languages like Russian) was invented to bring the Bible to Slavic languages and is named after St. Cyril [4]. The first book to be mass printed on Gutenberg’s printing press was the Bible [5], and literacy rates spiked wherever the Protestant Reformation went because of its emphasis on each Christian’s responsibility to read their holy book. Without the ability to read and write, we could not have a populace that engages with science.

If you haven’t fallen asleep yet, you might be thinking that this argument is all well and good for ‘ye olde dayes’, but we have the scientific method now. We can dissociate from our religion-steeped past. But I’m afraid you can’t even escape there. Francis Bacon, usually considered the first to lay out the hallowed scientific method, was devoutly Anglican and saw science and philosophy as ways of expressing and understanding God. He is famous for saying, “A little philosophy inclineth man's mind to atheism, but depth in philosophy bringeth men's minds about to religion” [6]. I quote that not to imply that the scientific method is a religious institution per se, but to show that the scientific process has never been seen as divorced from the spiritual. Even science at its most rigorous was, to many, a religious endeavour.

Theology, the study of the divine, used to be known as regina scientiarum, or ‘queen of the sciences’, because understanding the nature of God was so integral to Western science. One of my personal favourite Bible verses is Proverbs 25:2, which reads, “It is the glory of God to conceal a matter, but the glory of kings is to search out a matter.” It says that God has hidden great beauty and truth in our world and that we can participate in that by seeking and studying it.
So it is with many religious scientists: they see their art as a means of engaging with the glory of God. Although I have spent a disproportionate amount of time on Christianity — it’s what I know best — what I’m saying is true across the board. As scientists, we are almost drowning in millennia of religious tradition. Even for those who don’t believe in God, or indeed disbelieve in God, it is difficult to ignore. How, then, did we get from regina scientiarum to hostis scientiarum — from the queen of the sciences to the scientist’s enemy?

Georges Lemaître, a physicist and priest, giving a lecture at the Catholic University of Louvain in Belgium. Image from Encyclopædia Britannica.

Despite the embedding of science in religion and spirituality, this relationship has become rocky in more recent history. I do want to point out that this is not only true of religious institutions. Indeed, religious, political, social, and even scientific institutions have given scientists problems, because any institution is made up of fallible people. Nonetheless, the development of Darwin’s theory of evolution, James Hutton’s old-earth geology, and Georges Lemaître’s big bang theory,5 among others, allowed scientists to understand and describe creation in a rigorous way without reference to God.6 Many religious institutions saw this as a threat and set themselves against these scientific theories. Conversely, those who stood against religion (or even just one religion) saw a way of weaponising science against philosophy — physics against metaphysics.

To my mind, this reframing of the relationship between science and religion began a positive feedback loop of the worst kind; one which drove the two modes of thought further and further apart. On the one hand, if a scientifically-minded person sees a religious person insulting or decrying science, what are they to conclude but that religion is anti-scientific?
Similarly, if a pious individual sees a scientist claiming that science has disproved God, then of course they will think that science is flawed. In both cases, it is not that science and religion are truly clashing; instead, the illusion of a clash is continuously reinforced by toxic rhetoric on both sides. As time progresses, more people become sceptical that the two can be reconciled. The ‘atheistic scientist’ and ‘religious quack’ stereotypes become self-fulfilling, since a scientific person will find it uncomfortable to mingle in religious communities, and a religious person will feel derided in scientific communities. Again, the less comfortably one group can engage with the other, the more that divide will reinforce itself and the harder it will be to reverse.

Science is a powerful tool that has expanded our realm of knowledge at an unprecedented rate, but it is not all-encompassing. Not everything of importance can be scientifically derived. You cannot experimentally deduce a ‘correct’ value structure, yet most people would agree that it is important to spend time considering what you value. I am not suggesting that theism is the only pathway to morality, but I am certainly saying that science alone can tell you nothing about how you should act in the world. Nothing could make that clearer than the historical use of science to maximise destruction and suffering.

Both science and religion are limited in scope. Both are necessary components of society. By viewing them as opposing doctrines, we risk constructing a society where academics are completely detached from broader society, and where piety requires sacrificing intellect. We cannot treat the two categories as being at war, nor even as utterly distinct, because they each have implications for the other. They both have contributions to make which cannot always be separated out. We have to hold them both in our purview, accept their shared history, and take what they each have to offer.
Science has a place in religion, and religion has a place in science.

Photo by Tony Sebastian on Unsplash.

Footnotes

1 If you don’t already know about these guys, boy are you in for a wild ride.
2 To the surveyors, “scientist” meant physicist or biologist. Luckily, I am a physics student, so that sounds like a perfectly fine definition to me (although I’m not completely comfortable being in the same category as biologists).
3 Interestingly (I promise this is the last percentage), religiosity among Taiwanese scientists is higher than in the general population: 54% versus 44%.
4 Everyone, can we drop the whole flat Earth thing now? We’ve known it’s round for ages.
5 No, not the TV show. Please stop talking to me about that every time I say I study physics. I am NOT Sheldon.
6 There are plenty of scientifically literate religious people who are able to reconcile these theories with their theologies one way or another (in fact, Georges Lemaître himself was a Catholic priest). How they do so is a story for another time, but suffice to say that these theories in no way sound the death knell of religion as some claim.

References

[1] Stats NZ, “Religious affiliation (total responses) by age group and sex, for the census usually resident population count, 2006, 2013, and 2018 Censuses (RC, TA, SA2, DHB),” available at http://nzdotstat.stats.govt.nz/wbos/Index.aspx?DataSetCode=TABLECODE8289 (accessed 2021/07/15).
[2] figure.nz, “Most common religious affiliations in New Zealand,” available at https://figure.nz/chart/RfmHYb2IsMMrn9OC (accessed 2021/07/15).
[3] E. H. Ecklund, D. R. Johnson, C. P. Scheitle, K. R. W. Matthews, and S. W. Lewis, “Religion among Scientists in International Context: A New Study of Scientists in Eight Regions,” Socius, Jan. 2016. https://doi.org/10.1177/2378023116664353.
[4] T. Editors of Encyclopaedia Britannica, “Cyrillic alphabet,” Encyclopedia Britannica, May 20, 2020. https://www.britannica.com/topic/Cyrillic-alphabet.
[5] Wikipedia contributors, “Gutenberg Bible,” Wikipedia, The Free Encyclopedia, https://en.wikipedia.org/w/index.php?title=Gutenberg_Bible&oldid=1034745315 (accessed July 15, 2021).
[6] F. Bacon, The Major Works: Including New Atlantis and the Essays. Oxford University Press, 2002.
- Explained: The Rise and Fall of Gymnosperms
By Nina de Jong

Gymnosperms are a division of plants with a long and proud history of worldwide distribution and evolution. Historically, gymnosperms dominated the earth’s flora during the early and mid-Mesozoic era, around 250-150 million years ago. Now, however, there are only about 1,000 species of gymnosperms, spread across five orders: the funky cycads (Cycadales), the strange Gnetales, the lonely ginkgo (Ginkgoales), and all of the conifers (Pinales and Cupressales). This history of being once widespread and dominant, and now comprising a much smaller species pool, points to a fascinating evolutionary and ecological story for gymnosperms.

Agathis australis is a beautiful example of the magnificence of gymnosperms. Photo by Jon Moore on Unsplash.

Gymnosperms were some of the first plants to develop wood for mechanical support. Although wood had appeared before the gymnosperm lineage, ancient forms of wood served primarily to help plants with water conduction [1]. Gymnosperms were the first plants in which wood provided not just efficient water-conducting tissue (xylem) but also structural support. The gymnosperm lineage was, therefore, able to grow taller trees with larger canopies, giving rise to the first trees as we know them today [2].

Gymnosperms also represent the evolution of the first seed plants. Before the development of the seed, plants had a very complicated reproductive routine involving alternating generations of sporophytes (diploid individuals, with both sets of chromosomes) and gametophytes (haploid individuals, with only one set of chromosomes), in which gametophytes release eggs and sperm that must find each other through water to produce new diploid sporophytes. Plants such as ferns and mosses still go through this process, but gymnosperm seeds have compressed the gametophyte to microscopic size.
This has allowed the haploid stage of the life cycle to take place in a safe environment while still attached to the parent plant, with no need for water. The seeds can then be easily dispersed by various methods, including via wind and animals. As a result, gymnosperms no longer have to grow in environments with an abundance of water for reproduction, and many of today’s gymnosperms can survive in very harsh, extreme environments.

Gymnosperms first evolved during the Carboniferous period, during which pteridophytes (ferns and fern-like plants) were the dominant group of plants [3]. A major mass extinction event at the end of the Paleozoic (250 mya) meant that gymnosperms, with their new adaptations for growth and reproduction, were well placed to take a prominent position in plant communities worldwide during the Mesozoic era [4]. The first ancestors of today’s gymnosperms evolved about 311-212 mya, long before the other group of seed plants, the angiosperms, which evolved 125-100 mya [5]. Gymnosperms, despite undergoing some speciation and niche-shifting in response to environmental change, have a well-conserved evolutionary history and remain quite similar to their ancestors [6]. This is especially the case for gymnosperms of the southern hemisphere, where warmer and wetter conditions haven’t imposed such extreme selective pressures on species [7]. As a result, gymnosperms remained dominant throughout the Mesozoic for about 100 million years.

The stages of gymnosperm evolution through the last 350 million years.

However, gymnosperm dominance was not to last. Along with decreasing atmospheric CO2, the decline of gymnosperms is largely attributed to the rise of a new lineage of plants [8], and it is impossible to describe the history of gymnosperms without also talking about angiosperms.
Today, the world is dominated by the angiosperms, a group of seed plants that far outstrip the gymnosperms in modern species diversity, with over 300,000 species compared to the gymnosperms’ 1,000. The angiosperms are thought to have evolved from an extinct lineage of gymnosperms in the Cretaceous, and their evolution is characterised by a huge and abrupt diversification of species to occupy new ecological niches in the Late Cretaceous and Early Tertiary [5,8]. These new species managed to outcompete gymnosperms and supersede their dominance by the Tertiary period. But gymnosperms had been the dominant plant group for 100 million years, thriving across the world. So how did this abrupt and convincing switch of dominance happen?

Angiosperms underwent an evolutionary radiation in the Late Cretaceous, 100 million years ago.

In many ways, gymnosperms walked so that angiosperms could run. The developments gymnosperms had made, such as tracheid xylem and seeds, were the building blocks for angiosperms to develop even more efficient and creative methods of growth and reproduction. The gymnosperm cones and seeds are the evolutionary precursors to flowers and fruits, which are many people’s (and animals’) favourite thing about plants. Angiosperm flowers and fruits are thought to be the drivers of much of the diversity among angiosperms [9]. Flowers and fruits are highly susceptible to selective pressures, as flowers serve as sites for pollination, which can be carried out by animals, wind, and other forces [10]. Animals, in particular, apply specific and strong selective pressures that promote diversification of flower shape and colour, allowing angiosperms to diversify under co-evolution [9,11]. Flowers are also thought to allow angiosperms to persist in smaller populations, as pollination is more targeted and so the plants do not have to be as abundant to ensure pollination occurs [9,11].
Atmospheric CO2 declined during the Cretaceous, and this trend selected for plants that left their stomata open longer to receive enough CO2 for photosynthesis. When stomata are open longer, plants need more water to make up for the water lost in transpiration. This created selective pressure for more efficient water conductance [12,13]. While gymnosperm xylem, made of tracheids, functions as both structural and conductive tissue, angiosperms differentiate these roles into vessel elements and supportive fibres. This enables the specialisation of vessel elements for more efficient water conduction [14]. In turn, increased water availability allowed angiosperms to develop broader leaves with a greater photosynthetic capacity [3]. Although podocarps with flattened leaves can compete with angiosperms in some tropical understoreys [15], today gymnosperms are largely outcompeted by angiosperms in lowland tropical rainforests, habitats which select for plants with large leaves and high photosynthetic capacity [15]. With these adaptations, angiosperms were able to rapidly spread and dominate the world’s flora.

And yet, the story of the gymnosperms is not over! Gymnosperms are still around today. If angiosperms were so unassailably dominant, there would be no extant gymnosperms. Somehow, this group has continued to thrive in the face of their seemingly unstoppable cousins. Mostly, gymnosperms persist in the landscape by differentiating away from direct competition with angiosperms and their greater photosynthetic capacity [15]. For example, in Aotearoa, the coexistence of conifers and angiosperms can be attributed to niche differentiation in regeneration along a shade-tolerance versus stress-resistance axis [16].
Here, angiosperms outcompete conifers in both shady and light environments due to a higher photosynthetic capacity and growth rate, and so conifers occupy more stressful, exposed regeneration sites while angiosperms occupy more sheltered sites in the forest interior [16]. In the northern hemisphere, gymnosperm species dominate ecosystems in extreme conditions: very cold climates such as boreal forests and the coniferous forests of continental interiors.

Gymnosperms are a fascinating and ancient group, and their history shows how groups can expand and contract in dominance and diversity as conditions and biotic competition change. With the onset of climate change, warmer conditions at the poles, and more unpredictable weather, what does the future hold for gymnosperms?

References

[1] Strullu-Derrien, C., Kenrick, P., Tafforeau, P., Cochard, H., Bonnemain, J. L., Le Hérissé, A., ... and Badel, E. (2014). The earliest wood and its hydraulic properties documented in c. 407-million-year-old fossils using synchrotron microtomography. Botanical Journal of the Linnean Society, 175(3), 423-437.
[2] Wilson, J. P., & Knoll, A. H. (2010). A physiologically explicit morphospace for tracheid-based water transport in modern and extinct seed plants. Paleobiology, 36(2), 335-355.
[3] Wilson, J. P. (2013). Modeling 400 million years of plant hydraulics. The Paleontological Society Papers, 19, 175-194.
[4] DiMichele, W. A., Hook, R. W., Beerbower, R., Boy, J. A., Gastaldo, R. A., Hotton III, N., ... & Sues, H. D. (1992). Paleozoic terrestrial ecosystems. Terrestrial ecosystems through time. University of Chicago Press, Chicago, 205-325.
[5] Berendse, F., & Scheffer, M. (2009). The angiosperm radiation revisited, an ecological explanation for Darwin’s ‘abominable mystery’. Ecology Letters, 12(9), 865-872.
[6] Wang, X. Q., & Ran, J. H. (2014). Evolution and biogeography of gymnosperms. Molecular Phylogenetics and Evolution, 75, 24-40.
[7] Leslie, A. B., Beaulieu, J. M., Rai, H. S., Crane, P. R., Donoghue, M. J., & Mathews, S. (2012). Hemisphere-scale differences in conifer evolutionary dynamics. Proceedings of the National Academy of Sciences, 109(40), 16217-16221.
[8] Condamine, F. L., Silvestro, D., Koppelhus, E. B., & Antonelli, A. (2020). The rise of angiosperms pushed conifers to decline during global cooling. Proceedings of the National Academy of Sciences, 117(46), 28867-28875.
[9] Lunau, K. (2004). Adaptive radiation and coevolution: pollination biology case studies. Organisms Diversity & Evolution, 4(3), 207-224.
[10] Sauquet, H., & Magallón, S. (2018). Key questions and challenges in angiosperm macroevolution. New Phytologist, 219(4), 1170-1187.
[11] Specht, C. D., & Bartlett, M. E. (2009). Flower evolution: the origin and subsequent diversification of the angiosperm flower. Annu. Rev. Ecol. Evol. Syst., 40, 217-243.
[12] Lusk, C. H., Wright, I., and Reich, P. B. (2003). Photosynthetic differences contribute to competitive advantage of evergreen angiosperm trees over evergreen conifers in productive habitats. New Phytologist, 160(2), 329-336.
[13] Brodribb, T. J., Feild, T. S., and Jordan, G. J. (2007). Leaf maximum photosynthetic rate and venation are linked by hydraulics. Plant Physiology, 144(4), 1890-1898.
[14] Sperry, J. S., Hacke, U. G., and Pittermann, J. (2006). Size and function in conifer tracheids and angiosperm vessels. American Journal of Botany, 93(10), 1490-1500.
[15] Brodribb, T. J., Pittermann, J., and Coomes, D. A. (2012). Elegance versus speed: examining the competition between conifer and angiosperm trees. International Journal of Plant Sciences, 173(6), 673-694.
[16] Lusk, C. H., Jorgensen, M. A., and Bellingham, P. J. (2015). A conifer–angiosperm divergence in the growth vs. shade tolerance trade-off underlies the dynamics of a New Zealand warm-temperate rain forest. Journal of Ecology, 103(2), 479-488.
- Unpacking the Myers-Briggs Type Indicator and Its Criticisms
By Gene Tang When we talk about personalities and personality testing, what is the first thing that comes to mind? For psychology students, it might be the Big Five (OCEAN) or the six-factor model (HEXACO). Usually, though, the Myers-Briggs Type Indicator (which most of us know as the 16 Personalities test) is the first thing people think of. The MBTI is undoubtedly a very well-known personality test. It analyses an individual's personality via an introspective self-report questionnaire revolving around an individual's subjective interpretation, perception of themselves, and behavioural tendencies. The MBTI consists of four distinct dimensions, which give rise to 16 discrete types. These dimensions were identified in an attempt to explain individuals' preferences in terms of their favourite world (introversion-extroversion), their perception (sensing-intuition), their decision-making (thinking-feeling), and the way they deal with the world (judging-perceiving) [1]. The MBTI is used in countless businesses and organisations and is widely available online for personal use [2]. In many people's opinions, my own included, the results received from the test seem impressively accurate in their description of individuals. Some popular online MBTI resources such as 16personalities.com will not only provide an overall description of a person but also include an extensive profile for each domain of life, such as romantic relationships, friendships, career paths, and even parenthood [3]. Overall, the idea of the MBTI's usefulness and capability of describing a person is quite compelling. However, the use of the MBTI is scarce in scientific research despite its glaring popularity. Why is this the case? Does the MBTI, a prominent and accepted tool, really lack scientific validity and credibility? Before delving into these questions, it is worth examining the instrument's history and theory to better understand the tool and the criticisms it has received.
The History of MBTI The MBTI was developed by Isabel Myers and her mother, Katherine Briggs, at the onset of World War II. Myers and Briggs recognised the value of a psychological instrument, as it provides us with an understanding and appreciation of individual differences. Briggs spent several decades researching and developing the indicator, during which her tenacious and curious daughter, Isabel Myers, joined her. Their passion and interest were ignited and inspired by Carl Jung's work, especially his book Psychological Types [4]. Isabel Myers consequently incorporated Jung's idea of psychological types into the MBTI instrument. This includes the concepts of extroversion and introversion, sensation and intuition, and thinking and feeling [5]. The type indicator was developed with the intention of helping people reconcile with each other in times of hardship during the Second World War. After decades of development, the MBTI instrument was published in 1962 despite some objections. Several years later, Isabel Myers was still ceaselessly committed to her MBTI instrument and progressively re-standardised it, paying attention to every minuscule detail and refining the scoring methods. She continuously sought perfection, with a strong ambition to develop a tool that would help people. How Does it Work? As previously mentioned, the MBTI consists of four dimensions that, when combined, produce 16 unique personality types. The dimensions were based on Jung's four psychological functions — sensation, intuition, feeling, and thinking [6] — the two polar orientations (the extrovert-introvert concept), and the addition of lifestyle preferences described by the terms 'judging' and 'perception'. These terms were paired into four dichotomies, consistent with the notion that individuals have a clear preference for one pole of each pair over the other.
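The arithmetic behind the 16 types is simply every combination of one letter from each of the four dichotomies. A minimal snippet (Python, purely for illustration) makes this explicit:

```python
from itertools import product

# The four MBTI dichotomies; a full type takes one letter from each pair.
dichotomies = [("E", "I"), ("S", "N"), ("T", "F"), ("J", "P")]

# Every combination of one choice per dichotomy: 2 * 2 * 2 * 2 = 16 types.
types = ["".join(choice) for choice in product(*dichotomies)]

print(len(types))   # 16
print(types[:4])    # ['ESTJ', 'ESTP', 'ESFJ', 'ESFP']
```

Each additional dichotomy doubles the number of possible types, which is why four yes/no preferences yield exactly 16 discrete labels.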
The MBTI is often misunderstood as an instrument that measures an individual's aptitude, but in actuality it measures human preferences [7]. We can understand these dichotomies by identifying them as types of preferences [8]. The extroversion-introversion dichotomy can be seen as an 'attitude' preference describing the world in which an individual's cognitive functions prefer to operate (the external or the internal world). Sensing-intuition and thinking-feeling were identified as perceiving functions and judging functions, respectively. Perceiving functions describe how an individual perceives and interprets the world around them, while judging functions are associated with our inherent decision-making and the factors influencing it (e.g., logic and reasoning). Lastly, the judging-perception dichotomy stands for lifestyle preferences, or our life 'structure'. It refers to an individual's preferences when dealing with the outside world [1]. This means that, across different situations, a person with a judging type (e.g., ENFJ) tends to prefer using the judging functions (thinking or feeling), whereas a person with a perceiving type (e.g., INFP) tends to prefer the perceiving functions (sensing or intuition) [9]. The four dichotomies are combined to produce a unique type, or personality. Our type may thus result from the interactions between these four preferences, with each of us innately preferring a particular way of living. This is not to say that we cannot elicit behaviour particular to the other side of the continuum: much like left- or right-handedness, it is just harder to do so. Criticisms On the surface, the concept of the MBTI seems to hold up reasonably well. It may explain why, for instance, INFP-type individuals are generally quiet, open-minded, and flexible. For many people who take the test, the interpretation seems to make sense. It seems to describe our personality at a satisfactory level.
So why have there been so many harsh criticisms of the validity of the MBTI? Some even claim the MBTI is meaningless [10]. One major criticism highlights Myers and Briggs's lack of an academic psychology background. Myers was homeschooled in her early life and later earned a degree in political science, while her mother, Katherine Briggs, earned a degree in agriculture [11]. Neither of them received any formal training in psychological assessment or psychometric testing. Because of this, it is perhaps unsurprising that other scientists looked down on their work. Why would someone without an academic background in psychology attempt to assess personalities, let alone develop professional scales? Additionally, Myers and Briggs were inspired by the type theory of Carl Jung — a scientist who is, to some other scientists, associated with mystical speculations that fall in the pseudo-philosophical realm [12]. Now, let's take a closer look at the MBTI itself as a concept. In doing so, we will examine the criticisms in terms of three theoretical and scientific qualities — validity, reliability, and comprehensiveness. Is MBTI valid? In Personality: Theories and Research [13], validity refers to the extent to which observations reflect the subject of interest. Because the MBTI frames an individual's preferences as four dichotomies, a person is presumed to fit one side of each dichotomy. The classification is categorical: a person will either be, for example, extroverted or introverted, perceiving or judging. This method does not allow a person to be placed on a continuum, and thus does not express the degree of preference a person may have, conflating an individual's preferences and behaviour [14]. If these personality dimensions were better described by discrete categories than by a continuum, we should expect a bimodal distribution (two identifiable bell curves) for the preferences. However, this is not the case.
What researchers found was inconsistent with this concept. MBTI data were reported to display a very near-normal distribution [14], meaning that the majority of the population lies in the middle of the 'continuum' rather than at either extreme. These findings therefore raise questions about the instrument's validity. "There is no such thing as a pure extrovert or a pure introvert. Such a man would be in the lunatic asylum." Carl Jung In some research, the reliability of the MBTI is also questionable. One main method for testing reliability is the test-retest procedure, in which a person is given the same test on two occasions. Pittenger observed that several studies showed an individual's type changing over a short test-retest interval [15]. If personality is a consistent pattern of feeling, thinking, and behaving [13], shouldn't test-retest procedures (especially over a short period) show reasonable stability? Unlike the MBTI, other psychometric tests such as the five-factor model (a personality theory proposed by McCrae & Costa and widely used in contemporary research) show high test-retest correlations, supporting the idea that personality has a heavy biological basis [16]. By contrast, almost 50% of MBTI participants received a different type when retaking the test within a short interval [15]. So why does the MBTI produce such discrepancies in test-retest results? One factor that may account for this is the categorical approach of the test. As previously mentioned, the MBTI results categorise a person into one category when, realistically, they should be on a continuous scale. If there is a cutoff point or threshold dividing two extremes (e.g., judging and perceiving), a slight change in a person who initially lies around the middle of the scale will result in a total type change (e.g., from ENFJ to ENFP).
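This cutoff effect can be sketched in a toy simulation. All numbers below are invented for illustration (not drawn from any MBTI dataset): scores on each dichotomy are sampled from a near-normal distribution centred on the cutoff, and a small amount of retest noise is added to each score.

```python
import random

random.seed(1)  # fixed seed so the simulation is reproducible

AXES = ["EI", "SN", "TF", "JP"]

def type_of(scores):
    # Dichotomous scoring: a score above the cutoff (0) yields the first
    # letter of the pair, anything below yields the second.
    return "".join(pair[0] if s > 0 else pair[1] for pair, s in zip(AXES, scores))

n = 10_000
changed = 0
for _ in range(n):
    first = [random.gauss(0, 1.0) for _ in AXES]          # near-normal trait scores
    retest = [s + random.gauss(0, 0.3) for s in first]    # small test-retest noise
    if type_of(first) != type_of(retest):
        changed += 1

print(f"{100 * changed / n:.1f}% of simulated respondents changed type on retest")
```

Even though each underlying score barely moves between the two sittings, a substantial share of simulated respondents sit near a cutoff, cross it, and receive a different four-letter type — the mechanism described above, compounded across four independent dichotomies.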
To put it simply, despite only a slight shift in someone's preference or personality, the MBTI may report a completely different type. Whether or not the instrument is comprehensive is another factor to consider. Unlike the PEN, Big Five, and HEXACO models, the MBTI doesn't have a scale that accounts for neuroticism. Neuroticism is a trait associated with anxiety, distress, and emotional instability [17], and while we all have it to some degree, it is nowhere to be seen in the MBTI. It is possible that neuroticism is entangled with other dimensions in the MBTI, which might reflect errors in factor analysis. Without a neuroticism scale, the MBTI may lack comprehensiveness, as it may struggle to explain psychopathology. With all the criticisms the MBTI has received and the availability of other psychometric alternatives, we may understand why a large number of scientists disregard it, and why it is far from ubiquitous in scientific research. The scientific study of personality has always been subject to criticism and scrutiny. Some theories and concepts have failed to hold up; unfortunately, the MBTI may be one of them. The MBTI as a psychometric test is very simplistic, considering the intricacy of personality theory. However, we have to bear in mind that this does not mean the MBTI is entirely unscientific. MBTI Applications: Are They Any Good? In Vox's article 'Why the Myers-Briggs test is totally meaningless', the author suggests that "The Myers-Briggs is useful for one thing: entertainment" [18]. Of course, this includes the fun tests and quizzes that we have all taken at some stage to pass the time, like a BuzzFeed quiz. Some might ask: if it is only entertainment, why do the type descriptions seem so accurate and, in many people's opinions, sound so convincing? In response to this question, some articles suggest that this phenomenon results from the Forer effect [18-20].
The Forer effect refers to the phenomenon in which a person believes that a specific description applies to them when the description is actually vague enough to apply to almost everyone. The Forer effect is said to be exploited in horoscope writing as well [20]. But is that really it? Is the MBTI good for nothing but entertainment? Perhaps that claim isn't wholly true after all. Montequin et al. [21] found that MBTI profiles were associated with success in project-based learning, concluding that a group's composition and dynamics may influence the group's success or failure, which may be attributed to the type of leadership present within the group. Not only that, the types/dimensions were also found to correlate with choice of communication media in a study published in 2006 [22]. The researcher showed that a person's personality type might have a significant effect on their willingness to embrace online communication [22]. The extroversion-introversion dimension was observed to have the most substantial impact, while the effects of the judging-perceiving and thinking-feeling dimensions were significant but somewhat smaller. Meanwhile, research conducted at a Syrian university illustrated clear relationships between the sensing-intuition dimension and the distribution of students among faculties, students' GPA, and even whether or not students liked the subjects they selected [23]. Even though the MBTI may not be as widely used in scientific research as in organisations and corporate industries, its benefits undoubtedly extend beyond entertainment. It may not predict job performance, but it can provide a practical tool for observing individuals' preferences [24]. With that knowledge and understanding, we can make decisions that satisfy a person's preferences, perhaps maximising their efficiency and productivity.
Things to Consider Ultimately, opinions on the MBTI's validity, reliability, applicability, and comprehensiveness are nowhere near one-sided. There is a mixture of findings and comments, both for and against its use. Because of this, it is essential to take different perspectives and approaches when evaluating the MBTI. So here are some points worth considering: Myers and Briggs may not have had an academic background in psychology, but they were educated, astute observers who were very passionate about understanding people. They worked collaboratively alongside other professionals who helped them develop and standardise the instrument. It is understandable why categorising people into two discrete groups per dichotomy is problematic. Basing the dichotomies on Jung's types may be dubious, but the concept of extroversion (and introversion) is accepted in the modern scientific era [11]. We still see the term used today, though perhaps in a modified form (e.g., introversion and extroversion become low and high levels of extroversion). Yes, the MBTI may describe a person's preference towards a particular event or aspect of life, but do the descriptions only account for central tendency (how individuals behave on average)? As argued, MBTI descriptions may indeed be simplistic [14] and may be insufficient for explaining the variability of behaviour from situation to situation. The MBTI's comprehensiveness is criticised chiefly for its lack of a neuroticism scale, but do we actually need one? Is it possible to break down the existing MBTI dichotomies into facets that would allow us to describe neuroticism? Or perhaps it would be better to disentangle the aspects of neuroticism embedded in introversion, creating a new dimension (the framework used by 16personalities.com, which evolved from the classic MBTI, may have touched on this by adding the Assertive-Turbulent dimension).
There is still research that finds reasonable reliability and validity in the MBTI. Meta-analyses conducted by Capraro and Capraro in 2002 [25] and by Randall and colleagues in 2017 [27] concluded that the MBTI in fact has decent reliability and validity, and both found consistent test-retest correlations. Furthermore, reasonable construct validity was also found by Randall et al. [27]. The research suggested that the MBTI does measure personal preferences consistent with Carl Jung's typology. With this in mind, Pittenger's claim of poor test-retest reliability and his criticisms of the instrument's validity are themselves in question [15]. The arguments he made may have resulted from the inclusion of only old data and the omission of some test-retest scores [24]. Over time, the MBTI has undergone several revisions and re-standardisations, and its validity and reliability may have improved accordingly. The use of the MBTI may not be ubiquitous in modern-day scientific research compared to the NEO-PI-R or HEXACO-PI-R, but it nevertheless enjoys wide popularity in other areas. The MBTI may not be flawless (nothing is), and it may not be the best instrument available, but it has been used by numerous multinational organisations, and it might be helpful in various situations after all. "I dream that long after I'm gone, my work will go on helping people." Isabel Myers, 1979 References [1] The Myers & Briggs Foundation. “The purpose of the Myers-Briggs Type Indicator.” The Myers & Briggs Foundation. https://www.myersbriggs.org/my-mbtipersonality-type/mbti-basics/ (accessed Jul. 12, 2021) [2] J. Nguyen. “How companies use the Myers-Briggs system to evaluate employees.” marketplace.org. https://www.marketplace.org/2018/10/30/myers-briggs-systemevaluate-employees/ (accessed Jul. 12, 2021) [3] 16Personalities. “Personality Types.” 16personalities.com. https://www.16personalities.com/personalitytypes (accessed Jul.
12, 2021) [4] Center of Application of Psychological Type. “The Story of Isabel Briggs Myers.” capt.org. https://www.capt.org/mbti-assessment/isabel-myers.htm (accessed Jul. 12, 2021) [5] M. Vernon. “Carl Jung, part 5: Psychological types.” theguardian.com. https://www.theguardian.com/commentisfree/belief/2011/jun/27/carl-jungpsychological-types (accessed Jul. 12, 2021) [6] Team Lapaas. “Jung’s personality theory- 4 functions and 8 types.” lapaas.com. https://lapaas.com/jungspersonality-theory-4-functions-and-8-types/#TheFour-Function-of-Human-Personality (accessed Jul. 12, 2021) [7] B. Dunning. “The Myers-Briggs Personality Test.” skeptoid.com. https://skeptoid.com/episodes/4221 (accessed Jul. 12, 2021) [8] Anon. “Personality Type Explained.” humanmetrics.com. http://www.humanmetrics.com/personality/type (accessed Jul. 12, 2021) [9] I. B. Myers and P. B. Myers, Gifts Differing. Mountain View, CA: Davies-Black Publishing, 1980. [10] A. Grant. “Say Goodbye to MBTI, the Fad That Won’t Die.” LinkedIn. https://www.linkedin.com/pulse/20130917155206-69244073-say-goodbye-to-mbti-the-fad-that-won-t-die/ (accessed Jul. 13, 2021) [11] J. A. Johnson. “Are Scores on the MBTI Totally Meaningless?” Psychologytoday.com. https://www.psychologytoday.com/nz/blog/cui-bono/201603/arescores-the-mbti-totally-meaningless (accessed Jul. 13, 2021) [12] S. McLeod. “Carl Jung.” Simplypsychology.org. https://www.simplypsychology.org/carl-jung.html (accessed Jul. 13, 2021) [13] D. Cervone and L. A. Pervin, Personality: Theories and Research, 2011 ed. NJ: John Wiley & Sons, 2011. [14] Anon. “Myers Briggs Criticisms.” teamtechnology.co.uk. https://www.teamtechnology.co.uk/myers-briggs-criticisms.html (accessed Jul. 13, 2021) [15] D. J. Pittenger, “Measuring the MBTI...And Coming Up Short,” Journal of Career Planning and Employment, vol. 54, Jan 1993. [Online]. Available: https://www.researchgate.net/publication/237675975_Measuring_the_MBTI_and_coming_up_short [16] R. R. McCrae, A. R.
Sutin, “A Five-Factor Theory Perspective on Causal Analysis,” Eur J Pers, vol. 32, issue 3, pp. 151-166, Jan. 2018. [17] N. C. Weed. “Neuroticism.” britannica.com. https://www.britannica.com/science/neuroticism (accessed Jul. 13, 2021) [18] J. Stromberg and E. Caswell. “Why the Myers-Briggs test is totally meaningless.” Vox.com. https://www.vox.com/2014/7/15/5881947/myers-briggs-personality-test-meaningless (accessed Jul. 12, 2021) [19] M. Moffa. “A Critique of The Myers Briggs Type Indicator (MBTI)—Part Two: a Personal Review.” recruiter.com. https://www.recruiter.com/i/a-critique-of-the-myersbriggs-type-indicator-mbti-part-two/ (accessed Jul. 14, 2021) [20] A-L. L. Cunff. “The comforting pseudoscience of the MBTI.” nesslabs.com. https://nesslabs.com/mbti (accessed Jul. 14, 2021) [21] V. R. Montequín, J. M. Mesa Fernández, J. V. Balsera, A. G. Nieto, “Using MBTI for the success assessment of engineering teams in project-based learning,” International Journal of Technology and Design Education, vol. 23, pp. 1127-1146, Nov. 2013, doi: https://doi.org/10.1007/s10798-012-9229-1 [22] V. P. Goby, “Personality and Online/Offline Choices: MBTI Profiles and Favored Communication Modes in a Singapore Study,” Cyberpsychology & Behaviour, vol. 9, issue 1, pp. 5-13, 2006. [Online]. Available: https://www-liebertpub-com.ezproxy.auckland.ac.nz/doi/pdf/10.1089/cpb.2006.9.5 [23] R. M. Ayoubi, B. Ustwani, “The relationship between student’s MBTI, preferences and academic performance at a Syrian university,” Education & Training, vol. 56, issue 1, pp. 78-90, 2014, doi: http://dx.doi.org.ezproxy.auckland.ac.nz/10.1108/ET-09-2012-0090 [24] A. M. Gordon. “In Defense of the Myers-Briggs: A comprehensive counter to anti-MBTI hype.” Psychologytoday.com. https://www.psychologytoday.com/us/blog/my-brothers-keeper/202002/in-defensethe-myers-briggs (accessed Jul. 14, 2021) [25] R. M. Capraro and M. M.
Capraro, “Myers-Briggs Type Indicator Score Reliability Across Studies: A Meta-Analytic Reliability Generalization Study,” Educational and Psychological Measurement, vol. 62, issue 4, pp. 590-602, Aug. 2002, doi: 10.1177/0013164402062004004 [27] K. Randall, M. Isaacson, C. Ciro, “Validity and Reliability of the Myers-Briggs Personality Type Indicator: A Systematic Review and Meta-analysis,” Journal of Best Practices in Health Professions Diversity, vol. 10, issue 1, pp. 1-27, 2017. [Online]. Available: https://www.jstor.org/stable/26554264
- The Loudness Wars
By Stella Huggins Music is an enormous part of human cultural patterns. The rise of vinyl records in the 1950s increased the ubiquity of music worldwide and consolidated human habits surrounding it. Literature and anecdotal evidence suggest that, in general, individuals enjoy music to be louder, especially in social contexts [1]. Record companies in the early 1950s decided to capitalise on this fact, commencing what has become known as the ‘Loudness Wars’ [2]. Photo by Tatonomusic on Unsplash. Sound engineering techniques such as compression and equalisation have been co-opted to ensure the loudest possible auditory experience for the listener [3]. The intention is to make the record stand out in relation to others on offer. However, the idea was snatched up by multiple producers, hence the term “war”: it became a battle to make your record the loudest and most noticeable. Metallica’s 2008 album ‘Death Magnetic’ is notorious for its employment of the tactic [4], and it prompted a cultural pushback against the idea that louder is better. Music critics and fans condemned the record, claiming the loudness degraded the sound quality and made the experience unenjoyable. While that’s a fitting controversy for a death metal band, and may bode well for record sales initially, the knock-on effects for the population’s hearing quality have been fairly disastrous. Hearing is a sense that is overwhelmingly taken for granted. The devastating thing about it is that once it is lost, it is irreversibly so [5]. The only cure for hearing loss is prevention, and many individuals who experience hearing loss report distraction, anxiety and distress, especially in the instance of tinnitus [6]. Music and sound are some of the most enjoyable human experiences possible. Music as a therapy is effective for a number of conditions [7].
Music therapy has been shown to be one of the only treatments for dementia patients when other capabilities for language are degraded [8]. Music can aid mental health and support cognitive development. Sound therapy in general can provide a myriad of benefits [9]. Hearing is a valuable and finite sense that we can only appreciate fully while we still have it. Integration of our five senses builds the world that we perceive. When one sense is depleted, our experience of the world becomes degraded in the short term while we adjust. Neuroplasticity aids this adjustment process, though it takes some individuals longer to build a new perceived world when their hearing is damaged. The hearing system is incredibly complex. The process begins with the external structure called the pinna, what most people think of when we talk about the ear. The pinna’s shape funnels sound and helps localise it, directing it to the internal structures that perceive it and subsequently code it into sensory information. These sound wave vibrations reach the eardrum, which vibrates against a set of tiny bones in the middle ear called the malleus, incus, and stapes. These bones amplify the vibrations and transmit them to the cochlea, a curled-up or ‘snail-shaped’ organ in the ear. The cochlea is filled with fluid, and tiny hair cells sit along the spiral-shaped organ, moving with the fluid’s movement [10]. These hair cells are often the targeted structure when talking about noise-induced hearing loss (NIHL), but there are a number of postulated mechanisms of damage to inner ear structures. Mechanical damage to the organ of Corti (an inner ear structure), excitotoxic damage to the auditory nerve or its synapses (the junctions between neurons) involved in the perception of sound, loss of cochlear sensory hair cells, and loss of auditory nerve fibres are all possible causes of NIHL [11]. Damage to the inner ear structures that help us perceive sound is often caused by excessive and prolonged exposure to noise [12].
In a world that is only getting noisier, The Loudness Wars themselves are easing, but their cultural and material remnants remain. Each subsequent generation faces a new hearing challenge. Baby Boomers and Generation X are dealing with the flow-on effects of The Loudness Wars and will most likely not rectify their behaviour in accordance with the new-found knowledge of its dangers. While record companies themselves employ the tactic less often (as a louder record actually degrades sound quality and often drowns out nuances in the track that enhance the song), personal listening devices such as headphones and speakers have increased the amount of time we’re exposed to sound. This poses a further challenge for Generation Z — and while new tools such as exposure measurements on iPhones are aiding awareness, the culture remains. The Loudness Wars revolutionised our attitudes to acceptable music volumes, and have caused us as a collective to become less in touch with the warning signs our body sends us. Often people mistake natural bodily adaptations after noise exposure, such as at gigs, for indications of recovery. For example, the day after a concert you may experience temporary hearing loss which usually then recovers. Some people experience tinnitus, a perceived ringing, roaring, or buzzing in the ears that does not have a source in the external world. There are a number of tinnitus types, ranging from acute to chronic and, more rarely, objective [13]. It can be a constant or intermittent experience, but either way it can be distressing for the patient. It is associated with hearing loss, and an underlying cause is postulated to be brain pathways (within the auditory system) being activated at inappropriate times. Essentially, pathways become hyperactive when there is no external stimulation present, leading to a ‘phantom’ noise (ringing). Hearing loss is not recoverable without hearing aids. Photo by Severin Candrian on Unsplash.
When people intermittently experience tinnitus after periods of excessive noise exposure, or when sounds seem muffled for a while and then return to normal, they may perceive this as recovery from the noise exposure. This is not the case. Hearing loss cannot be recovered [14]. What people are experiencing here is a result of the body’s incredible ability to adapt to external noise conditions: the inner ear structures adjust to the external environment. The most commonly recognised type of tinnitus is noise-induced. The theory underlying this subjective type is that excessive and prolonged exposure to sound damages the stereocilia (hair cells) within the cochlea so that they become flattened or bent (when they should be uniform and relatively straight). These hair cells operate by making small movements, which trigger neurotransmitter release, subsequently activating auditory neurons (in CN VIII); in short, the perception of sound. After damage, the structure and layout of the cochlea become maladaptive. High-frequency sounds are perceived at the base of the cochlea, so the hair cells there are the first to be damaged by loud noise. Therefore, the hypothesised cause of tinnitus is that these damaged hair cells respond in a hyperactive manner, causing the high-frequency buzzing or ringing that so many patients describe [15]. Consider briefly how much you rely on sound to be in touch with your external world. Crossing roads, hearing nuances in conversation, picking up subtleties in music: the vibrations that travel through the air and into our ears are so often taken for granted. Sound is imperative to an already-hearing person, used to operating in a hearing world and presumably reliant on hearing to navigate it. For this reason, we have to get better at preserving our hearing, or at least mitigating the effects of its loss.
A number of initiatives such as the Dangerous Decibels programme have been employed, though they show the most promise in younger individuals [16]. Whether this speaks to the resistance of adults to change or the optimism of children in adopting new habits and behaviours in the interest of their health, the outcome is the same: adults are losing their hearing from acquired hearing loss at rapid rates [17]. Societal attitudes always trudge in a slow, reluctant fashion after developing technologies and scientific findings. The knowledge that hearing loss is irreversible tends to create brief moments of panic for individuals, and then a steady continuation of behaviour that degrades the sense. And thus, attitudes must change rapidly if we are to avoid a crisis of sensory depletion. References [1] Welch, D., & Fremaux, G. (2017). Why Do People Like Loud Sound? A Qualitative Study. International Journal of Environmental Research and Public Health, 14(8), 908. https://doi.org/10.3390/ijerph14080908 [2] Clark, B. (2019, October 29). The Loudness War Explained. Musician Wave. https://www.musicianwave.com/the-loudness-war/. [3] De Man, B., & Reiss, J. D. (2013). A Semantic Approach To Autonomous Mixing. Journal on the Art of Record Production, (8). ISSN: 1754-9892 [4] Buskirk, E. V. (2008, September 16). Analysis: Metallica’s Death Magnetic Sounds Better in Guitar Hero. Wired. https://www.wired.com/2008/09/does-metallicas/. [5] Hong, O. S., Kerr, M. J., Poling, G. L., & Dhar, S. (2013). Understanding and preventing noise-induced hearing loss. Disease-a-Month, 59(4), 110–118. https://doi.org/10.1016/j.disamonth.2013.01.002 [6] Wilson, P. H., Henry, J., Bowen, M., & Haralambous, G. (1991). Tinnitus Reaction Questionnaire. Journal of Speech, Language, and Hearing Research, 34(1), 197–201. https://doi.org/10.1044/jshr.3401.197 [7] Davis, W. B., Gfeller, K. E., & Thaut, M. (2008). An introduction to music therapy theory and practice. American Music Therapy Assoc.
[8] Brotons, M., & Koger, S. M. (2000). The Impact of Music Therapy on Language Functioning in Dementia. Journal of Music Therapy, 37(3), 183–195. https://doi.org/10.1093/jmt/37.3.183 [9] Hobson, J., Chisholm, E., & El Refaie, A. (2010). Sound therapy (masking) in the management of tinnitus in adults. Cochrane Database of Systematic Reviews. https://doi.org/10.1002/14651858.cd006371.pub2 [10] U.S. Department of Health and Human Services. (n.d.). How Do We Hear? National Institute of Deafness and Other Communication Disorders. https://www.nidcd.nih.gov/health/how-do-we-hear. [11] Le Prell, C. G., Yamashita, D., Minami, S. B., Yamasoba, T., & Miller, J. M. (2007). Mechanisms of noise-induced hearing loss indicate multiple methods of prevention. Hearing Research, 226(1-2), 22–43. https://doi.org/10.1016/j.heares.2006.10.006 [12] Nelson, D. I., Nelson, R. Y., Concha-Barrientos, M., & Fingerhut, M. (2005). The global burden of occupational noise-induced hearing loss. American Journal of Industrial Medicine, 48(6), 446–458. https://doi.org/10.1002/ajim.20223 [13] Esmaili, A. A., & Renton, J. (2018). A review of tinnitus. Australian Journal of General Practice, 47(4), 205–208. https://doi.org/10.31128/ajgp-12-17-4420 [14] Cheesman, M., & Steinberg, P. (2010). Health surveillance for noise-induced hearing loss (NIHL). Occupational Medicine, 60(7), 576–577. https://doi.org/10.1093/occmed/kqq125 [15] Kraus, K. S., & Canlon, B. (2012). Neuronal connectivity and interactions between the auditory and limbic systems. Effects of noise and tinnitus. Hearing Research, 288(1-2), 34–46. https://doi.org/10.1016/j.heares.2012.02.009 [16] Griest, S. E., Folmer, R. L., & Martin, W. H. (2007). Effectiveness of “Dangerous Decibels,” a School-Based Hearing Loss Prevention Program. American Journal of Audiology, 16(2). https://doi.org/10.1044/1059-0889(2007/021) [17] World Health Organization. (n.d.). Deafness and hearing loss. World Health Organization.
https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearingloss.
- The Use of Science in Art Conservation
By Louisa Ren

Vincent van Gogh was known for his vibrant paintings. Photo by Fan Yang on Unsplash.

Since the earliest days of human existence, humans have created art to express creativity. Art is considered by some to be a hallmark of what makes humans human, and it often tells us how past cultures developed, which is why art conservators want to preserve it for future generations [1]. Art comes in many forms: paintings, sculptures, architecture, literature, music, crafts, and more. What counts as art can be subjective, and what kind of art should be preserved can be even more so, but neither question is the main topic of this article, nor does this article aim to discuss art history and ethics in great depth. There is a lot of science involved in art conservation, and this article is a simple attempt to discuss a few of the many concepts involved in conserving paintings.

Although the terms ‘art conservation’ and ‘art restoration’ are sometimes used interchangeably, they are not the same thing. Art conservation is mainly about examination and preventive care against further damage, whereas art restoration is about repairing damage so that the artwork mimics its original state. Many art preservation projects involve a combination of both [2]. This article will begin with some colour and paint chemistry, followed by a few lab techniques used by art conservation scientists to assess damage and paint losses, and will end by briefly discussing what is done with the artwork afterwards.

Photo by Tabitha Turner on Unsplash

But First, What Even is Paint?

Generally, the key components of paint are a binder and pigment(s), and sometimes a solvent and additives [3]. The binder is the liquid component that holds the pigments together and allows the paint to be spread; it also forms a film on the surface as the paint dries. In the past, eggs, glue, and vegetable oils were common paint binders.
Nowadays, paint binders tend to be natural or synthetic resins such as vinyls, though this depends on the kind of paint. Pigments are inorganic or organic compounds (often ground into a powder that gets mixed with binders and solvents) that colour other materials when combined with a binder [4].

An inorganic pigment typically gets its colour from electrons in the d-orbitals of the material’s metal ions transitioning between energy states. Transition metals like cobalt or cadmium tend to be used for inorganic pigments because their ions have partially filled d-orbitals, which can interact with the surrounding ligands. Repulsion between the d-orbital electrons of the transition metal ion and the electrons of the ligands raises the energy of the d-orbitals, but not all orbitals are raised by the same amount, so they split into a higher-energy and a lower-energy group. When a d-electron is excited from the lower group to the higher group, the energy gap between the groups corresponds to the absorption of a specific wavelength of visible light, and the colour seen in the pigment is complementary to the colour that was absorbed [5]. A similar process happens in organic pigments, but usually involving conjugated double bonds instead [6]. Some colours need more energy to be absorbed than others [5].

Making paints is probably one of the oldest forms of applied chemistry [7]. Natural clay earth pigments known as ochre, made from mixtures of clay, sand, and ferric oxide (iron(III) oxide, occurring naturally as the mineral hematite), are the oldest pigments found so far that were used by humans. An ochre-processing workshop dating back around 100,000 years was found in Blombos Cave in South Africa [8]. The earliest known drawing was also found there in 2011: criss-crossed line patterns etched on a rock with red ochre, dating back 73,000 years [9].
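The crystal-field picture described above boils down to a simple relationship: the energy gap between the split d-orbital groups fixes the wavelength of light absorbed, via E = hc/λ. The sketch below is an illustrative calculation only; the 2.0 eV splitting is a made-up example value, not data for any specific pigment.

```python
# With energy in eV and wavelength in nm, h*c is approximately 1239.84 eV·nm.
PLANCK_HC_EV_NM = 1239.84

def absorbed_wavelength_nm(splitting_ev: float) -> float:
    """Wavelength absorbed when a d-electron jumps the crystal-field energy gap."""
    return PLANCK_HC_EV_NM / splitting_ev

# Example: a hypothetical pigment with a 2.0 eV splitting absorbs ~620 nm
# (orange-red) light, so the pigment appears in the complementary blue-green range.
print(round(absorbed_wavelength_nm(2.0)))  # 620
```

A larger splitting means higher-energy (shorter-wavelength) light is absorbed, which is why ligand environment alone can shift a metal ion between quite different colours.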
Over the following millennia of recorded history, civilisations across the world developed more ways to make pigments and dyes. Some used rare materials, like the bright lapis lazuli blue that could only be extracted from Afghan mines and was worth more than gold in medieval times, or the Tyrian purple extracted from the mucus of sea snails (Bolinus brandaris, also known as Murex brandaris) that only members of the Byzantine imperial court were allowed to use [10]. Some pigments were later found to be harmful, such as the brilliant emerald shades of Scheele’s green and, later, Paris green that were popular in the 19th century, both of which contained arsenic (copper arsenite and copper(II) acetate triarsenite, respectively) [11]. In more modern times, with a better understanding of paint chemistry, these colours can be more easily replicated through industrial processes with synthetic polymers, and many pigments are generally less harmful than they used to be [7].

How do Pigments in Artwork Degrade Over Time?

Pigments degrade and fade over time due to environmental factors such as light and moisture, and past conservation or restoration efforts can contribute to further deterioration too [12]. Organic pigments in particular are prone to their chemical bonds breaking down under light (especially ultraviolet light), causing gradual discolouration. A notable example is a lake pigment based on the dye eosin Y that was popular with 19th-century artists like Vincent van Gogh. Strong light absorption by the many double bonds alternating with single bonds (a conjugated π system) in its chemical structure gives the pigment its vibrant red colour, which fades quickly after prolonged light exposure [13]. Many of van Gogh’s canvas paintings were displayed in houses with insufficient light control, so the faded colours posed a challenge for art conservators trying to identify the eosin pigments in the paintings that used them.
Fortunately, these fading pigments leave behind traces of the bromine atoms from the eosin Y structure on the canvas, which can be identified using spectroscopy [14].

Humidity can also gradually affect the state of paint on artworks. As humidity increases, moisture from the atmosphere can accumulate on the painting’s surface and oxidise pigments in the paint, contributing to discolouration. For example, some of the yellow cadmium paint used in Edvard Munch’s painting The Scream (ca. 1910) has turned off-white and begun flaking. Studies showed the discolouration is likely due to moisture interacting with chloride compounds in the paint, causing the cadmium sulfide (CdS) in the yellow oil-based paint to gradually oxidise into white cadmium sulfate (CdSO4) [15]. Excessive humidity can also encourage mould or mildew growth on the canvas. Low humidity isn’t good either, because it can dry out the water content binding the artwork together at a molecular level, leaving the (often fragile) artwork to become brittle and break apart. Some materials also expand or contract as humidity changes, which can weaken them [16]. For these reasons, artworks need to be kept in environments with controlled humidity, but this isn’t always easy when the artwork is displayed in a museum for thousands of people to see every day.

The oldest human-made drawing found so far, etched with red ochre onto silcrete stone.

Where Art and Science Meet

Conserving or restoring a painting is multidisciplinary and not a ‘one size fits all’ process, but knowing the chemical composition of the paints used by the original artist and the history of the painting (e.g., how it was created and whether there were any past restoration attempts) is usually a good start.
Art conservation wasn’t always done with a scientific approach; some conservation and restoration attempts in the past resulted in more damage to the painting in the long term, whether due to mishandling or to chemicals that turned out to be too harsh for the paint. The risk of adding more damage to an artwork is partly why art conservation projects are sometimes controversial, especially for famous, irreplaceable artworks [17].

With the more scientific approach used in the field nowadays, where conservators prefer precise, minimal-intervention methods, the conservation process often starts with assessing the painting using microscopic chemical analysis methods such as Raman spectroscopy, FT-IR (Fourier transform infrared microspectroscopy), X-radiography, or microfadeometry, to name a few common ones.

Raman spectroscopy is widely used in science, and in this case it is particularly useful for identifying individual pigments and their degradation products. It is a non-destructive chemical analysis technique that analyses interactions between light (often a laser) and chemical bonds to reveal information about a material’s structure and other properties [18]. Knowing this information about an artwork can also provide insight into its original state and help prove its authenticity.

FT-IR is another microscopic chemical analysis technique, useful for identifying inorganic and organic pigments as well as any varnishes or other protective coatings the artist may have used. It works by analysing the infrared spectrum of a sample (i.e., how much light the sample absorbs at each frequency), and the sample can be as small as 1 nanogram. This is useful because conservation scientists are able to take as little material as possible from the artwork. Like UV-vis spectroscopy, FT-IR measures absorption, but it works in the infrared range and shines many light frequencies on the sample at the same time, making it quicker and more accurate [19].
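To give a feel for the numbers behind Raman spectroscopy, mentioned above: the instrument reports the shift, in wavenumbers, between the laser light and the inelastically scattered light, and each pigment has a characteristic set of shifts. The sketch below is just the underlying unit conversion, not the interface of any real instrument, and the example wavelengths are invented.

```python
def raman_shift_cm1(excitation_nm: float, scattered_nm: float) -> float:
    """Raman shift in cm^-1: the difference between the two wavenumbers.

    1 nm^-1 equals 1e7 cm^-1, hence the 1e7 factors.
    """
    return 1e7 / excitation_nm - 1e7 / scattered_nm

# Example: a 532 nm laser whose scattered light emerges at 545 nm corresponds to a
# shift of ~448 cm^-1, which could be compared against reference pigment spectra.
print(round(raman_shift_cm1(532.0, 545.0)))  # 448
```

Because the shift depends only on the vibrational energy of the bonds, not on the laser used, the same pigment gives the same shift pattern on different instruments, which is what makes library matching possible.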
X-radiography is also often used to reveal details that the human eye can’t easily see (such as holes or tears in the canvas), and it can show hidden layers of underdrawing or underpainting, as well as previous changes to the painting [20]. This information is useful when identifying forgeries or determining a timeline for the artwork’s creation. X-radiography uses the amount of electromagnetic radiation absorbed by the artwork to produce the radiograph, but some pigments show up better than others. Paints made with heavier elements, like lead white, absorb a lot of radiation and therefore show up strongly on the radiograph, whereas carbon black paints absorb very little and barely register. This is because carbon atoms have far fewer electrons, and fewer protons in the nucleus, than lead atoms. The amount of radiation used in X-rays for art conservation is significantly less than for medical X-rays, so damage to the artwork is not generally a concern for conservators [21].

Microfadeometry is a relatively new non-contact and mostly non-destructive technique for finding out how light exposure affects the colour of pigments. It uses a microfading tester, an instrument that focuses a tiny amount of UV-filtered light from a powerful xenon arc lamp onto an area of the artwork just 0.3 to 0.4 mm across. This can mimic years of light exposure on the pigments, providing useful information to art conservators about optimal display conditions for the artwork [22].

While non-invasive methods are preferred, conservators will sometimes also analyse tiny microscopic samples of material from the artwork, as long as the samples are carefully documented and taken from areas of pre-existing damage. Chromatography techniques are often used to identify varnishes and binders in paint mixtures by separating the components of such samples [23].
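The radiographic contrast between lead white and carbon black described above follows from the Beer–Lambert attenuation law. The coefficients in the sketch below are rough, illustrative numbers for X-rays of a few tens of keV, not tabulated reference data, and the paint-layer thickness is invented.

```python
import math

def transmitted_fraction(mu_over_rho_cm2_g: float, density_g_cm3: float,
                         thickness_cm: float) -> float:
    """Beer–Lambert law: fraction of X-ray intensity passing through a layer,
    I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_over_rho_cm2_g * density_g_cm3 * thickness_cm)

# Illustrative comparison for a thin (0.005 cm) paint layer:
lead_white = transmitted_fraction(50.0, 6.0, 0.005)   # heavy element: strong absorber
carbon_black = transmitted_fraction(0.5, 1.8, 0.005)  # light element: weak absorber
# Nearly all the radiation passes through carbon black, while lead white blocks a
# large fraction, which is why lead white areas stand out clearly on the radiograph.
```

The exponential dependence on both the coefficient and the thickness means even a thin lead white underlayer leaves a visible trace, a property conservators exploit when looking for hidden underpaintings.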
Using these techniques, conservation scientists can find out more about the properties of the materials used, which can give insight into how the artist created the work. They can also determine the deterioration factors or risks that the artwork may be susceptible to and advise art conservators on how to proceed.

People working on art preservation projects have to handle artworks carefully so they don’t accidentally cause more damage. Some attempts to clean valuable artworks have even been controversial, despite the aim of returning the work to how the artist intended it [17]. A painting will naturally deteriorate and accumulate impurities over time, especially if it has not been well cared for. Completed paintings usually have a layer of transparent varnish over the paint that helps protect it from dust and dirt, but older varnishes made from natural materials are more susceptible to deterioration. As varnish ages, it can lose transparency and affect the colour of the paints underneath. Based on the information gathered from examining the artwork, the old varnish might be removed and replaced with a newer, more durable varnish, chosen based on what is known about the painting’s chemical composition and the environment where it will be displayed [24].

Outright repainting over the artwork is considered unethical, but damaged areas of the canvas or sections that suffered paint loss can be carefully restored by a well-trained art conservator in a process known as inpainting. However, this process must be documented to comply with art ethics guidelines, which require all changes made during restoration to be easily identifiable and reversible [25].

When the field of art conservation combines art with science, we can see the results on display in museums and galleries even centuries after the artist has passed.
Despite the relatively imprecise beginnings of the field, art conservation grows ever more multidisciplinary as newer technologies develop, including methods like removing accumulated impurities from artwork with lasers, repairing and reforming damaged areas with nanoparticles, and using bacterial enzymes to clean dirt [17]. Painting as an art form has come a long way since the earliest ochre cave paintings, and hopefully, future generations will still be able to see artworks from centuries ago with their own eyes too.

It can be difficult to control conditions for preserving paintings while also allowing the public to enjoy art. Photo by Andrew Neel on Unsplash.

References

[1] “The Science Behind the Restoration of a Painting,” Invaluable, 12-Jun-2019. [Online]. Available: https://www.invaluable.com/blog/the-science-behind-art-restoration/. [Accessed: 25-Jul-2021].
[2] J. H. Larson, J. C. Podany, A. L. Rosenthal, N. S. Brommelle, D. W. Insall, and F. Zuccari, “Art Conservation and Restoration,” Encyclopædia Britannica. [Online]. Available: https://www.britannica.com/art/art-conservation-and-restoration. [Accessed: 25-Jul-2021].
[3] T. Bosveld, “Paint Ingredients: What’s In Paint?,” Dunn Edwards Paints. [Online]. Available: https://www.dunnedwards.com/colors/specs/posts/what-is-paint-made-of. [Accessed: 25-Jul-2021].
[4] J. Janson, “The Anatomy of Paint: Pigment and Binder,” Vermeer’s Palette: The Anatomy of Pigment and Binder. [Online]. Available: http://www.essentialvermeer.com/palette/palette_anatomy_of_paint.html. [Accessed: 25-Jul-2021].
[5] J. Clark, “8.3.6: Origin of Color in Complex Ions,” Chemistry LibreTexts, 16-Sep-2020. [Online]. Available: https://chem.libretexts.org/Courses/University_of_California_Davis/UCD_Chem_124A:_Fundamentals_of_Inorganic_Chemistry/08:_Coordination_Chemistry/8.03:_Complex_Ion_Chemistry/8.3.06:_Origin_of_Color_in_Complex_Ions. [Accessed: 25-Jul-2021].
[6] V. Bozhulich, “The Chemistry of Pigments and How Scientists Prevent Color Degradation,” inChemistry, 23-Jun-2020. [Online]. Available: https://inchemistry.acs.org/atomic-news/chemistry-of-pigments.html. [Accessed: 25-Jul-2021].
[7] H. G. Friedstein, “A short history of the chemistry of painting,” Journal of Chemical Education, vol. 58, no. 4, p. 291, 1981.
[8] C. S. Henshilwood, F. D’Errico, K. L. V. Niekerk, Y. Coquinot, Z. Jacobs, S.-E. Lauritzen, M. Menu, and R. Garcia-Moreno, “A 100,000-Year-Old Ochre-Processing Workshop at Blombos Cave, South Africa,” Science, vol. 334, no. 6053, pp. 219–222, 2011.
[9] C. S. Henshilwood, F. D’Errico, K. L. V. Niekerk, L. Dayet, A. Queffelec, and L. Pollarolo, “An abstract drawing from the 73,000-year-old levels at Blombos Cave, South Africa,” Nature, vol. 562, no. 7725, pp. 115–118, 2018.
[10] C. Walsh, “Forbes pigment collection serves as teaching tool, resource, and even artwork,” Harvard Gazette, 05-Jan-2018. [Online]. Available: https://news.harvard.edu/gazette/story/2015/09/a-wall-of-color-a-window-to-the-past/. [Accessed: 25-Jul-2021].
[11] “Dangers in the Manufacture of Paris Green and Scheele’s Green,” Monthly Review of the U.S. Bureau of Labor Statistics, vol. 5, no. 2, 1917, pp. 78–83. JSTOR, www.jstor.org/stable/41829377. [Accessed: 25-Jul-2021].
[12] B. Halford, “Chemical studies reveal what’s making The Scream lose some of its vibrant color and how to prevent further degradation,” C&EN, 15-May-2020. [Online]. Available: https://cen.acs.org/analytical-chemistry/art-&-artifacts/Chemical-studies-reveal-what-making-The-Scream-lose-vibrant-color/98/i19. [Accessed: 25-Jul-2021].
[13] M. Yin, Z. Li, J. Kou, and Z. Zou, “Mechanism Investigation of Visible Light-Induced Degradation in a Heterogeneous TiO2/Eosin Y/Rhodamine B System,” Environmental Science & Technology, vol. 43, no. 21, pp. 8361–8366, 2009.
[14] S. Everts, “Van Gogh’s Fading Colors Inspire Scientific Inquiry,” C&EN, 01-Feb-2016. [Online]. Available: https://cen.acs.org/articles/94/i5/Van-Goghs-Fading-Colors-Inspire.html. [Accessed: 25-Jul-2021].
[15] L. Monico, L. Cartechini, F. Rosi, A. Chieli, C. Grazia, S. D. Meyer, G. Nuyts, F. Vanmeert, K. Janssens, M. Cotte, W. D. Nolf, G. Falkenberg, I. C. A. Sandu, E. S. Tveit, J. Mass, R. P. D. Freitas, A. Romani, and C. Miliani, “Probing the chemistry of CdS paints in The Scream by in situ noninvasive spectroscopies and synchrotron radiation x-ray techniques,” Science Advances, vol. 6, no. 20, 2020.
[16] “How Does Temperature and Humidity Affect Fine Art,” Emerald Transportation Solutions, 06-Apr-2020. [Online]. Available: https://emeraldtransportationsolutions.com/temperature-humidity-affect-fine-art/. [Accessed: 25-Jul-2021].
[17] R. Brazil, “Modern Chemistry Techniques Save Ancient Art,” Scientific American, 28-Jun-2014. [Online]. Available: https://www.scientificamerican.com/article/modern-chemistry-techniques-save-ancient-art/. [Accessed: 25-Jul-2021].
[18] G. D. Smith and R. J. Clark, “Raman microscopy in art history and conservation science,” Studies in Conservation, vol. 46, no. sup1, pp. 92–106, 2001.
[19] M. Carbó, F. Reig, J. Adelantado, and V. Martínez, “Fourier transform infrared spectroscopy and the analytical study of works of art for purposes of diagnosis and conservation,” Analytica Chimica Acta, vol. 330, no. 2-3, pp. 207–215, 1996.
[20] A. Anitha, A. Brasoveanu, M. Duarte, S. Hughes, I. Daubechies, J. Dik, K. Janssens, and M. Alfeld, “Restoration of X-ray fluorescence images of hidden paintings,” Signal Processing, vol. 93, no. 3, pp. 592–604, 2013.
[21] “X-ray light,” Pigments through the Ages. [Online]. Available: http://www.webexhibits.org/pigments/intro/xray.html. [Accessed: 25-Jul-2021].
[22] B. Ford, “Non-destructive microfade testing at the National Museum of Australia,” AICCM Bulletin, vol. 32, no. 1, pp. 54–64, Dec. 2011.
[23] K. Mosford, “The Art of Museum Conservation Using Solid-Phase Microextraction–Gas Chromatography–Mass Spectrometry,” Chromatography Online, 12-Nov-2020. [Online]. Available: https://www.chromatographyonline.com/view/art-museum-conservation-using-solid-phase-microextraction-gas-chromatography-mass-spectrometry. [Accessed: 25-Jul-2021].
[24] “Research on Varnishes for Paintings,” National Gallery of Art. [Online]. Available: https://www.nga.gov/conservation/science/varnishes-for-paintings.html. [Accessed: 25-Jul-2021].
[25] “Our Code of Ethics and Guidelines for Practice,” Cultural Heritage. [Online]. Available: https://www.culturalheritage.org/about-conservation/code-of-ethics. [Accessed: 25-Jul-2021].
- The Turing Test
By Hazel Watson-Smith

Image from Pixabay

The Turing Test was developed by the father of modern computer science, Alan Turing, to test the intelligence of a computer. Turing first introduced this adaptation of an old parlour game in his 1950 paper entitled “Computing Machinery and Intelligence” [1]. The original test involves an interviewer and two conversational partners: one computer and one actual human. After 5 minutes of questioning, the interviewer must decide which conversation was with the computer and which was with the human [1].

The original Turing Test assesses the text conversation abilities of a weak AI, an artificial intelligence that can only carry out a limited task [2]. Classic examples of narrow AI tasks are computation-heavy challenges like playing chess (IBM Deep Blue [3]), database search applications like playing Jeopardy! (IBM Watson [4]), and answering simple spoken questions (Siri [5] or Google Assistant [6]). The most common type of conversational AI is a chatbot.

Only a few chatbots have ever passed the Turing Test. The ‘first’ was Eugene Goostman in 2014 [7], a chatbot playing the role of a 13-year-old Ukrainian boy. Eugene only just passed the test, with 33% of judges mistaking him for the human (he needed 30% or more to pass) [8]. Some critics claimed that the character was too manipulative: judges were significantly more likely to excuse mistakes the AI made because it posed as a young boy speaking his second language. Given the low pass margin and the unfair character, many don’t count this as passing the test [9]. After having a quick chat with him myself, if he replied a bit slower, I might believe he was a random kid in Ukraine asking if I like borscht.

Most would agree that chatbots aren’t intelligent, yet we have chatbots that can pass the traditional Turing Test. Let’s say, for now, that being mistaken for a human equals intelligence.
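To see how unintelligent a chatbot can be under the hood, consider that many early chatbots were little more than pattern-matching rules with canned replies. Below is a toy sketch in that spirit; the rules and responses are entirely made up and are not how Eugene Goostman actually works.

```python
import re

# Hypothetical rules in the spirit of early pattern-matching chatbots such as ELIZA:
# each pair is (regex applied to the lowercased message, response template that may
# reuse the captured text).
RULES = [
    (r"i feel (.+)", "Why do you feel {}?"),
    (r"do you like (.+)", "I ask the questions here. Do YOU like {}?"),
    (r"\b(hello|hi|hey)\b", "Hello! Tell me something about yourself."),
]
FALLBACK = "Interesting. Tell me more."

def reply(message: str) -> str:
    """Return the first matching canned response, or a generic fallback."""
    text = message.lower().strip()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return FALLBACK
```

For example, `reply("I feel tired")` echoes the captured phrase back as a question, and anything unmatched gets the fallback. There is no understanding anywhere in this loop, which is exactly the critics’ point: fooling a judge for five minutes is a much lower bar than intelligence.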
This definition, although helpful in the pursuit of simplicity, becomes increasingly problematic as we move towards strong AI, or artificial general intelligence [10]. We have to start thinking about what makes a human a human. Is it appearance, image recognition, speech production and comprehension, or having memories, thoughts, opinions, moods and emotions? Does a machine also need to successfully navigate the physical world, manipulating objects and responding to a changing environment?

In 1965, Herbert A. Simon wrote in his book The Shape of Automation for Men and Management that “machines will be capable, within twenty years, of doing any work a man can do” [11]. As we are now well over three decades past this deadline, where are the general AIs? Ray Kurzweil, in The Singularity is Near, has since updated this timeline to expect a general AI by 2045 [12].

If a robot had general intelligence and could sit across from you, Ex-Machina style [13], and be interviewed, that would be the complete Turing Test. However, in this situation, I think we would have to pivot to using actual human tests of intelligence. Can it do times tables? Can it write a recount essay about what it did over the weekend? Can it go to the supermarket, buy ingredients, cook its favourite meal, and experience joy in sharing it with its loved ones? Can it look at any surface and know what it would feel like to lick (try it)? I could continue, but you get the idea.

Some of the hardest things for an AI to achieve are the things that we as humans do almost effortlessly. Chess and Jeopardy! are challenges, tests of human intelligence. The average human can’t beat Garry Kasparov at chess or win Jeopardy! every single time, but that’s almost all a narrow AI can do. Memorising the entirety of Wikipedia, or having the mental computational power to be multiple steps ahead of your opponent in a game of chess, is too hard for most humans, but Deep Blue and Watson could do this with their (metaphorical) eyes closed.
These tasks play to the strengths of narrow AIs; they play to the strengths of computers. Humans use a lot of embodied cognition, so in my opinion, we won’t have a true artificial general intelligence until it has a fully functioning body.

The mental processes behind a seemingly simple task like picking up a cup involve a multitude of complex subtasks: visually locating the cup, then moving your hand over to it, keeping in mind that there are 12 degrees of freedom in a human arm and 27 more in the hand, and that we must instantaneously select the most efficient path to the cup. On the way to the cup, we must form our hand into the correct shape and adjust its angle to pick the cup up, move with the right velocity so that we can maintain a feedback loop with our visual system to ensure accuracy, then grip with an appropriate amount of force and lift slowly enough, keeping the cup level so we don’t spill anything. Each of these tasks, independently, is a challenge in the world of computer vision, robotics and mechanical engineering.

If you saw a giraffe for the second time in your life, do you think you could recognise it from a new angle? Yes, of course. Do you think you could distinguish between a small giraffe and one that’s just a little further away? In the human mind, we have processes for dealing with relative size, relative lighting and relative perspective. If you’ve ever played around with a system like Google Lens [14], you’ll know that it’s actually getting pretty good. This is thanks to advanced deep learning algorithms that use vast amounts of image data to identify objects and read text. You still have to select the type of thing you’re looking for, e.g. text or a landmark, which limits the image search massively and separates it from the human visual system. Also, seeing something isn’t the same as perceiving something. To visually perceive something you must see it, give it a name, connect it to a prior experience, and so on.
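The path-selection part of the cup-reaching problem above can be caricatured in a few lines of code: a hypothetical two-joint arm whose pose is just a pair of joint angles, with the "plan" being plain linear interpolation in joint space. Real motion planners must also handle obstacles, joint limits, and smooth velocity profiles, which is where the difficulty actually lives.

```python
def plan_path(start_deg, goal_deg, steps):
    """Linearly interpolate between two arm poses in joint space.

    Poses are (shoulder, elbow) angle pairs in degrees: a hypothetical,
    drastic simplification of the many degrees of freedom in a real arm.
    """
    path = []
    for i in range(steps + 1):
        t = i / steps  # fraction of the way from start to goal
        path.append(tuple(s + t * (g - s) for s, g in zip(start_deg, goal_deg)))
    return path

# Move the arm from rest (0°, 0°) to a reaching pose (90°, 45°) via 3 intermediate poses.
waypoints = plan_path((0, 0), (90, 45), 4)
```

Even this trivial planner forces a choice the text alludes to: interpolating in joint space gives smooth joint motion, but the hand traces a curved path, whereas making the hand move in a straight line requires solving inverse kinematics at every step.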
Giving an object a name isn’t as easy as you might think. If you see a sock on the ground you might think “that’s a piece of human clothing”, “that’s a sock”, “that’s my sock”, or “that’s the pink women’s size 9 ankle sock that I washed yesterday that my grandma gave me for Christmas in 2019”. The level of abstraction or detail is practically limitless.

Selecting the most efficient path to reach out and pick up an object is often referred to as motion inbetweening in the world of robotics. The robot must calculate its current position, calculate its relative end position, and then work out how to get there. The way it gets there is a question of how many degrees of freedom it has in its arm and of efficiency, but also of style: should it take strictly the most direct path, or move in the style of a human, or of a chicken? Don’t even get me started on how we instantaneously calculate the dimensions of an object and shape our hand to match. Have you ever gone to pick up a cup and found you hadn’t opened your hand wide enough? Probably not, unless you were a young child or very distracted/drunk. This is a very interesting process of embodied cognition, and no one actually knows how we do it. There are many theories involving size judgements based on our own body and prior experience with similar objects.

I look forward to living in a world where sentient artificial intelligence lives amongst us. I hope that research moves fast enough, and that I live long enough, to see it, though I think that might be quite an unpopular opinion. Please remember: AI isn’t scary, and it isn’t here to take over the world. It’s a useful tool with a lot of untapped potential, and the Turing Test will probably still be used to test machine intelligence for another 70 years.

References

[1] Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
[2] Weak AI. (n.d.). Investopedia. https://www.investopedia.com/terms/w/weak-ai.asp
[3] IBM. (n.d.). Deep Blue. https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
[4] IBM. (n.d.). A Computer Called Watson. https://www.ibm.com/ibm/history/ibm100/us/en/icons/watson/
[5] Apple. (n.d.). Siri. https://www.apple.com/siri/
[6] Google. (n.d.). Google Assistant. https://assistant.google.com/
[7] Just AI. (n.d.). Eugene Goostman. http://eugenegoostman.elasticbeanstalk.com/
[8] Computer AI passes Turing test in ‘world first’. (2014, June 9). BBC News. https://www.bbc.com/news/technology-27762088
[9] Masnick, M. (2014, June 9). No, A ‘Supercomputer’ Did NOT Pass The Turing Test For The First Time And Everyone Should Know Better. Techdirt. https://www.techdirt.com/articles/20140609/07284327524/no-supercomputer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml
[10] General AI Challenge. (n.d.). https://www.general-ai-challenge.org/
[11] Simon, H. A. (1965). The Shape of Automation for Men and Management. New York: Harper & Row.
[12] Kurzweil, R. (2005). The Singularity is Near. Viking Press.
[13] Garland, A. (Director). (2014). Ex Machina [Film]. https://www.imdb.com/title/tt0470752/
[14] Google. (n.d.). Google Lens. https://lens.google/
- The Chemistry of Film Photography
By Struan Caughey

Introduction

In December 2020, over 100 years after its founding, Nikon discontinued its final film camera, the F6, owing to the market dominance of digital cameras and the company’s shift of full focus to DSLRs and mirrorless cameras. While the technology behind film photography is dated, the art form has been making a resurgence [1]. This has led companies to re-enter the space: Kodak re-released Ektachrome film in 2018 after having discontinued it just six years earlier [2], and potentially has another two releases this year [3]. Polaroid is another example: in 2008 it announced that it was ceasing sales of all its instant film products, citing financial issues [4]. This led three Polaroid enthusiasts to purchase the film production equipment from the business and set up their own company, named the ‘Impossible Project’ [5]. They went on to buy Polaroid’s trademark, and after several name changes, they rebranded to simply ‘Polaroid’ in March 2020, producing new cameras once again [6].

Having been given my grandmother’s film camera in 2019, a 1970s Pentax ME, my own interest was piqued for much the same reasons as other people’s: the simplified process of shooting, the nostalgic look of the photos, and, especially for me, the scientific magic going on within this small unit. Digital cameras are far more complex, but I am studying a BSc in Computer Science, so the analogue magic behind these old cameras is much more of an enigma to me, and it led to the rabbit hole that culminated in this piece.

There are three main chemical stages in the life of film: manufacturing, shooting, and development, all three of which are very light-sensitive and precise. They also vary across the three main types of film: colour negative, slide, and black and white. We will primarily focus on colour negative film and the C-41 developing process in this article.

Film photograph of Te Tirohanga o te Tōangaroa Hall.
Photo by Struan Caughey

Film Construction

A basic colour negative film has seven layers, though many film stocks have additional layers or combine multiple layers into one. The standard stack is as follows [7]:

1. Gelatin protective layer
2. Blue-sensitive silver halide with a yellow-forming dye coupler
3. Yellow filter
4. Blue- and green-sensitive silver halide with a magenta-forming dye coupler
5. Blue- and red-sensitive silver halide with a cyan-forming dye coupler
6. Antihalation layer
7. Base

Black and white film does not have the separate colour layers, so its assembly is much simpler.

Each Layer’s Purpose

The gelatin is there to protect the film from scratches and damage.

Layers 2, 4 and 5 contain silver halide crystals, which are photosensitive. The photosensitivity arises when a photon excites an electron, moving it into the conduction band. From there the electron can be attracted to a ‘sensitivity speck’ on the surface of the crystal and form metallic silver, which constitutes the latent image. The sensitivity speck will often be a defect, foreign material, or an electron trap within the crystal, which makes that part of the crystal more sensitive. A similar but reversible technique is used for glasses that shade in bright light. The specks on these crystals are then utilised in the development stage, along with a dye coupler that reacts to give the layer its colour. This forms a negative of the image taken (light areas appear darker and vice versa), and will be expanded on in the development section [8].

When a photon hits the film, it is absorbed by a specific layer depending on its frequency. Photons in the blue part of the spectrum interact with layer 2, the blue-sensitive silver halide layer. Any blue photons that are not absorbed there are stopped by the yellow filter (yellow being the complementary colour to blue), layer 3.
Green and red photons, with the blue light having been filtered out, proceed to the lower green-sensitive and red-sensitive silver halide layers respectively. Layer 6 is an antihalation layer. This absorbs all the light that has passed through the previous layers, preventing reflections off the back of the camera that could lead to image artefacts. The final layer is the film base, which gives structure and rigidity to the film as well as acting as a protective layer at the back.

Taking the Photo

The size of the silver halide crystals affects two things when taking a photo, the first of which is the film's sensitivity. The larger the crystals, the less exposure to light the film needs to render an image. This is referred to as the 'speed' of the film, and can be quantified through measures such as ISO or the older ASA. The higher the ISO, the larger the crystals, the faster the camera can shoot, and the darker the conditions can be while still forming a usable picture. The second effect, and the downside, is the graininess of the image. When each crystal of a high-ISO film is activated, a larger area becomes coloured. This is where the term 'grainy' comes from in photography: literally from the size of the silver halide crystals, or 'grains'. Some photographers see this as artistic; however, most would prefer to avoid the look [9].

To prevent accidental exposure, the film is kept in a light-proof cartridge. When it is loaded into the camera, the first frame or two is usually sacrificed to light, after which the lightproof housing is closed to protect the rest of the roll. To remove the film, it is rewound back into the cartridge before the housing is opened, ensuring the photos are not spoiled.

Developing the Photos

Once we have our film with the latent images imprinted on it, we need to make the film stable in daylight.
This requires several chemical reactions, which we will break down. Several processes can achieve this, each of which behaves differently. Two of the main ones are C-41 and E-6: E-6 is designed for colour positive slides, whereas C-41 is for colour negative film. Each film is designed for a specific process; however, you can sometimes use the non-stated process in a technique called cross-processing. Here we will just look at C-41.

The first thing required when developing film commercially is a darkroom. This will either be wholly black or contain a safety light, which emits light that the film is not sensitive to. The film then proceeds through seven steps [10]:

1. Presoak
2. Developer
3. Bleach
4. Fix
5. Wash
6. Stabiliser
7. Dry

Throughout, the process has to stay at 39°C ± 1°C, as there are multiple reactions going on at differing depths within the film. If the film is too hot or too cold, certain colours may develop more or less than others, yielding undesirable results. Because of this, the first step is the presoak. This is in 39°C water and acts both to clean the film and to bring it up to temperature.

Next, we have the most important step: the developer, one variant of which is based on a para-phenylenediamine-derived chemical known as CD-4 [11]. A reaction occurs between this chemical and the silver halide crystals, turning them into silver metal. Crystals that already contain some silver (because an incident photon struck them) catalyse the reaction, so they develop faster and darker. This oxidises the developer, and the oxidised developer reacts with the dye coupler, turning the colour-forming dye coupler from clear to the desired colour. The film is then removed from the developer and placed in a bath of bleach. This reacts with the silver, reforming it into silver halide, which can be dissolved by the fixer.
Some people will skip this step, leaving the silver crystals in the film. These are not dissolved by the fixer, resulting in a black and white image on top of the regular colour image [12]. The fixer is composed of several chemicals that strip the silver halide from the film, leaving just the coloured dyes and silver metal behind. Sometimes the bleach and fixer are combined into one bath, known as Blix [13]. This is more common for at-home kits than in commercial operations. This is followed by a wash to remove the existing chemicals from the film before a stabiliser is used. The stabiliser was often formaldehyde; however, since the late 1990s most films incorporate the stabilisation chemistry within the film itself, so this step is often omitted [14]. Its purpose was to stabilise the dyes, harden and clean the film, and place a hydrophobic coating on it to prevent watermarks. Some stabilisers are still used, but this is an optional step that only cleans and waterproofs the film. Lastly, the film is dried in a low-dust area before being scanned, enlarged or stored.

Overview

To summarise the process: a photon hits the film and excites an electron within the silver halide, reducing a silver ion to silver metal and creating an invisible latent image. The film is then developed, turning the silver halide crystals into silver and oxidising the developer. The crystals that have already been struck by a photon contain some silver that catalyses this reaction, so these areas develop first. The oxidised developer then reacts with the film's dye coupler to form a negative image. The film is put through bleach to convert the silver back into soluble silver halide, before all the silver halide is removed using the fixer. Lastly, the film is washed and stabilised before being dried. Now you have a negative of your image ready to be scanned!

Having now understood the process, the magic of film photography is even greater.
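The C-41 sequence summarised above can be condensed into a short checklist script. The step order and the 39 °C ± 1 °C tolerance are as described earlier; the bath times below are invented placeholders for illustration, not manufacturer figures:

```python
# C-41 processing checklist (toy sketch). Step order follows the text;
# all durations are hypothetical placeholders, NOT real process times.

C41_STEPS = [
    ("presoak",     60),   # seconds -- illustrative only
    ("developer",  195),
    ("bleach",     270),
    ("fix",        270),
    ("wash",       180),
    ("stabiliser",  60),
    ("dry",        900),
]

TARGET_C, TOLERANCE_C = 39.0, 1.0

def temperature_ok(measured_c):
    """True if the bath sits within the ±1 °C window around 39 °C."""
    return abs(measured_c - TARGET_C) <= TOLERANCE_C

def run_checklist(measured_c):
    """Refuse to start out of tolerance; otherwise list the steps."""
    if not temperature_ok(measured_c):
        raise ValueError(f"bath at {measured_c} °C is out of tolerance")
    return [f"{name}: {secs} s" for name, secs in C41_STEPS]

print(run_checklist(39.4))
```

The out-of-tolerance check mirrors why the presoak matters: if the chemistry drifts outside the narrow window, the reactions at different depths in the film run at different rates and the colours shift.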
The complex chain of reactions required to turn a latent image in silver crystals into a computer desktop background is truly fascinating. As film grows in popularity, changes to this process may still be on the horizon, and I cannot wait to see where this space develops from here.

References

[1] K. Fung, "Exploring exactly why film photography is on the rise," DailyCal.org. https://www.dailycal.org/2020/11/08/exploring-exactly-why-film-photography-is-on-the-rise (accessed Jul. 21, 2021)
[2] "To the Delight of Photographers and Filmmakers Everywhere, New EKTACHROME Films to Begin Shipping," Kodak.com. https://www.kodak.com/en/company/press-release/ektachrome-film-begins-shipping (accessed Jul. 21, 2021)
[3] SilvergrainClassics, "New Kodak Products in 2021? + The Year in Review: SilvergrainClassics Fireside Chat #1 29.12.2020," YouTube. https://www.youtube.com/watch?v=nwdwdcOG4QU (accessed Jul. 21, 2021)
[4] A. Clark, "Polaroid files for bankruptcy protection," TheGuardian.com. https://www.theguardian.com/business/2008/dec/19/chapter-11-corporate-bankruptcies-corporatefraud (accessed Jul. 21, 2021)
[5] M. Zhang, "Polaroid Acquired by The Impossible Project's Largest Shareholder," PetaPixel.com. https://petapixel.com/2017/05/12/polaroid-acquired-impossible-projectslargest-shareholder/ (accessed Jul. 21, 2021)
[6] Polaroid [@Polaroid], "This is Polaroid — now...," Twitter, Mar. 17, 2020. https://twitter.com/Polaroid/status/1243292725065199616 (accessed Jul. 21, 2021)
[7] "Colour-film structure," Britannica.com. https://www.britannica.com/technology/technology-of-photography/Colour-film-structure (accessed Jul. 21, 2021)
[8] M. Witten, "The Chemistry of Photography," Senior Theses, 84, 2016. https://scholarcommons.sc.edu/senior_theses/84 (accessed Jul. 21, 2021)
[9] N. Snape, "Is grain or noise a bad thing in photography," NeilSnape.com. https://neilsnape.com/is-grain-or-noisea-bad-thing-in-photography/ (accessed Jul. 21, 2021)
[10] Kodak, "KODAK FLEXICOLOR CHEMICALS." https://imaging.kodakalaris.com/sites/default/files/wysiwyg/pro/chemistry/z131.pdf (accessed Jul. 21, 2021)
[11] "Compound Summary: (4-Ammonio-m-tolyl) ethyl(2-hydroxyethyl)ammonium sulphate," PubChem, ncbi.nlm.nih.gov. https://pubchem.ncbi.nlm.nih.gov/compound/25646-77-9 (accessed Jul. 21, 2021)
[12] A. V. Hurkman, "Color Correction Look Book: Creative Grading Techniques for Film and Video," ch. 2, ISBN-13: 978-0-321-98818-8. [Online]. Available: https://www.oreilly.com/library/view/color-correction-look/9780133818482/ch02.html (accessed Jul. 21, 2021)
[13] "CS41 'Color Simplified' 2-Bath Kit for Processing Color Negative Film at Home (C-41 Chemistry)," CinestillFilm.com. https://cinestillfilm.com/collections/product-catalog/products/cs41-simplified-color-processing-at-home-quart-kit-c-41-chemistry (accessed Jul. 21, 2021)
[14] "Do I need an additional stabilizer bath with the 2-Bath Powder Cs41 Color Kit?" CinestillFilm.com. https://help.cinestillfilm.com/hc/en-us/articles/360025784411-DoI-need-an-additional-stabilizer-bath-with-the-2-BathPowder-Cs41-Color-Kit- (accessed Jul. 21, 2021)
- Restoring Landscapes Post-Farming: Is Replanting Really the Answer?
By Nina de Jong

Sheep grazing on farmland at Tawharanui, New Zealand. Farming drastically transforms the hydrology and ecology of a landscape. Photo by Koon Chakhatrakan on Unsplash (2020).

Volunteer restoration groups in Aotearoa are a huge part of national conservation and restoration progress. Generally, we have a "top-down" approach to environmental management, where the Department of Conservation largely manages public land. However, resource constraints often lead to communities taking on restoration projects, overseen and approved by DOC. In many reserves, community groups do the bulk of the management and provide on-the-ground labour. These groups engage in a huge range of activities, including pest trapping, native bird and invertebrate monitoring, building recreational and conservation infrastructure such as tracks, toilets, and nurseries, and tree planting.

Tree planting is usually central to restoration, arguably for good reason. Replanting can accelerate ecosystem regeneration processes [1]. It is seen as important for carbon fixation [2], and aligns more with human time scales — natural regeneration is far slower [3]. There are also strong social dimensions to tree planting. It's a great activity for getting stuck in and making change that you can see. It's extremely satisfying to overlook a field that a group of you have replanted, and growing native plants in the nursery is delightful and industrious. Restoration groups are important places for forming friendships and communities, and the practices centred around replanting keep meetings regular and give groups direction and purpose. I can say this from personal experience!

However, replanting is not the only approach restoration groups can and should take, both for the landscape they are involved with and for the groups themselves. How we are replanting, and perhaps in some cases whether we should replant at all, might be important to rethink.
Over recent generations, land in Aotearoa that was previously used for farming has been transferred to regional parks and state reserves, and this land is now managed with environmental conservation in mind. In many instances, new land adopted into conservation management has been drastically altered from its pre-human condition by farming.

Like any land-use activity, farming involves making profound changes to the environment. Pākehā settlers and the colonial state that bought or confiscated millions of acres from Māori had clear priorities: to make the land suitable for farming, and to become educated in profitable farming techniques [4]. To do this, they first had to burn off the forest to create better conditions for agriculture. If the land drained poorly, farmers would drain it so that it wasn't as muddy, reducing the extent of wetlands and increasing the probability of river flooding. To reduce river flooding, they then needed to place stopbanks and train river channels so that land would flood less frequently and the river would carry water more efficiently out of the farm. Drained land and naturally drier land are more drought-prone, so irrigation from rivers back onto the fields might be necessary. As farming is practised, soil fertility is reduced. Without continued input of nutrients from ecosystem processes like decomposition, nitrogen fixing, or regular flooding, fertilisers must be added to the land to improve production. This creates nutrient-rich run-off that flows into waterways, promoting algal blooms, anoxic conditions and local extinction of aquatic species… and of course you can follow these threads on and on. These processes are not straightforward and don't always occur in the same way or in the same order. However, the important pattern is that colonial farming practices develop a completely different set of hydrological and ecological processes for the land they are practised in.
The changes made under agricultural production do not disappear when the farming stops and planting starts. History accumulates in the memory of a landscape, which means reversion to "pristine", pre-human ecosystems will never happen. This has consequences for the remediation of land. Restoration approaches have to understand the land's history, and carry this history into the new, remembered ecosystem of the future. For example, replanting at the edges of streams is often considered an effective way to absorb and filter nutrient run-off from farms before it reaches the water. However, on farms where drains have been laid, or where groundwater flows deeper than the plants' roots reach, run-off can pass straight by riparian planting and into waterways [5]. If this history is not addressed, the same issue remains and the replanting is ineffective.

Clearing vegetation is only one part of the drastic transformations that landscapes undergo when land conversion occurs. The complex networks of connections between soils, waterways, animals, plants, fungi, nutrients, and everything in between mean that restoration projects have to tackle many things at once. Successful replanting relies on the state of other aspects of the environment, such as water availability, pest species, and exposure. For example, some plants that do best in wet places, such as kahikatea, may struggle to grow in a landscape that has been drained, where they can't compete with other species [6]. A lack of pest control can prevent the success of planting, as rabbits, possums and deer can easily browse new plants to death, and invasive plants can smother native trees [7,8,9]. It might be better to focus just as much, or more, on these other issues than to give all our energy to replanting.

Across the world, reforestation projects often use only one or a few species when replanting, leading to low-biodiversity plantations that do not resemble naturally regenerating land [10].
In Aotearoa, replanting schemes use mānuka and kānuka far in excess of any other plants. This is because they're thought of as classic "pioneer" species that grow quickly in exposed paddock environments, creating shelter and shade for trees more typical of mature forest to grow beneath. The economic value of mānuka honey has also led beekeepers to plant monospecific crops for honey-making. However, this approach may not be best for restoration if creating a "natural" forest is the goal. Mānuka and kānuka have very dense canopies, especially when they are young. Although some protection from the elements is important for many forest trees, the darkness beneath dense mānuka or kānuka canopies prevents most plants from surviving, especially where the trees are still low and bunched up. As a result, instead of the multi-storeyed, species-rich, complex forest structures that you see in naturally regenerated kānuka- or mānuka-dominated pioneer forest, beneath these monospecific canopies there is usually bare, dry ground. Harakeke, another plant often used in pioneer planting, does not actually have a canopy at all, being a large flax bush, and tī kōuka has only a small canopy that is unlikely to provide extensive shade and shelter. The conditions required for the biodiverse, multi-layered forests everyone wants to see might not be achieved by planting large, dense swathes of just a few species.

Replanting is an important and popular aspect of landscape restoration. Photo by Lachlan Cormie on Unsplash (2019).

The scientific understanding of actual natural regeneration and succession in native ecosystems is far from complete. Plants used in "pioneer" planting are chosen to recreate what research has suggested occurs in natural landscape regeneration, but this isn't the only way a forest can develop. Not all mature forest trees require a sheltered nurse canopy to grow beneath.
Iconic forest species such as kahikatea, tōtara, and other conifers need high-light environments to regenerate [11]. If all the abandoned fields across the country were planted with "pioneer" plants, we could lose opportunities for these conifers to establish. The plants that arrive first in a regenerating plot often determine what comes next. For example, the native species established beneath gorse growing in a paddock are different from the species found growing beneath mānuka [12], and the species that regenerate under kānuka and silver fern are not the same species that regenerate beneath mamaku [13]. Letting land regenerate naturally can reveal the interesting patterns of how plants appear in the landscape. It is fascinating to see how many different directions ecosystems might take, and how this could lead to different plant communities [14]. If we create "garden" forests, where we try to recreate natural regeneration with our best guess at the plants that should be there, we might never know what sort of community would have developed otherwise.

Replanting landscapes can be very important, for example where erosion is a major problem or where there is social pressure for action and results. However, not every landscape must be replanted, because every landscape is different. Other aspects of restoration, such as raising the water table or reducing pest plant and animal populations, might be more important to restoring the wellbeing of a landscape. Sometimes it can feel like replanting is a way of covering over our mistakes — we can cover a paddock with trees and pretend it was never a paddock at all. However, the landscape doesn't forget, and it isn't transformed back to a pre-human ecology by planting. It might be slower, less satisfying, more unpredictable and even more work (if other aspects of restoration are more difficult) to just let plants naturally return to the land.
However, taking things slowly, learning as we go, and working with landscapes rather than simply managing them all over again might be an interesting alternative for restoration.

References

[1] Omeja, Patrick A., Colin A. Chapman, Joseph Obua, Jeremiah S. Lwanga, Aerin L. Jacob, Frederick Wanyama, and Richard Mugenyi. "Intensive tree planting facilitates tropical forest biodiversity and biomass accumulation in Kibale National Park, Uganda." Forest Ecology and Management 261, no. 3 (2011): 703-709.
[2] Bastin, Jean-Francois, Yelena Finegold, Claude Garcia, Danilo Mollicone, Marcelo Rezende, Devin Routh, Constantin M. Zohner, and Thomas W. Crowther. "The global tree restoration potential." Science 365, no. 6448 (2019): 76-79.
[3] Holl, Karen D., and T. Mitchell Aide. "When and where to actively restore ecosystems?" Forest Ecology and Management 261, no. 10 (2011): 1558-1563.
[4] Nightingale, Tony. "Government and agriculture." Te Ara - the Encyclopedia of New Zealand. http://www.TeAra.govt.nz/en/government-and-agriculture/print (accessed 21 May 2021)
[5] McKergow, Lucy A., Fleur E. Matheson, and John M. Quinn. "Riparian management: A restoration tool for New Zealand streams." Ecological Management & Restoration 17, no. 3 (2016): 218-227.
[6] Ogden, John, and Glenn H. Stewart. "Community Dynamics of the New Zealand Conifers." In Ecology of the Southern Conifers, 81-119. Melbourne University Press, 1995.
[7] Husheer, Sean W. "Introduced red deer reduce tree regeneration in Pureora Forest, central North Island, New Zealand." New Zealand Journal of Ecology (2007): 79-87.
[8] Gillman, L. N., and J. Ogden. "Seedling mortality and damage due to non‐trophic animal interactions in a northern New Zealand forest." Austral Ecology 28, no. 1 (2003): 48-52.
[9] Standish, Rachel J., Alastair W. Robertson, and Peter A. Williams. "The impact of an invasive weed Tradescantia fluminensis on native forest regeneration." Journal of Applied Ecology 38, no. 6 (2001): 1253-1263.
[10] Seddon, Nathalie, Beth Turner, Pam Berry, Alexandre Chausson, and Cécile AJ Girardin. "Grounding nature-based climate solutions in sound biodiversity science." Nature Climate Change 9, no. 2 (2019): 84-87.
[11] Lusk, Christopher H., Murray A. Jorgensen, and Peter J. Bellingham. "A conifer–angiosperm divergence in the growth vs. shade tolerance trade‐off underlies the dynamics of a New Zealand warm‐temperate rain forest." Journal of Ecology 103, no. 2 (2015): 479-488.
[12] Sullivan, Jon J., Peter A. Williams, and Susan M. Timmins. "Secondary forest succession differs through naturalised gorse and native kānuka near Wellington and Nelson." New Zealand Journal of Ecology (2007): 22-38.
[13] Brock, James MR, George LW Perry, William G. Lee, Luitgard Schwendenmann, and Bruce R. Burns. "Pioneer tree ferns influence community assembly in northern New Zealand forests." New Zealand Journal of Ecology 42, no. 1 (2018): 18-30.
[14] Wilson, H. D. "Nature not nurture; minimum interference management and forest restoration on Hinewai reserve, Banks Peninsula." Canterbury Botanical Society Journal 37 (2003): 25-41.
- Will Kauri Survive? Resilience of Ancient Kauri Populations to the Modern World.
By Toby Elliot

Kauri (Agathis australis) is one of New Zealand's most prominent, notable and exceptional tree species. It is a taonga species for Māori, as kauri are considered irreplaceable ancestors, and their health is often used as a sign of the wellbeing of the ngahere (forest), as well as the plants and animals within it [1,2]. Kauri is also economically important: its valuable timber was historically logged and used for various projects [3], while more recently kauri has gained economic value through tourism. Visitors to the Northland region often visit prominent kauri, such as Tāne Mahuta, and learn more about their rich history and the fascinating ecology of these important trees [4].

A large healthy kauri tree. Photo by Toby Elliot.

Kauri is an integral part of the many ecosystems it inhabits — its leaf litter creates acidic and nutrient-poor soils that can promote the growth and survival of some species while inhibiting others [5,6,7]. Kauri can therefore create distinctive vegetation communities composed of kauri and a suite of associated species — such as Corokia buddleioides — with competitive advantages in kauri forests [8,9]. However, the value of kauri for materials, together with human-mediated habitat clearance, has resulted in a rapid decrease in its range since the arrival of humans in New Zealand. Less than 1% of the original kauri forest area remains [1], and many of the largest and oldest kauri trees have been lost. Although large areas of secondary (planted) kauri forest now exist, kauri now faces a new microscopic enemy: kauri dieback (Phytophthora agathidicida). Furthermore, climate change is predicted to cause a variety of modifications to northern New Zealand, impacting aspects such as temperature and rainfall patterns and the frequencies of disturbances such as fires [10,11], casting further uncertainty over the long-term survival of kauri.
Kauri dieback is caused by a fungus-like pathogen (an oomycete, not a true fungus) that attacks kauri roots and kills trees of all ages by essentially ringbarking them [2]. P. agathidicida has been found in many A. australis forests throughout its present range [14], making it a potentially imminent threat to the survival of these important trees. Infected trees typically have bleeds at their bases that do not appear to be caused by physical damage, and thinning canopies that degrade over time [2]. Kauri dieback is primarily spread through soil and water, and the movement of humans between forest patches can facilitate its spread over long distances. Pig activity serves as a secondary disease pathway [12].

Canopy of an infected tree. Photo by Toby Elliot.

Various methods are in place to contain kauri dieback, slow its spread, and give researchers time to develop effective ways to treat infected trees. These methods include track closures, spray stations for people to wash their shoes before and after entering kauri forests, and strategies to control pig numbers [2,12]. The primary treatment for infected trees is the injection of phosphite into infected kauri trunks, which can temporarily control kauri dieback and reduce mortality rates [13]. A permanent control method, however, is currently absent. Additionally, little research exists on the effect of kauri dieback on kauri population dynamics (e.g. growth rates, death rates, recruitment rates), which can be used to predict the long-term survival of kauri in a particular forest, and as a species as a whole.

Mairehau (Leionema nudum) is common within kauri forests. Photo by Toby Elliot.

For my PhD, I will be attempting to explore kauri population dynamics, how they might be impacted by kauri dieback and climate change, and to assess the survival of kauri as a species.
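As a rough illustration of how demographic rates like these feed into long-term projections, here is a minimal three-stage matrix population model in Python. Every stage and transition rate below is a hypothetical placeholder chosen for illustration, not an estimate for kauri:

```python
import numpy as np

# Minimal stage-structured (Lefkovitch-style) population model.
# Stages: seedling, juvenile, adult. Entry [i, j] is the per-year
# contribution of stage j to stage i. ALL RATES ARE HYPOTHETICAL.
healthy = np.array([
    [0.30, 0.00, 0.50],   # recruitment: adults produce seedlings
    [0.05, 0.90, 0.00],   # seedlings grow up; juveniles persist
    [0.00, 0.02, 0.98],   # juveniles mature; adults survive
])

# The same matrix with survival scaled down per stage, standing in
# for extra mortality from dieback (again, placeholder numbers).
dieback = healthy * np.array([[1.0], [0.9], [0.85]])

def project(matrix, start, years):
    """Project stage counts forward by repeated matrix multiplication."""
    n = np.asarray(start, dtype=float)
    for _ in range(years):
        n = matrix @ n
    return n

start = [1000.0, 100.0, 50.0]   # initial seedling/juvenile/adult counts
print("healthy, 50 yr:", project(healthy, start, 50).round(1))
print("dieback, 50 yr:", project(dieback, start, 50).round(1))
```

Re-measured permanent plots supply exactly the quantities such a matrix needs (growth, death, and recruitment rates), and the model's dominant eigenvalue (via `np.linalg.eigvals`) then indicates whether the population grows (above 1) or declines (below 1) in the long run.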
To do this, I will investigate the population dynamics of kauri under 'normal' conditions by creating various population models, which can predict long-term changes in forest composition from this demographic data. I will then analyse how these dynamics might change with kauri dieback, and under the conditions and disturbance regimes predicted to occur under future climate change scenarios. One of the methods I am using to achieve this is permanent plots: plots within kauri forests where trees have been measured and tagged with unique codes. The idea is that one can go back and re-measure these plots after a few years, which gives invaluable information about how much the trees grew between measurements, and which trees entered the plot or died in the interval. Some of the plots that I will be using are in the Waitakere Ranges, which are heavily affected by kauri dieback. I will also look at how these dynamics change in different regions, which can help identify which forests are most likely to suffer in the future, especially if they are infected with kauri dieback.

I hope that my research can shed some light on the severity of kauri dieback, and identify stands that are most at risk of being lost to this terrible disease. This identification, hopefully, will allow for more targeted containment and control measures to protect these incredible trees and the unique forests that they help create.

Basal bleeding, which commonly appears when trees are infected. Photo by Toby Elliot.

References

[1] Steward, G. A., & Beveridge, A. E. (2010). A review of New Zealand kauri (Agathis australis (D. Don) Lindl.): its ecology, history, growth and potential for management for timber. New Zealand Journal of Forestry Science (New Zealand Forest Research Institute Ltd (trading as Scion)), 40.
[2] Bradshaw, R. E., Bellgard, S. E., Black, A., Burns, B. R., Gerth, M. L., McDougal, R. L., Scott, P. M., Waipara, N. W., Weir, B. S., Williams, N. M., Winkworth, R. C., Ashcroft, T., Bradley, E. L., Dijkwei, P. P., Guo, Y., Lacey, R. F., Mesarich, C. H., Panda, P., & Horner, I. J. (2020). Phytophthora agathidicida: research progress, cultural perspectives and knowledge gaps in the control and management of kauri dieback in New Zealand. Plant Pathology, 69(1), 3-16.
[3] Steward, G. A., Kimberley, M. O., Mason, E. G., & Dungey, H. S. (2014). Growth and productivity of New Zealand kauri (Agathis australis (D. Don) Lindl.) in planted forests. New Zealand Journal of Forestry Science, 44(1), 27.
[4] Boswijk, G. (2010). Remembering kauri on the 'Kauri Coast'. New Zealand Geographer, 66(2), 124-137.
[5] Jongkind, A. G., Velthorst, E., & Buurman, P. (2007). Soil chemical properties under kauri (Agathis australis) in the Waitakere Ranges, New Zealand. Geoderma, 141(3-4), 320-331.
[6] Wyse, S. V., Macinnis-Ng, C. M., Burns, B. R., Clearwater, M. J., & Schwendenmann, L. (2013). Species assemblage patterns around a dominant emergent tree are associated with drought resistance. Tree Physiology, 33(12), 1269-1283.
[7] Wyse, S. V., Burns, B. R., & Wright, S. D. (2014). Distinctive vegetation communities are associated with the long‐lived conifer Agathis australis (New Zealand kauri, Araucariaceae) in New Zealand rainforests. Austral Ecology, 39(4), 388-400.
[8] Wyse, S. V. (2012). Growth responses of five forest plant species to the soils formed beneath New Zealand kauri (Agathis australis). New Zealand Journal of Botany, 50(4), 411-421.
[9] Wyse, S. V., & Burns, B. R. (2013). Effects of Agathis australis (New Zealand kauri) leaf litter on germination and seedling growth differs among plant species. New Zealand Journal of Ecology, 178-183.
[10] Sansom, J., & Renwick, J. A. (2007). Climate change scenarios for New Zealand rainfall. Journal of Applied Meteorology and Climatology, 46(5), 573-590.
[11] Watt, M. S., Kirschbaum, M. U., Moore, J. R., Pearce, H. G., Bulman, L. S., Brockerhoff, E. G., & Melia, N. (2019). Assessment of multiple climate change effects on plantation forests in New Zealand. Forestry: An International Journal of Forest Research, 92(1), 1-15.
[12] Bassett, I. E., Horner, I. J., Hough, E. G., Wolber, F. M., Egeter, B., Stanley, M. C., & Krull, C. R. (2017). Ingestion of infected roots by feral pigs provides a minor vector pathway for kauri dieback disease Phytophthora agathidicida. Forestry: An International Journal of Forest Research, 90(5), 640-648.
[13] Horner, I. J., & Hough, E. G. (2013). Phosphorous acid for controlling Phytophthora taxon Agathis in kauri glasshouse trials. New Zealand Plant Protection, 66, 242-248.
[14] Waipara, N. W., Hill, S., Hill, L. M. W., Hough, E. G., & Horner, I. J. (2013). Surveillance methods to determine tree health, distribution of kauri dieback disease and associated pathogens. New Zealand Plant Protection, 66, 235-241.