- Going Incognito: The Invisible Universe of the Nanoplastic Pandemic
The Plastic Pandemic

With the mass production of plastic in the 1940s, humans have come to live lives of convenience and ease. What manufacturers didn't anticipate was the vast distances across which these plastics would eventually travel — remote places such as the Balcony of Mt. Everest [1] and Antarctic ice cores [2], the digestive tracts of marine organisms [3], even our own blood [4]. It's everywhere.

Although the idea of plastic pollution has been around since the 1970s [5], it wasn't until this year that its extensive impacts were truly acknowledged. The beginning of March marked a historic event in which the United Nations resolved to create, by 2024, an international treaty addressing the plastic problem at each stage of its lifecycle. From exploring more environmentally friendly alternatives to managing plastic waste, the treaty is an awesome step in the right direction. However, many of the mitigation steps still require us to know exactly where our plastic is, so that regulatory communities can assess its risk.

Plastic in the environment is fragmented and degraded into smaller particles via various natural processes: UV-induced, thermal, and microbial degradation, to name a few [6]. The plastic lifecycle is long. Large (bulk) plastics degrade slowly into the well-known microplastics (akin to the exfoliant beads in face scrubs), which in turn weather and fragment over time (in various environments) into smaller nanoplastics. Plastic degradation effectively never ends, and it has given rise to the scientific quest of detecting these seemingly 'invisible' plastics. Microplastics and larger fragments already have long-established methods of detection and isolation; however, their even smaller counterparts, nanoplastics, have slipped under the radar, incognito if you will.
Only emerging in the scientific literature in the last seven years, nanoplastics carry implications for human health and the environment that are still riddled with uncertainties. Nanoplastics do not even have a standardised size definition yet. The issue stems from the vastly different physical and chemical properties these nano-sized materials exhibit, and from how they behave while interacting with our macroscopic world. Understanding these materials seems unattainable (for now). While we combat the nanoplastic threat socially, the first line of active defence comes with detection. With our research, we reached our small victory in just that. Nanoplastics are invisible no more. Before things get underway, let us introduce to you our weapon of choice, Raman spectroscopy — the detection method that gives us the possibility of pinning nanoparticle contaminants on our pollution radar.

Raman Spectroscopy

By shining a red light onto a material, you would expect the exact same colour to be reflected back. A Raman spectrometer, however, can measure that a small fraction of this reflected light is a different colour. This is a result of molecular vibrations causing a change in the energy, and therefore the frequency, of the light being scattered. As each chemical bond has its own associated energies, individual bonds (e.g. C-O, C-H) and groups of bonds (e.g. benzene rings) exhibit different energy shifts and can therefore be identified by their peak positions in the Raman spectrum. The benefits of Raman are the incredibly simple sample preparation and the non-destructive nature of the characterisation [7]. It is typically an effective technique for polymer identification — however, nanoplastics are unable to generate signals strong enough to give rise to spectral peaks. So, how can we use this vibrational technique to detect nanoplastics?

Figure 1. Polystyrene molecule.
Surface-enhanced Raman spectroscopy (SERS) is a technique introduced to overcome some of the limitations of traditional Raman spectroscopy. Metallic nanoparticles enhance the electric field surrounding the metal surface, amplifying the scattering signals generated when that field interacts with the analyte.

Our Approach

As summer interns in Professor Duncan McGillivray's Soft Matter group, we leaped into this exciting project, developing methodology for the detection of polystyrene nanoparticles. The studies involved the synthesis of spherical gold nanoparticles (AuNPs) to act as our electric-field-enhancing material, used to detect polystyrene nanoplastics as our analyte of interest (Fig. 1). To lay the foundations, a batch of ca. 20 nm AuNPs was synthesised via an experimentally optimised Reverse Turkevich method [8], which involves a particular ordered addition of key reagents. The method is named after the original publication by Turkevich et al. in 1951 on the synthesis of nano-sized spherical particles in the 10-30 nm size range [9]. Frens revised this method in 1973 to ensure monodispersity (uniform particle sizes) of colloidal gold nanoparticles [10]. The original Turkevich method requires a certain order of reagent addition (chloroauric acid (HAuCl4) added to trisodium citrate (Na3C6H5O7)), while the reversed method inverts this: trisodium citrate is added to the boiling gold solution. The monodispersed AuNP suspension was synthesised via reflux, giving it an overall characteristic ruby-red colour (think cranberry juice). Using this suspension, we created a system analogous to an open sandwich. The first layer, or the 'bread', is a cellulose filter paper. The diluted AuNP suspension was mixed with a 1 mg mL-1 positively charged polystyrene suspension (Fig. 2) and deposited (drop-casted) like a 'spread' onto the filter paper.
Once dried, the monochromatic red laser (785 nm) of the Raman spectrometer is shone through a ×50 microscope lens at 50 mW power. Our vibrational spectral data were collected over a wavenumber range of 200-1800 cm-1, with a 20 s scan acquisition time. An example of the experimental setup is seen in Figs. 3 and 4.

Figure 2. The system of AuNPs (-) and polystyrene (+) aggregates in a 1:1 mixture due to having opposite charges.

Finding Nanoplastics — Unveiling the 'Invisible' Nano-world

Our results were astounding, with consistent enhancement of the characteristic styrene peaks from our analyte of interest, positively charged 20 nm polystyrene (a common plastic). Selective enhancement of the polystyrene peaks over the filter paper matrix, with an AuNP-capped surface, was key to recording successful SERS signals.

Figure 3. Inside a Raman spectrometer lies a sample of an AuNP/polystyrene system drop-casted onto cellulose filter paper.

With SERS effects, we can quantify the increase in signal strength by calculating the enhancement factor (we will spare you from the math). Selective styrene peak enhancements of 110-1750 times, and in some cases 3050 times, the original signal strength were identified (Fig. 5). Excluding the 20 nm AuNP plasmon band at 518 nm, styrene-related Raman signals include the aromatic C=C ring deformation (620 cm-1), C-C ring stretch (1004 cm-1), and C-H in-plane deformation (1030 cm-1). With this knowledge, we further explored the limit of detection of the filter paper SERS substrate with Raman measurements of samples with lower concentrations of polystyrene (500, 100, 50, 10, 5 and 1 µg mL-1). From this we found consistent detection of polystyrene nanoplastics at concentrations of 10 µg mL-1, with instances of detection at 5 µg mL-1, among the lowest limits reported in recent literature.

Figure 4. A schematic illustration of the filter paper system developed in this work.
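For the curious, the math we spared you is, in one common analytical form, a simple ratio of concentration-normalised signals. The sketch below illustrates that form only; the intensity and concentration numbers are hypothetical, not values from our study.

```python
def enhancement_factor(i_sers, c_sers, i_raman, c_raman):
    """Analytical SERS enhancement factor: the peak intensity per unit
    analyte concentration with enhancement, divided by the same ratio
    for the unenhanced (normal Raman) measurement."""
    return (i_sers / c_sers) / (i_raman / c_raman)

# Hypothetical illustration: a styrene peak 30x more intense, measured
# from a sample 100x more dilute, implies a 3000x enhancement.
ef = enhancement_factor(i_sers=3000, c_sers=0.01, i_raman=100, c_raman=1.0)
print(ef)  # 3000.0
```

The concentration normalisation is what lets enhancement be compared fairly across samples measured at different dilutions.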
Although we had successful results in a laboratory setting, these are not necessarily representative of nanoplastics in their native state in the environment. The effect of salt (NaCl) was tested using 150 and 600 mM concentrations, these being physiologically and seawater relevant, respectively. We found that our ability to detect nanoplastics was reduced by a factor of 10, with our system's limit of detection rising to 100 µg mL-1.

Conclusion and Future Work

Based on our filter-paper investigation to uncover the seemingly 'invisible' polystyrene peaks, the developed SERS system presented a robust and efficient method for the selective detection of dilute nanoplastics. Consistent and reproducible styrene peak enhancements at characteristic vibrational stretching modes were isolated, with the ability to enhance dilute concentrations of nanoplastics. Strong enhancements of positively charged polystyrene were identified, with a reliable limit of detection of 10 µg mL-1, and even as low as 5 µg mL-1. Notably, the average enhancement factor of the polystyrene Raman peaks ranged from ca. 1100-1750, with one instance of ca. 3050, across various AuNP batches. Our findings raise the question of how the interparticle distance between the AuNP and PS spheres (the enhancement mechanism) affects the enhancement factors, which could be explored using more complex methods such as small-angle neutron scattering (SANS) and small-angle X-ray scattering (SAXS). To do this, a trip across the ditch to our Australian friends would be required. Whilst we are far from complex detection of nanoplastics in an environmental system with various competing matrices, our research probes the realm of nanomaterial toxicity in and around our complex, macroscopic world. Such research into complex nanoplastic interactions is still to be pioneered.
Nevertheless, a positive step forward in combating and unveiling the 'invisible' plastics in simple systems offers great potential for building a foundation of nanoplastic detection methods, and is a small contribution (but perhaps the ultimate key) to nanotoxicology research.

Figure 5. A representative surface-enhanced Raman spectrum for PS(+)20 (1.0 mg mL-1) with AuNP (orange) and non-enhanced PS(+)20 (200 mg mL-1) (black).

Acknowledgements

We were lucky enough to have jumped into this project as research interns and assistants with some amazing mentors, whom we cannot thank enough for the depth of knowledge and skill development they have provided us. Thank you for teaching us to be critical and see the true art of science, and for inspiring us to be better scientists and people. Thank you for the endless support and laughter, and all the banter over our many burger trips. Thank you Andrew Chan and Shinji Kihara! Absolute legends.
- Chimpanzees and Bonobos Have a Human-Like Understanding of Death
Our understanding of what death and dying entail has long been viewed as one of the characteristics that makes humans unique [1-2]. This understanding is termed a "Concept of Death" (CoD). It is unknown how early in human evolution the CoD arose — whether it is restricted to our species or more widely present in primates [3-4]. I was interested in whether a comparative evolutionary perspective could shed light on this question. In a biological anthropological context, a comparative approach means utilising observation and analysis of living non-human primates to help differentiate between biological and cultural drivers of human behaviour. Since we are the only remaining member of our genus, Homo, comparative primatology helps determine which traits stem from our shared primate heritage and which are uniquely human. Suppose the CoD evolved early in our lineage. This could help contextualise ancient hominin behaviours, offer alternate explanations for findings in the fossil record, or even spur a rethink of the possibility of pre-Homo sapiens burials.

Chimpanzee mothers with infants. Image by Suju-foto from Pixabay.

As part of a supervised research project, I utilised this comparative approach to investigate the CoD in our closest living relatives: the two members of the genus Pan. A CoD in the chimpanzee (P. troglodytes) and the bonobo (P. paniscus) would increase the probability that a CoD was also possessed by their last common ancestor with humans. This would then imply an early origin in our evolutionary story — perhaps associated with adaptations to increasing group size. Due to several factors, the most challenging being that death does not happen on command when you have a good project idea, I could not collect my own primary data. Instead, I was restricted to other researchers' opportunistic observations of Pan behaviours surrounding death. I thus collated and systematically reviewed decades of these videographic, written, and oral records.
I analysed behaviours through a methodological framework I adapted from studies of the CoD in human infants and children. I found that chimpanzees and bonobos appear to have a simple but multifaceted CoD, including clear comprehension of death's biological characteristics and some understanding of its more metaphysical aspects.

Chimpanzee and Bonobo Social Behaviour

Sociality and relationships are intimately connected to aspects of social cognition such as the CoD. It is thus essential to have some basic knowledge of relevant chimpanzee and bonobo social structure and behavioural flexibility. The chimpanzee and bonobo both form large multi-male and multi-female groups that occupy specific territories [5-6]. Their everyday relationships can reach a depth of "bondedness" only found in reproductive pair bonds in other species [7]. Chimpanzees and bonobos regularly show intense interest in the genitals of their group members, with interactions often involving mutual genital inspection, smelling, and grooming [5,8]. Their interest in genitals is second only to their interest in each other's faces [9]. Both species have extremely hierarchical societies, although chimpanzees are more likely to reinforce their hierarchies with aggression and dominance displays [5,9]. Bonobos maintain more tolerant societies and utilise sex as a social tool for conflict resolution [10]. Mothers of both species are known to continue to carry their infants after death [8].

Subcomponents of the Concept of Death

The CoD in non-human animals has often been contested due to a lack of consistent definitions [2]. However, research on the CoD in children has a long and consistent history [11].
When assessing the development of the CoD in children, researchers break the CoD into seven subcomponents: 1) non-functionality (death means the cessation of bodily and mental functions); 2) irreversibility (once an organism is dead, it cannot be returned to life); 3) universality (death happens to, and only to, living things); 4) inevitability (death happens to all living things); 5) personal mortality (death will happen to me); 6) causality (what causes death); and 7) unpredictability (the timing of death cannot be known in advance) [12]. I consider inevitability and mortality to be sub-aspects of universality, as understanding death's universality implicitly comprises understanding that this includes yourself, a living thing, and excludes inanimate objects. I also consider causality and unpredictability a cognitive step beyond the fundamental CoD. Therefore, to investigate the CoD in genus Pan, I collapsed these seven subcomponents into only three: 1) non-functionality, being the understanding that death results in the complete cessation of bodily and mental functions; 2) irreversibility, being the understanding that once an organism is dead, it cannot be returned to life; and 3) universality, being the understanding that death also happens to others — this includes only living things and all living things, including oneself.

Behavioural Indicators of the Concept of Death

Research into the pace and pattern of the development of the CoD in human children relies on language and interviews [12-13], so I had to create non-linguistic behavioural equivalents for each criterion. I recorded individuals as understanding non-functionality when they treated deceased group members' bodies in ways they never would when alive. These included incidences of post-mortem cannibalism and cases where mothers carried deceased infants in atypical positions that would cause injury if the infant was still alive, such as gripping in the mouth or dragging by a limb [16-20].
I also recorded individuals as understanding non-functionality when they performed deliberate checks for functionality, such as hitting the body [21], lifting and dropping limbs [8,16,19], sniffing genitals [22-23], or prying open the mouth to check for signs of breathing [8,21]. It must be noted here that an organism's understanding of non-functionality can only be as complex as their understanding of functionality, e.g., a chimpanzee cannot be expected to check for cessation of brain activity, as they do not understand this to be a necessary part of life. That chimpanzees and bonobos ceased their efforts to wake or revive dead group members after receiving no response indicated that they understood death, unlike sleep, is irreversible. I also recorded individuals as understanding irreversibility when they exhibited strong emotional responses after receiving no indications of life. I observed a variety of such responses, including whimpering [24], screaming [25], rocking back and forth [8], tearing out hair [25], disturbed sleep [21], and refusal of food [8]. Some older female chimpanzees had gentler, although still emotional, reactions, such as grooming and cleaning the body or keeping overnight protective vigils [8,21-23]. I found behavioural indicators of universality much harder to identify, as this subcomponent is less about an organism's immediate reaction to a death, which can be observed, and more about a mental transference of that death's implications to future situations. One unique indication of universality was seen after a group of chimpanzees who had earlier killed a rogue group member returned to the scene to find the body removed by human researchers [19]. When the group discovered the disappearance, they showed fear and made alarm calls, indicating they understood both that the dead cannot move and that this non-functionality is irreversible — the dead should not suddenly return to life, get up, and walk away.
One incident that may indicate a rudimentary understanding of universality occurred after a mother chimpanzee lost an infant to illness [26]. She became overly attached to her remaining child, a six-year-old, and began treating him like a baby — carrying him on her back, hand-feeding him, and sharing her night nest with him — as if she were afraid he too might die. Reacting to the deaths of other species can also indicate some degree of universality, as the individual is showing they can apply their understanding of death more broadly. A group of chimpanzees who encountered a dying baboon became very agitated — making alarm calls and sniffing, stroking, and grooming the body [9,24]. Both chimpanzees and bonobos were also observed acting differently towards snakes after their death. They let infants and juveniles use the bodies as toys, rather than exhibiting their usual fear and avoidance [8-9]. I was hoping to find evidence of individuals who witnessed an accidental death becoming increasingly cautious when later navigating the same dangerous environment, indicating an understanding that the same death could happen to them. However, the opposite was observed: after seeing a group member fall and break his neck, a second chimpanzee almost fell himself when vines gave way beneath him [24]. He showed no extra caution despite his group member's death just hours earlier.

Chimpanzees and Bonobos Understand Death

When I brought these disparate incidents and behaviours together, it became clear that chimpanzees and bonobos have a complex CoD, including a cogent understanding of the biological subcomponents of non-functionality and irreversibility and at least some degree of comprehension of the more metaphysical subcomponent of universality.
There is abundant evidence for non-functionality and irreversibility: chimpanzees and bonobos deliberately examine bodies for signs of life and have strong emotional reactions, analogous to grief in humans, upon receiving no response. In no case did I note a chimpanzee or bonobo continuing their efforts to wake or revive a body for any significant period after receiving no response. Adolescents and juveniles were seen to investigate bodies the longest, whereas older group members, who have likely encountered death before, interacted with the dead for a far shorter time. This difference suggests that the Pan CoD is learnt, rather than innate, and thus close in nature to the human CoD, which is developed via experience and teaching. However, I did not find satisfactory evidence of universality in chimpanzee or bonobo behaviours. This was unexpected, as universality is the first of the three core subcomponents to develop in human children [12]. It is possible that universality may have been absent from my data due to behaviours imperfectly reflecting underlying thought processes and not due to an absence in cognitive capacities.

Image by Sasint from Pixabay.

Shared Origins of the Human and Pan Concept of Death

One common thread in my research was that the individuals most affected by each death were those emotionally closest to the deceased. One chimpanzee, who died of illness, was a highly social individual who spent time with many different subgroups — accordingly, most of the group was interested in and interacted with his body [23]. Even then, the two individuals most affected were his closest friend, who visited his body more than any other male, and his adoptive aunt, who groomed his body, cleaned his teeth, and kept vigil after everyone else had long since left. Conversely, after the death of a low-ranked and socially peripheral female, the only group member to spend any time near the body was her daughter [27].
Infants are also socially peripheral, having not yet formed any social networks. Unsurprisingly, in cases of infant death, only the mothers had any noticeable emotional response [8,16,22]. Pan behaviours around death appear to be simply a translation of the bonds created in life. The Pan CoD also appears to be more of an emotional reaction than categorisable behaviour. Many scholars criticise anthropomorphism, but I believe that anthropodenial, or the rejection of similarities between humans and our close relatives in order to keep us on an evolutionary pedestal [8], is worse. If two closely related species act similarly under similar circumstances, it is reasonable to theorise that they are similarly driven. Therefore, I describe this emotional response to death as grief, and the behaviours that stem from it, such as grooming and keeping vigil, as mourning. In humans, grief is an emotion, a feeling of sorrow caused by distress over a loss, with mourning being the social behaviours exhibited in response to that grief [28]. As the Pan CoD appears rooted in grief and mourning, it is reasonable to term it a socially driven phenomenon. This may help to contextualise behaviours throughout the hominin family tree. The socially driven CoD seen in chimpanzees, bonobos, and humans likely evolved as an adaptation to protect against the destabilisation caused by death. The more communal a species, the more effort is needed to protect against social destabilisation. Chimpanzees and bonobos, like humans, are highly social animals for whom a defined hierarchy is vital for stability. Death impacts social groups by severing bonds, creating a rupture in the social fabric: the most gregarious animals have the most mourners, as they had numerous strong bonds in life. The CoD likely evolved because it serves as a social stabiliser. If a species develops the ability to understand death, then they can feel grief. If a species can feel grief, then they can begin to mourn.
If a species can mourn, then they can more quickly re-categorise the living to dead, reform the social structure, and shape a new dominance network after death has left a hole in the hierarchy.
- Mysterious Wolbachia Bacterium Helps Fight the Dengue Virus
If you're traveling to the Caribbean, Indonesia, Australia, or any tropical climate really (where else would you holiday), falling ill to the dengue virus may be in the back of your mind. And you would be smart to pack the mosquito repellent; dengue fever is a serious and potentially fatal disease with no specific treatment or preventative medicine. More significantly, dengue burdens the millions of people who inhabit the endemic habitat of Aedes aegypti, the carrier mosquito; an endemicity that is rapidly spreading towards European and North American populations thanks to climate change. Over the past decade, dengue virus (DENV) cases reported to the WHO have increased eight-fold, reaching 5.2 million in 2019 [4]. Asia disproportionately bears 70% of the dengue burden, thought to be an effect of rapid urbanisation and global warming [1]. Thus, social and environmental issues should be taken into consideration when responding to dengue outbreaks. The lack of viable vaccines and specific treatment spotlights the Wolbachia bacterium as a cheap and effective solution through a somewhat understood, yet still mysterious, symbiosis.

Figure 1: Mosquito image by Yogesh Pedamkar from Unsplash.

Dengue is a positive-strand RNA arboviral disease with four serotypes, DENV 1-4, belonging to the genus Flavivirus (family Flaviviridae) [2]. Aedes aegypti mosquitoes are its primary vector, carrying and spreading DENV amongst populations [3]. Humans also act as a reservoir for DENV, infecting mosquitoes that feed on them [4]. DENV infection begins with E proteins embedded within the viral lipid membrane, which bind to cellular receptors to initiate endocytosis (entry into the cell for amplification and replication) [2]. Another pathway, Antibody-Dependent Enhancement (ADE), is associated with greater disease severity and potentially fatal dengue shock syndrome. The ADE pathway exploits processes of the immune system.
Fc immune cell receptors bind antibodies (themselves bound to the DENV pathogen) for endocytosis, but also act to block key antiviral molecules such as cytokines, which are regulators of the immune response [5]. DENV acts to decrease transcription and translation of pro-inflammatory cytokines and increase transcription and translation of anti-inflammatory cytokines. This imbalanced inflammatory response damages the inner blood vessel lining and causes vascular leakage, leading to hypovolemic shock [2]. Dengue prevalence is attributed to Aedes aegypti's life-long infectiousness and high transmission, but the spread of dengue is compounded by the social and environmental contributors discussed in the coming sections [2].

Wolbachia

To control the dengue burden, research into vaccines, antivirals, and vector control has grown considerably over the past decade [3]. However, the lack of effective vaccines has left vector control as the most pertinent method for reducing viral spread [6]. The endosymbiont Wolbachia's protection against significant infection by RNA viruses, and thus its reduction of dengue infectivity in Aedes aegypti, has been known for years, despite the lack of consensus on the underlying mechanisms. Wolbachia are maternally inherited bacteria known to infect >65% of insect species, yet they do not naturally infect Aedes aegypti [7]. Fortunately, artificial infection is feasible, opening the door to a potentially effective and naturally dispersive form of vector control. Pan et al. [7] proposed that the establishment of symbiosis between Wolbachia and its host increases pathogen resistance. Wolbachia exploits host innate immunity by activating the Toll and immune deficiency (IMD) biochemical pathways. It does this by activating pattern recognition receptors (proteins capable of recognising pathogenic molecules); both pathways induce the expression of antimicrobial peptides, which in turn induce overexpression of antioxidants. Pan et al.
acknowledge that it is unknown how these pathways reduce DENV infection and facilitate symbiosis, although it is clear that upregulation of such pathways increases Wolbachia presence. For example, antioxidant enzymes induced by the Toll pathway are suspected to enhance Wolbachia fitness [7]. This is supported by another study, in which flies with increased survivability in hyperoxic conditions were shown to have high levels of antimicrobial peptides and antioxidants [8]. Antimicrobial peptides potentially maintain the Wolbachia niche by preventing the growth of microbial flora within mosquitoes [7]. As described above, Wolbachia's exploitation of a host's immune response allows it to beat its microbial competitors. Evidently, boosting mosquito immunity with Wolbachia could amplify both the Wolbachia titer (population) and resistance to DENV. A second, speculated mechanism suggests Wolbachia may out-compete DENV for important host cell components, including cholesterol, which neither Wolbachia nor flaviviruses have the biosynthetic capability to synthesise autonomously. Interestingly, DENV requires cholesterol in order to replicate and cause pathogenesis [6]. The significance of competition between Wolbachia and DENV is yet to be determined; however, it could well be a key area of focus in future studies.

Figure 2: Poor infrastructure, drainage and sanitation are depicted as a consequence of rapid urbanisation in Vietnam. Vietnam, like Bangladesh, manages the dengue burden seasonally and with increasing severity. Image by Tony Lam Hoang from Unsplash.

Thinking beyond Wolbachia

The importance of DENV control is reinforced when considering the implications of both urbanisation and global warming for the dengue burden. Aedes aegypti survival, reproduction and transmission are promoted by increases in temperature, annual precipitation and humidity [10].
Bangladesh exemplifies how Aedes aegypti exploits such climatic shifts: the 2015-2017 pre-monsoon season saw seven times more dengue cases than the 2000-2017 season [11]. Rapid urbanisation has also been linked to increasing dengue cases in Bangladesh. Poor health care, infrastructure, sanitation, waste disposal, and drainage facilitate increased transmission and mortality in such metropolitan agglomerates [12]. Both global warming and urbanisation extend Aedes aegypti habitat beyond its endemic range: human interaction with zoonotic (animal-borne) disease increases with deforestation and extension into wild habitat, whilst tropical boundaries continually stretch toward the poles [13]. In 2015, approximately 53% of the global population was modelled to inhabit dengue risk areas, projected to increase to 60% by 2080 [10]. If you haven't thought about COVID-19 enough, the latest pandemic exemplifies the ways in which increasing viral transmission in a warmed and urban climate may indirectly impact the dengue burden. Co-infections, a lack of discrimination between COVID-19 and DENV in both clinical presentation and diagnostic methods, and strained access to healthcare may overrun health systems and put patients at further risk [14]. The vulnerability of communities to these consequences of global warming and rapid urbanisation reinforces the need to explore beyond purely biological solutions. Dengue is considered one of the fastest-growing viral diseases today, now extending beyond endemic boundaries as a consequence of urbanisation and global warming [4]. Vector control methods using Wolbachia bacteria are a promising area of research. However, the lack of consensus on the mechanisms of DENV control leaves more research to be done. Wolbachia is potentially protective not only against dengue, but also against other diseases including malaria, yellow fever, and Zika.
Wolbachia is thus a significant bacterium that may lead the fight against mosquito-borne disease for the protection of human health over the coming years.
- Are We Alone in the Universe?
Arthur C. Clarke, a science writer and futurist, once said, “The idea that we are the only intelligent creatures in a cosmos of a hundred million galaxies is so preposterous that there are very few astronomers today who would take it seriously. It is safest to assume, therefore, that they are out there and to consider the manner in which this fact may impinge upon human society.” The universe is approximately 93 billion light-years in diameter and is expanding at roughly 1.96 million km/s [1]. In other words, even travelling at the speed of light (approximately 3 × 10⁸ m/s), it would take 93 billion years to cross the universe. Many people are curious as to why we still have not encountered extraterrestrial (ET) life, despite the boundless possibilities that exist within the vast expanse of the universe. This is known as the Fermi paradox: the conflict between the high probability we would expect for intelligent life existing elsewhere in the universe and the ‘empty’ universe we observe [2]. Another term used to describe this silence and loneliness is the ‘Great Silence’ [3]. Image by StockSnap from Pixabay Scientists have pondered the existence of ET life for centuries. In 1961, astrophysicist Frank Drake developed an equation — the Drake equation — that seeks to estimate the number of intelligent civilisations in our galaxy (Table 1) [4]. However, many sceptics claim that the equation relies on too many assumptions and that the actual number of intelligent civilisations will more likely than not vastly differ from our predictions. Furthermore, scientists have proposed modifications and novel approaches to the original equation in recent years [5-7]. This article explores a few of the many proposed theories as to why we have yet to encounter ET life. 
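Since the Drake equation is simply a product of seven factors, it is easy to experiment with. Below is a minimal Python sketch; the parameter values are illustrative placeholders only, not established estimates.

```python
# Minimal sketch of the Drake equation: N = R* x fp x ne x fl x fi x fc x L.
def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilisations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Placeholder values (assumptions for illustration only): one star formed per
# year, half with planets, one habitable planet each, and optimistic odds for
# life, intelligence, and detectable communication lasting 10,000 years.
n = drake_equation(r_star=1.0, f_p=0.5, n_e=1.0, f_l=0.5, f_i=0.1, f_c=0.1,
                   lifetime=10_000)
print(n)
```

Because the result scales linearly with every factor, small changes in the assumed fractions swing the estimate by orders of magnitude — precisely the sceptics' objection noted above.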
The Great Filter Despite the universe being incredibly ancient, we do not have any solid evidence of ET intelligence colonising our solar system or nearby systems. R. Hanson [7] suggests that a ‘Great Filter’ stands between ordinary dead matter and advanced, flourishing life. For humans to thrive as a species like we do now, the appropriate conditions had to be present at the right time. It has been an arduous journey from the formation of our star system to the first ribonucleic acid (RNA), which subsequently led to the establishment of single-celled and multicellular life and to the birth of complex organisms that use tools. Between each ‘checkpoint’, there are multiple ways in which the suitable conditions could have been absent, leading to our non-existence. Evolution is a complex biological process that — until the publication of Charles Darwin’s On the Origin of Species — we did not comprehensively understand [8]. Perhaps there are microorganisms somewhere out in the universe, but the probability of such organisms evolving into intelligent, sentient beings is infinitesimal. Another suggestion concerning the Great Filter is that sufficiently developed civilisations eventually eliminate themselves, rendering the species extinct with no traces left behind. N. S. Kardashev [9] introduced a scale to classify technologically advanced civilisations according to the amount of energy they consume. In the decades since, extended versions of the Kardashev scale have been suggested (Table 2) [10]. A Type I civilisation is able to fully harness the energy that reaches its home planet from its parent star [9]. G. Basalla [11] claims that we are not a Type I civilisation yet, as we are unable to capture all the radiant energy streaming down on Earth. Our present civilisation is closer to Type 0.7, and scientists have predicted that we will attain Type I status by 2347 [12,13]. 
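Intermediate values like “Type 0.7” come from Carl Sagan’s continuous interpolation of the Kardashev scale, K = (log₁₀ P − 6) / 10, where P is a civilisation’s power use in watts. The sketch below applies it to a rough, assumed figure for humanity’s power consumption.

```python
import math

# Carl Sagan's continuous version of the Kardashev scale:
#   K = (log10(P) - 6) / 10, with P in watts.
# Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W.
def kardashev(power_watts):
    return (math.log10(power_watts) - 6.0) / 10.0

# Humanity's total power use is on the order of 2e13 W (a rough, assumed
# figure), which lands near the "Type 0.7" mentioned in the text.
print(round(kardashev(2e13), 2))
```

Note how the logarithm compresses the scale: each step of 0.1 on the Kardashev scale corresponds to a tenfold increase in power consumption.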
As a Type 0.7 civilisation, we already possess weapons of mass destruction that can destroy the Earth multiple times over. A moment of selfishness or carelessness could set our own extinction in motion. As a species, we are also battling climate change. A recent United Nations (UN) report warns that we must act now and reduce carbon emissions before we tread on an irreversible path toward climate disaster [14]. Climate change can lead to detrimental health outcomes; worse still, due to the unprecedented rate at which glacial ice is melting, thousands of microbes are now being released and reactivated into terrestrial and aquatic environments, which can lead to epidemics or even pandemics [15,16]. The Great Filter is a compelling hypothesis for why we may not have encountered ET life. Recklessness and ignorance could have led to the fall of once-glorious civilisations, preventing us from ever discovering their existence. The Zoo Hypothesis In 1973, John A. Ball proposed the Zoo hypothesis [17]. The hypothesis posits that intelligent life avoids interacting with us on purpose and that — like how we keep animals in enclosures and view them from a distance — they view the areas we reside in like a zoo. As a result, we will never discover ET life as they want to remain hidden from us, and they possess the technological capabilities to ensure it remains so [17]. The Zoo hypothesis is somewhat similar to the ‘Prime Directive’ — the belief that every society has the right to unimpeded and natural development — in the famous series Star Trek [18]. Much of the Zoo hypothesis is about respecting the autonomy of other civilisations, allowing infant civilisations to pursue their own destiny without interference. There is a possibility that advanced civilisations millions or billions of years older than us are watching us from the sidelines, waiting for us to achieve what they would consider intellectual, social and technological maturity. 
However, the concept that incredibly advanced beings are interested in the natural evolution of life on Earth sounds a little self-centred [19]. From an anthropocentric standpoint, we have never been great at non-interference with populations of other lands and differing cultures. Why should we assume that, unlike us, other civilisations are peaceful and altruistic? The Zoo hypothesis assumes that other civilisations care about our natural development. The contrasting proposal is that we are simply not worth contacting. We Are Not Worth Contacting When considering the age of the universe, we are an extremely young civilisation. To put this into perspective, scientists use the Cosmic Calendar, which compresses the entire timeline of the universe, from its birth to our current time of technological development and globalisation, into a single year. In this framework, with the Big Bang occurring in the very first second of New Year’s Day, the first humans only appeared on December 31 at approximately 22:30, and agriculture was only invented at 23:59:20 on the same day [20]. The Cosmic Calendar demonstrates how insignificant our species is on a grand scale. Over the billions of years the universe has existed, civilisations could have arisen that are more than a thousand-fold more advanced than us. Just as we would not teach bacteria calculus, ET life may not find contacting us worthwhile. It is undeniable that we have made great strides in many aspects, ranging from technological developments like smartphones and aeroplanes to scientific breakthroughs like the ability to edit our genes. However, one must acknowledge that many social and scientific problems still exist. There are more than 20 ongoing military conflicts worldwide due to civil wars, territorial disputes and transnational terrorism [21]. 
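The Cosmic Calendar described above is simple proportional scaling, sketched below in Python (assuming a universe age of about 13.8 billion years and a 365-day calendar; the exact timestamps shift slightly depending on the assumed age).

```python
# The Cosmic Calendar as proportional scaling: the universe's assumed
# ~13.8-billion-year history mapped onto one 365-day year, with the Big Bang
# at Jan 1 00:00:00 and the present at midnight on Dec 31.
SECONDS_PER_CALENDAR_YEAR = 365 * 24 * 3600   # 31,536,000 calendar seconds
UNIVERSE_AGE_YEARS = 13.8e9                    # assumed age of the universe

def seconds_before_midnight(years_ago):
    """Map real years before the present to calendar seconds before Dec 31 midnight."""
    return years_ago * SECONDS_PER_CALENDAR_YEAR / UNIVERSE_AGE_YEARS

# One calendar second spans roughly 437.6 real years, so the invention of
# agriculture (~10,000 years ago) falls within the final half-minute of the year.
print(round(seconds_before_midnight(10_000), 1))
```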
Furthermore, there have been increasing inequalities in areas such as health and wealth, and many debilitating diseases still plague populations worldwide with no cure. As a species, we have much to learn and discover. Our lack of knowledge and wisdom, compared to potential ancient ET civilisations, may be why we have yet to encounter ET life. Communication Differences There are approximately 7,151 human languages spoken today [22]. When we also consider the methods of communication used by the millions of animal species worldwide, we end up with a staggering variety of communication methods. The dolphin, for example, communicates through three known types of acoustic signals: burst-pulsed sounds, echolocation, and frequency-modulated whistles [23]. Life on Earth alone communicates through a multitude of modalities. While we have sent and received many signals to and from space, we still have never directly interacted with ET intelligence. Scientists involved in the Search for Extraterrestrial Intelligence (SETI) attempt to communicate with ET life by relying on the assumption that the basic principles of chemistry, mathematics and physics hold true throughout the universe [24]. This assumption may be wrong, and other scientific principles could govern the environment in which ET life thrives. Another challenge to communicating with ET life is the time it takes for civilisations to receive signals from other life forms. For example, a radio signal received from 12 million light-years away would mean that an ET civilisation sent the signal toward us 12 million years ago. By now, the civilisation that sent the signal could have destroyed themselves or become so advanced that they have decided not to contact us. Similarly, we could send a signal into space now, and if a civilisation one million light-years away receives it far into the future, we may be long gone by then. 
Another interesting thought experiment is that ET life looking at Earth through a telescope (one that must be technologically capable beyond human comprehension) would not see the technological progress we have made. If an ET species 65 million light-years away looked at Earth through such a telescope, they would see dinosaurs roaming around. There would then be no incentive to exhaust resources travelling to Earth to interact with life here. To conclude, there are many suggestions as to why we have not encountered ET life, most of which are plausible. Could we indeed be alone, or are the technological barriers to interstellar interaction simply too high for us now? Even if we discover ET life over the next few centuries, there are multiple implications to consider. Fundamentally, our worldview will evolve. ET life will probably be vastly different from what most people expect them to be — green figures with large round eyes, or other cinematic representations of ETs. Whether the vast universe we reside in has other life forms remains to be seen.
- Pinpointing Breast Cancer From a Bioengineering Perspective
By Max Dang Vu In 2020, around 2.3 million women globally were diagnosed with breast cancer, more than for any other cancer type. In the same year, almost 685,000 women died from the disease [1]. Breast cancer treatment involves locating tumours early and completely removing them through surgery. To enable this, tumour positions are first analysed and identified across medical images acquired from different diagnostic procedures. Based on these analyses and palpations, the surgeon marks the location to perform a tumour excision. But what are the clinical challenges of breast cancer diagnosis and treatment, and how can we help address them from a bioengineering perspective? This article highlights these challenges and reviews state-of-the-art biomechanical approaches to help find solutions to this question. Clinical breast cancer diagnosis and treatment procedures Breast cancer diagnosis involves three imaging procedures: X-ray mammography, magnetic resonance imaging (MRI), and second-look ultrasound (Figure 1). A challenge with this is finding the correspondence of tumour positions between the different procedures, because breast tissues undergo large displacements with small changes in patient positioning. The patient stands upright during X-ray mammography, with two plates compressing their breasts to achieve near-uniform distribution of internal tissues [2]. However, tumours and normal breast tissue can appear similar on mammograms, increasing the difficulty of differentiating them [3]. MRI is more effective at discriminating lesions from breast tissue due to the high-resolution contrast between soft tissues [4]. Patients lie face-down (prone position), as gravitational forces separate out tissues in the breast [5]. However, while MRI has high sensitivity, it also has low specificity, making it challenging to differentiate lesion types. This can result in unnecessary biopsies to confirm the presence or absence of tumours. 
Second-look ultrasound can supplement MRI by visualising lesions in real-time and help catch early-stage cancers. Clinicians apply a handheld high-frequency transducer probe over the patient's breast as they lie face-up (supine position) and tilted to one side to obtain these images [3]. Ultrasound, however, has poor cancer detection sensitivity and is best used to supplement MRI [6]. Figure 1: Breast cancer diagnostic images are taken via X-ray mammography in the standing position (a), followed by MRI in the prone (facedown) position (b) and second-look ultrasound in the supine (face-up) position and tilted to one side (c). Image a is obtained from Monkey Business - stock.adobe.com. Image b is from siemens.com/press. Image c is from Luisandres - stock.adobe.com. Clinicians typically treat diagnosed tumours by surgically removing them, either through lumpectomy plus radiation or a mastectomy [7]. The intention is to remove tumours altogether to minimise cancer recurrence and optimise survival chances [8]. A lumpectomy eliminates tumours along with small amounts of surrounding healthy tissues to conserve as much of the breast as possible. Follow-up radiation ensures any leftover cancer cells are eliminated or shrink in size for future removal. A mastectomy removes the entire breast when previous treatment strategies are ineffective for patients. This prompts the need for tools that assist in accurate tumour localisation and minimise the removal of healthy breast tissue in treatment. State-of-the-art development in the literature These clinical challenges have motivated the development of physics-driven computational models that predict breast biomechanics. The models can simulate motion under gravity loading from the prone position to the supine position, where surgical treatment is performed. This enables radiologists and surgeons to track the features of interest during clinical procedures [9], [10]. 
Breast biomechanical models are built using the Finite Element Method (FEM), which divides a geometry of interest into a mesh composed of many smaller elements. Partial differential equations describing the breast's mechanical behaviour are solved to predict the breast tissue deformation [11]. Deformation describes an object's changing shape and size in space under applied forces. Obtaining accurate predictions from biomechanical models requires identifying the mechanical properties of breast tissue. These properties provide insights into breast tissue composition as its underlying architecture and biological environment dictate their mechanical moduli or stiffness [12]. The breast is internally composed of adipose and fibroglandular tissues [13], and their reported stiffnesses vary with different mechanical loading conditions and experimental protocols used for their identification [14]. From the testing of excised samples of tissues (ex-vivo), the general observation is that fibroglandular tissues are 1 to 6.7 times stiffer than adipose tissue [13], and tumours have significantly higher stiffnesses that increase with cancer growth [15]. Researchers typically assess tissue stiffness in-vivo to avoid tissue removal and subsequent damage that may alter their mechanical behaviour during testing [16]. However, identifying mechanical properties in-vivo requires a rich dataset acquired either from MR imaging of the breasts in multiple positions or capturing surface deformation of the breast under indentation using multi-camera systems [10], which is highly challenging to obtain. Therefore, the validation of these identification methods is typically conducted first by performing experiments on soft silicone gel phantoms. These can be moulded into different shapes, such as rectangular beams or the breast [17-18]. At this point, tumours have been identified via imaging, and their locations predicted using biomechanics during surgery. 
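To make the FEM recipe above concrete, here is a deliberately tiny, hypothetical 1D example: an elastic bar fixed at one end under a uniform body force, a crude stand-in for gravity loading. Real breast models are 3D, nonlinear, and far richer; this only illustrates the mesh, assemble, and solve pattern.

```python
import numpy as np

# Toy 1D finite element model (illustration only): an elastic bar fixed at
# one end under a uniform body force. The FEM recipe: mesh the geometry,
# assemble element stiffness matrices into a global system, apply boundary
# conditions, then solve K u = f for the nodal displacements.
def bar_tip_displacement(n_elems, length=0.1, EA=1.0, body_force=5.0):
    h = length / n_elems                       # element size
    n_nodes = n_elems + 1
    K = np.zeros((n_nodes, n_nodes))
    f = np.zeros(n_nodes)
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += ke              # assemble into global matrix
        f[e:e + 2] += body_force * h / 2.0     # consistent nodal load
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])  # node 0 is fixed (the wall)
    return u[-1]                               # displacement of the free tip

# Matches the analytical tip displacement b*L^2 / (2*EA) = 0.025.
print(bar_tip_displacement(20))
```

The same structure — assemble element contributions, constrain fixed boundaries, solve a linear system — carries over to the 3D nonlinear models, where each solve becomes far more expensive.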
This information is to be communicated to clinicians to assist in tumour localisation. Existing approaches display these predictions on a 2D interface [10]. However, such communication should be more intuitive to improve treatment outcomes. Head-mounted holographic augmented reality (AR) systems have the potential to visualise tumour locations directly on patients during clinical procedures [19-20]. These systems have been successfully trialled in orthopaedics [21] and neurosurgery [22] because the tissues of interest deform minimally during interventions. Perkins (2017) [19] found that aligning holograms to the breast is far more challenging, as the breast significantly deforms with small positional changes. However, proof-of-concept studies combining AR systems with biomechanical models by Gouveia (2021) [20] demonstrated some promise. Clinicians could visualise the identified tumours from diagnostic images on their view of patients before interventions. Clinical translation challenges of these developments While proof-of-concept demonstrations have been developed, researchers must address the following challenges to enable routine use of this technology in the clinic. Firstly, state-of-the-art FEM simulations can take 30 seconds or longer to run [23], far too slow for the 60 frames per second required to reduce nausea and disorientation [24]. A proposed solution is surrogate models, which use machine learning to accelerate model evaluation while maintaining accuracy similar to FEM models. Studies using decision trees, randomised trees, and random forest models have enabled breast tissue deformation predictions under compression in about 0.15 seconds [25]. However, these approaches require training surrogate models offline for each patient, which can be time-consuming. Studies in the breast biomechanics literature have utilised trained models from a previous problem to predict the mechanical behaviour of a new dataset [26]. 
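The surrogate idea can be illustrated in miniature: sample an “expensive” simulator offline, fit a cheap approximator, and query it online. The sketch below uses an invented analytic simulator and a polynomial fit in place of the FEM models and tree ensembles used in the literature; every name and number here is purely illustrative.

```python
import numpy as np

# Surrogate modelling in miniature: offline, run an "expensive" simulator at
# sample points; fit a cheap model; online, query the cheap model instead.
def expensive_simulator(stiffness):
    # Stand-in for a ~30 s FEM run: displacement falls off with stiffness.
    return 1.0 / (1.0 + stiffness)

# Offline training stage: sample the simulator across the stiffness range.
stiffness_samples = np.linspace(0.5, 5.0, 50)
displacements = np.array([expensive_simulator(k) for k in stiffness_samples])
surrogate = np.polynomial.Polynomial.fit(stiffness_samples, displacements, deg=5)

# Online stage: evaluating the fitted polynomial is effectively instantaneous
# and stays close to the simulator within the sampled range.
print(float(surrogate(2.0)))   # close to expensive_simulator(2.0) == 1/3
```

The trade-off mirrors the one in the text: the offline sampling and fitting stage is slow (here trivially, in the clinic hours per patient), which is why reusing trained surrogates across patients is attractive.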
Secondly, estimating mechanical properties is also a computationally intensive procedure, often taking hours to complete. The model parameters that describe, for example, the stiffness of breast tissues are tuned iteratively to best match the measured breast shape under known loading conditions [14]. Clinical use requires this process to be much faster. Thirdly, clinicians need their AR headset to align the 3D hologram dynamically with the object of interest as they move their heads around. Studies in the literature have assumed the breast is rigid, making it difficult to align the model hologram with the real breast [19-20]. Instead, the hologram must be driven by a deformable model that incorporates mechanical properties, accounting for how even small changes in patient positioning can alter breast shape. Objectives of the PhD The challenges above have motivated me to develop an integrated physics-driven AR software platform that will provide navigational guidance to clinicians for tumour localisation. My platform will extend an automated clinical image analysis workflow developed by the Breast Biomechanics Research Group at the Auckland Bioengineering Institute [10] to align diagnostic images directly onto a clinician's view of a patient during breast cancer treatment procedures (Figure 2). Figure 2: My proposed physics-driven AR platform will leverage an automated clinical workflow for breast cancer image analysis [10]. The workflow builds personalised biomechanical models of the breast from diagnostic MRI and visualises breast tissue displacements in near real-time during clinical procedures performed in supine. In addition to technical developments for accelerating the workflow and identifying mechanical properties of the breast, my work will replace the GUI with an AR workflow that aligns diagnostic images to the clinician's view of patients. 
Images are obtained from Romaset - stock.adobe.com One of my platform's key features is near real-time simulation of breast tissue motion using surrogate models that incorporate information from population-based breast shape analyses. This will enable the surrogate models to provide predictions without time-consuming offline training. I will also integrate surrogate models developed in-house [27] with skin surface measurements from sensors on AR headsets (Figure 3) to enable estimation of breast tissue stiffness under known loading conditions (changes in an individual’s posture which changes the gravity loading conditions the breast experiences). The platform will integrate these developments with fiducial markers placed on the breast surface, and shape measurements from AR headset sensors to enable dynamic alignment of biomechanics simulations to the patient. The platform will apply the tissue displacements predicted by the mechanics simulations to the diagnostic prone MRI to help clinicians visualise how the internal tissues change shape in the supine position. This will help clinicians co-locate regions of interest across modalities e.g. between MRI and second-look ultrasound images. I will develop the platform in my first year and incorporate it into a state-of-the-art AR headset (Microsoft HoloLens 2 shown in Figure 3). The platform's accuracy for predicting supine tumour locations will be validated during platform development on soft silicone gel phantoms with tumour-like inclusions. In subsequent years, I will evaluate the platform's performance on patients in a series of clinical pilot studies. Figure 3: A diagram of the Microsoft Hololens 2 AR headset to be embedded with the developed software platform. Clinicians will be wearing these during procedures to visualise the predicted tumour locations onto patients directly. This image is from the Microsoft News Center Image Gallery. Breast cancer research is fast becoming an interdisciplinary field. 
Whether it is medical image registration, large deformation mechanics modelling, or computer vision, research opportunities are growing to address the field's significant challenges. I hope my work contributes to increasing the accuracy and efficiency of breast cancer treatment, improving health outcomes, and saving more lives. Acknowledgements External editors I want to thank Dr Prasad Babarenda Gamage, Dr Gonzalo Maso Talou, and Dr Huidong Bai for their guidance throughout the editing process, for approving the article proposal, and for supervising my PhD study in the ABI's Breast Biomechanics Research Group. Funding The ABI's Breast Biomechanics Research Group is grateful for the funding we have received from the University of Auckland Foundation, the New Zealand Breast Cancer Foundation, and the New Zealand Ministry of Business, Innovation and Employment that has supported our research. I would also like to thank The University of Auckland for awarding me a Doctoral Scholarship to support my PhD study financially.
References
1. H. Sung et al., “Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries,” CA: A Cancer Journal for Clinicians, vol. 71, no. 3, pp. 209–249, 2021, doi: 10.3322/caac.21660.
2. A. Mîra, A. K. Carton, S. Muller, and Y. Payan, “A biomechanical breast model evaluated with respect to MRI data collected in three different positions,” Clinical Biomechanics, vol. 60, pp. 191–199, 2018, doi: 10.1016/j.clinbiomech.2018.10.020.
3. R. F. Brem, M. J. Lenihan, J. Lieberman, and J. Torrente, “Screening breast ultrasound: Past, present, and future,” American Journal of Roentgenology, vol. 204, no. 2, pp. 234–240, 2015, doi: 10.2214/AJR.13.12072.
4. L. Lebron-Zapata and M. S. Jochelson, “Overview of Breast Cancer Screening and Diagnosis,” PET Clinics, vol. 13, no. 3, pp. 301–323, Jul. 2018, doi: 10.1016/j.cpet.2018.02.001.
5. R. M. Mann, N. Cho, and L. Moy, “Breast MRI: State of the art,” Radiology, vol. 292, no. 3, pp. 520–536, 2019, doi: 10.1148/radiol.2019182947.
6. V. Y. Park, M. J. Kim, E. K. Kim, and H. J. Moon, “Second-look US: How to find breast lesions with a suspicious MR imaging appearance,” Radiographics, vol. 33, no. 5, pp. 1361–1375, 2013, doi: 10.1148/rg.335125109.
7. A. G. Waks and E. P. Winer, “Breast Cancer Treatment: A Review,” JAMA, vol. 321, no. 3, p. 288, Jan. 2019, doi: 10.1001/jama.2018.19323.
8. M. S. Abrahimi, M. Elwood, R. Lawrenson, I. Campbell, and S. Tin Tin, “Associated Factors and Survival Outcomes for Breast Conserving Surgery versus Mastectomy among New Zealand Women with Early-Stage Breast Cancer,” IJERPH, vol. 18, no. 5, p. 2738, Mar. 2021, doi: 10.3390/ijerph18052738.
9. A. W. C. Lee, V. Rajagopal, T. P. Babarenda Gamage, A. J. Doyle, P. M. F. Nielsen, and M. P. Nash, “Breast lesion co-localisation between X-ray and MR images using finite element modelling,” Medical Image Analysis, vol. 17, no. 8, pp. 1256–1264, Dec. 2013, doi: 10.1016/j.media.2013.05.011.
10. T. P. B. Gamage et al., “An automated computational biomechanics workflow for improving breast cancer diagnosis and treatment,” Interface Focus, vol. 9, no. 4, pp. 1–12, 2019, doi: 10.1098/rsfs.2019.0034.
11. O. C. Zienkiewicz, R. L. Taylor, and D. Fox, The Finite Element Method for Solid and Structural Mechanics, 7th ed. Oxford: Butterworth-Heinemann, 2013, doi: 10.1016/C2009-0-26332-X.
12. M. P. Nash and P. J. Hunter, “Computational mechanics of the heart. From tissue structure to ventricular function,” Journal of Elasticity, vol. 61, no. 1–3, pp. 113–141, 2000, doi: 10.1023/A:1011084330767.
13. D. E. McGhee and J. R. Steele, “Breast biomechanics: What do we really know?,” Physiology, vol. 35, no. 2, pp. 144–156, 2020, doi: 10.1152/physiol.00024.2019.
14. T. P. Babarenda Gamage, P. M. F. Nielsen, and M. P. Nash, “Clinical Applications of Breast Biomechanics,” in Biomechanics of Living Organs: Hyperelastic Constitutive Laws for Finite Element Modeling, Elsevier, 2017, pp. 215–242, doi: 10.1016/B978-0-12-804009-6.00010-9.
15. N. G. Ramião, P. S. Martins, R. Rynkevic, A. A. Fernandes, M. Barroso, and D. C. Santos, “Biomechanical properties of breast tissue, a state-of-the-art review,” Biomechanics and Modeling in Mechanobiology, vol. 15, no. 5, pp. 1307–1323, 2016, doi: 10.1007/s10237-016-0763-8.
16. P. Elsner, E. Berardesca, and K.-P. Wilhelm, Eds., Bioengineering of the Skin: Skin Biomechanics, Volume V. CRC Press, 2001, doi: 10.1201/b14261.
17. V. Rajagopal, J. H. Chung, D. Bullivant, P. M. F. Nielsen, and M. P. Nash, “Determining the finite elasticity reference state from a loaded configuration,” International Journal for Numerical Methods in Engineering, vol. 72, no. 12, pp. 1434–1451, 2007, doi: 10.1002/nme.2045.
18. T. P. Babarenda Gamage, V. Rajagopal, M. Ehrgott, M. P. Nash, and P. M. F. Nielsen, “Identification of mechanical properties of heterogeneous soft bodies using gravity loading,” International Journal for Numerical Methods in Biomedical Engineering, vol. 27, no. 4, pp. 391–407, 2011, doi: 10.1002/cnm.1429.
19. S. L. Perkins, M. A. Lin, S. Srinivasan, A. J. Wheeler, B. A. Hargreaves, and B. L. Daniel, “A Mixed-Reality System for Breast Surgical Planning,” Adjunct Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), pp. 269–274, 2017, doi: 10.1109/ISMAR-Adjunct.2017.92.
20. P. F. Gouveia et al., “Breast cancer surgery with augmented reality,” Breast, vol. 56, pp. 14–17, 2021, doi: 10.1016/j.breast.2021.01.004.
21. N. Navab, S.-M. Heining, and J. Traub, “Camera Augmented Mobile C-Arm (CAMC): Calibration, Accuracy Study, and Clinical Applications,” IEEE Trans. Med. Imaging, vol. 29, no. 7, pp. 1412–1423, Jul. 2010, doi: 10.1109/TMI.2009.2021947.
22. R. M. Comeau, A. F. Sadikot, A. Fenster, and T. M. Peters, “Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery,” Med. Phys., vol. 27, no. 4, pp. 787–800, Apr. 2000, doi: 10.1118/1.598942.
23. L. Han et al., “A nonlinear biomechanical model based registration method for aligning prone and supine MR breast images,” IEEE Transactions on Medical Imaging, vol. 33, no. 3, pp. 682–694, 2014, doi: 10.1109/TMI.2013.2294539.
24. A. Vovk, F. Wild, W. Guest, and T. Kuula, “Simulator sickness in Augmented Reality training using the Microsoft HoloLens,” Conference on Human Factors in Computing Systems (CHI), pp. 1–9, 2018, doi: 10.1145/3173574.3173783.
25. F. Martínez-Martínez et al., “A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time,” Computers in Biology and Medicine, vol. 90, pp. 116–124, 2017, doi: 10.1016/j.compbiomed.2017.09.019.
26. A. Mendizabal, E. Tagliabue, J.-N. Brunet, D. Dall’Alba, P. Fiorini, and S. Cotin, “Physics-Based Deep Neural Network for Real-Time Lesion Tracking in Ultrasound-Guided Breast Biopsy,” Computational Biomechanics for Medicine, pp. 33–45, 2020, doi: 10.1007/978-3-030-42428-2_4.
27. G. D. Maso Talou, T. P. Babarenda Gamage, M. Sagar, and M. P. Nash, “Deep Learning Over Reduced Intrinsic Domains for Efficient Mechanics of the Left Ventricle,” Frontiers in Physics, vol. 8, pp. 1–14, 2020, doi: 10.3389/fphy.2020.00030.
- The Ecology of Undesirable Organisms
By Jasmine Gunton An integral part of human nature is to place every object in the universe into a hierarchical ranking system. We can see these hierarchies in social constructs such as class distinctions, businesses, and political systems. Additionally, hierarchies have an established role in science, with many biologists throughout history attempting to place the Earth’s many living organisms into a structured grouping. The most common of these ranking systems today is the Linnaean taxonomy. In this system, different species are grouped into a kingdom, phylum, class, and so on [1]. However, other systems are still insidiously ingrained into the human understanding of biological theory, with one of the prime examples being the ‘tree of life’. The tree of life places humans at the top of the proverbial tree, supposedly being the most intelligent and sophisticated animal. ‘Inferior’ species such as apes and reptiles are placed on lower branches. Finally, the prokaryotes are placed at the base of the trunk, deemed the most simple and unintelligent creatures [2]. The discipline of ecology challenges this old view, instead opting to view organisms in the context of the highly complex ecosystems which they occupy. Ecology recognises that each species within an ecosystem contributes greatly to the functioning of that ecosystem through indirect and direct interactions with other organisms and their environment. Nevertheless, some organisms are still viewed by the public as ecologically useless and undeserving of conservation efforts. I would like to explore why such creatures are, in fact, biologically important, and why their identity in popular culture should be reconsidered. Rattus Rats are one of the first creatures to come to mind as being universally disliked. This reputation has been sculpted by the rodents’ tendency to spread deadly pathogens and parasites to humans [3]. But you already know why rats are considered repugnant. 
Instead, let us look at why this rodent is beneficial to its native environments. Along with various pathogens, rats are also transporters of mass quantities of plant seeds. For example, in southwest China, Edward’s long-tailed rat (Leopoldamys edwardsi) is the main dispersal vector of the seeds of the tea oil camellia (Camellia oleifera). It just so happens that the long-tailed rat is also a voracious consumer of these seeds. In true rodent fashion, the long-tailed rat will hoard the seeds it has collected in various subsurface burrows. This effectively disperses the tea oil seeds, increasing the population’s chance of survival. The survival of tea oil camellia is therefore directly dependent on the abundance of long-tailed rats within the region [4]. Another important rat species is the Californian giant kangaroo rat (Dipodomys ingens). In the sandy grasslands of California, the kangaroo rat acts as a keystone species and habitat engineer of the ecosystem. Services provided by the kangaroo rat include soil disturbance and the creation of vast burrow networks that act as a habitat for other native species. By changing habitat structure, the kangaroo rat alters the community composition of the ecosystem, exerting positive effects on plant and invertebrate diversity, as well as lizard and squirrel abundance [5]. The vital presence of rats in these community structures conveys their ecological importance. It is important to note that in these situations, rats are native to the community, unlike in New Zealand where they are considered a threat to native ecosystem structures. Fungi It is not only animal species that receive negative attention from the human population. Mould, as many websites would tell you, is undesirable to have in the home as it releases mycotoxins that can be harmful to humans [6]. While not particularly useful within a house, mould has many benefits for its native ecosystem. 
Now, just to make things clear before explaining its ecology, mould is neither an animal nor a plant. It is instead part of the eukaryotic group of organisms known as fungi. This means that mould and other fungal species are special, and not like the other organisms. The label ‘mould’ has been applied to multiple polyphyletic groups of fungi, so for the sake of simplicity, we will treat fungi and mould as if they were the same. Native to every continent, fungi are incredibly hardy and ancient, having evolved symbiotic relationships with several plant and animal species [7-9]. One of the most important roles that fungi play is the decomposition of organic material. In almost every ecosystem, the same cycle of decomposition takes place. When organisms die, their bodily material is digested by various species of fungi. This digestion process converts the organic material into nutrients that plants can use. Herbivorous animals eat these plants, the animals eventually die, and the cycle is renewed. Fungi can also benefit plants through a mutualistic relationship known as mycorrhizae. A mycorrhiza is a symbiotic association between a fungus and the roots (or the rhizosphere) of a plant. The plant supplies sugars from photosynthesis to the fungus, and the fungus in return supplies the plant with water and nutrients such as phosphorus and nitrogen, which are taken from the soil [10]. For some plant species, mycorrhizae are essential for effective establishment and growth. Therefore, the survival of certain plants in an ecosystem depends on the existence of, and the services provided by, fungi [11]. Vespidae Ample information has been included in this article concerning the benefits provided by foragers and decomposers. Now I want to discuss the question: how do predators benefit their respective ecosystems? One cannot deny that wasps are menacing, aggressive, and persistent in their violence. 
Yet, these qualities are what make wasps such beneficial predators in their ecosystems. Once again, this discussion of the benefits of wasps focuses on their native environments. Wasps prey on a number of insects, including caterpillars, cicadas, flies, and beetles. By feeding on these carnivorous and herbivorous insects, the wasp indirectly protects both insects and plants in the lower levels of the food chain [12]. However, wasps are only predators of insects in a certain sense. Adult wasps do not actually eat insects, preferring instead to paralyse their prey and feed it to their larvae [13]. Nevertheless, this process ensures that certain insect species do not become over-abundant in the ecosystem. As well as performing natural regulatory services, wasps have substantial potential to act as biological pest control agents in urban and pastoral regions. A study by Prezoto et al. suggests that wasp colony management is a cost-effective and feasible technique for controlling pest species [14]. It is not only their violent nature that makes wasps an asset to their ecosystem. In addition to predation, wasps also act as pollinators for a large range of plant species (a fact I'm sure bee enthusiasts greatly detest). In fact, Brock et al. found that 164 plant species are solely dependent on aculeate wasps for pollination [12]. Perhaps we should display the same amount of appreciation for wasps as we do for another certain flying insect. Columbidae The last example I wish to discuss has been described by some as a ‘flying rat’. This species often inhabits cities and feeds on food scraps discarded by humans [15]. I am talking about none other than the pigeon. Despite once being used by humans for communication, pigeons have sadly earned a less than favourable reputation [16]. It is thus my duty to convince you, the reader, of the pigeon’s usefulness in its ecosystem, and to inform you of its charismatic qualities. 
Similar to the long-tailed rat, pigeons are important distributors of plant seeds. Pigeons are especially effective at seed dispersal as they travel long distances away from the parent plants. For example, the New Zealand Kererū (Hemiphaga novaeseelandiae), or ‘wood pigeon’, is an important seed disperser of the native tree species Beilschmiedia tawa (Tawa), Vitex lucens (Pūriri) and Pseudopanax arboreus (Five-finger) [17]. In other countries, pigeons are an important food source for many species of falcons, including the peregrine falcon (Falco peregrinus) [18]. The peregrine falcon itself is also important in its ecosystem as a predator of several other bird species, including ptarmigan and ducks. Therefore, by supporting peregrine falcon populations, the pigeon indirectly helps to regulate other bird species [19]. Another interesting fact (that admittedly does not have much to do with its ecology) is that pigeons have excellent visual discrimination skills. A study by Watanabe et al. showed that pigeons can be taught to discriminate between the artworks of Claude Monet and Pablo Picasso [20]. In a later paper, Watanabe showed that it was possible to teach pigeons to discriminate between the paintings of other artists, including Van Gogh and Marc Chagall [21]. The pigeon’s discrimination skills do not stop at paintings. Pigeons are also able to discriminate between human individuals and have shown a basic understanding of human behaviour [15]. I hope that I have persuaded you not only of the pigeon’s ecological importance but also of its intellect and charm. Concluding Statements In this article, I have described the ecological importance of only a few species, placing emphasis on those considered undesirable by many people. In truth, all organisms are ecologically important to the functioning of their native habitats. 
Biologists often use the terms ‘keystone species’ and ‘ecosystem engineer’ to denote species that appear to be more vital than others to their respective environments. In my opinion, hierarchical categorisation is impractical in both ecology and the wider field of biology. In research and the application of environmental management, scientists need to stop thinking of organisms as being in an ecological ranking, but rather as part of the highly complex system of abiotic and biotic elements that make up an ecosystem. Nature does not view one animal as inherently ‘better’ than another, and neither should we. References C. Linnæi, Species Plantarum, 1st ed. Stockholm, Sweden: Laurentius Salvius, 1753. E. Haeckel, The Evolution of Man: A Popular Exposition of the Principal Points of Human Ontogeny and Phylogeny, 1st ed. New York, NY, USA: D. Appleton & Company, 1879. C. G. Himsworth, K. L. Parsons, C. Jardine, and D. M. Patrick, “Rats, Cities, People, and Pathogens: A Systematic Review and Narrative Synthesis of Literature Regarding the Ecology of Rat-Associated Zoonoses in Urban Centers,” Vector-Borne and Zoonotic Diseases, vol. 13, no. 6, pp. 349-359, May 2013. [Online]. Available: https://doi.org/10.1089/vbz.2012.1195 Z. Xiao and Z. Zhang, “Long-term seed survival and dispersal dynamics in a rodent-dispersed tree: testing the predator satiation hypothesis and the predator dispersal hypothesis,” Journal of Ecology, vol. 101, no. 5, pp. 1256-1264, Jun. 2013. [Online]. Available: https://doi.org/10.1111/1365-2745.12113 L. R. Prugh and J. S. Brashares, “Partitioning the effects of an ecosystem engineer: kangaroo rats control community structure via multiple pathways,” Journal of Animal Ecology, vol. 81, no. 3, pp. 667-678, May 2012. [Online]. Available: https://www.jstor.org/stable/41496035 I. Sengun, D. Yaman and S. Gonul, “Mycotoxins and mould contamination in cheese: a review,” World Mycotoxin Journal, vol. 1, no. 3, pp. 291-298, Aug. 2008. [Online]. 
Available: https://doi.org/10.3920/WMJ2008.x041 G. R. Bisby, “Geographical Distribution of Fungi,” Botanical Review, vol. 9, no. 7, pp. 466-482, Jul. 1943. [Online]. Available: https://www.jstor.org/stable/4353291 C. Gostinčar, M. Grube, S. De Hoog, P. Zalar, and N. Gunde-Cimerman, “Extremotolerance in fungi: evolution on the edge,” FEMS Microbiology Ecology, vol. 71, no. 1, pp. 2-11, Dec. 2009. [Online]. Available: https://doi.org/10.1111/j.1574-6941.2009.00794.x R. Lucking, S. Huhndorf, D. H. Pfister, E. R. Plata, and H. T. Lumbsch, “Fungi Evolved Right on Track,” Mycologia, vol. 101, no. 6, pp. 810-822, Dec. 2009, doi: 10.3852/09-016. M. J. Harrison, “The Arbuscular Mycorrhizal Symbiosis,” in Plant-Microbe Interactions, 1st ed. Boston, MA, USA: Springer, 1997, ch. 1, pp. 1-34. M. G. A. Van Der Heijden, “Arbuscular Mycorrhizal Fungi as a Determinant of Plant Diversity: in Search of Underlying Mechanisms and General Principles,” in Mycorrhizal Ecology, 1st ed. Berlin, Germany: Springer, 2002, ch. 10, pp. 243-265. R. E. Brock, A. Cini and S. Sumner, “Ecosystem services provided by aculeate wasps,” Biological Reviews, vol. 96, no. 4, pp. 1645-1675, Apr. 2021. [Online]. Available: https://doi.org/10.1111/brv.12719 K. Konno, K. Kazuma and K. Nihei, “Peptide Toxins in Solitary Wasp Venoms,” Toxins, vol. 8, no. 4, p. 114, Apr. 2016. [Online]. Available: https://doi.org/10.3390/toxins8040114 F. Prezoto, T. Tagliati Maciel, M. Detoni, A. Zuleidi Mayorquin and B. Correa Barbosa, “Pest Control Potential of Social Wasps in Small Farms and Urban Gardens,” Insects, vol. 10, no. 7, p. 192, Jun. 2019. [Online]. Available: https://doi.org/10.3390/insects10070192 A. Belguermi et al., “Pigeons discriminate between human feeders,” Animal Cognition, vol. 14, article no. 909, Jun. 2011. [Online]. Available: https://doi.org/10.1007/s10071-011-0420-7 S. Capoccia, C. Boyle and T. 
Darnell, “Loved or loathed, feral pigeons as subjects in ecological and social research,” Journal of Urban Ecology, vol. 4, no. 1, article no. juy024, Nov. 2018. [Online]. Available: https://doi.org/10.1093/jue/juy024 D. M. Wotton and D. Kelly, “Do larger frugivores move seeds further? Body size, seed dispersal distance, and a case study of a large, sedentary pigeon,” Journal of Biogeography, vol. 39, no. 11, pp. 1973-1983, Nov. 2012. [Online]. Available: https://doi.org/10.1111/jbi.12000 P. Lopez-Lopez, J. Verdejo and E. Barba, “The role of pigeon consumption in the population dynamics and breeding performance of a peregrine falcon (Falco peregrinus) population: conservation implications,” European Journal of Wildlife Research, vol. 55, article no. 125, Oct. 2008. [Online]. Available: https://doi.org/10.1007/s10344-008-0227-2 C. M. White, N. J. Clum, T. J. Cade and W. G. Hunt. “Peregrine Falcon (Falco peregrinus).” Birdsoftheworld.org. https://doi.org/10.2173/bna.660 (accessed March 16, 2022). S. Watanabe, J. Sakamoto and M. Wakita, “Pigeons' discrimination of paintings by Monet and Picasso,” Journal of the Experimental Analysis of Behavior, vol. 63, no. 2, pp. 165-174, Mar. 1995. [Online]. Available: https://doi.org/10.1901/jeab.1995.63-165 S. Watanabe, “Van Gogh, Chagall and pigeons: Picture discrimination in pigeons and humans,” Animal Cognition, vol. 4, pp. 147-151, Oct. 2001. [Online]. Available: https://doi.org/10.1007/s100710100112
- Classical Conditioning in Brave New World
By Sheeta Mo Science fiction reflects the scientific progress of the time in which it was written. Authors apply their wild imaginations to scientific breakthroughs to visualise the world of tomorrow. What if we analysed a science fiction masterpiece with real science? Would the description be accurate or outdated? Does it present a possible future of where we are heading? These questions jumped into my head as I read Brave New World. To ease my curiosity, I decided to draw parallels between the novel and Forty Studies that Changed Psychology by Roger R. Hock. Brave New World by Aldous Huxley is one of the world’s classic science fiction novels. It depicted a dystopian future of our society in which science was used to control people. Biological applications and psychological theories were combined to manufacture citizens into replaceable gears from the time they were embryos. I will focus on the psychological methods of manipulating people’s minds suggested in the book. Children were conditioned with phobias to fix them in their predetermined social classes. This was done with an extremely unethical approach, by electrocuting and scaring babies in the “INFANT NURSERIES. NEO-PAVLOVIAN CONDITIONING ROOMS” [1, p. 19]. The conditioning wired fear to flowers and books. The fear would last a lifetime, keeping lower-class citizens away from literature [1]. “Books and loud noises, flowers and electric shocks–already in the infant mind these couples were compromisingly linked; and after two hundred repetitions of the same or a similar lesson would be wedded indissolubly” [1, p. 22]. It sounds overstated and cruel, but it is theoretically possible. Let’s examine the scene through the classical conditioning theory of learning, or, as you might recognise it, “Pavlov’s Dog”. It might be the most widely known psychological phenomenon. Pavlov identified two types of reflexes: unconditioned and conditioned. No learning is needed for unconditioned reflexes, as they are automatic and inborn. 
In contrast, conditioned reflexes need to be established by learning or experience [2]. In Brave New World, the unconditioned reflex would be fear of sudden loud noises, and the conditioned reflex would be fear of flowers and books. Before conditioning, babies crawled towards flowers and books with “little squeals of excitement” [1, p. 21]. Therefore, the items were neutral stimuli [2]. How did neutral stimuli come to trigger fear, a conditioned response? The simplest way to explain it is through a diagram: Figure 1: Diagram adapted from the table in [2] to fit with the article's content In short, the neutral stimuli were paired with unconditioned stimuli to produce fear. The process was repeated until the neutral stimuli became conditioned stimuli. In the end, “the infants shrank away in horror, the volume of their howling suddenly increased” [1, p. 22] when flowers and books were shown to them without electric shocks. You might think that it was an exaggerated fictional scene based on psychology. Unfortunately, you would be wrong. It was almost a direct transcription of the Little Albert experiment carried out by Watson and Rayner in 1920 [3]. Watson’s morally challenged study involved an 11-month-old baby named “Albert B.” Watson was aiming to study how emotions can be learnt. Note that Pavlov’s study only focused on reflexes (e.g. secretion of saliva) but not specifically on emotions (e.g. fear). In the experiment, Albert was presented with a white rat and several other fluffy animals and objects. Little Albert was curious, but wasn’t afraid of the objects. Then, the white rat was shown to him again while a steel bar was struck behind him to make a loud noise. Albert was frightened and started crying. The pairing was repeated seven times [3] until little Albert cried and crawled away at the sight of a white rat, even when there were no loud noises [4]. 
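The acquisition process sketched in the diagram can also be expressed as a toy simulation. The sketch below uses the Rescorla-Wagner learning rule, a later formal model of classical conditioning that is not discussed in the article or the novel; the function name and parameter values are invented for illustration. Associative strength V starts at zero for a neutral stimulus and climbs toward the asymptote on each paired presentation:

```python
# Toy Rescorla-Wagner simulation of classical conditioning (illustrative only).
# v: associative strength between the conditioned stimulus (flowers/books)
# and the fear response; lam: the maximum strength the unconditioned
# stimulus (shock/noise) can support; alpha_beta: learning rate.

def condition(trials: int, alpha_beta: float = 0.3, lam: float = 1.0) -> list:
    """Return associative strength after each of `trials` CS-US pairings."""
    v = 0.0  # neutral stimulus: no association before conditioning
    history = []
    for _ in range(trials):
        v += alpha_beta * (lam - v)  # error-correction update toward lam
        history.append(v)
    return history

history = condition(10)
print(f"after 1 pairing:   V = {history[0]:.2f}")   # 0.30
print(f"after 10 pairings: V = {history[-1]:.2f}")  # near the asymptote
```

With these toy numbers, a single pairing already produces a noticeable association, and after ten pairings V sits near its ceiling, loosely mirroring the novel's "two hundred repetitions ... wedded indissolubly".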
Further observation of Albert showed that conditioning could be generalised, transferred between situations, and persist over time. It meant that Albert was fearful not only of white rats but also of rabbits, fur coats, and even a Santa Claus mask [4]. His fear was not limited to the lab environment; rather, it persisted when Albert was taken to another room. The conditioned emotional response also lasted over time, as little Albert was afraid of the same items even after a month without experiments [3]. Putting this in Brave New World terms, the babies were likely to fear all flowers and books even if the objects had different features. They would be afraid no matter what environment they were in. It was also possible that they would stay conditioned for a lifetime (especially when there was nothing else in the society to ‘recondition’ them). “They’ll grow up with what the psychologists used to call an ‘instinctive’ hatred of books and flowers. Reflexes unalterably conditioned” [1, p. 22]. So why is this important? Fictional works are stories after all. However, we should be alarmed if a story sounds too much like real life. After examining the fictional scene in Brave New World with real psychology, we come to two conclusions: a) It can be done. b) It has been done. Science fiction is fascinating and frightening because the future it visualises could be true. Huxley wrote the novel because he saw trends in our society that might lead us to a similar world where science is manipulated to control and exploit individuals. Conditioning people for control might not be as extreme as Huxley envisioned. It might be done subtly, for ‘harmless’ reasons: for example, linking a product with positive emotions to maximise the effect of an advertisement. Science fiction is like a fire alarm. It screams a sharp warning at any trace of smoke. We might never get a fire, but we all need an alert in our hearts. Hopefully, it never rings. References A. L. 
Huxley, Brave New World, Coradella Collegiate Bookshelf Editions, 1932, ch. 2, pp. 19-23. [Online]. Available: http://scotswolf.com/aldoushuxley_bravenewworld.pdf R. R. Hock, “Reading 9: It’s not just about salivating dogs!,” in Forty Studies that Changed Psychology, 7th ed. Beijing, China: Ptpress, 2017, ch. 3, sec. 9, pp. 83-90. R. R. Hock, “Reading 10: Little Emotional Albert,” in Forty Studies that Changed Psychology, 7th ed. Beijing, China: Ptpress, 2017, ch. 3, sec. 10, pp. 90-96. Psychological Experiments Online. Studies Upon the Behavior of the Human Infant: Includes Little Albert. (1920). Accessed: Mar. 20, 2022. [Online Video]. Available: https://video-alexanderstreet-com.ezproxy.auckland.ac.nz/watch/studies-upon-the-behavior-of-the-human-infant-includes-little-albert/transcript?context=channel:psychology Little-albert.jpg, Akron psychology archives, 1920. [Online]. Available: https://commons.wikimedia.org/wiki/File:Little-albert.jpg
- Gene-Editing: Where Do We Draw the Line?
By Lucas Tan Since Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) were first harnessed for gene-editing in 2012, scientists and pharmaceutical companies have invested countless hours and billions of dollars into developing ground-breaking gene-editing technologies, owing to CRISPR’s simplicity, affordability, and efficiency [1-2]. The potential benefits of gene-editing through CRISPR range from treating genetic diseases like sickle cell disease and Duchenne muscular dystrophy, to increasing the yield and hastening the process of crop growth [3-6]. James J. Lee, a researcher at the University of Minnesota, also claims that, in principle, scientists could utilise CRISPR to significantly boost the expected intelligence of an embryo [7]. For the first time in history, Homo sapiens — instead of natural selection — possesses the ability to influence the biological fate of living things on Earth. As with any disruptive technology, it is of paramount importance for us to explore the ethical boundaries of CRISPR. Should genetic enhancements such as increasing the intelligence of individuals be allowed? What constitutes a genetic enhancement? Where do we draw the line? This article seeks to present existing applications of CRISPR and explore a variety of — but by no means all — ethical concerns regarding gene-editing. Figure 1: How CRISPR/Cas9 Works, adapted from [28]. Before debating the ethics of gene-editing, it would do well for one to understand how CRISPR works. One of the most popular methods scientists use to perform genetic editing is CRISPR/Cas9. Cas9, a CRISPR-associated protein, is an endonuclease that forms base pairs with DNA target sequences. It accomplishes this by utilising a guide sequence within an RNA duplex, trans-activating CRISPR RNA (tracrRNA):CRISPR RNA (crRNA). This enables Cas9 to introduce a site-specific double-strand break in the DNA. 
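The target-recognition step just described, in which the guide sequence base-pairs with a matching DNA site, can be caricatured in a few lines of code. This is only a sketch: the function name and example sequence are invented, and it ignores strand orientation, mismatches, and everything else real guide design must consider. It does, however, include the NGG PAM motif that the commonly used Streptococcus pyogenes Cas9 requires next to its target:

```python
# Illustrative sketch: find candidate Cas9 target sites, i.e. 20-nt
# protospacers matching the guide sequence and followed by an NGG PAM
# (the motif SpCas9 requires). Toy example, not a guide-design tool.

def find_target_sites(dna: str, guide: str) -> list:
    """Return 0-based start positions of guide matches followed by NGG."""
    assert len(guide) == 20, "SpCas9 guides are typically 20 nt"
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        protospacer = dna[i : i + 20]     # candidate target site
        pam = dna[i + 20 : i + 23]        # 3 nt immediately downstream
        if protospacer == guide and pam[1:] == "GG":  # N-G-G
            hits.append(i)
    return hits

# Invented toy sequence: a guide match at position 4, followed by PAM "TGG".
guide = "GATTACAGATTACAGATTAC"
dna = "AAAA" + guide + "TGG" + "CCCC"
print(find_target_sites(dna, guide))  # [4]
```

In the real system the match is an RNA:DNA duplex rather than a string comparison, and Cas9 tolerates some mismatches, which is exactly where the off-target problem discussed later comes from.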
Researchers then engineer the dual tracrRNA:crRNA as a single guide RNA (sgRNA) that possesses two critical characteristics: a duplex RNA structure at the 3’ side that binds to Cas9, and a sequence at the 5’ side that determines the DNA target site through base-pairing with the DNA. This allows Cas9 to be directed to any DNA sequence of interest simply by changing the guide sequence of the sgRNA [8]. Figure 1 is a brief illustration of the process of gene-editing with CRISPR/Cas9. Congenital genetic abnormalities and disorders are present in 2-5% of births [9], a staggering statistic. Harnessing the power of gene-editing could therefore provide a whole host of benefits, and CRISPR gene-editing has already displayed fantastic potential in areas like therapeutics and agriculture. In 2019, D. Alapati et al. performed accurately timed in utero intra-amniotic administration of CRISPR/Cas9 components — targeting a monogenic lung disease — in an embryonic mouse model, using a CRISPR fluorescent reporter system to enable specific and targeted gene-editing in fetal lungs. Through this process, the mouse model, which carried the human surfactant protein C mutation SFTPC I73T, showed a 22.8% increase in life expectancy, improved lung development, and decreased pulmonary pathogenesis [10]. More recently, C. K. W. Lim et al. [11] demonstrated that CRISPR possesses the potential to treat amyotrophic lateral sclerosis (ALS) — remember the ice bucket challenge? Mouse models displayed a significantly decreased rate of muscular atrophy, improved neuromuscular function, and prolonged life expectancy after in vivo base editing [11]. There are also currently multiple ongoing registered clinical trials that utilise CRISPR. One such clinical trial aims to assess the efficacy and safety of genetically engineered, neoantigen-specific Tumour Infiltrating Lymphocytes (TIL), in which scientists utilised CRISPR gene-editing to inhibit the intracellular immune checkpoint CISH, for the treatment of gastrointestinal (GI) cancer [12]. 
Another clinical trial aims to assess the safety and efficacy of allogeneic T cells modified ex vivo with CRISPR/Cas9 gene-editing components, in the CTX130 CD70-directed T-cell immunotherapy, to treat T cell lymphoma [13]. When it comes to agriculture, the benefits of CRISPR are plentiful as well. Examples of existing applications of CRISPR/Cas9 in crops include targeting the gene PL or ALC to increase the shelf life of tomatoes, and genetic modifications to obtain disease- and virus-resistant plants [14-16]. While CRISPR possesses a host of benefits, it does have its limitations. For example, an optimal CRISPR/Cas tool must bind and/or cleave a specified target without producing additional off-target edits as by-products in complicated genomes [9]. Nevertheless, research is already underway to enhance the effectiveness and safety of CRISPR technology. In addition to technical limitations, there are multiple ethical considerations to make. Those most enthusiastic about genetic enhancements call themselves transhumanists. These people believe that we should transcend the blind and arduous process of evolutionary selection, since we now possess the ability to control our biological fate [17]. Critics claim that changing our nature in this way would cause us to lose our human dignity, an idea Nick Bostrom disputes in his defence of ‘posthuman dignity’ [18]. With gene-editing technologies advancing rapidly, proponents of gene-editing claim that if we do not embrace the full potential of genetic engineering, we are denying many individuals a ‘normal’ life, and such an act would be considered ethically wrong. Individuals in various cultures may consider genetic engineering to be ‘playing God’. Others may believe in staying ‘natural,’ yet with all the processed foods with additives, pesticides, and other chemicals that most of the population consumes daily, what is considered ‘natural’? It is possible that gene-editing may eventually become commonplace. 
Would there then be a stigma associated with not having undergone gene-editing? Could gene edits eventually be associated with certain levels of prestige within society? Some futurists predict that gene-editing technologies will eventually allow individuals to enhance themselves or handpick traits that they want their children to possess. Many around the world would love physical characteristics like a lower body fat percentage or increased intelligence. It is theoretically possible to intervene in the aesthetics of height, hair colour, and eye colour, and perhaps even in the more subtle aspects of appearance and intelligence [7, 19]. While technologies that can create ‘designer’ babies are not available yet, they could soon become a reality. A thought-provoking conundrum to contemplate is that when it comes to traits like intelligence, the distinction between genetic enhancement and gene therapy is blurred [17]. While increasing an individual’s intelligence quotient from 120 to 140 would be considered an enhancement, would raising an individual’s intelligence quotient from 90 to 110 be considered therapy or an enhancement? Ultimately, it depends on our distinctions between normality and abnormality, and between health and disease [20]. In a time when self-image and body consciousness are becoming increasingly widespread due to the influence of social media, many—especially the wealthy—will not see any ethical issues with genetic enhancements for aesthetic purposes. Parents want the best for their children. Should they be offered the opportunity to enhance their children genetically, would parents stop at ‘gifting’ their children an above-average height or increased intelligence, or will there be a never-ending list of enhancements they want? Such a scenario begs a fundamental question: should aesthetic genetic enhancements be allowed, or should we focus solely on gene-editing applications—like therapeutics and agriculture—that bring about societal benefits? 
Various parties such as policymakers, businessmen, clinicians, and academics need to agree on what constitutes appropriate gene-editing applications in our society. Figure 2: Somatic Gene Editing vs. Germline Gene Editing, adapted from [29]. Aesthetic genetic enhancements raise several concerns. There is a possibility that such enhancements will only benefit the affluent due to cost and accessibility issues, leading to a greater social inequality gap as the rich become increasingly capable and acquire physically ideal traits. Meanwhile, the less fortunate will drift further away from what is considered the new norm. In addition, the approval of aesthetic enhancements could lead to a less diverse society, which may leave us in an environment with less edge, inspiration, and creativity. One thought experiment described by Walter Isaacson to tackle this problem distinguishes two terms: an absolute good and a positional good. Enhanced resistance to common viruses, for example, is an absolute good. On the other hand, enhanced facial features are a positional good [21]. The distinction? Resistance to a virus benefits society, while enhanced facial features give the recipient a positional advantage. Absolute goods such as treating genetic diseases and enhancing resistance to common viruses could lead to happier and healthier individuals. This could translate to increased economic productivity, reduced healthcare expenditure — governments could spend more on other sectors like education — and the possibility of greater equality due to the potential elimination of biological determinants of health outcomes. Another grey area that regulators and researchers frequently tread is the question of somatic gene-editing (SGE) versus germline gene-editing (GGE). SGE only affects the treated patient and specific types of cells. On the other hand, GGE affects all the cells in an organism, including sperm and eggs; hence edited traits are passed on to future generations. 
Figure 2 illustrates the differences between SGE and GGE. In 2018, He Jiankui, a gene-editing researcher at the Southern University of Science and Technology in Shenzhen, China, implanted edited embryos in a woman. Through CRISPR/Cas9, he disabled CCR5, a gene encoding a protein that allows human immunodeficiency virus (HIV) to enter a cell [22]. Recently, the BBC published an article mentioning that Lulu and Nana, the first gene-edited babies to be born, may not actually possess resistance to HIV due to multiple problems with He’s methods [23]. The full extent of the consequences for gene-edited babies is still unknown, and He is currently serving a three-year sentence in prison for violating medical regulations [24]. Proponents of GGE cite several benefits. Companies and scientists could utilise GGE to avoid passing on single-gene disorders like cystic fibrosis (CF) — a congenital genetic lung disease that can lead to respiratory and digestive system complications and a shortened life expectancy — especially in cases where two carriers of the gene for CF hope to have a child together. This is because there is a 25% chance that the child of two CF gene carriers will develop CF. Only approximately 19% of women undergoing in vitro fertilisation (IVF) produce just one viable embryo [25]. In such a situation, IVF embryo selection will not provide any tangible benefit, and GGE would prove more beneficial in preventing CF. In addition, IVF is also unable to select against polygenic diseases such as diabetes and coronary artery disease [26]. GGE could be a powerful tool in the fight against such diseases in the future. On the other end of the spectrum, those opposed to GGE have made multiple arguments, including concerns about the safety of individuals who have undergone GGE, the possibility of negative consequences for future generations, whether we are infringing upon the consent and autonomy of future generations, and the fact that we could also utilise GGE for heritable enhancements [25]. 
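The 25% risk quoted above for two CF carriers follows from ordinary Mendelian segregation, and a quick Punnett-square enumeration makes the arithmetic explicit (a toy sketch, with 'F' standing for the functional allele and 'f' for the CF-causing allele; the function name is invented for illustration):

```python
# Punnett-square enumeration for a single-gene recessive disorder.
# Each parent genotype is a 2-character string of alleles; a child
# develops CF only with genotype "ff". Illustrative sketch only.
from itertools import product

def affected_fraction(parent1: str, parent2: str) -> float:
    """Fraction of equally likely offspring genotypes that are 'ff'."""
    offspring = [a + b for a, b in product(parent1, parent2)]
    return offspring.count("ff") / len(offspring)

print(affected_fraction("Ff", "Ff"))  # 0.25: the 25% risk for two carriers
print(affected_fraction("Ff", "FF"))  # 0.0: one non-carrier parent
```

The same enumeration also shows why carrier couples have a 50% chance per child of producing another carrier ("Ff" or "fF"), which is what keeps such alleles circulating in a population.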
As described above, genetic enhancements may not be ideal in our current society due to certain disparities and ethical barriers that may arise, but how society will receive such technological changes in the future is yet to be seen. To conclude, gene-editing — although incredibly beneficial in numerous ways — brings about a barrage of questions and concerns from governments, academics, and the public alike. There is still a long list of moral and ethical questions that policymakers and researchers — among other significant players — need to discuss and come to a consensus on over the coming decades. One thing is sure: it is imperative for us to ensure equitable access to gene therapies. Everyone should possess an equal opportunity to be in good health, a state of complete physical, mental, and social wellbeing and not merely the absence of disease or infirmity, as defined by the World Health Organisation [27]. Prematurely introducing gene therapies without proper regulation, planning and funding could exacerbate existing health inequities, driving increasing differences between ethnic groups and social classes. How society evolves with the advent of gene-editing and where we draw the line between what is permissible and what is banned — from genetic therapy to genetic enhancements and GGE — is wholly up to us. With the discovery of CRISPR, we possess greater power than ever before, and with great power comes great responsibility. References M. Jinek, K. Chylinski, I. Fonfara, M. Hauer, J. A. Doudna, and E. Charpentier, “A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity,” Science (New York, N.Y.), vol. 337, no. 6096, pp. 816–821, Aug. 2012, doi: 10.1126/SCIENCE.1225829. “Gene-Editing Stocks Skyrocket in $1.6 Billion Boom After Data - Bloomberg.” https://www.bloomberg.com/news/articles/2020-12-09/gene-editing-stocks-go-parabolic-in-1-6-billion-boom-after-data (accessed Mar. 06, 2022). S. Demirci, A. Leonard, J. J. Haro-Mora, N. Uchida, and J. F. 
Tisdale, “CRISPR/Cas9 for Sickle Cell Disease: Applications, Future Possibilities, and Challenges,” Advances in Experimental Medicine and Biology, vol. 1144, pp. 37–52, 2019, doi: 10.1007/5584_2018_331. E. Choi and T. Koo, “CRISPR technologies for the treatment of Duchenne muscular dystrophy,” Molecular Therapy, vol. 29, no. 11, pp. 3179–3191, Nov. 2021, doi: 10.1016/J.YMTHE.2021.04.002. H. Gao et al., “Superior field performance of waxy corn engineered using CRISPR–Cas9,” Nature Biotechnology, vol. 38, no. 5, pp. 579–581, Mar. 2020, doi: 10.1038/s41587-020-0444-0. T. Wang, H. Zhang, and H. Zhu, “CRISPR technology is revolutionizing the improvement of tomato and other fruit crops,” Horticulture Research, vol. 6, no. 1, pp. 1–13, Jun. 2019, doi: 10.1038/s41438-019-0159-x. “Can CRISPR–Cas9 Boost Intelligence? - Scientific American Blog Network.” https://blogs.scientificamerican.com/guest-blog/can-crispr-cas9-boost-intelligence/ (accessed Mar. 06, 2022). J. A. Doudna and E. Charpentier, “The new frontier of genome engineering with CRISPR-Cas9,” Science, vol. 346, no. 6213, Nov. 2014, doi: 10.1126/SCIENCE.1258096. R. Luthra, S. Kaur, and K. Bhandari, “Applications of CRISPR as a potential therapeutic,” Life Sciences, vol. 284, p. 119908, Nov. 2021, doi: 10.1016/J.LFS.2021.119908. D. Alapati et al., “In utero gene editing for monogenic lung disease,” Science Translational Medicine, vol. 11, no. 488, Apr. 2019, doi: 10.1126/SCITRANSLMED.AAV8375. C. K. W. Lim et al., “Treatment of a Mouse Model of ALS by In Vivo Base Editing,” Molecular Therapy, vol. 28, no. 4, pp. 1177–1189, Apr. 2020, doi: 10.1016/J.YMTHE.2020.01.005. National Library of Medicine (U.S.). (2020, June 11 - ). A Study of Metastatic Gastrointestinal Cancers Treated With Tumor Infiltrating Lymphocytes in Which the Gene Encoding the Intracellular Immune Checkpoint CISH Is Inhibited Using CRISPR Genetic Engineering. Identifier NCT04426669. 
https://clinicaltrials.gov/ct2/show/NCT04426669 National Library of Medicine (U.S.). (2020, August 6 - ). A Safety and Efficacy Study Evaluating CTX130 in Subjects With Relapsed or Refractory T or B Cell Malignancies (COBALT-LYM). Identifier NCT04502446. https://clinicaltrials.gov/ct2/show/NCT04502446 M. Elsner et al., “Correction: Corrigendum: Genetic improvement of tomato by targeted control of fruit softening,” Nature Biotechnology, vol. 34, no. 10, p. 1072, Oct. 2016, doi: 10.1038/nbt1016-1072d. Q. H. Yu et al., “CRISPR/Cas9-induced Targeted Mutagenesis and Gene Replacement to Generate Long-shelf Life Tomato Lines,” Scientific Reports, vol. 7, no. 1, Dec. 2017, doi: 10.1038/S41598-017-12262-1. L. Arora and A. Narula, “Gene editing and crop improvement using CRISPR-Cas9 system,” Frontiers in Plant Science, vol. 8, p. 1932, Nov. 2017, doi: 10.3389/FPLS.2017.01932. U. Schüklenk and P. Singer, “Bioethics: An Anthology,” 4th ed., U. Schüklenk and P. Singer, Eds. Hoboken, NJ: Wiley-Blackwell, 2021, pp. 135–137. N. Bostrom, “In Defense of Posthuman Dignity,” Bioethics, vol. 19, no. 3, pp. 202–214, Jun. 2005, doi: 10.1111/J.1467-8519.2005.00437.X. P. Singh, R. Vijayan, E. Singh, and A. Mosahebi, “Genetic Editing in Plastic Surgery,” Aesthetic Surgery Journal, vol. 39, no. 6, pp. NP225–NP226, May 2019, doi: 10.1093/ASJ/SJZ064. D. B. Resnik, “The moral significance of the therapy-enhancement distinction in human genetics,” Cambridge Quarterly of Healthcare Ethics, vol. 9, no. 3, pp. 365–377, 2000, doi: 10.1017/S0963180100903086. W. Isaacson, “The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race,” New York, NY: Simon & Schuster, 2021, p. 351. D. Cyranoski and H. Ledford, “Genome-edited baby claim provokes international outcry,” Nature, vol. 563, no. 7733, pp. 607–608, Nov. 2018, doi: 10.1038/D41586-018-07545-0. 
“The genetic mistakes that could shape our species - BBC Future.” https://www.bbc.com/future/article/20210412-the-genetic-mistakes-that-could-shape-our-species (accessed Mar. 07, 2022). “Chinese scientist who edited babies’ genes jailed for three years | China | The Guardian.” https://www.theguardian.com/world/2019/dec/30/gene-editing-chinese-scientist-he-jiankui-jailed-three-years (accessed Mar. 07, 2022). C. Gyngell, T. Douglas, and J. Savulescu, “The Ethics of Germline Gene Editing,” Journal of Applied Philosophy, vol. 34, no. 4, pp. 498–513, Aug. 2017, doi: 10.1111/JAPP.12249. H. Bourne, T. Douglas, and J. Savulescu, “Procreative beneficence and in vitro gametogenesis,” Monash bioethics review, vol. 30, no. 2, p. 29, 2012, doi: 10.1007/BF03351338. “Constitution of the World Health Organization.” https://www.who.int/about/governance/constitution (accessed Mar. 06, 2022). “CRISPR: Implications for materials science.” https://www.cambridge.org/core/journals/mrs-bulletin/news/crispr-implications-for-materials-science (accessed Mar. 07, 2022). “Harvard researchers share views on future, ethics of gene editing – Harvard Gazette.” https://news.harvard.edu/gazette/story/2019/01/perspectives-on-gene-editing/ (accessed Mar. 07, 2022)
- Einstein's Miracles, Part 2: Atoms
By Caleb Todd Atoms are a deeply familiar part of our natural world. Their name is taken from the Greek atomos (meaning ‘indivisible’, despite the fact that atoms have constituent pieces into which they can be divided) because they constitute the fundamental unit of a chemical element. If you take a helium atom and try to break it up — divide it — what you have left is no longer helium. Figure 1: A simulation of a particle undergoing Brownian motion. The red particle jitters randomly along the blue curve. We are quite comfortable, these days, with the idea that atoms are matter’s building blocks, but universal acceptance thereof is actually a relatively recent development. While the idea of atoms goes back to ancient Greece, where scholars like Leucippus and Democritus proposed indivisible units of substance, these were philosophical arguments, not scientific ones [1]. A more rigorous atomic theory only began to develop as a science around 1800 AD [1], and when our frizzy-haired protagonist came along in the early 20th century, there was still debate over its validity. In our last issue¹, we began the story of Einstein’s annus mirabilis papers by highlighting his work on the quantum nature of light. He helped launch the quantum revolution which subsequently redefined and recontextualised all of physics. The significance of that paper was only recognised slowly, though, so Einstein decided that if one revolutionary paper per year wasn’t enough, he’d just have to write two². As such, he turned his mind to the matter of matter and published ‘Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen’ (‘On the movement of small particles suspended in a stationary liquid demanded by the molecular-kinetic theory of heat’) in Annalen der Physik, 18 July 1905 [2]. At the heart of Einstein’s discussion lies the phenomenon of Brownian motion³. 
If you suspend a very light particle (like a mote of dust, for example) in a fluid and place it under a microscope, you will see it randomly zigzag and jitter [3], like in Fig. 1. Though no objects are visibly colliding with it or exerting a force on it, the particle is still continuously changing its motion. Is this a violation of Newton’s first law of motion? Certainly not. Instead, we might suspect that there are invisible objects colliding with the particle causing its variation in speed and position — and if they are invisible, they must be very small indeed. Perhaps Brownian motion is caused by atoms, which were proposed by the chemist John Dalton to explain how some substances can combine to make other substances. This is the explanation that Einstein proposed, but proposals and proofs are two very different beasts. At the same time, there was another thread in physics running parallel to the question of atoms: what is heat? What property of a substance makes it hot or cold? For a long time, scientists thought that there was an invisible fluid that imparted heat to the objects around which it flowed⁴. That notion was dismissed, however, when James Joule demonstrated that heat was just another form of energy⁵ [4] — but what kind of energy? This is where the ‘molecular-kinetic theory of heat’ in Einstein’s title comes in. In this theory, heat energy is really kinetic energy; that is, the energy associated with motion. In particular, it posits that heat is the kinetic energy of the atoms (or molecules) that make up a substance. You can now, perhaps, see how these threads tie together. If atoms exist and the kinetic theory of heat is correct, Brownian motion can be directly explained as collisions between the jittering particle and hot atoms in motion⁶. It seems a very cogent theory, but that in and of itself does not place these questions beyond doubt. We need something measurable that could experimentally validate Einstein’s conclusions. 
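The random zigzagging described above can be illustrated with a simple two-dimensional random walk — a standard toy model of Brownian motion (the step size, step count, and seed below are arbitrary illustrative choices, not values from Einstein's paper):

```python
import random
import math

def brownian_path(n_steps=1000, step_size=1.0, seed=42):
    """Simulate a 2D random walk as a toy model of Brownian motion.

    Each step stands in for the net effect of many unseen molecular
    collisions, kicking the particle in a random direction.
    """
    random.seed(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        angle = random.uniform(0, 2 * math.pi)  # random kick direction
        x += step_size * math.cos(angle)
        y += step_size * math.sin(angle)
        path.append((x, y))
    return path

path = brownian_path()
# The particle wanders: its net displacement from the origin is far
# smaller than the total distance travelled (n_steps * step_size).
```

Plotting the returned coordinates reproduces the kind of jittery trajectory shown in Fig. 1.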
For this reason, one of the most significant parts of his paper is a mathematical expression for how quickly particles undergoing Brownian motion spread out from their initial positions. As it turns out, this rate of spreading depends on the fundamental properties of the atoms being theorised about. So, by measuring Brownian motion, a physicist could help substantiate (or discredit) the kinetic theory of heat. Einstein did not have sufficient data available to actually draw a conclusion. Rather than trying to do the experiment himself, he simply concluded his paper by saying (in German), “Let us hope that a researcher will soon succeed in solving the problem posed here, which is of such importance in the theory of heat!” [2]. Fortunately for Einstein (and all other theoretical physicists), there are plenty of experimentalists who are willing to actually check whether the nonsense they write down is true. In this case, it was a Frenchman by the name of Jean Baptiste Perrin, whose experiments concluded (lo and behold) that Einstein’s predictions were correct⁷ [5]. Atoms do exist, the molecular-kinetic theory of heat works, and we’ve never looked back since. Figure 2: Jean Baptiste Perrin, French physicist and winner of the 1926 Nobel Prize for demonstrating the existence of atoms. Image taken from Encyclopædia Britannica. Though it would be disingenuous to suggest that this result was totally surprising to the physics community — atoms and kinetic heat were well-regarded theories — it was absolutely still a controversial topic when Einstein’s paper was submitted. The importance of atomic theory need hardly be restated, and Jean Baptiste Perrin was awarded a Nobel Prize for his experimental verification of Einstein’s theory [5]. Note that this is the second of Einstein’s 1905 papers connected to a Nobel prize (although not for Albert himself, this time). Einstein is two-for-two⁸. 
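For reference, Einstein's prediction can be stated compactly (in modern notation; the symbols are defined here rather than drawn from the article): for a spherical particle of radius $r$ suspended in a fluid of viscosity $\eta$ at absolute temperature $T$, the mean squared displacement along one axis grows linearly with time $t$:

```latex
\langle x^2 \rangle = 2Dt,
\qquad
D = \frac{RT}{N_A}\,\frac{1}{6\pi\eta r},
```

where $R$ is the gas constant and $N_A$ is Avogadro's number. Because every quantity except $N_A$ is measurable, tracking how far suspended particles wander lets an experimenter solve for $N_A$ — which is essentially what Perrin's experiments did.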
This paper on Brownian motion is often overlooked in the annus mirabilis because of how revolutionary quantum theory and special relativity (the subjects of the other three 1905 papers) are, but that is somewhat unfair to an incredibly significant paper. We are now living in the ‘Atomic Age’, but barely a century ago we weren’t even sure that atoms existed. For the second time in just two months, Albert Einstein had changed the way we saw the world — but he wasn’t finished yet. In the next edition of the UoA Scientific, we will watch Einstein quite literally challenge the structure of reality itself. ¹ Available on our website. ² I might be projecting motivations a little bit here. ³ Thankfully this has nothing to do with digestion. It is named after its discoverer, Robert Brown. ⁴ Physicists often invent imaginary fluids to grapple with phenomena they don’t understand, as we will see in the next part of this series when we discuss the aether and light speed. ⁵ Essentially all engines and electricity generators depend on this principle. The SI unit of energy is named the joule in his honour. ⁶ The hotter the atoms, the more jittering they cause. Much like in night clubs (or so I am told). ⁷ You’re shocked, I’m sure. ⁸ Although the prizes themselves were, of course, awarded far later than 1905. References S.B. McGrayne, J. Trefil, and G.F. Bertsch, “atom,” 2022. [Online]. Available: https://www.britannica.com/science/atom. [Accessed: 25- March- 2022]. A. Einstein, “Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen,” Annalen der Physik, vol. 322, no. 8, pp. 549–560, 1905. R. Feynman, "The Brownian Movement," The Feynman Lectures on Physics, vol. I, ch. 41, 1964. J.P. Joule, “On the Mechanical Equivalent of Heat”, in Report of the British Association for the Advancement of Science, 15th Meeting, 1845, p. 536. M. 
Dardo, Nobel Laureates and Twentieth-Century Physics, Cambridge: Cambridge University Press, 2004, pp. 114–116.
- The Algol Paradox: How Do Stars Age?
By Aimee Lew Algol is perhaps one of the most storied stars in Earth’s night sky. Going by Beta Persei or the Demon Star (from Arabic: ra's al-ghūl, just like the DC villain), for thousands of years, Algol was considered an omen of death and destruction [1]. The reason for this may have been its variability, or the periodic fluctuation of its brightness. In astronomy, the apparent magnitude is a measure of a star’s brightness to earthbound observers (counterintuitively, the brighter the star, the smaller its magnitude). Algol’s brightness dips approximately every three days, which — if perhaps one squints and drinks — could look like the slowly blinking eye of a harbinger of death. With the advent of telescopy and spectroscopy, astronomers learned in the 1880s that Algol was not one star, but multiple [2, 3]. It comprises three bodies that blur into a bright dot before us. Two of Algol’s components, β Persei A and B, orbit and eclipse each other. The passage of one body in front of the other, along our line of sight from Earth, dims the light we observe. With more technological breakthroughs in the twentieth century, the catalogue of star masses, distances from Earth, temperatures, compositions, and ages grew and grew. Astrophysicists attempted to put together a description of the evolution of stellar bodies. The theory went — and still goes, with more nuanced addendums — that more massive stars age quicker. But the Algol binary was an anomaly: the smaller body had advanced further in its evolution, while the larger body remained in an earlier stage. Thus, the Algol Paradox was born [4]. Figure 1: The Algol system on 12 August 2009. Image from the CHARA (Center for High Angular Resolution Astronomy) array in California. To understand why β Persei A and B supposedly defied the trends of stellar evolution, we first need to understand stellar evolution. Think of a star like a furnace. From nebulae, stars are born with a certain amount of ‘fuel’ (hydrogen) in their cores. The amount of fuel a star gets is determined by its mass. 
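As an aside on the magnitude scale mentioned above: it is logarithmic as well as inverted, so a dip in brightness appears as a rise in magnitude. A minimal sketch (the eclipse magnitudes in the comment are typical published values for Algol, quoted here purely for illustration):

```python
def flux_ratio(m1, m2):
    """Brightness (flux) ratio corresponding to a magnitude difference.

    The magnitude scale is logarithmic and inverted: a *larger*
    magnitude means a *fainter* star, and a difference of 5
    magnitudes corresponds to a factor of 100 in brightness.
    """
    return 10 ** (-0.4 * (m1 - m2))

# During its eclipses, Algol dims from about magnitude 2.1 to 3.4
# (illustrative figures), i.e. to roughly 30% of its usual brightness:
dimmed = flux_ratio(3.4, 2.1)
```

This is why a drop of just over one magnitude was striking enough for naked-eye observers to notice Algol "blinking".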
In their cores, nuclear reactions are taking place at extraordinary rates and scales. The light of a star comes from the energy output of converting protons into helium nuclei. More massive stars burn fuel at greater rates and scales, so they are hotter, brighter, and deplete their hydrogen sources more quickly [5]. While a star remains in the fuel-burning early stage of its lifetime, it sits on the main sequence. The main sequence is a diagonal band of stars on the Hertzsprung-Russell (HR) diagram [5]. Figure 2: The Hertzsprung-Russell diagram with data from the Hipparcos and Gliese catalogues. HR diagrams compare the luminosity, or brightness, of a star to its colour, or effective temperature. In the upper-left are the hot and bright stars, which appear white-blue, and in the lower-right are the (relatively) cool and dim stars, which appear orange-red. Underneath the main sequence are the white dwarfs—extremely dense stars that are in their final evolutionary state—and above are the red giants and supergiants. Astronomers used to think that the main sequence showed the pathway of stellar evolution. Stars would begin their lives in the upper-left, hot and bright, and cool over time, falling into the lower-right corner [5]. But the main sequence of the HR diagram is no more an evolutionary pathway than the band of the Milky Way; both are simply smatterings of stars at some given point in time. All stars are born on the main sequence, their position dictated by brightness and temperature, which is in turn dictated by mass. For as long as nuclear fusion persists in their cores, which is about 90% of their lifetime, they don’t move significantly along the diagonal [6]. Larger stars inhabit the upper-left and leave the main sequence fast (cosmically speaking), after a few million years. Smaller stars inhabit the lower-right and can burn for billions of years, like our Sun. When stars deplete their sources of hydrogen, they evolve. 
Lower mass stars expand into red giants (large, less dense, and cool), then shrink into white dwarfs (small, dense, and hot), which will eventually radiate away all their energy and wink out. Higher mass stars expand into red supergiants (very large and very cool) that will expand until the outward radiation pressure drops below the inward force of gravity. At that moment, red supergiants will crumple and supernova, creating atomic nuclei anew. In the Algol system, astrophysicists expected the two orbiting bodies to evolve as two individuals, but the observations seemed to suggest the opposite was true [7]. There is a smaller star in the giant phase and a larger star still on the main sequence. To understand the contradiction of Algol A and B’s behaviour, we need to focus on two things: red giants and gravity. Figure 3: Stellar evolution diagram. Image from Encyclopaedia Britannica, 2012. Notice that whether a star begins with more mass or less, both classifications have a period of expansion into either a subgiant, giant, or supergiant star, which is a later but not final stage of stellar evolution. In binary star systems, the Roche lobe helps to shed light on the Algol Paradox. The Roche lobe defines the region around each star in which matter is gravitationally bound to that star. The lobes are roughly teardrop-shaped, and the point where the two lobes meet is called the first Lagrange point. In Algol-type binaries, it’s possible for an expanding red giant to fill its Roche lobe [8]. When that happens, matter can be transferred away from the initial stellar body. Roche lobe overflow (RLOF) is the culprit for this anomalous behaviour [9]. Astrophysicists now know that binary star systems like Algol A (a hot blue-white main sequence star) and B (a cooler orange subgiant) begin as two close-range stars of the same age and composition—having formed from the same nebula—with different masses. 
The larger star ceases nuclear fusion and starts expanding into a red giant earlier than the smaller star, as suggested by conventional stellar evolution. At this point, orbiting material that fills the larger star’s Roche lobe and spills past the Lagrange point falls into the gravitational well of the smaller star [10]. As stellar expansion continues, more mass transfers to the smaller body until there appears to be a less massive star (Algol B, at roughly 0.7 solar masses) that has burned through all its hydrogen and a more massive star (Algol A, at roughly 3.2 solar masses) that is still trucking along [11]. Solved in the twentieth century, the Algol Paradox helped to shed light on the behaviour and evolution of binary star systems. Algol-type binaries remain a hotbed of academic interest, classified as cases “where the less massive donor fills its Roche lobe, the more massive gainer does not fill its Roche lobe and is still on the main sequence and the donor is the cooler, fainter and larger star” [9]. Meanwhile, training a closer and closer eye on Algol revealed in 2020 that the system might contain more bodies than anyone, thousands of years ago, staring up at the blinking Demon Star, could have thought [12]. The history of Algol unfolds alongside the discipline of astronomy and astrophysics, marking with pinpricks of light what we thought we knew, what we do know, and what we have yet to discover. Figure 4: Roche lobes of a binary star system. Image from COSMOS, the SAO Encyclopaedia of Astronomy, Swinburne University. References S. R. Wilk, "Mythological evidence for ancient observations of variable stars", The Journal of the American Association of Variable Star Observers, vol. 24, no. 2, pp. 129-133, 1996. [Accessed 21 March 2022]. E. Pickering, "Dimensions of the Fixed Stars, with Especial Reference to Binaries and Variables of the Algol Type", Proceedings of the American Academy of Arts and Sciences, vol. 16, p. 1, 1880. Available: 10.2307/25138595. A. 
Batten, "Two Centuries of Study of Algol Systems", International Astronomical Union Colloquium, vol. 107, pp. 1-8, 1989. Available: 10.1017/s0252921100087625. I. Pustylnik, "The early history of resolving the Algol paradox", Astronomical & Astrophysical Transactions, vol. 15, no. 1-4, pp. 357-362, 1998. Available: 10.1080/10556799808201791. K. Lang, The life and death of stars. Cambridge [England]: Cambridge University Press, 2013. "How do scientists determine the ages of stars?", Scientific American, 1999. [Online]. Available: https://www.scientificamerican.com/article/how-do-scientists-determi/. [Accessed: 21- Mar- 2022]. I. Pustylnik, "Resolving the Algol Paradox and Kopal’s Classification of Close Binaries with Evolutionary Implications", Astrophysics and Space Science, vol. 296, no. 1-4, pp. 69-78, 2005. Available: 10.1007/s10509-005-4379-1. P. Davis, L. Siess and R. Deschamps, "Binary evolution using the theory of osculating orbits", Astronomy & Astrophysics, vol. 570, p. A25, 2014. Available: 10.1051/0004-6361/201423730. W. Van Rensbergen, J. De Greve, N. Mennekens, K. Jansen and C. De Loore, "Mass loss out of close binaries", Astronomy and Astrophysics, vol. 510, p. A13, 2010. Available: 10.1051/0004-6361/200913272. B. Paczynski, "Evolutionary Processes in Close Binary Systems", Annual Review of Astronomy and Astrophysics, vol. 9, no. 1, pp. 183-208, 1971. Available: https://www.annualreviews.org/doi/10.1146/annurev.aa.09.090171.001151. F. Baron et al., "Imaging the Algol triple system in the H band with the CHARA Interferometer", The Astrophysical Journal, vol. 752, no. 1, p. 20, 2012. Available: 10.1088/0004-637x/752/1/20. L. Jetsu, "Say Hello to Algol's New Companion Candidates", The Astrophysical Journal, vol. 920, no. 2, p. 137, 2021. Available: 10.3847/1538-4357/ac1351.