
  • Patients Knee to Know: Evaluating the robustness of IMU-derived Knee Angle measurements

    By Jae Min Seo (He/Him) Over 2.5 million people around the world turn to knee replacement surgery (also called total knee arthroplasty) to treat conditions such as osteoarthritis [1] and other degenerative joint diseases, which cause pain during gait and often prevent them from walking effectively [2]. The road to post-operative recovery and independent walking is a long and arduous process. The result of such invasive surgery is that patients’ rotational range of motion about the knee is severely attenuated, and most patients can only flex their knee to around 80 degrees in their first two weeks post-operation, if at all [3]. In order to track patients’ rehabilitation and to check for any abnormalities, it is important to measure how their range of motion about the knee improves over time [4-6]. How, then, do we measure the joint angles about the knee? Anatomical Considerations One may think that the knee joint only has a single axis of rotation, but this is actually not the case; the knee joint has 6 degrees of freedom¹: 3 rotational (flexion/extension, varus/valgus, axial) and 3 translational (superior/inferior, anterior/posterior, medial/lateral) [7]. Try it! You can find the axial rotation of your knee by holding your knee still and moving your heel inwards and outwards. During knee flexion, the motion is caused by a combination of translation and rotation between the contacting tibial and femoral condyle surfaces [8]. Excess sliding or rolling about the joint is prevented by soft tissue structures, such as the menisci, the muscular connections via the tendons, and the ligaments between the femur and tibia [8]. The combination of these muscle contractions and physical structures allows us to observe, at a macroscopic scale, what we know as knee flexion and its characteristic range of motion [7]. However, the knee joint is often regarded as a perfect hinge joint for modelling purposes. This is because the flexion-extension angle (~140°) of the knee joint is much larger than the varus/valgus angle (~10°) and the axial angle (~5°) [7-8], and in everyday movements such as walking and squatting, clinicians are primarily concerned with restoring the flexion-extension angle of the knee [5,9,10]. Figure 1: The knee has 6 degrees of freedom [7] Current Methods & Technologies One may think that measuring the angle about the knee is quite trivial – you simply use a protractor and measure the angle between the thigh and shank when the leg is fully flexed and fully extended. And you would be right! There are in fact clinical tools that do just that, called goniometers [11]. However, clinicians often place the goniometers on different parts of the knee joint each time, and do not always align them with biomechanical landmarks and bones [11]. This has been shown to cause large inter-clinician (and in some cases, intra-clinician) variability in repeated measurements, resulting in large errors compared to ground-truth measurements calculated using body scans [13-14]. There is also the added disadvantage of not being able to capture dynamic data, which is oftentimes more useful for biomechanists and orthopaedic surgeons [15]. Figure 2: Long-arm goniometers are a common tool used by orthopaedic surgeons and biomechanists to determine the knee angle of patients undergoing rehabilitation [12] The gold standard for calculating biomechanical measurements is to use Optical Motion Capture (OMC) [16]. 
This is where reflective markers are placed on the patient, and the movement of the markers is tracked in a room surrounded by high-rate, high-accuracy cameras [16]. You may have seen these being used on high-performance athletes or in other fields where human movement is tracked, such as video game animation capture [17]. Figure 3: Motion capture is used in a myriad of different applications [18] The data collected from these cameras are processed by open-source software called OpenSim [19]. This software uses the marker information to calculate biomechanical variables using inverse kinematics, which takes each time frame of marker positions and places the model in a pose that "best matches" the experimental marker and coordinate data for that time step [20]. This "best match" is the pose that minimises a sum of weighted squared errors of marker coordinates (written out schematically at the end of this section). OpenSim assumes that the marker positions relative to the bones do not change over time. These Optical Motion Capture and optimisation techniques are the most accurate non-invasive methods for capturing knee angle data [18]. This method typically has a root mean square error (RMSE)² of less than 5 degrees and is most widely used as a ground-truth measurement for other prediction algorithms [19-23]. We do not consider this RMSE to be significant as it is of a similar scale to the measurement errors that occur during data capture [22]. Optical Motion Capture is not perfect, however. This is due to the non-invasive nature of the markers; reflective markers are placed on the skin, oftentimes on top of other soft tissue such as muscle and fat. These soft tissues deform readily with small changes in position, which can cause the distance between the markers and the anatomical landmarks/bones they are meant to represent to vary over time [24]. This introduces noise called soft-tissue artefact into the acquired data, and can result in large inaccuracies if markers are not placed on landmarks with a lower proportion of soft tissue/fat/muscle, such as the ankle (where soft-tissue movement is minimal) [25]. This is particularly of interest as we know that knee movements cause these soft tissues to jiggle, and we know that the knee joint doesn’t jiggle-jiggle, it folds. Figure 4: Markers on my leg as I do a range of motion exercise, visualised in OpenSim There exists an even more accurate form of data acquisition, and this is through bone pins. These pins are directly attached to the bones, which prevents any relative movement between the bones of interest and the markers, minimising soft-tissue artefact [26]. However, this method is typically reserved for research purposes, and the bone pins can have a confounding effect on the gait of participants, which can render the data useless from a gait rehabilitation standpoint. The three aforementioned methods come in varying degrees of accessibility, accuracy, and invasiveness. Whilst goniometers may be more accessible and non-invasive, they are not the most precise. Bone pins are extremely accurate, but are invasive and not very accessible for patients who have undergone surgery. Optical Motion Capture provides a nice middle ground, but these systems are typically only found in biomechanics research institutes and require very expensive equipment. None of these options are favourable for patients who need accessible, accurate, and dynamic measurements of their knee angle during their rehabilitation. 
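For readers who want to see the optimisation step described above written out, the inverse-kinematics "best match" at each frame can be expressed schematically as a weighted least-squares problem (simplified here; OpenSim's full formulation also allows additional error terms on prescribed coordinates):

```latex
\min_{\mathbf{q}} \; \sum_{i \,\in\, \mathrm{markers}} w_i \left\| \mathbf{x}_i^{\mathrm{exp}} - \mathbf{x}_i(\mathbf{q}) \right\|^{2}
```

Here q is the vector of model generalised coordinates (joint angles) for that frame, x_i^exp is the measured position of marker i, x_i(q) is the position of the corresponding model marker in pose q, and w_i is the weight assigned to that marker.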
Figure 5: IMeasureU is one of the leaders in wearable motion tracking (and was founded on research at the University of Auckland) [27] Inertial Measurement Units In the past decade there has been a surge in research on capturing joint angles using portable, lightweight devices called Inertial Measurement Units (IMUs). IMUs capture linear acceleration, angular velocity, and magnetic field strength through on-board accelerometers, gyroscopes, and magnetometers respectively. Research has been done by strapping one IMU on each segment about the joint (one IMU on the thigh, one IMU on the shank), and running the rate information through a data-processing pipeline that predicts the angle from these different measurements. IMUs by nature do not have an absolute frame of reference. If you have two IMUs in motion, you cannot directly determine the distance or angle between them without using some analytical or computational tools. Some research has been done by placing IMUs as parallel as possible to anatomical coordinate systems, and some by having participants perform simple calibration movements, but depending on the algorithm used, neither may be the optimal method for capturing the angle of the knee. Angle Prediction Algorithms Initial research on IMU-derived angle calculations was done in 2001 by simply integrating the gyroscope rate data at each capture to get from angular velocity to absolute angle measurements, starting from some known calibration pose (e.g. full extension or full flexion) [28]. Any offsets were removed by subtracting the average angle from a static pose, and the initial angle was found by taking the inverse tangent of the acceleration measured in the first few seconds of each trial. The calibration procedures were done at the beginning of each trial to zero out the effect of any accumulated drift before the next trial. Some studies have tried running calibrations by assuming IMUs are mounted exactly parallel to anatomical reference frames/axes, but have found that the results are heavily corrupted by kinematic crosstalk³. A new approach was taken by Seel [29], where kinematic constraints were applied to the joint axis algorithms. This works under the assumption that the joint is a perfect hinge, whereby the only degree of freedom is the flexion-extension angle. This is done by projecting all knee motion vectors onto a shared plane that is defined by the range of motion of the thigh and shank. The joint axis was identified using the Gauss-Newton algorithm⁴, and the angle was then found by integration. However, the problems of drift and measurement bias were not addressed in this paper. Seel extended upon his work in 2014 to find the flexion/extension angle of the knee by estimating local joint positions using data from the gyroscope and accelerometer [4]. This allows a data-fusion approach to find the real-time angle without any drift, as no integration is involved (a minimal illustrative sketch of this kind of gyroscope-accelerometer fusion is given at the end of this section). However, only information about the flexion/extension angle is found, with no mention of abduction/adduction or internal/external rotation. An extension is offered by Laidig et al., where knee flexion/extension angles are accurately estimated by exploiting the knee’s hinge axis to control misalignment about the vertical axis due to drift and/or magnetic field interference. Vitali [30] provided yet another approach similar to the above, but was able to extract abduction/adduction and internal/external rotation angles, though they did not mention the extent of kinematic crosstalk. 
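To make the gyroscope-accelerometer fusion idea concrete, here is a minimal, illustrative Python sketch. It is not the algorithm of Seel et al. [4] or any other published pipeline, and it assumes (hypothetically) that both IMUs are mounted with one axis along the knee's flexion/extension axis, which real studies cannot take for granted:

```python
import numpy as np

def knee_flexion_angle(gyro_thigh, gyro_shank, acc_thigh, acc_shank, dt, alpha=0.98):
    """Toy flexion/extension estimate from two IMUs (thigh and shank).

    Assumes each IMU's x-axis lies along the knee's hinge axis and that
    its y-z plane contains gravity during slow motion (an idealisation).
    gyro_* are (N, 3) angular velocities in rad/s, acc_* are (N, 3)
    accelerations in m/s^2, dt is the sample period in seconds.
    """
    n = len(gyro_thigh)
    angle = np.zeros(n)  # radians
    for k in range(1, n):
        # Gyroscope path: integrate the relative angular velocity about
        # the shared hinge axis (accurate short-term, but drifts).
        rel_rate = gyro_shank[k, 0] - gyro_thigh[k, 0]
        gyro_angle = angle[k - 1] + rel_rate * dt

        # Accelerometer path: segment inclinations relative to gravity
        # give a noisy but drift-free estimate of the relative angle.
        incl_thigh = np.arctan2(acc_thigh[k, 1], acc_thigh[k, 2])
        incl_shank = np.arctan2(acc_shank[k, 1], acc_shank[k, 2])
        acc_angle = incl_shank - incl_thigh

        # Complementary filter: gyroscope at high frequency,
        # accelerometer at low frequency, to limit integration drift.
        angle[k] = alpha * gyro_angle + (1 - alpha) * acc_angle
    return np.degrees(angle)
```

Published methods replace these mounting assumptions with calibration movements or with the kinematic constraints described above, which is exactly why their robustness to placement matters.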
Baudet used Principal Component Analysis⁵ to minimise kinematic crosstalk, and was very successful in doing so, especially in the abduction/adduction and internal/external rotation angles [31]. Research Objectives Many measurements have been made by researchers who know exactly where and at what orientation to place their IMUs for optimal results. There is a plethora of studies in this field that claim to have the best results for IMU-derived predictions, but do not disclose their robustness to different dynamic movements and slight variations in IMU placement. In order to make IMU-derived clinical measurements more accessible, we must assess the robustness of different prediction algorithms, as well as the IMU positions and orientations at which the prediction error is minimised. My research will determine just that, by comparing the error associated with different combinations of IMU positions, orientations, prediction algorithms, and movements, to find the optimal conditions in which the error is minimised. Through this research, clinicians can better advise their patients on how to get the optimal readings from IMU-derived knee-angle measurements. This should vastly improve the accessibility of rehabilitation monitoring for current postoperative total knee arthroplasty patients, ultimately improving patient outcomes. Footnotes ¹ A degree of freedom is one of the independent parameters that define the configuration of a system. In the case of knee joint biomechanics, the degrees of freedom are the different possible movements, which superimpose to give what we define as the ‘range of movement’. ² Root Mean Square Error (RMSE) is a measure of the difference between two sets of values. It is often used in scientific research and statistical analysis as a means of comparing one or multiple measurements to an expected value. The RMSE is calculated by squaring the differences between corresponding values, averaging these squared differences, and taking the square root of the result. One may see parallels between the RMSE, the Euclidean norm, and the Pythagorean theorem. ³ Crosstalk is the phenomenon whereby one parameter’s output is incorrectly recognised as another parameter. In biomechanics this is called kinematic crosstalk; in the context of the knee, an example could be some part of the internal/external rotation being recognised as an abduction/adduction movement. ⁴ The Gauss-Newton algorithm is an iterative algorithm used to solve nonlinear least-squares problems. It finds the best possible approximation for overdetermined systems (those with more equations, or observations, than unknown parameters) by minimising the sum of the squared residual errors. This is useful in scenarios such as this one, where there are more measurements than unknowns. ⁵ Principal Component Analysis (PCA) is one of the cornerstones of feature extraction and statistical analysis. The vector that best explains the variation in the data is the first principal component, the next best vector orthogonal to it is the second principal component, and so on. In this context, the largest principal component of the three rotations is assumed to be the flexion/extension angle of the knee (a short illustrative sketch follows these footnotes). 
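As a toy illustration of footnote 5 (not the method of Baudet et al. [31]), the sketch below builds a noisy three-angle signal in which some flexion/extension leaks into the other two channels, then uses PCA to recover the dominant flexion/extension component. All signal values and names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 500)

flexion = 60 * np.sin(t)          # dominant "true" flexion/extension signal (degrees)
leak = 0.15 * flexion             # crosstalk leaking into the other rotation channels
angles = np.column_stack([
    flexion + rng.normal(0, 1, t.size),        # measured flexion/extension
    leak + rng.normal(0, 1, t.size),           # measured abduction/adduction
    0.5 * leak + rng.normal(0, 1, t.size),     # measured internal/external rotation
])

# PCA: the principal components are the right singular vectors of the
# mean-centred data (equivalently, eigenvectors of its covariance matrix).
centred = angles - angles.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)

# Project onto the first principal component, assumed to be flexion/extension.
corrected_flexion = centred @ vt[0]
```

In real gait data the assumption that the first component really is flexion/extension must itself be checked, which is part of what makes robustness studies like the one proposed here necessary.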
References C. J. Stewart, “Demand for Knee Replacement Grows 5 Percent Worldwide,” https://orthospinenews.com/2019/06/04/demand-for-knee-replacement-grows-5-percent-worldwide/#:~:text=ST.,knee%20replacement%20surgery%20each%20year (accessed 28th June, 2022). J. Favre and B. M. Jolles. “Gait analysis of patients with knee osteoarthritis highlights a pathological mechanical pathway and provides a basis for therapeutic interventions.” EFORT Open Rev. 2017 Mar 13;1(10):368-374. doi: 10.1302/2058-5241.1.000051. PMID: 28461915; PMCID: PMC5367582. A. Kornuijt, G. J. L. de Kort, D. Das, A. F. Lenssen and W. van der Weegen. “Recovery of knee range of motion after total knee arthroplasty in the first postoperative weeks: poor recovery can be detected early”. Musculoskelet Surg. 2019 Dec;103(3):289-297. doi: 10.1007/s12306-019-00588-0. Epub 2019 Jan 9. PMID: 30628029. T. Seel, T. Schauer, and J. Raisch, “IMU-based joint angle measurement for gait analysis,” Sensors, vol. 14, no. 4, pp. 6891–6909, April 2014. T. Yeung and T. F. Besier, “A new paradigm for assessing and monitoring joint health in joint arthroplasty patients: IMU clinic.” H. J. Luinge and P. Veltink, “Measuring orientation of human body segments using miniature gyroscopes and accelerometers,” Medical & Biological Engineering & Computing, vol. 43, no. 2, pp. 273–282, 2005. S. Fathy and M. E. Messiry, “Study of the effect of cyclic stress on the mechanical properties of braided anterior cruciate ligament (ACL),” Journal of Textile Science & Engineering, vol. 6, no. 2, 2002. P. Komdeur, F. E. Pollo, and R. W. Jackson, “Dynamic knee motion in anterior cruciate impairment: a report and case study,” Proceedings (Baylor University Medical Center), vol. 15, no. 3, pp. 257–259, 2002. J. Favre, B. Jolles, R. Aissaoui, and K. Aminian, “Ambulatory measurement of 3D knee joint angle,” Journal of Biomechanics, vol. 41, no. 5, pp. 1029–1035, 2007. R. Williamson and B. J. Andrews, “Detecting absolute human knee angle and angular velocity using accelerometers and rate gyroscopes,” Medical & Biological Engineering & Computing, vol. 39, no. 3, pp. 294–302, 2001. G. E. Hancock, T. Hepworth, and K. Wembridge. “Accuracy and reliability of knee goniometry methods”. Journal of Experimental Orthopaedics. 2018 Oct 19;5(1):46. doi: 10.1186/s40634-018-0161-5. PMID: 30341552; PMCID: PMC6195503. B. Sears, “Goniometer: A Tool for Measuring a Joint’s Range of Motion,” https://www.verywellhealth.com/what-is-a-goniometer-2696128 (accessed 26th June, 2022). N. Marques Luís and R. Varatojo. “Radiological assessment of lower limb alignment”. EFORT Open Rev. 2021 Jun 28;6(6):487-494. doi: 10.1302/2058-5241.6.210015. PMID: 34267938; PMCID: PMC8246117. G. E. Hancock, T. Hepworth, and K. Wembridge, “Accuracy and reliability of knee goniometry methods,” Journal of Experimental Orthopaedics, vol. 5, no. 46, October 2018. M. T. Karimi and S. Solomonidis. “The relationship between parameters of static and dynamic stability tests”. Journal of Research in Medical Sciences. 2011 Apr;16(4):530-5. PMID: 22091270; PMCID: PMC3214359. P. Eichelberger, M. Ferraro, U. Minder, T. Denton, A. Blasimann, F. Krause, and H. Baur. “Analysis of accuracy in optical motion capture - A protocol for laboratory setup evaluation”. Journal of Biomechanics. 2016 Jul 5;49(10):2085-2088. doi: 10.1016/j.jbiomech.2016.05.007. Epub 2016 May 10. PMID: 27230474. 
IMeasureU, “Leaps and Bounds: Pixelgun is capturing the elite athletes of the NBA for 2k” https://www.vicon.com/resources/case-studies/leaps-and-bounds/ (accessed: 27th June, 2022) Real-Time Computing and Communications (MIT) “Motion capture: Capturing the movement of objects and people.” https://fab.cba.mit.edu/classes/865.21/topics/scanning/05_mocap.html (28th June, 2022) S. L. Delp, F. C. Anderson, A. S. Arnold, P. Loan, A. Habib, C. T. John, E. Guendelman and D. G. Thelen. “OpenSim: Open-source Software to Create and Analyze Dynamic Simulations of Movement”. IEEE Transactions on Biomedical Engineering. (2007) H. Wang, Z. Xie, L. Lu, L. Li, and X. Xu, “A computer-vision method to estimate joint angles and l5/s1 moments during lifting tasks through a single camera,” Journal of Biomechanics, vol. 129, p. 110860, 2021. R. E. Mayagoitia, A. Nene, and P. H. Veltink, “Accelerometer and rate gyroscope measurement of kinematics: an inexpensive alternative to optical motion analysis systems,” Journal of biomechanics, vol. 35, no. 4, pp. 537–542, 2002. N. P. Brouwer, T. Yeung, M. F. Bobbert, and T. F. Besier, “3d trunk orientation measured using inertial measurement units during anatomical and dynamic sports motions,” Scandinavian journal of medicine & science in sports, vol. 31, no. 2, pp. 358–370, February 2021. F. Marin, H. Manneland, L. Claes, and L. D. Ürselen, “Correction of axis mis- alignment in the analysis of knee rotations,” Human movement science, vol. 22, no. 3, pp. 1029–1035, August 2003. R. Stagni, S. Fantozzi, A. Cappello and A. Leardini. “Quantification of soft tissue artifact in motion analysis by combining 3D fluoroscopy and stereophotogrammetry: a study on two subjects”. Clinical Biomechanics (Bristol, Avon). 2005 Mar; 20(3):320-9. doi: 10.1016/j.clinbiomech.2004.11.012. PMID: 15698706. A. Ancillao, E. Aertbeliën and J. De Schutter. “Effect of the soft tissue artifact on marker measurements and on the calculation of the helical axis of the knee during a gait cycle: A study on the CAMS-Knee data set”, Human Movement Science, vol. 80, 2021, 102866, ISSN 0167-9457, https://doi.org/10.1016/j.humov.2021.102866. C. Maiwald, A. Arndt, C. Nester, R. Jones, A. Lundberg and P. Wolf. “The effect of intracortical bone pin application on kinetics and tibiocalcaneal kinematics of walking gait”. Gait Posture. 2017 Feb; 52:129-134. doi: 10.1016/j.gaitpost.2016.10.023. Epub 2016 Nov 4. PMID: 27898374. IMeasureU, “Blue Trident” https://images.squarespace-cdn.com/content/v1/51a6c5dae4b0fd1b00158d2c/1562365249778-PYJ0XZQSLKT8PIAW91HO/3.jpg (accessed 27th June, 2022) K. Liu, T. Liu, K. Shibata, and Y. Inoue, “Ambulatory measurement and analysis of the lower limb 3d posture using wearable sensor system,” in IEEE International Conference on Mechatronics and Automation, Changchun, China, August 9-12 2009. T. Seel, T. Schauer, and J. Raisch, “Joint axis and position estimation from inertial measurement data by exploiting kinematic constraints,” IEEE Multi-Conference on Systems and Control, 2012. R. V. Vitali, S. M. Cain, R. S. McGinnis, A. M. Zaferiou, L. V. Ojeda, S. P.Davidson, and N. C. Perkins, “Method for estimating three-dimensional kneerotations using two inertial measurement units: Validation with a coordinate measurement machine,” Sensors, vol. 17, no. 9, p. 1970, September 2017. A. Baudet, C. Morisset, P. d’Athis, J. F. Maillefert, J. M. Casillas, P. Ornetti,and D. 
Laroche, “Cross-talk correction method for knee kinematics in gait analysis using principal component analysis (PCA): A new proposal,” PLoS ONE, vol. 9, no. 7, p. e102098, July 2014.

  • Measuring the Speed of Time

    Amongst the beautiful mountainous scenery of the Rocky Mountains and the iconic Flatirons of Boulder, Colorado, sits a device that counts seconds better than almost any device on earth. It is the NIST-F2, the clock that was unveiled by the National Institute of Standards and Technology (NIST) in 2014, which does not gain or lose a second in 300 million years (i.e., with a fractional inaccuracy of approximately 10⁻¹⁶) [1]. To give some perspective, 300 million years ago the Earth had one supercontinent, Pangaea, and reptiles were just rising into dominance. The NIST-F2, alongside its predecessor NIST-F1, serves as the primary standard for civilian time in the US [2]. International Atomic Time (TAI) is based on the combined readings of many of these high-precision clocks worldwide. It is the basis of Coordinated Universal Time (UTC) and civilian time [3]. It is hard to overstate the importance of clocks in our world. Contrary to what readers may be thinking, clocks with an inaccuracy of 10⁻¹⁶ serve various purposes in our everyday lives. Accurate clocks are used to synchronise GPS systems, utilise radars for both commercial and military purposes, improve geodesy and metrology, timestamp financial transactions, and can even be used to confirm Einstein's theory of general relativity [4-6]. All of the above applications require a precise measurement of time to operate, and new applications of accurate clocks are realised frequently. At the launch of the NIST-F2, Steven Jefferts, the lead designer, said, "If we've learned anything in the last 60 years of building atomic clocks, we've learnt that every time we build a better clock, somebody comes up with a use for it that you couldn't have foreseen" [1]. The most common clocks used for accurate timekeeping are known as atomic clocks, as they take advantage of the stable energy level structures of specific atoms. Atomic clocks arose several decades after the rapid establishment of quantum mechanics in the early 20th century, when scientists started to understand the detailed structure of atoms. Lord Kelvin first suggested such a device in 1879¹, but the technology to realise them only came into prominence in the mid-twentieth century [7]. Much of the foundational work on atomic oscillations was laid out by Isidor Rabi — a Nobel laureate in physics — in the 1930s and 40s [8,9]. Several laboratories in the UK and US started working on creating an atomic clock a decade later [7]. Atomic clocks work by exploiting the energy levels of atoms. The electrons that surround an atom have discrete energy levels, and a particular amount of energy is required for an electron to move to the next energy state. The amount of energy required to transition between energy levels is unique and consistent for each type of atom² [7,10-12]. This property plays an essential role in mitigating any manufacturing errors, as every atom of a given element is identical to every other. The basic principle behind atomic clocks is as follows³. An ensemble of atoms is cooled to several millikelvin (near absolute zero) in order to access its ground state energy level. The ensemble is then exposed to radiation at a frequency close to the atoms' resonant frequency — the frequency that excites the atoms and transitions them to the next energy state (in this case, the first excited state). It is for this reason that we must ensure the initially prepared ensemble is in its ground state. A magnet filters out those atoms that are not excited, and the remaining atoms are fed into a detector. 
The detector then counts the number of excited atoms, and uses this information to adjust the frequency of the radiation until the maximum number of excited atoms is detected. This feedback control and these self-adjustments are what make atomic clocks so precise (a toy sketch of this feedback loop is given at the end of this article). The adjusted frequency is then counted by a separate device to keep track of the time elapsed⁴. Figure 1 illustrates the general idea. The most common element used in modern atomic clocks is Cesium-133 [7]. Its heavy mass makes it slow and easier to confine, and its comparatively high resonant frequency makes for a more accurate measurement. In fact, the definition of the SI unit "second" is the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the unperturbed ground state of the Cesium-133 atom [13]. Other common elements used are Hydrogen and Rubidium, though both weigh less and have lower resonant frequencies in their ground states. The significance of the measurement of time cannot be overstated when almost all measurements we make are in some form compared to time. Until the mid-1990s, Cesium-based atomic clocks reigned supreme in accuracy. Though most clocks used nowadays for government, commercial, and military purposes are still Cesium-based (owing to their well-known stability and reliability), various research groups worldwide have realised clocks that have higher accuracies. Here we will explore two promising avenues in next-generation high-precision clocks, the first of which is the optical lattice clock. Figure 1: Purple (cyan) particles represent ground (excited) state Cesium atoms. Emitted atoms are exposed to radiation of a particular frequency, after which a magnet removes all the remaining ground state atoms that were not excited in the process. A detector then counts the number of excited atoms and uses this information to fine-tune the radiation frequency until the maximum number of excited atoms is detected. An optical lattice clock is created by using several lasers to produce a single or even multilevel egg carton-like potential that traps atoms in its valleys (see Figure 2 [15]). By using numerous lasers and external magnetic fields, the entrapment of the atoms can be finely tuned. The absorption frequency of the atoms can then be measured highly accurately. It was first proposed and realised by Hidetoshi Katori at the University of Tokyo, and since then, various research groups have improved upon it [14]. The most common elements used are Strontium and Ytterbium atoms. A recent optical lattice clock created by researchers at NIST demonstrated an inaccuracy of approximately 10⁻¹⁸, which at the time made it the most precise clock. A significant advantage of the optical lattice clock is its stability paired with its accuracy, such that the Strontium-based clock is regarded as a secondary representation of the SI unit “second”. Many believe it will replace the primary definition in the coming years [14, 17]. The excitement surrounding this new clock is not unfounded. The 2022 Breakthrough Prize in fundamental physics was awarded to Hidetoshi Katori and Jun Ye (NIST / University of Colorado Boulder) for their significant work on optical lattice clocks — the very first winners from the field of photonics. The second promising next-generation clock plays a tug of war with the optical lattice clock for the title of the world's most accurate clock. It is the quantum logic clock, the most precise clock ever to have been created at the time of writing. 
A quantum logic clock utilises the extremely stable vibration of a single Aluminium ion that has been trapped and laser-cooled. These clocks use lasers at optical frequencies to drive the ion's oscillation, which results in higher accuracy compared to Cesium-based atomic clocks that use microwave frequencies (about 100,000 times lower). However, manipulating the Aluminium ion using a laser has not proven to be easy. In order to overcome this hurdle, researchers at NIST made a breakthrough in 2005 when they used a partner Beryllium⁵ ion to cool the Aluminium ion and count its oscillations simultaneously [18-19]. Figure 2: An egg carton-like potential traps atoms in its valleys, as illustrated. One of the first quantum logic clocks made by NIST in 2010 caught vast media attention. It had a high enough accuracy to test the time dilation predicted by Einstein's theory of general relativity across a height difference of only 33 centimetres. The researchers demonstrated that time goes by more quickly when you are higher off the ground, and more slowly when you are moving faster. It was one of the first demonstrations of Einstein's theory at a small, laboratory scale [20-21]. Future advancements in high-precision clocks could lead to experiments investigating the intertwining effects between relativity and quantum mechanics, something that has stumped physicists for decades. NIST's most recent quantum logic clock surpassed the optical lattice clock in terms of its accuracy (but not in stability, which is about ten times worse). The quantum logic clock does not gain or lose a second in 33 billion years, over twice the universe's age. It is the first clock to have an inaccuracy of less than 10⁻¹⁸ [22]. Both the optical lattice clock and the quantum logic clock are competing to become the next standard clock that defines a second. Though clocks are not often in the limelight, they are the bedrock of many technologies used every day. Hence, it is paramount to keep measuring the speed of time. ¹ Oscillations from resonant transitions in atoms had been known prior to the 20th century, but their detailed nature was not understood until the onset of quantum mechanics in the early 20th century. Lord Kelvin first suggested an atomic clock using Sodium and Hydrogen atoms in 1879. ² More specifically, atomic clocks use atoms cooled to near-zero kelvin temperatures in order to access the bottom two energy levels of the atom. The transitions happen between those two isolated energy levels. ³ Note that here we are only stating the general principle of atomic clocks, and that most sophisticated clocks are much more complex in their implementation and geometry. ⁴ An excellent video on how atomic clocks work can be found here: https://www.youtube.com/watch?v=l8CI3bs9rvY ⁵ Magnesium is more common in recent implementations.
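To make the feedback principle described earlier in this article concrete, here is a deliberately simplified Python toy (not a model of NIST-F2 or any real servo loop). The detector response is faked as a smooth peak centred on Cesium's hyperfine frequency, and the loop nudges the probe frequency towards whichever side excites more atoms; the linewidth, gain, and step sizes are invented for illustration:

```python
F_CS = 9_192_631_770.0  # Hz: Cesium-133 hyperfine transition that defines the SI second

def excited_fraction(freq_hz, linewidth_hz=1_000.0):
    """Toy detector: fraction of atoms excited when probed at freq_hz,
    modelled as a Lorentzian peak centred on the true resonance."""
    detuning = (freq_hz - F_CS) / (linewidth_hz / 2)
    return 1.0 / (1.0 + detuning ** 2)

def lock_to_resonance(start_hz, steps=500, probe_offset_hz=100.0, gain_hz=1_000.0):
    """Crude frequency servo: probe either side of the current guess and
    steer towards the side where the detector counts more excited atoms."""
    f = start_hz
    for _ in range(steps):
        low = excited_fraction(f - probe_offset_hz)
        high = excited_fraction(f + probe_offset_hz)
        f += gain_hz * (high - low)  # move towards the stronger response
    return f

# Starting 400 Hz off resonance, the loop settles very close to the true frequency.
print(lock_to_resonance(F_CS + 400.0) - F_CS)
```

Real clocks interrogate the atoms with far more sophisticated schemes (e.g. Ramsey interrogation) and average the correction over many cycles, but the closed-loop idea is the same.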

  • Grief and Learning — the Limits in Our Current Research

    Grief is usually felt after the death of a loved one but can present itself when experiencing loss such as breakups, food insecurity, estrangement, and numerous other challenges [1]. This article will focus on the grief that follows a death, particularly in university-aged students, however, it is overall applicable to other types of loss. Symptoms of grief can present immediately or may be delayed. This article will focus on when the mind feels emotionally safe to begin processing. Grieving is a process with common symptoms over a timeframe, but the specifics are unique to each person [2]. Different cultures have different customs and expectations for acclimatising to loss. The American Psychiatric Association of 1964 defined a “normal” bereavement period as two months. Since then it has been extended to twelve months and acknowledges that grief varies with cultural norms, but still lists criteria for what is “normal” and “abnormal” grief [1]. Social norms of grief in western academia are that of keeping it private, with the expectation of being able to go back to "normal" shortly after bereavement, despite extensive research against this mentality [1]. This has been identified in grief research as a problematic approach because the brain “... needs to learn how to be in the world without someone we love in it" [3], which requires time and experiential feedback [4]. Grief research in general is rather underfunded as it is not a disease, nor is it classified as a mental disorder [5]. By extension, grief and loss are rarely represented in pedagogical research, leaving those in mourning isolated from academic spaces and learning [1]. Grief can affect anyone at any age, but when, for example, 22-33% of university students are within twelve months of a close bereavement, the lack of grief research in pedagogy means that these students are disadvantaged in their academic pursuits as they come to terms with their circumstances [6]. Lack of funding results in fewer and less diverse researchers as well as limits in experimental equipment and experimental data. For the most part, the psychological symptoms of grief are well-known, but only a handful of researchers study its biology. Of those who do, most are psychologists with biological interests [5]. Our understanding of grief at the biological level is limited by having only one field of science researching this topic deeply, when an interdisciplinary approach would produce a more fruitful yield. From the data that does get collected, there is precision missing. Grieving is a process over time, but neuroimages from neuroimaging studies are taken from a single time point. There is also no distinguishing between acute grief, typical grief, and/or prolonged grief. When almost all neuroimaging studies are about grief rather than grieving, it limits what conclusions can be drawn from the data about the biology of grieving [4]. Emotional bonds with loved ones produce feel-good chemicals such as oxytocin, dopamine, and serotonin. Loss triggers both a halt of these, and an increase in stress chemicals such as adrenaline, cortisol, and norepinephrine causes a range of physical and psychological symptoms [7]. In the body this can look like dizziness, sleep disturbances, nausea, and issues with appetite, while emotionally there are often feelings of numbness and disconnect [3,7]. Sadness and anxiety are also common to experience, though in deeper losses these can develop into yearning and hopelessness [1,3]. 
A combination of the above leaves university students physically and mentally exhausted as they struggle through their studies with memory problems, intrusive thoughts, difficulty staying organised, and a lack of concentration [6]. They are also often navigating higher levels of independence for the first time, such that grief can isolate them from their standard support systems. Students from minority communities may also be navigating an education system not designed to support their needs, which would put them at further disadvantage in their studies [1]. The biology of grieving shows locations of interest in the brain. fMRIs are used to show grief present, which are seen as the periaqueductal grey, anterior cingulate, nucleus accumbens, and somatosensory cortices. These are the same areas which show separation anxiety in babies crying for reconnection, and the same as physical pain in adults. This is why intense grief can physically hurt [7]. The size of the hippocampus prior to grieving is hypothesised to be an indicator of adapting to loss as well. Brain scans showing a smaller than average hippocampus before bereavement in participants predicted trouble accustoming to loss. The hippocampus has also been shown to shrink in those who have lost a child (with and without PTSD) [4]. When the psychological symptoms of grief are interfering with day-to-day life several months after the bereavement, we start to see prolonged grief disorder (PGD) in about 10% of people [3,5]. This is where the loss is all-consuming to the point of significant social withdrawal [5]. It has already been mentioned in this article that university students are experiencing more independence and therefore are less connected to their standard support networks [1], which increases their risks of developing PGD. Image by Thoa Ngo from Unsplash People do go on to live successful and happy lives as they process their loss, sometimes with a treatment plan that their general practitioner has helped form if that is what is needed [2]. Not all students want or need such help though. From some of the grief pedagogy research that has been conducted, it has been suggested that one potential tool could be a training program for non-bereaved students to provide informal support to grieving peers, as friends often feel under-equipped to provide the needed emotional support [6]. There seems to be extensive knowledge on the symptoms of grief, but more work to be undertaken on how to alleviate those symptoms for students in academic spaces. Without up-to-date research on how best to support grieving students, the support that education institutions provide for students experiencing loss is limited. More research into what is helpful and unhelpful for grieving students would result in education institutions assisting students in moving forward from their losses and making gains in their learning.

  • Einstein’s Miracles Part 4: Mass-Energy Equivalence

    The world’s first nuclear weapon detonated on July 16th, 1945. As the sun-like flash died away to be replaced by a mushroom cloud, the physicist Robert Oppenheimer quoted immortal words from a Hindu text [1]: “Now I am become Death, the destroyer of worlds.” Never before had such destructive force been generated by mankind, and no one knew that better than those who had brought it about. However, the beginnings of the nuclear age were far more unassuming. Forty years earlier, a patent clerk named Albert Einstein was about to publish the last of his four ‘miracle year’ papers and, in doing so, pave the way for the most definitive technology of the 20th century: nuclear power. The geopolitical influence that Einstein’s paper would have could never have been suspected at the time — not even by him. In fact, it’s hard to imagine a more nondescript journal article, given that it could easily fit on a single page and was named quite blandly (as is traditional in physics) Does the Inertia of a Body Depend on its Energy Content? [2]. Really, this paper can be seen as a mere addendum to his previous paper on special relativity, which we covered in the last edition of Scientific. But its key result identifies the foundational principle behind the nuclear age and has become perhaps the most famous equation on Earth: E = mc². As the paper’s brevity indicates, the derivation of this equation is not particularly convoluted and more or less comes directly from an equation Einstein presented in his previous paper (albeit, not one we covered last time). So, it is not the origin of this equation that we will consider; its significance is far more interesting. To understand E = mc², we should begin by defining its terms. The E stands for energy. An object’s energy is its ability to perform work; to make something move, lift it up, change its state, or some such thing. We often speak of ‘kinetic energy’: the energy stored in motion (an object has more kinetic energy the faster it moves). There is also ‘potential energy’, which is energy stored in interactions between different things. For example, an Acme anvil held stationary above Wile E. Coyote has no kinetic energy, but due to its interaction with the Earth (via gravity), it has substantial potential energy that can be converted into kinetic energy when the anvil is released. Your body stores considerable amounts of potential energy in chemical bonds — interactions between atoms. On the other side of the equation, m represents the mass of an object (also known as its inertia, hence the title of Einstein’s paper), while c is the speed of light. Participants in the first Solvay conference (1911). Einstein and Rutherford can be found standing second and fourth from the right, respectively, while Curie is seated second from the right. The sponsor, Ernest Solvay, was crudely edited into the original before it was released — you should be able to pick him out! The equation, when put together, leads us to a simple but remarkable conclusion: the more energy an object intrinsically possesses, the greater its mass will become. When you take the elevator from the ground floor to your luxury penthouse apartment, the increase in your gravitational potential energy will make you heavier by E/c² (a worked example follows below). Importantly, the speed of light is a very big number (approximately 300,000,000 m/s). Even a moderately large change in your energy will result in an immeasurably small variation in your mass. 
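As a rough worked example (the numbers are made up for illustration): suppose a 75 kg person rides the lift up 30 m. The gain in gravitational potential energy, and the corresponding mass increase, are

```latex
E = mgh \approx 75\,\mathrm{kg} \times 9.8\,\mathrm{m\,s^{-2}} \times 30\,\mathrm{m} \approx 2.2 \times 10^{4}\,\mathrm{J},
\qquad
\Delta m = \frac{E}{c^{2}} \approx \frac{2.2 \times 10^{4}\,\mathrm{J}}{\left(3.0 \times 10^{8}\,\mathrm{m\,s^{-1}}\right)^{2}} \approx 2.5 \times 10^{-13}\,\mathrm{kg},
```

about a quarter of a nanogram, far below anything a bathroom scale could ever detect.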
Conversely, though, if enough energy were released to produce a noticeable change in mass, then you would have released a very large amount of energy indeed. This is exactly the idea behind nuclear power and The Bomb. Nuclear reactions (far more than chemical reactions) can involve non-negligible changes in the reactants’ masses, and thus release extraordinary amounts of energy. When controlled, such reactions power entire cities, but uncontrolled, they level cities to the ground. Before concluding, we should clarify one final point. Some people describe E = mc² as being about converting mass into energy, as though it meant you were using mass as fuel to burn and bring about energy in its stead, but that is not the case. Rather, the principle of mass-energy equivalence is exactly that: equivalence. The existence of mass implies the existence of energy; wherever you find mass, you already have energy. Mass disappearing and energy coming out is not a conversion process — when the energy leaves, the mass associated with that energy leaves with it. The history of nuclear power and the Cold War involves far more stories than Einstein’s, of course. The pioneering work of Ernest Rutherford (New Zealand’s greatest physicist), Marie Curie (one of the greatest scientists of all time), and other atomic scientists played a major role. However, mass-energy equivalence provided a fitting capstone to a year of miracles. E = mc² has become the equation most deeply associated with Einstein’s legacy — and fair enough, given its significance — but 1905 must be known for more than just that one equation. In a single year, Albert Einstein kickstarted the quantum revolution, brought atoms out of the realm of speculation, invented special relativity, and then used it to demonstrate a fundamental truth which would define the 20th century. Physics has never been the same since. Einstein would go on to make substantial contributions to the fields he helped invent — most significantly by generalising special relativity to form our modern theory of gravity. However, he was never able to match 1905. Perhaps no one ever has. Fortunately for us, though, there is still a great deal about our universe we do not understand. It’s about time we got another Einstein to have another crack at it. Any volunteers?

  • A Self-Propelled Compass Needle: Places to Be, Bacteria to See!

    Magnetotaxis and magnetoreception are abilities present across both the Earth and its phyla due to the planet’s ever-present magnetic field. These species – such as bacteria, microbial eukaryotes, birds, molluscs, and reptiles – use the magnetic field to navigate during large migrations or for small movements, thus allowing species to inhabit more favourable environmental conditions. In eukaryotic microbes, this ability has been assumed to result from symbiotic relationships with magnetotactic bacteria (MTB), while the origin of magnetotactic abilities in macro-eukaryotes has remained largely unknown. One study [1] suggests that similar symbioses are a common occurrence among magnetotactic macro-eukaryotes. However, the recent discovery of microbial eukaryotes that can biosynthesise the components necessary for magnetotaxis (rather than taking them up from the environment or engulfing magnetotactic bacteria to form symbiotic relationships) challenges current understanding, and adds to a wide variety of bacterial biomedical and nanotechnological opportunities. The Ocean is Full of Tiny Microbial Compass Needles A range of microbes use Earth’s magnetic field to orient and navigate themselves toward anoxic environments; this is often paired with aerotaxis, a movement and directionality driven by oxygen availability. Many microbes move through random Brownian movement¹, but this magnetotaxis directs flagellated microbes towards anaerobic and micro-aerobic sediments². This ability arises from the magnetosome. This organelle is present in two forms, each with a lipid bilayer membrane: an iron oxide magnetosome with a magnetite crystal or an iron sulphide magnetosome with a greigite crystal [2]. In prokaryotes, these magnetosomes are bullet-shaped or prismatic and form chains in the cytoplasm along a dipole (thus creating a “compass needle”) (Fig. 1). Magnetotaxis is not confined to one bacterial clade but rather has developed across many taxa. Because of this, MTBs are globally distributed and have been found across a wide range of environments (many of which qualify them as extremophiles). The presence of MTBs in extreme environments — for example, as alkaliphiles [4] and thermophiles [5] — suggests that magnetosomes can form and tolerate a large range of environmental extremes. Typically, MTBs are anaerobic or microaerobic and occupy both marine and freshwater environments and sediments. Rather than being dependent upon iron-rich environments (for the formation of the magnetite and greigite), MTBs tend to require an oxic-anoxic interface (OAI), such as those that form at the sediment-water interface. The highest concentrations have been found in the OAI — typically the top 1-4 cm of sediment — although some MTBs have been identified at the water-sediment interface in oxic environments [6]. Magnetite-producing MTBs are concentrated around the OAI, while greigite-producing MTBs occupy habitats below the OAI in the sulfidic anoxic zone. This segregation is due to high concentrations of sulphide in the anoxic zone, promoting the formation of greigite (Fe₃S₄), while the OAI has greater inputs of oxygen, favouring the formation of magnetite (Fe₃O₄) [7]. The reliance upon and interaction with the Earth’s magnetic field by certain bacteria creates three classifications of MTB. North-seeking MTB dominate the northern hemisphere, while south-seeking MTB dominate the southern hemisphere (and the equator has a roughly equal population of each). 
When inhabiting chemically and vertically stratified waters, anaerobic MTBs will swim away from the magnetic field and thus downwards to anoxic sediments. However, there are exceptions to this, as populations of north-seeking MTB are present in the southern hemisphere and vice versa. The reason for this discrepancy was initially credited as an increasing redox potential [8], but further study is required to understand the relationship between MTB densities and other abiotic factors [9]. The Independent Microbial Eukaryotes That Don’t Need No MTB Until recently, magnetosomes have only been found in bacteria, and any magnetotactic abilities in eukaryotes were thought to be the product of ectosymbiotic relationships with bacteria and magnetosomes. These relationships occur by non-flagellated bacteria³ latching onto the surface of the eukaryote, and thus granting magnetotactic abilities to the eukaryote [10]. However, a deep-ocean foraminifera — Resigella bilocularis — has been identified and distinguished as one of the few known eukaryotes capable of biosynthesising magnetosomes without a symbiotic relationship with an MTB [11]. This protist inhabits the depths of the Mariana Trench and produces magnetite that is morphologically distinct from that produced by MTB or found in nearby sediments. The magnetite of R. bilocularis is porous, octahedral, and of varying sizes (11, Fig. 2), while the magnetite of MTB is typically smooth-surfaced, cuboidal, and arranged in one or two chains [12]. Environmental magnetite also differs because it typically has an irregular (but smooth) shape of a larger size and lacks an organic envelope. Another study [13] found a similar magnetotactic protist that likely biomineralised bullet-shaped chains of magnetite magnetosomes within its cell. However, while this evidence supports the biomineralisation of magnetite in eukaryotes, it is possible that an endosymbiotic event with MTB occurred early in their evolutionary history. Further research is required into the evolutionary history and relationships of these protozoa and bacteria. Figure 1: This shows transmission electron microscope images of different magnetotactic bacteria. The magnetite magnetosomes are the black lines, often seen in a single line. Scale bars represent 100 nm. Figure 2: This shows the magnetite produced by R. bilocularis. The porous and octahedral structure of this magnetite can be seen (g, h, i, where scale bars are 2μm). The carbon-containing membrane (L) and the magnetite structures (M) are present in (j). (k) denotes elemental mapping of the white square shown in (j), where the red represents iron and green is carbon. Both (j) and (k) have scale bars of 0.5 μm. Figure 3: This shows the required features of a medical nanorobot compared to the biological, features of the magnetotactic bacteria. These MTBs can be paired with a targeted magnetic field for navigation. Note the MTBs ability to easily secure anticancer drugs and molecules to its surface. A Working-Class Microbial Compass Needle MTBs have proven effective in fulfilling a variety of nanobiotic roles in both artificial and microvasculature environments. Traits like their chemical purity, low toxicity, and surface morphology have caused some researchers to turn to bioengineering microbes for biomedicine and healthcare, rather than using artificial nanorobots to perform complex procedures. 
Magnetotactic microbes and their magnetosomes provide a means to control and direct the microbes to specific targeted tissues by manipulating local magnetic fields [14, Fig. 3]. For example, it was found that an MTB, Magnetococcus marinus, could penetrate an anoxic tumour further than passive, artificial agents, in addition to having no negative side effects [15]. However, while microbial nanorobots such as MTBs are often more effective than their artificial counterparts, they are currently obstructed by their own physiology. For example, while the cell may often grow faster in relatively oxygen-rich environments, magnetosome growth is optimised in oxygen-poor environments [16]. Meanwhile, magnetotactic eukaryotes have the potential to provide solutions for MTB shortfalls. The highly ciliated Tetrahymena pyriformis is a magnetotactic protozoan that reaches greater speeds than MTBs and, like MTBs, can be controlled via artificial magnetotaxis [17]. As well as repurposing MTBs as nanorobots, their magnetosomes can also be of use when extracted and replicated. The magnetosome can be bound to a drug and targeted to specific tissues, as shown by the coupling of the antitumour drug doxorubicin to the isolated magnetosomes of Magnetospirillum gryphiswaldense [18]. Another study [19] also found that magnetosomes could be used for hyperthermia cancer treatments⁴. Magnetic hyperthermia heats magnetic nanoparticles in a controlled manner to deactivate or kill cancer cells. This method efficiently destroys the tumour by utilising chains of magnetosomes, compared to the limited antitumour activity of individual magnetosomes. Alternatively, bacteria like Magnetospirillum gryphiswaldense can be heated to 45°C by applying an external magnetic field. M. gryphiswaldense has no direct impact on the viability or proliferation of the cancer cells; rather, its effect occurs only through its heating (i.e. the hyperthermia treatment) [14]. In addition to biomedicine, isolated magnetosomes can also be used to detect and separate pathogens in food (e.g. Salmonella [20]), in key laboratory processes like protein assays and enzyme immobilisation, and as MRI (magnetic resonance imaging) contrast agents. Finally, MTB cultures (or isolated magnetosomes) can be used to generate a small amount of electricity through the application of Faraday’s law [21] (the relevant relation is written out after the footnotes below). This study observed the movement of magnetic nanoparticles through a solenoid (a wire coil); while the electricity produced is minute, it may still have applications in bio- and nanotechnology. Magnetotactic microbes are typically motile, gram-negative bacteria that inhabit aquatic and endobenthic environments. Their use of magnetosomes to create an internal compass allows them to navigate to optimal oxygen conditions, providing an advantage over microbes that rely on Brownian movement. However, magnetotaxis is also present in protozoa; while this ability is typically the result of symbiotic relationships with the magnetotactic bacteria, some have developed the ability to independently biomineralise their own magnetosomes, thus redefining understanding around magnetotaxis, magnetosomes, and magnetotactic microbes. Both magnetotactic bacteria and eukaryotes ultimately have the potential to be beneficial in a range of biotechnological applications in the future. ¹ Brownian movement is the random movement of microbes, which results in chance encounters with ideal environmental conditions. 
² Microaerobic environments have very low concentrations of oxygen, while anaerobic environments are devoid of oxygen. This corresponds to different microbial respiratory environments. ³ These non-flagellated bacteria cannot be called magnetotactic bacteria because they cannot independently move with respect to the magnetic field lines, and instead must rely on their host eukaryotes. ⁴ Hyperthermia cancer treatment heats tissue to 45°C to impair or kill cancer cells, thus increasing the susceptibility of these cells to radiotherapy and chemotherapy.
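For reference, the Faraday's law relation behind that electricity-generation experiment (with N the number of turns in the coil and Φ_B the magnetic flux through it) is simply

```latex
\mathcal{E} = -N\,\frac{\mathrm{d}\Phi_B}{\mathrm{d}t}
```

so a voltage is induced only while the flux through the coil is changing, and the tiny magnetic moment carried by each cell makes that change, and hence the output, very small.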

  • Botany of Auckland Pest Plants

    There are over 40,000 exotic plant species in Aotearoa, a number that completely swamps the 2,400 native species that were here first [1]. Most exotic plants arrive in Aotearoa intentionally for cultivation. Many of these species then ‘jump the garden fence’ and become naturalised in their own wild populations, some becoming invasive species that outcompete native plants [2]. As the country's most populated city, Tāmaki Makaurau is absolutely packed with invasive plant species. They travel by animal and wind to every nook and cranny of our parks, maunga, street-sides, and gardens. But who are these plants? What is their ancestry and where have they come from? How different are they to what is already here? This article will explore the botany of some common invasive plant species and also give advice on removal should readers be inspired to do some weeding of their own. To form sustainable populations in any place, a plant must be able to survive and reproduce under the conditions presented to them. In pre-human times, Auckland provided diverse substrates on which many different plants could grow. For example, the volcanic ash and rock that is abundant across Auckland has allowed fertile, granular soil to develop [3]. Additionally, sediment deposition along river floodplains, such as near the Manukau Harbour, has allowed fertile silt soils to develop [3]. In contrast, greywacke rock uplifted in the Hunua Ranges has created steep, weathered infertile slopes [4]. These different landscapes allow all sorts of plant species to find a place that suits them and develop diverse communities. In more recent times, settlers' arrival to Auckland and the subsequent development of agriculture has made soils more suitable for plant growth. Fertilisers, lime application, irrigation and drainage activities involved with farming reduce the limitations that soil conditions can have on plant growth, and create a more homogenous landscape [5]. This results in far less opportunities for plants with tolerances for difficult conditions to colonise space. Consequently, introduced plants that can more efficiently exploit the nutrient and water resources available to them, are fastgrowing, and spread quickly, are excellent competitors in these conditions. Beyond agriculture, Auckland is now heavily urbanised, and has become the largest city in the country. There has subsequently been an increase in introduced and exotic species as people bring all kinds of plants from all over the world to plant in their private gardens [6]. With such intense changes in water, nutrient, and light availability in Auckland landscapes, native plants have plenty on their plate. The addition of exotic competitors makes the conservation of native forest communities in the urban world incredibly challenging. It's important for Aucklanders who want to reduce invasive plant populations in the city to ‘know their enemy’, and understand as much about the origins and ecological function of these invasive plants as possible. As a refresher, plants are classified in increasingly smaller groups of genetic relatedness: Division, Class, Order, Family, Genus, Species. Climbing Asparagus Fern Asparagus scandens Climbing asparagus fern. Image from weedaction.org.nz Climbing asparagus fern is one of the most common invasive species in Auckland. It is a monocot, which are not woody plants, and as a result they don’t get incredibly large like trees. 
Monocots diverged earlier on the evolutionary tree than the Eudicots, which comprise most other flowering plants, and monocots are characterised by a lack of secondary growth. This means that their shoots cannot get wider as they get taller, which limits their structural stability and subsequently the height that these plants can achieve. The climbing asparagus fern, for example, can grow prolifically, but all of its stems remain delicate, thin tendrils, which achieve height by growing on other plants. Although it is described as a ‘fern’ in its common name, climbing asparagus has historically been considered part of the Lily family (Liliaceae), related to lilies and tulips. It is now placed in the family Asparagaceae, and neither of these families is remotely related to ferns¹. The Asparagaceae family is home to many popular houseplant species, and climbing asparagus fern is one of these. Its genus, Asparagus, is made up of around 300 species that grow mostly as vines in the forest understorey, and this species, Asparagus scandens, is native to the understorey of coastal South African forests [7]. In Auckland, climbing asparagus smothers understorey plants and grows all along the ground in shady areas, preventing seedling germination of other species [8]. If you see it around, you can get rid of this plant by spraying it repeatedly with common glyphosate herbicide.

Tradescantia (Tradescantia fluminensis)

Tradescantia. Image from forestflora.co.nz

This species is one of the hardest weeds to get rid of. Like the climbing asparagus fern, it is a monocot related to the Lily family, although rather than being from the asparagus family it is from the Commelinaceae family [12]. Commelinaceae are mostly tropical and subtropical herbs, and are mainly used by people for ornamental value. Native to South America, Tradescantia is a well-known houseplant, and although the green variety is no longer allowed to be sold in Aotearoa, the purple and white variegated variety is still very popular with Auckland residents [13]. If you have this houseplant, make sure you don't throw it outside in a compost or rubbish bin. It needs to be burned or sprayed with herbicide to kill it. This is because Tradescantia can regrow from cut stems, allowing fragments to wash up along rivers and waterways [14]. From these fragments, Tradescantia creates large mats of groundcover in forest understoreys that prevent native seedling germination [15]. An alternative form of weed control is chickens — they love eating the leaves! Interestingly, although Tradescantia has detrimental effects on forest understorey regeneration, it is used as shelter by endangered native skinks [16]. In ecosystems there are always multiple ecological functions of each organism, and it can be difficult for conservationists to evaluate the net effect that each species has on communities with so many different needs.

Woolly Nightshade (Solanum mauritianum)

Woolly nightshade. Image from weedbusters.org.nz

Woolly nightshade is a small tree with large, fluffy, oval leaves. It germinates and grows to reproductive age easily, and its fruits are eaten by native and exotic birds, promoting its spread. Woolly nightshade is part of the very large clade of the Asterids, which is the most recently diverged group in plant evolution, and also the largest clade, with around 80,000 species [9].
Unlike their earlier-diverging sister group, the Rosids, which comprise the majority of other flowering plants, Asterids usually have fused petals, so that the flower forms a tube around the stamens and carpel [10]. The purple flowers of woolly nightshade follow this trend, with a fused base and five pointed lobes which would have once been five separate petals in an ancestral species. Woolly nightshade is part of the Solanaceae (nightshade) family, a famous family with many toxic plants. This species is no exception: it produces toxins which prevent other plants from colonising the soil around it, a phenomenon known as allelopathy [11]. Plants of the Solanaceae family are found all over the world, but are most diverse in South America, which is where woolly nightshade is from. Its genus, Solanum, includes globally popular cultivated species that millions of people rely on for food, including potatoes, eggplant, tomatoes, and chillies. The woolly nightshade plant has a faint smell that I personally find nauseating. If you see it in your area, you can hand-pull small plants, or cut the trunks of large plants (the wood is very soft) and paste the stump with tree killer.

Tree Privet (Ligustrum lucidum)

Tree privet. Image from weedaction.co.nz

This tree is a member of the Olive family, Oleaceae. Unlike the other invasive species mentioned in this article, privet is from the Northern Hemisphere, from temperate regions of East Asia [17]. Like the well-known olive, tree privet has small, dark purple berries. These berries are poisonous and are thought to have negative effects on native insects [18]. Another famous exotic species from the Oleaceae family that has invasive characteristics in Auckland is jasmine, which has very sweet-smelling flowers. These species, like most other Asterids, have petals that are fused at the base, and members of the Oleaceae family have four petals per flower [19]. Tree privet is an invasive species in Auckland because it produces high quantities of viable seed and is long-lived, surviving as a small tree for around 100 years [20]. Like woolly nightshade, tree privet forms a subcanopy that outcompetes native species, preventing native regeneration and succession processes [21]. This species is much harder to pull out than woolly nightshade and also has much harder wood, making it more difficult to saw through. Most of the time, removing it involves cutting the base of the trunk and painting the stump with strong stump-killer — tree privet will reshoot from a cut stump, so applying herbicide is essential.

Moth Plant (Araujia hortorum)

Moth plant. Image from weedaction.co.nz

Moth plant is a vine from the Apocynaceae family. The common name of this family is the milkweeds, because these plants release white latex when their stems are broken. Moth plant is no exception, and its milky latex irritates skin and stains clothes. Most species of the Apocynaceae family are native to the tropics, including the moth plant, which comes from South America [22]. As in other Asterids, its five white petals are fused at the base, and its flowers can trap and kill insects [23]. The fruits of the moth plant are very distinctive: massive pods that are about the size of a hand and shaped like a rugby ball. Inside these pods are hundreds of seeds, each with a feathery plume attached to one end to help them disperse by wind. Moth plants germinate in massive numbers, and smother and strangle other plants.
Small moth plant seedlings can be pulled out, and large vines should be cut at the base and painted with weed killer. It's also important to remove the pods and destroy them, so that they don't open and cause further infestation. These five weeds are arguably the most common in Auckland, but there are thousands more invasive plant species growing happily in the city. Once you start looking for weeds, it can be overwhelming just how many there are, particularly those growing along roadsides and on unmanaged land. Conservation is a strongly value-driven science and activity, and is the result of human conceptions of nature and wilderness being projected onto the nonhuman world. Consequently, significant conservation outcomes require huge amounts of effort and intervention. Exotic plant control is a central component of conservation, and understanding where such plants come from, and how they grow and spread, is important for achieving plant conservation goals.

  • Speculative Biology in Science Fiction

    Some phenomena in biology will probably never be truly known to humans. For example, scientists will never get to know exactly what the first organism looked like. We will probably never know exactly what it is like to experience life as an animal other than ourselves. Concepts such as these can, however, be predicted through in-depth analysis of available evidence and thorough reasoning. Theoretical biology is so fascinating because it encourages a higher degree of imagination than other areas of scientific research. We can take well-defined biological concepts and apply them to our predictions of the far past and future. Theoretical biology can be applied to eras in which humans didn't exist, and to planets other than our own. When the field of evolutionary biology is combined with speculative zoology, it allows us to envisage populations of animals that we will never get to see. We can imagine new types of body plans, behavioural adaptations, and species interactions. Several scientists, authors, and artists have provided unique perspectives on what the biology of the future will look like. Some examples of outstanding works include The Future is Wild by Victoria Coules (2003) [1], Alien Planet by Wayne Douglas Barlowe (2005) [2], and All Tomorrows by C. M. Koseman (2006) [3]. Critiquing these publications under additional scientific scrutiny can bring forth new insights into what species will arise in a future world affected by rapid climate change, continental drift, and the birth of novel natural selection agents.

The Future is Wild is a documentary miniseries written by Victoria Coules and based on the book ‘After Man’ by Dougal Dixon. This miniseries envisions Earth millions of years in the future, where man is extinct and no longer alters the natural environment. The continents have moved with the shifting of the tectonic plates, forming new landscapes and ecosystems. Man-made selection agents such as climate change and urbanisation have favoured common pest animals such as termites and wild boars. Over several million years, these animals have evolved into the creatures that the show centres itself on. For each animal presented in the show, a range of scientists describe its niche and evolutionary history. The audience is also given detailed examples of similar modern-day creatures. ‘The Future is Wild’ does not aim to display an accurate prediction of how evolution will shape the fauna of Earth. Instead, it gives one possible outcome of the environmental and evolutionary patterns we are seeing today. The miniseries is split into three main eras: 5 million, 100 million, and 200 million years into the future. Some of the most interesting organisms appear in the last era, having diverged the furthest biologically from modern-day fauna. One such creature is the megasquid, a descendant of modern-day squid. The megasquid is a large terrestrial invertebrate with a somewhat similar appearance to an elephant. It possesses six trunk-like legs with two smaller frontal tentacle appendages. This piece of theoretical biology does not come without its criticism: some critics argue that squid are not able to evolve terrestrial locomotion, as their bodies are specially adapted to the deep sea. Despite the obvious challenge of not being able to see directly into the future, I believe that The Future is Wild still achieves its primary goal, and it is an example of an extremely intelligent and entertaining television series.
While The Future is Wild focuses on the biology of Earth, Alien Planet focuses on the biology of the exoplanet Darwin IV. Life on Earth as we know it is rigidly defined as entities that hold certain attributes, including metabolism, reproduction, and growth. The ‘life’ of other planets, however, could operate entirely differently to that on Earth. Instead of being reliant on water, life on other planets may be reliant on some other liquid medium. This is all speculation, of course. But some speculation is better than none. In creating the universe of Darwin IV, author Wayne Douglas Barlowe assumed some similarity to Earth organisms, positing the existence of both single-celled and multicellular life forms. Barlowe also created extraterrestrials that had somewhat similar niches and adaptations to the organisms found on Earth. Although Alien Planet is ironically conservative in its predictions, it is still wildly fascinating. The natural history of Earth must be appreciated for all of its glory, but extrapolating evolutionary theories to other planets is so much more interesting than the possible future animals of Earth. Finding evidence of alien life on another planet may arguably be the greatest scientific discovery in human history. So let us imagine a world far in the future, where humans are getting to see the splendour of extraterrestrial life for the first time. One interesting aspect of life on Darwin IV is that some of the main terrestrial apex predators use sonar rather than sight to hunt for prey. Perhaps this type of sensory adaptation could have appeared more often in Earth animals if evolution had taken a different path. At the time of its exploration by our robots, Darwin IV no longer has oceans. Rather, the planet possesses a ‘sea’ formed by a matrix of amoebae and other microorganisms. Creatures seven storeys high travel along this amoeba sea, absorbing the surface upon which they walk. The discovery of life on Darwin IV may have given humans a new appreciation for life in general, and for our role in the universe.

The last piece of science fiction I wish to discuss is All Tomorrows. If you, the reader, have spent many hours on the science side of YouTube, you have likely heard of this book. Researcher and author Cevdet Mehmet Koseman wrote All Tomorrows as a speculative account of the species humankind might evolve into with the development of space travel and genetic engineering. Although C. M. Koseman is not formally trained in the biological sciences, his predictions about the future of mankind are fascinating nonetheless. Just because C. M. Koseman is not a biologist in the traditional sense does not mean that the world he has envisioned will not be realised in some alternate universe. Besides, All Tomorrows discusses scientific concepts such as gene editing, cultural evolution, and artificial intelligence. Before I praise the book any further, I will give a brief summary of some of its highlights. As is common in other works of theoretical evolutionary biology, future humans have heavily altered the Earth's environment through accelerated climate change. Thousands of years into the future, facing overpopulation and resource depletion, the humans of Earth decide to colonise Mars. After a long period of separation between the two breeding pools, the humans of Mars evolve into the Martians. Martians are similar in appearance to their predecessors, but are taller and thinner due to the lower gravity [4].
Modern science suggests that Martians would be exposed to more radiation per year than humans on Earth [5], leading to a higher rate of mutation and therefore, perhaps, higher evolutionary rates than on Earth. After the colonisation of Mars, humans learn to colonise several other planets and galaxies. Through a combination of wars, death, and artificial selection, humans come to speciate into a multitude of forms. Some of these sub-species are highly intelligent, forming their own unique cultures on different planets spanning light-years across the universe. It is difficult to imagine a universe where several species of humans exist. However, all it takes for new species to arise is mutation, heritable variation, and strong selection pressures acting on isolated populations. Perhaps, someday, only our genetically distant descendants will remain extant, unable to comprehend what it meant to be Homo sapiens.

What does it mean to be human? What does it mean to be alive? Theoretical evolutionary biology attempts to answer some of the most challenging questions in science. However, unlike in many other disciplines, artists, philosophers, and scientists alike can appreciate the creativity that arises when we attempt to understand these concepts. The humans of today will never know for sure what is going to happen in two million years, but we know our atoms will still be present in some form, changing and moving through the universe.

  • The Future of Food

    Our eating habits reflect our biological needs, cultural practices, and access to resources. Aotearoa is facing mounting sustainability issues, and Fonterra has recently been named the highest carbon emitter in the country after reporting over 13 million tonnes of greenhouse gas emissions to the Environmental Protection Authority. Dr Rosie Bosworth is a specialist in the future of food, with a PhD in environmental innovation and sustainable technology development. We interviewed her in 2021 on our radio segment, titled Tomorrow's World, which airs on 95bFM. We decided to revive this interview in light of growing food sustainability concerns for Aotearoa, and to adapt it into a print article exploring the future of food. While it is well known that changing to a plant-based diet mitigates the effects of climate change in a myriad of ways, for some, a stark shift to an entirely plant-based diet just isn't feasible. So, what could diets look like in the future if the entire planet can't go strictly vegan?

Is a vegan diet more sustainable?

Historically, humans have consumed meat to satisfy nutritional needs. Because hunting carries high risks and energy demands, there has been a shift towards intensified agricultural practices alongside patterns of urbanisation [1]. However, as wealth and resource extraction have concentrated in some regions, and populations have increased globally, the type and quantity of food produced have changed dramatically. In the last four decades, global meat production through agriculture has increased by 20%, with 30% of the global land surface area used for animal production [2]. The normative practice of consuming meat as part of a daily diet has contributed to biodiversity loss and increased greenhouse gas emissions. However, it is important to consider that the consumption of any resource comes at a cost. We asked Dr Bosworth how sustainable a vegan diet is:

“It is complicated, vegan food has so many types of ‘plant based’ options – some of which are being criticised for having a large footprint themselves – like almond milk. But there are now also more and more advancements in science and biotech which mean we can even produce the same proteins as those found in animals or dairy proteins themselves, without the animals, that don't require the use of plants as substitutes. When you're looking at plant based milks, almond milk gets a worse reputation than other plant based milks like oat or coconut, but even when you compare almond to dairy it is markedly more environmentally friendly, especially in terms of water use.”

The idea of lab-grown food, which Dr Bosworth refers to as ‘biotech’, has been rising in popularity. Even large fast food chains such as Burger King have released Beyond Meat® and Impossible™ Foods burgers. So how do these cell-based meat processes stack up sustainably? A life cycle assessment (LCA) considering the eutrophication potential, land use requirements, and greenhouse gas emissions of these alternative proteins compared to chicken, lamb, and beef (Fig. 1) shows a better performance for cell-based meats [3]. However, the energy consumption of cell-based meat production currently exceeds that of all the alternatives.

Cellular agriculture

[Cellular agriculture is] “Taking cells from animals and growing these actual cells outside the animal. By feeding them a carbohydrate feedstock, we don't need all the energy that we do to grow animals over time to slaughter or raise as dairy cows.
Another really cool process that's being advanced right now to produce dairy proteins and other molecules is precision fermentation. Precision fermentation involves programming yeast or fungi to produce the very same proteins and molecules, like milk or cheese, without the animal, in large vats. Essentially, the cow is becoming an old piece of tech.”

As a response to the long-term environmental degradation that traditional livestock agriculture creates, biotechnologists have conceived a new route for catering to the 21st-century human's desire for meat: cellular agriculture. As Dr Bosworth mentions, the process essentially involves taking a piece of animal tissue from the section of the animal we want to consume. These cells are then cultured and given all of the nutrients in vitro that they would receive in vivo. They grow to maturity in a bioreactor (which is simply any man-made vessel that carries out biological processes) in the same manner an entire organism would grow in a field, and they reach the same fate that such an organism would: they're harvested and processed appropriately.

There are two distinct processes included in cellular agriculture, and they're not limited to producing ‘meat’. Acellular production can create things like milk, for example, by inserting the gene for an animal protein (such as a milk protein) into a microorganism used as a starter culture. This means the process of milk production then occurs in a lab, outside of an animal, so we skip all of the excess maintenance of the animal and jump right to the end result: the animal protein we desire. This is the process by which most medical insulin is made, and the host microorganism in that case is generally E. coli. These engineered microorganisms do all the work for us, and are markedly lower maintenance than farming an entire cow. Cellular agriculture, alternatively, takes specific tissue from a biopsy, which is grown in a similar way to acellular products, with a scaffold and nutrients. The difference is that living cells are being cultured, rather than proteins being produced. The main part of the meat we eat is muscle tissue, so this is where the biopsy is taken from.

Ethics

The ethics of cellular agriculture [4] could fill two entire volumes of this publication alone, so we'll simply outline them. There's a pro-stance, which argues that since we're avoiding the raising of livestock purely for the use of their resources and their inevitable slaughter, the process aids animal welfare. And it's easy to see the arguments for this; we do indeed clearly bypass the possibilities of inhumane treatment, because we don't have a whole organism (in the traditional sense) to deal with. It also ties in neatly with the argument of sustainability: by avoiding the raising of a whole cow, we avoid the emissions that said cow creates simply by its existence. That's avoiding a lot of emissions even before we get to the supply chain stages of maintenance, space, land use, and water consumption, and then the myriad of processing that needs to happen after the animal's demise. The inverse of these arguments is a trickier conversation: gene editing may be perceived as tied up with the ethics of ‘playing God’, and with the implicit debate within these questions as to what the definition of ‘life’ is. Of course, these cells are ‘living’, but are they sentient? And how does that make a difference to how they should be treated? Answers to these questions are value-laden, and they boil down to a pretty detrimental issue for the process if left unresolved.
If people are unsure about how they feel about this new technology, they A) won't participate or B) will actively rally against the concept. There's little point in developing technologies such as this if they won't be accepted and adopted by the populace. Science often operates both as a knowledge-seeking exercise and as a way of catering to the needs and desires of the population; if no one's using it, it's a dead end.

Manipulating soy to mimic meat textures and tastes

“Heme (or leghaemoglobin) is a molecule found in cows but can also be bio-fermented and harvested using the same DNA found in soy root nodules. It's what gives meat that umami aroma and meaty rich smell and taste. [This is important because the] average consumer wants a similar experience with meat burgers – not a rubbery or bland soy product. There's a sensory experience that tofu may not give, and we need to offer the same sensory experience to get mainstream audiences to switch over.”

As an alternative to cellular agriculture, the biofermentation of heme may provide another solution to people's rejection of plant-based alternatives. As important as taste is, it's not the only component in the sensory experience of food. As Dr Bosworth explains, heme can be found in cows, and it is utilised by meat-substitute products to recreate the ‘umami aroma’ experience, which can be so imperative to enjoying meat products. Haemoglobin is the source of heme in cows, but in a food context it can be replaced, with sensational likeness, by leghemoglobin. Leghemoglobin is found in the root nodules of soy and other legumes, where it binds oxygen and supports the nitrogen fixation that occurs as soy plants grow. The two molecules are oddly similar, which is why leghemoglobin has been appropriated for the purpose of mimicking ‘blood’ in plant-based foods.

There are many methods of accessing the heme in leghemoglobin. The most intuitive one is digging up the roots of soy plants and extracting the goodness inside for our purposes. However, this does seem counterintuitive if part of the aim is to be more sustainable – ripping up acres of crops for their roots doesn't quite fit. So, researchers found another way to produce leghemoglobin: fermentation. Again, our tiny microorganism friends help us battle climate change. Fermentation for heme production uses genetically engineered yeast into which the gene for leghemoglobin production (in soy, this gene is LBC2) has been inserted [5]. The ancient process of fermentation then ensues, and a whole batch of yeast, working hard to produce leghemoglobin, is created. It is similar to the acellular agriculture process. After this, it's simply a matter of isolating the leghemoglobin produced and adding it to whatever meat substitute a company desires.

Environmental psychology and the value-action gap

When we consider what we will be serving at our University reunion dinner in twenty years' time, we may be leaning towards in vitro meats. Although a vegan diet offers many benefits, the sensory experience and cultural ties to eating food associated with emotions of satisfaction will remain [6]. When you know something tastes good, your sense of taste is working through chemosensation: a chemical stimulus on a nerve ending (a taste bud) is mediated through taste and smell, and naturally our bodies like things that give us energy, such as sugars and carbohydrates [7].
We asked Dr Rosie Bosworth how future food developers consider this:

“When we think about food, future foods don't want to consider themselves as food tech or science start-ups, especially when positioning themselves for the end consumer. By and large they still consider themselves as a producer of tasty food, that is the most important bit.”

The cultural and sensory process of eating meat can be related to environmental psychology, modelled by the value-action gap [8]. Although we may be aware of the environmental and health benefits of eating less meat, there are stronger values such as convenience, habit, and satisfaction that result in continued meat-eating behaviour. A 2021 New Zealand questionnaire found that an omnivorous diet was the most prevalent dietary category (94.1%). Gender (being male) and political ideology (conservatism) predicted lower probabilities of transitioning from a meat-containing to a meat-free diet [1]. As climate concerns, food production demands, and ethical tensions continue to grow, it will be interesting to see which food technologies gain mainstream traction. This is where future foods such as cell-grown meats may come out as the top dish.

  • Einstein’s Miracles, Part 3: Relativity

    Einstein’s theory of special relativity is among the greatest scientific works ever produced. The content of his third annus mirabilis paper, On the Electrodynamics of Moving Bodies, constitutes the miracle year’s absolute highest point, and is, to me, the most emblematic of what made Einstein such a transcendent genius. His ability to see the universe with fresh eyes — unburdened by the assumptions built up by previous generations — and to generate a truly original framework will be shown in full force. Special relativity challenges our most basic notions of space, time, and motion. We will not be able to fully develop every idea contained in Einstein’s paper, and our approach will diverge somewhat from his to keep things simple. Nonetheless, you will see how fundamental its subject matter is to how we perceive the universe.

Figure 1: Albert Einstein, transcendent genius.

Galilean Relativity

The story of special relativity begins hundreds of years before Einstein with another truly great physicist: Galileo Galilei. Although he is best known for astronomy and heliocentrism, Galileo made significant contributions to the laws of mechanics. In particular, he formulated the so-called Galilean principle of relativity, which constitutes one of the two fundamental postulates that Einstein used to derive his theory of special relativity [1]. However, to understand Galilean relativity, we must first take a detour to talk about reference frames. You can think of a reference frame as the camera through which you are viewing the scene. Imagine someone named Alice is on a train moving at some speed v and passes another person named Bob, who is standing still by the side of the tracks. A camera centred on Bob would see him as stationary and Alice as moving through the shot at a speed v. Conversely, a camera tracking with Alice would see her as stationary and Bob as moving through the shot with a speed v in the other direction. Although we usually think of Bob’s reference frame as being ‘more correct’, this is only because we spend most of our time stationary with respect to the Earth’s surface. Both perspectives are equally valid.

Suppose Bob is throwing and catching a ball for fitness and for fun. If he throws the ball directly upwards, it will rise and fall without deviating sideways, and he will not have to move to catch it. Now imagine that Alice repeats this experiment on the train. She stands still within the carriage and throws the ball directly up. What happens? Does the ball, knowing that the ground is moving beneath Alice, start deviating towards the back of the carriage, forcing her to move to catch it? No. Rather, Alice observes exactly the same behaviour as Bob: the ball rises and falls in a direct line above her. None of the physics changes when you change reference frames. Importantly, that is only true because the train is not speeding up or slowing down while the ball is in the air. Any acceleration will change the result. For that reason, we specifically deal with inertial reference frames — ones which are not undergoing any acceleration. With that established, we can now state the Galilean principle of relativity [1]: The laws of motion are the same in all inertial reference frames. In other words, matter has no preferred reference frame; it is impossible to perform an experiment that will tell you which inertial frame of reference you are in. It is very fortunate that this is true, actually.
The Earth is hurtling around the Sun at 107 000 kph, so the laws of motion would be very warped indeed in everyday life if it mattered that we were not at rest compared with the Sun (or the rest of the galaxy).

A Not-So-Light Matter

Our story now moves forward a couple of hundred years from Galileo to James Clerk Maxwell — another scientist who is very close to Einstein on the Good-At-Physics leaderboard. Maxwell’s most noted contributions to physics are the so-called Maxwell’s equations that describe the behaviour of the electric and magnetic fields. There is a great deal which can be said about Maxwell’s equations, but for our purposes, only one fact matters: Maxwell used his equations to prove that light is a wave in the electromagnetic field that will travel in a vacuum at the speed c = 300 000 000 m/s [2]. This was a triumphant moment which finally answered one of the most significant problems in physics, namely the nature of light. However, physicists quickly noticed that a major issue arose when Maxwell’s result was applied to the Galilean principle of relativity.

Figure 2: James Clerk Maxwell

Maxwell’s equations are laws of physics, just like Newton’s laws. We know from Galileo that Newton’s laws of motion are the same in all inertial reference frames, so is the same thing true of Maxwell’s equations? If so, that implies that the speed of light (in a vacuum) is c in all reference frames, since the speed of light is a direct prediction of Maxwell. However, this forces us to conclude a seemingly nonsensical result. Let’s return to Alice, Bob, and the train. If Alice, on the train, is moving at a speed v with respect to Bob, and Alice throws a ball forward at a speed u, then we would naturally expect that Bob sees the ball moving forward at a speed v + u. However, Galileo and Maxwell are now telling us something quite different about light. If Alice, instead of throwing a ball, shoots a beam of light at a speed c, we would expect Bob to see it moving at a speed v + c. However, what the Galilean principle of relativity would claim, if it applies to Maxwell’s equations, is that Bob also sees the beam of light moving at a speed c. This seems like a contradiction. How can the apparent speed of light not change depending on your motion relative to it? Galilean relativity must not apply to Maxwell’s equations. For light, there must be such a thing as a preferred reference frame, and it is possible to detect inertial motion relative to that frame.

Scientists moved quickly to justify why light would have a preferred reference frame. The foremost theory was that electromagnetic waves move through some medium (called the aether), and the speed of light is only c with respect to the reference frame in which the aether is stationary [3]. If so, then the Earth’s motion with respect to the aether ought to be detectable if a sufficiently precise experiment could be devised. A device known as a Michelson interferometer, named after its designer Albert Michelson, could detect the relative speed of two light waves that travelled in perpendicular directions [4]. If the Earth moves relative to the aether, then a beam of light travelling parallel to the ‘aether wind’ will move at a different speed to one travelling perpendicular to said wind. In 1887, Albert Michelson and Edward Morley built an interferometer that they believed would be precise enough to detect that difference. However, when the experiment was conducted, they detected no absolute aether wind [5].
The Michelson-Morley experiment became perhaps the most famous failed experiment in history. One candidate explanation for this failure was that the Earth dragged the aether with it, perhaps by gravity. That, however, failed to explain other observations, like the aberration of light. Other explanations were proposed, but there was no satisfying physical interpretation of the experiment. The outcome of the Michelson-Morley experiment (and subsequent repetitions and improvements on it) posed major problems to the physics community. Some of you may be noticing a close correspondence between this story and that of the ultraviolet catastrophe which led Einstein to quantum mechanics in his first annus mirabilis paper: an unexpected experimental result and no satisfactory explanation to be found. All of the best scientific work happens in that space of uncertainty. All of the hardest scientific work, too, but that’s what miracle years are for.

The Two Postulates

While other scientists tried continuously to rework the aether theory to account for the Michelson-Morley null result, Einstein did what he did best: thinking so far inside the box that it sounds like he’s thinking outside the box. Einstein decided to move away from the aether and instead returned to Galileo. He asked himself what would happen if we took Galilean relativity really seriously. What would that imply? He began from just two postulates [6]:

1) The laws of motion are the same in all inertial reference frames [Galilean relativity]
2) The speed of light in a vacuum is c in all inertial reference frames [Maxwell’s equations are subject to Galilean relativity]

Thus, special relativity was born. As we shall see, the consequences of these two postulates are patently absurd. But the theory that was born from them has become one of the most precisely verified and universally accepted theories in the history of science.

Figure 3: A laser beam moving from the floor to the roof of a moving train from two different perspectives. Alice sees the light move directly upwards, whereas Bob sees it angled to match the changing horizontal position of the train.

Time Dilation

Let’s return to Alice and Bob. Alice, on the train, has a laser which will send a beam of light from the floor to the roof — a distance of length d. Both Alice and Bob watch this happen and measure how much time the light beam seems to take to travel that distance. In the reference frame co-moving with Alice, which we will call F, the light travels at a speed c, and therefore takes a time t = d/c to travel from the floor to the roof. In Bob’s reference frame, F′, however, the light does not travel directly vertically, but has some horizontal motion as well. If F′ is travelling at a speed v with respect to F, then from Fig. 3 (and using Pythagoras’ theorem) it is clear that Bob sees the light travel a longer distance. We have t = d/c for Alice, but ct′ = √(d² + (vt′)²) for Bob. Substituting Alice’s expression into Bob’s and rearranging gives the absurd relation t′ = t/√(1 − v²/c²): Alice and Bob measure completely different time intervals between the light leaving the floor and hitting the roof. Time is moving more slowly for Alice! This phenomenon is known as time dilation. The time measured between two events depends on the reference frame you are in. Let us define the factor γ by γ = 1/√(1 − v²/c²), so that t′ = γt. As v increases from 0 to c, γ increases from 1 to infinity. This means the shortest possible time (known as the ‘proper time’) is measured in the reference frame in which the two events are stationary (where v = 0) — Alice’s frame, in our example.
Any other reference frame will measure a longer time interval (since v ≠ 0 implies γ > 1), and the greater the relative velocity between reference frames, the longer the time interval will be. Time dilation is a shocking reality to confront. If you had a twin who became an astronaut and travelled to Alpha Centauri at a speed close to the speed of light, the journey would seem to take around four years to you, but they may have only experienced a few weeks. You would still be twins, but no longer the same age. Interestingly, time dilation is a phenomenon that may allow us to feasibly colonise incredibly distant planets. If we find a planet on which humans could live but which is thousands of light-years away, it would seem impossible for us to reach it before the colonists on the spaceship died. However, due to time dilation, a thousand-year journey could constitute just a few hours of a colonist’s life if the ship were fast enough. Unfortunately, those on Earth would not live long enough to find out if the ship arrived at its destination, unless our lifespans increased by a couple of thousand years.

Length Contraction

The bending of reality does not end with time dilation, though. Now let’s imagine that Alice and Bob both try to measure the length of the train. To do that, they both measure the time taken for the entire train to pass Bob. Alice gets L = vt, and Bob gets L′ = vt′. But remember that t and t′ are not the same. This time, our two events are the front and back ends of the train passing Bob. These two events happen at the same place in Bob’s frame, so he measures the proper time.

Figure 4: A plot of γ (known as the Lorentz factor) against reference frame speed as a fraction of the speed of light. The larger γ becomes, the more distorted space and time are when compared across reference frames.

Alice, therefore, measures a time lengthened by a factor of gamma: t = γt′. Substituting that relationship into these two length equations yields L = γL′, or equivalently L′ = L/γ. So the size of an object — the distance between two points — depends on the reference frame as well. The longest possible length (known as the proper length) is measured in the reference frame in which the object is stationary — Alice’s frame, where the train is motionless. The object’s length in any other reference frame is shortened by larger and larger proportions as the relative speed between reference frames increases. This is length contraction.

Velocity Transformations

As we have seen, both time and space are distorted when comparing inertial reference frames. We must expect, then, that the velocity of an object (how quickly its position changes as time increases) will also defy our intuition. If Alice throws a ball forward on the train at a speed u, at what speed u′ will Bob measure it to be travelling? Pre-Einstein, the answer would be u + v. Post-Einstein, however, the answer becomes something more bizarre: u′ = (u + v)/(1 + uv/c²). This equation has an important consequence. If Alice throws the ball at the speed of light, i.e., if u = c, then the speed Bob measures is u′ = (c + v)/(1 + cv/c²) = c. In other words, something travelling at the speed of light in one reference frame is travelling at the speed of light in any reference frame. This was one of our postulates, so we are seeing that the theory of special relativity is self-consistent. Furthermore, any speed u < c in any reference frame moving at speed v < c will always be measured as less than the speed of light. There is a universal speed limit: c. Nothing can travel faster than the speed of light.
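For readers who like to see numbers, the short Python sketch below (not part of the original article; the function names and example values are my own) works through the formulas derived above: the Lorentz factor, time dilation, length contraction, and relativistic velocity addition.

```python
# Numerical sketch of the special relativity formulas discussed above.
import math

C = 299_792_458.0  # speed of light in m/s


def lorentz_factor(v: float) -> float:
    """gamma = 1 / sqrt(1 - v^2/c^2); grows from 1 towards infinity as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)


def dilated_time(proper_time: float, v: float) -> float:
    """Time measured in a frame moving at speed v relative to the events: t' = gamma * t."""
    return lorentz_factor(v) * proper_time


def contracted_length(proper_length: float, v: float) -> float:
    """Length measured in a frame moving at speed v relative to the object: L' = L / gamma."""
    return proper_length / lorentz_factor(v)


def add_velocities(u: float, v: float) -> float:
    """Relativistic velocity addition: u' = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1.0 + u * v / C**2)


if __name__ == "__main__":
    v = 0.9 * C
    print(f"gamma at 0.9c: {lorentz_factor(v):.3f}")              # ~2.294
    print(f"1 s of proper time appears as {dilated_time(1.0, v):.3f} s")
    print(f"a 100 m train appears {contracted_length(100.0, v):.1f} m long")
    # A ball thrown at 0.5c on a train moving at 0.9c is still slower than c:
    print(f"0.5c 'plus' 0.9c = {add_velocities(0.5 * C, v) / C:.3f} c")
    # Light stays at c in every frame, as the second postulate demands:
    print(f"c 'plus' 0.9c = {add_velocities(C, v) / C:.3f} c")
```

At 90% of the speed of light the Lorentz factor is roughly 2.29, which is the value at work in the tunnel thought experiment that follows.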
Simultaneity and Causality

With time and space now fully bent — and our brains bent with it — let us now turn to a thought experiment that brings to light an apparent contradiction in special relativity. Alice’s train hurtles towards a tunnel that is half the length of the train. At each end of the tunnel are enormous guillotines that Bob can control and which would destroy the train if it were in the wrong place at the wrong time. When Bob sees that the train is exactly in the middle of the tunnel, he drops both guillotines simultaneously, but finds that nothing at all happens to the train. How can that be the case? Well, fortunately for Alice, the train was travelling at 90% of the speed of light, so length contraction meant the train was less than half its proper length (and therefore able to fit entirely within the tunnel) in Bob’s frame of reference. That sounds all fine and dandy, given what we know about special relativity. The contradiction, however, comes when we try to view the same situation in Alice’s frame instead of Bob’s. In her frame, the train is stationary, and therefore its length is not contracted. Furthermore, the tunnel is moving towards her at 90% of the speed of light, and therefore it is contracted to less than half its proper length (i.e., less than 1/4 the length of the train). There is no way for the train to fit entirely inside the tunnel, and therefore it is impossible for it to survive when the guillotines drop simultaneously. It would seem, then, that Alice sees her train being cut into pieces. How can the train be destroyed according to one observer, but remain unscathed according to another? Though different observers will perceive space and time differently, they must surely agree on what actually happens to the train, right?

Figure 5: Ten years after inventing special relativity, Einstein would take the idea of spacetime even further. His general theory of relativity describes gravity as curvature in spacetime, leading to diagrams such as this (acquired from pngwing.com).

The resolution of this paradox comes from challenging something I implicitly assumed in my description of what Alice sees. Just because the two guillotines drop simultaneously in Bob’s reference frame does not mean they drop simultaneously in Alice’s reference frame. Alice survives in her own reference frame because she sees the far guillotine drop first (before she reaches it), then a pause before the rear guillotine drops after she has passed it by. The order of events can depend on the reference frame through which they are viewed! The concept of time in special relativity is entirely different to what we normally experience. There is no absolute ‘present’ across all reference frames, and different observers can disagree on the order in which events occur. This is the relativity of simultaneity. The final complication to this rearrangement of events is causality. If one event causes another, their order cannot be switched. Fortunately, this is accounted for in special relativity via the universal speed limit c. It is only possible for one event to cause another if a signal travelling at the speed of light (or slower) can leave the first event and reach the second one in time. If they are too far apart, no information about the first event can influence the second. The maths of special relativity says that the order of two events can only be switched if a beam of light could not travel between them, so causality is preserved.
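Plugging numbers into the tunnel scenario makes the resolution easier to trust. The sketch below is illustrative only: the train and tunnel lengths are assumed values, with the tunnel set to half the train's proper length, and the speed set to 90% of the speed of light as in the story.

```python
# A small numerical check of the tunnel thought experiment (assumed lengths).
import math

v = 0.9                                   # train speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - v**2)       # Lorentz factor, ~2.294

train_proper = 100.0                      # proper length of the train (arbitrary units)
tunnel_proper = 50.0                      # tunnel is half the train's proper length

# Bob's frame: the tunnel is at rest and the moving train is contracted.
train_in_bobs_frame = train_proper / gamma        # ~43.6 < 50, so the train fits
# Alice's frame: the train is at rest and the approaching tunnel is contracted.
tunnel_in_alices_frame = tunnel_proper / gamma    # ~21.8 < 25, under a quarter of the train

print(gamma, train_in_bobs_frame, tunnel_in_alices_frame)
```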
Spacetime

Throughout this article, I have been referring to space and time separately, but almost always in conjunction. Pre-Einstein, space and time were viewed as entirely disjoint entities. However, the more you learn about special relativity, the more linked these two aspects of reality become. Indeed, Einstein’s third annus mirabilis paper unified them through the concept of a spacetime interval. Before special relativity, we believed that distances were the same no matter what reference frame you were in. If two objects are separated by distances x, y, and z in the three spatial dimensions we experience, then the value r² = x² + y² + z² was invariant across frames of reference. However, we now know that length contraction exists, and therefore the Euclidean distance r is not preserved. As length is contracted, though, time is dilated. So, Einstein was able to discover a new value, called the spacetime interval between two events, s² = (ct)² − x² − y² − z², that was the same in any reference frame. The spacetime interval replaces the notion of distance in special relativity. Space and time were no longer separate entities, but rather two parts of a larger fabric of spacetime. This is why we now say that we live in a four-dimensional universe, as opposed to a three-dimensional one.

Spacetime addresses the interesting oddity in special relativity that is the universal speed limit. It seems strange that any speed less than c is possible, but nothing can move faster. It’s almost asymmetrical in that way. When space and time are unified, though, a rather pleasing resolution to that awkwardness manifests itself. We know that the faster a reference frame is moving, the more slowly time is passing in that reference frame. In other words, the faster something moves through space, the slower it moves through time. So, it is not really the case that any speed is possible; instead, there is only one speed. Some objects are stationary and are moving forward through time at a ‘speed’ c, while others are using some of their speed to move through space instead, and hence move more slowly through time. Everyone and everything is travelling at the speed of light, just in different directions.

Not the End

This article is overly long already, yet I haven’t even covered half of what Einstein spoke about in his paper, nor was this the last paper he wrote on the subject. Furthermore, you may have been wondering why we call special relativity "special" relativity, not just relativity. The answer is that special relativity is merely a special case — a subset — of the more general theory of general relativity, which also incorporates non-inertial frames of reference. General relativity was unleashed on the world by Einstein in 1915 and has become, along with quantum field theory, one of the two pillars of modern physics. Without special relativity, though, general relativity would not have been possible. Special relativity is an extraordinary topic that forces you to really think like a scientist — casting off your assumptions. We are now three papers into Einstein’s ‘miracle year’, and it is becoming increasingly clear why 1905 is now known by that name. The fourth and final annus mirabilis paper will be an elaboration on some aspects of special relativity which we have not mentioned, notably giving rise to the most famous equation in history: E = mc². That will be the topic of my next article here in UoA Scientific.

  • Inside the Hive: the Science Behind our Beloved Honey Bees’ Evolutionary Behaviours

    Eusociality is a social behaviour observed across the Arthropoda and Chordata phyla that is characterised by reproductive division of labour, cooperative brood care, and the overlap of generations. This complex and highly networked system has evolved over temporal and spatial scales to assign each individual within a colony a specific role to perform. In some species, such as the beloved honey bees (genus Apis), helpers working underneath the reproductive queen never get to reproduce themselves, yet they care for new generations of young in the hive instead. It would seem as if this organisation goes against the drive for life that the majority of organisms on the planet experience — to pass on their own successful genes to offspring; however, strong selection pressures have ensured that this phenomenon has become deeply rooted in some species. Honey bees have long been the focus of immense research efforts, so we now understand the intricate web of life inside the hive. The research done on this species seeks to answer the question of why cooperation should exist in a world dominated by intense competition and survival of the fittest [1].

Reproductive division of labour

A reproductive division of labour is one of the key elements defining eusocial behaviour. If we start with a small-scale example, division of labour can be seen at the cellular level, where it is basal. Asymmetric cell division during early development yields the ‘germ and soma’ distinction — somatic cells serve only somatic functions in the animal’s body and divide by mitosis, while germline cells produce the reproductive gametes. Despite these two types of cells performing independent roles across time and space, their performance leads to the successful functioning of an entire individual as a whole. This mechanism, in which somatic and germline cells stayed together in the body after dividing, was evidently successful enough to become established at the cellular level, and evolution has since propelled it further up the biological hierarchy — to the species level. As with cells, multiple replicating entities remaining together after division form a greater replicating system. At this higher level, the division of labour mirrors multicellular organisation, and from it stems eusocial behaviour. In the case of our beloved honey bees, the multiple replicating entities are the bees themselves, the colony is the unit in which they remain together, and hence the hive becomes the entire replicating system.

The division into two castes inside the honey bee hive is fundamental to the species’ remarkable success. The morphologically distinct queen is responsible for colony founding, dispersal, and egg-laying, while the workers perform tasks such as colony defence, nursing, and foraging in order to maintain the colony, yet do not reproduce themselves [2]. To keep this system operating successfully and to prevent any individuals from developing ‘cheating’ methods that enable them to reproduce, workers are morphologically constrained by a lack of functioning organs for sexual reproduction [3]. Today, eusociality takes the form of adult offspring remaining in the colony to help their mother reproduce, instead of doing so themselves [4].

Cooperative brood care

Workers in the Apis genus, who exhibit distinct morphological differences from their queen, are restricted to gaining only indirect fitness by helping to rear related offspring.
The colony’s inclusive fitness is therefore a function of its reproductive output, with total offspring production depending on the quality of the queen and her mate, and on the cooperation of the workers. Much of the colony’s social life therefore revolves around brood care [4]. The fitness benefits that honey bees inside the colony receive from cooperative brood care mean that they continue to work in this system, despite seemingly going against the typical drive for reproduction that most species, including ourselves, experience. The division of labour employed by this species enhances each individual’s own fitness, as it allows more efficient conversion of resources into reproductive capacity [7, pp. 368-373].

Overlap of generations as a causal factor for this behaviour

The high degree of social complexity observed in these colonies can be explained by the close genetic relatedness across several overlapping generations. The social behaviour observed in honey bees is facilitated by a unique system of genetics known as haplodiploidy: a system in which females develop from fertilised diploid eggs, and males from unfertilised haploid eggs. The consequence of this is that the male passes on his entire genome to his offspring, while the queen passes on 50% of hers, meaning that full sisters are 75% related to one another. This genetic system creates an unusual genetic asymmetry in which full sisters are more closely related to each other than a mother is to her own daughters [5].

Figure 1. Insect sociality among a range of species, ranging from solitary insects (left) to completely eusocial (right). Indirect fitness becomes increasingly important as complexity increases, because it reflects the fitness of the entire colony (Vijendravarma et al., 2017).

As a result, the dynamics of the whole colony change due to the increased levels of relatedness. Essentially, the fitness of an individual bee is based on the combined effects that its actions have on other individuals, weighted by their relatedness to that individual. Thus selection acts to maximise the inclusive fitness of the entire colony, albeit through a trade-off between a bee expending energy on its own reproduction and investing in helping its relatives. The overall effect of this behaviour is to increase the abundance of beneficial alleles present in the colony, which is directly beneficial for all individuals due to their high degree of genetic relatedness. This cooperative behaviour is known as altruism, and it can evolve between related individuals over time and space. In altruism, a gene directs aid at other individuals who are likely to bear copies of the same gene, despite reducing the number of offspring of its bearer [1]. From an evolutionary standpoint, we can understand that honey bee workers who rear their siblings are able to achieve higher inclusive fitness than they would by reproducing themselves [5]. Using Hamilton’s rule, we can understand how evolution selected for the loss of reproductive organs in worker bees, and instead favoured one reproductive queen. This rule is a theorem that acts as a foundation for predicting whether social behaviour evolves under given combinations of relatedness, cost, and benefit [6].
Hamilton's rule gives an equation showing when an organism should sacrifice its own reproduction in order to help relatives. It is given as rB > C, where r is the degree of relatedness between the two individuals, B is the benefit to the recipient of the behaviour, and C is the cost of the behaviour to the individual giving the aid. B and C can be viewed as lifetime changes in direct fitness [1]. Whether an organism should make this sacrifice or not depends on the value of r. A gene for social behaviour is favoured by selection if rB − C exceeds zero [1]. Honey bees have become one of the most successful insects on Earth, establishing themselves across an immense range, and their unique genetic system evidently contributes greatly to this success. The honey bee colony arose through major evolutionary transitions that depended on cooperating entities finding a situation of inclusive fitness that kept them together for their own fitness benefit [1]. Today, these insects provide us with valuable resources and ecosystem services, such as being key pollinators of flora all over the planet. Perhaps next time you see a honey bee, think about its hidden genes and how remarkable they are, as they allow for the bees' widespread success and hence, ours too.
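To make Hamilton's rule concrete, here is a minimal numerical sketch (not from the article itself; the benefit and cost figures are purely hypothetical, while the relatedness values follow from the haplodiploid genetics described above).

```python
# A minimal sketch of Hamilton's rule (rB > C) applied to honey bee relatedness.
# The benefit and cost figures below are invented purely for illustration.

def helping_is_favoured(r: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: altruism is favoured when r*B - C > 0."""
    return r * benefit - cost > 0

# Under haplodiploidy a worker is related to her full sisters by r = 0.75,
# but would only be related to her own offspring by r = 0.5.
R_SISTER = 0.75
R_OFFSPRING = 0.5

benefit = 10.0   # extra siblings reared thanks to the worker's help (hypothetical)
cost = 6.0       # offspring the worker forgoes by not reproducing (hypothetical)

print(helping_is_favoured(R_SISTER, benefit, cost))     # True: 7.5 - 6 > 0
print(helping_is_favoured(R_OFFSPRING, benefit, cost))  # False: 5.0 - 6 < 0
```

Under these invented numbers, helping to rear full sisters (r = 0.75) satisfies rB − C > 0 while the equivalent investment at r = 0.5 does not, which mirrors the logic the article uses to explain worker sterility.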
