
Search Results


  • Are AIs Smarter than a 5th Grader

By Anne Newmarch

The Math Word Problem is a natural language processing (NLP) challenge that has seen exciting progress within the last few years. It requires a machine learning model to read a contextualised math problem, identify the relevant information, and produce an answer that would require multi-step reasoning from a human [1]. Most models have been trained on primary school mathematics problems, with the aim of scaling to higher levels of complexity once high accuracy is achieved. This article will discuss some recent academic papers addressing the Math Word Problem. While it is easy to conclude that one paper is better than another based on accuracy rates, it is important to note that such accuracy rates may be contingent on the dataset on which the model was trained. If an independent set of primary-school-level questions were given to these models, it is unclear which would outperform the others. Ultimately, it may depend more on the nature of the questions than on the models themselves.

One approach to solving the Math Word Problem is to generate an expression tree, from which computing the final answer is rudimentary. Reading the tree from the bottom up reveals the multi-step reasoning needed to solve the problem. A paper from Singapore Management University in 2020 [1] generated this type of solution with their novel Graph2Tree model. Their fully supervised approach processed the input and extracted the quantities and related words. This information was then projected onto a graph that captures the relationships between the concepts in the question. A graph convolution network (GCN) and a tree-based decoder were then used to produce an expression tree. The model was tested against questions from the MAWPS [2] and Math23K [3] datasets, achieving one of the highest accuracy scores for this problem at 77.4% on Math23K.

Figure: An example of two expression trees that both evaluate to a correct answer for a Math Word Problem (Hong et al. [4]).

A more recent approach by Hong et al. last year [4] also generates a solution tree, but instead uses weakly supervised learning. Fully supervised learning uses the correct answer and solution tree as the target of the learning algorithm. They argue that this restricts the variety of solutions, as only one way of reaching the correct answer is produced. There are many distinct approaches to solving these problems, so the study trained only on the correct answer, not on a particular tree. This ultimately allowed the model to suggest a range of correct ways to arrive at the same solution. Furthermore, Hong et al. took an interesting new approach by programming the model to fix its own mistakes, trying out different values in an incorrect expression tree to find the correct answer. This was intended to more closely imitate the way humans learn, and was coined by the researchers as ‘learning by fixing’ [4]. If a correct solution was reached, it was then committed to memory to encourage more diverse solutions. The researchers showed that their model generated a range of different solutions to the same problem at 45-60% accuracy on Math23K.
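To make the expression-tree idea concrete, here is a minimal sketch in TypeScript (with invented names; this is not the Graph2Tree or learning-by-fixing code) showing how such a tree encodes multi-step reasoning and how simply it can be evaluated once a model has produced it:

```typescript
// A solution to a math word problem, expressed as a binary expression tree.
// Leaves hold quantities extracted from the problem text; internal nodes hold operators.
type ExprNode =
  | { kind: "num"; value: number }
  | { kind: "op"; op: "+" | "-" | "*" | "/"; left: ExprNode; right: ExprNode };

// Evaluate the tree bottom-up: children are computed before their parent,
// which is exactly the multi-step reasoning the tree is meant to capture.
function evaluate(node: ExprNode): number {
  if (node.kind === "num") return node.value;
  const l = evaluate(node.left);
  const r = evaluate(node.right);
  if (node.op === "+") return l + r;
  if (node.op === "-") return l - r;
  if (node.op === "*") return l * r;
  return l / r;
}

// "A pen costs $3. How much do 4 pens and a $2 eraser cost?"  ->  (3 * 4) + 2
const tree: ExprNode = {
  kind: "op",
  op: "+",
  left: { kind: "op", op: "*", left: { kind: "num", value: 3 }, right: { kind: "num", value: 4 } },
  right: { kind: "num", value: 2 },
};
console.log(evaluate(tree)); // 14
```

Graph2Tree and the learning-by-fixing solver differ in how they arrive at such a tree, but once one exists, producing the final answer is exactly this kind of mechanical bottom-up traversal.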
A different approach to solving the Math Word Problem is to use a verifier to improve accuracy, as demonstrated by a paper from OpenAI headed by Cobbe [5]. The researchers showed that a verifier, given a range of proposed generated solutions, could accurately estimate the probability that each proposed solution was correct. The solution with the greatest probability of being correct was then chosen for output.
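As a rough illustration of that selection step (hypothetical types and function names, not OpenAI's implementation), verification-based decoding amounts to sampling several candidate solutions and keeping the one the verifier scores as most likely to be correct:

```typescript
// A candidate solution produced by the generator.
interface Candidate {
  solutionText: string;
  answer: number;
}

// Stand-ins for the fine-tuned generator and the trained verifier. In the paper these
// are large language models; here they are just signatures so the re-ranking logic is visible.
type Generator = (problem: string, numSamples: number) => Candidate[];
type Verifier = (problem: string, candidate: Candidate) => number; // estimated P(correct), in [0, 1]

// Sample many solutions, score each one, and return the candidate
// with the highest estimated probability of being correct.
// Assumes the generator returns at least one candidate.
function solveWithVerifier(
  problem: string,
  generate: Generator,
  score: Verifier,
  numSamples = 100
): Candidate {
  const candidates = generate(problem, numSamples);
  let best = candidates[0];
  let bestScore = score(problem, best);
  for (const candidate of candidates.slice(1)) {
    const s = score(problem, candidate);
    if (s > bestScore) {
      best = candidate;
      bestScore = s;
    }
  }
  return best;
}
```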
Cobbe et al. found that this approach ultimately increased the accuracy of a fine-tuned model by as much as 20%.

Figure: A comparison between fine-tuning and verification on 6B and 175B models. Given a large enough training set, the test solve rate of the verification model surpasses that of the fine-tuned one (Cobbe et al. [5]).

However, all this research appears to have been blown out of the water by a recent paper released this year in a joint effort from MIT, Columbia University, Harvard University, and the University of Waterloo [6]. Drori et al., in their self-professed ‘milestone’ paper [6], have produced a transformer model capable of solving math word problems at a university level with perfect accuracy. The model can also generate new university-level problems well enough that students cannot reliably tell whether a problem was machine-generated or human-written. As the model requires no additional programming when switching between courses, the researchers state that it could be applied to any STEM course. There is, however, a caveat to this, and it is not a small one: Drori et al. essentially solved a slightly different problem than the ones previously discussed, as their model requires additional contextual information alongside the input text. They attribute their success, and the failures of previous research, to this fact.

The model [6] works as follows: an input question is tidied and given additional context, such as the mathematics topic and the relevant programming language and libraries. The researchers report that the majority of the questions required minor or no modification. A portion of the modifications could be done automatically, while the rest is inferred to have been done manually. The transformed question is then fed to the OpenAI Codex transformer [7], a highly successful machine learning model that takes in text and generates corresponding code. The produced program is then run to obtain the final answer. The researchers argue that providing this additional key context is fair, as the students who take these courses rely on implicit knowledge for their answers. Additionally, further research may improve this model to fully automate question modification.

This recent development has not been without backlash. In a paper published only 20 days after Drori et al., Biderman and Raff [8] take the stance that this type of machine learning research ‘has not engaged with the issues and implications of real-world application of their results’ [8]. They argue that machine learning models like that of Drori et al. will be abused by students cheating, especially given that the results are often not flagged by plagiarism detection tools. They are correct that students will use these models to cheat if given mainstream access. However, this is not a new situation: a primary school student cheating on their times-tables homework with a calculator is not so functionally different from a university student cheating on their calculus assignment with this model [6]. The result is the same: neither student is likely to perform well under test conditions. For online exams, a tool such as this disappears into the haze of students’ many methods to cheat.

While Drori et al. [6] have found success, this is not the end of the road for the previous research. The Math Word Problem is not only about solving the questions themselves; it is about learning how we can improve our machine learning techniques to facilitate reasoning. If we believe that there are problems we want to solve that require reasoning that cannot be directly programmed, then developing research into graphical representations of relationships and learning by fixing could be crucial to success. All progress and the effort researchers put into these methods are valuable.

Figure: Each step of the transformer pipeline, from the original question to its modified version, the program generated by Codex, and the output given as an answer (Drori et al. [6]).

References

[1] J. Zhang et al., “Graph-to-tree learning to solve math word problems,” in Proc. 58th Annu. Meeting of the Association for Computational Linguistics, Jul. 2020, pp. 3928-3937. [Online]. Available: https://ink.library.smu.edu.sg/sis_research/5273
[2] R. Koncel-Kedziorski, S. Roy, A. Amini, N. Kushman, and H. Hajishirzi, “MAWPS: A math word problem repository,” in Proc. NAACL-HLT 2016, pp. 1152-1157. [Online]. Available: https://aclanthology.org/N16-1136.pdf
[3] Y. Wang, X. Liu, and S. Shi, “Deep neural solver for math word problems,” in Proc. 2017 Conf. Empirical Methods in Natural Language Processing, Sep. 2017, pp. 845-854, doi: 10.18653/v1/D17-1088.
[4] Y. Hong, Q. Li, D. Ciao, S. Huang, and S. Zhu, “Learning by fixing: Solving math word problems with weak supervision,” in Proc. 35th AAAI Conf. Artificial Intelligence (AAAI-21). [Online]. Available: https://www.aaai.org/AAAI21Papers/AAAI-5790.HongY.pdf
[5] Cobbe et al., “Training verifiers to solve math word problems,” arXiv preprint arXiv:2110.14168, 2021. [Online]. Available: https://arxiv.org/pdf/2110.14168.pdf
[6] Drori et al., “A neural network solves and generates mathematics problems by program synthesis: Calculus, differential equations, linear algebra, and more,” arXiv preprint arXiv:2112.15594, Dec. 2021. [Online]. Available: https://arxiv.org/pdf/2112.15594.pdf
[7] M. Chen et al., “Evaluating large language models trained on code,” arXiv preprint arXiv:2107.03374, Jul. 2021. [Online]. Available: https://arxiv.org/abs/2107.03374
[8] S. Biderman and E. Raff, “Neural language models are effective plagiarists,” arXiv preprint arXiv:2201.07406, Jan. 2022. [Online]. Available: https://arxiv.org/pdf/2201.07406.pdf

  • Assessing The Quality of Retinotopic Maps Derived From Functional Connectivity

By Gene Tang

To many of us, visual perception seems rather effortless, but in reality, our brain processes a plethora of visual phenomena constantly and endlessly. Our perception starts with our neural machinery transducing electromagnetic energy into action potentials. The distant world we see is translated into a proximal stimulus that impinges on our retina, and that information is subsequently mapped onto our brain. The mapping of retinal visual input onto neurons is known as retinotopy. This field of study has opened up opportunities to understand how visual information is organised in the brain [1]. The simple notion of retinotopic mapping is that adjacent locations in visual space are represented by adjacent neurons in the cortex. That said, the representation is not exactly a mirror image. Our visual image is represented contralaterally on the visual cortex, with the left side of the visual field projecting onto the right hemisphere, and vice versa. The upper visual field is also represented in the lower part of the visual cortex, and vice versa.

Functional magnetic resonance imaging (fMRI) provides us with a channel to observe this cortical organisation of the visual world. The introduction of an fMRI method known as population receptive field (pRF) estimation gave us a practical technique for visual field mapping [2-3]. The pRF approach maps visual topology by determining the brain voxels that produce the largest response to a particular position in the visual field [2]. Here, we won’t go into much detail about conventional pRF mapping, but please do keep an eye on our next edition. It would be valid to say that the pRF method proposed by Dumoulin & Wandell in 2008 [2] has set a gold standard, or ground truth, in human retinotopic mapping. pRF mapping has been popularised because it has proven very successful in several ways, ranging from investigating the organisation of the visual cortex to examining plasticity and cortical reorganisation in patients [4-5]. However, despite the robustness of the pRF, there are still some limitations. Concerns may lie with possible confounding variables that manifest during the long scanning session. As the subjects are required to fixate on a single spot while watching monotonous stimuli (such as a checkerboard), factors such as the patient's medical condition, comfort, and exhaustion can all affect their ability to properly complete the task, thus affecting the results.

Figure 1: Visual field maps derived from the pRF and CF methods. Polar angle maps (A) and eccentricity maps (B) of one subject on a spherical model of the two cortical hemispheres, comparing the visual field maps derived from the pRF (left), CFa (middle), and CFb (right) analyses.

Fortunately, a novel technique called connective field (CF) modeling [6] has provided us with a promising method for visual field mapping and analysis, with fewer constraints than ever before. Using the same set of data, rather than relating the largest brain response directly to a location in the visual field, CF modeling quantifies how responses in different brain regions coincide with responses in the primary visual cortex, also known as V1 [6]. Using a template of how V1 represents the visual field, we can then translate the peak correlation in V1 into a prediction of the visual field location that maps onto a given location elsewhere in the brain.
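As a drastically simplified sketch of that idea (TypeScript with made-up names, and a simple peak correlation standing in for the full connective field model fit), the core lookup could be written like this: for a voxel outside V1, find the V1 voxel whose activity correlates best with it, then inherit that V1 voxel's visual field position from the template.

```typescript
// One V1 voxel: its fMRI time series plus the visual field position
// it represents according to a retinotopic template.
interface V1Voxel {
  timeSeries: number[];
  position: { eccentricity: number; polarAngle: number };
}

// Pearson correlation between two equally long time series.
function correlation(a: number[], b: number[]): number {
  const n = a.length;
  const meanA = a.reduce((sum, x) => sum + x, 0) / n;
  const meanB = b.reduce((sum, x) => sum + x, 0) / n;
  let cov = 0, varA = 0, varB = 0;
  for (let i = 0; i < n; i++) {
    cov += (a[i] - meanA) * (b[i] - meanB);
    varA += (a[i] - meanA) ** 2;
    varB += (b[i] - meanB) ** 2;
  }
  return cov / Math.sqrt(varA * varB);
}

// Assign a visual field position to a target voxel by finding the V1 voxel whose
// time series correlates most strongly with it (the "peak correlation" in V1).
function connectiveFieldEstimate(
  targetTimeSeries: number[],
  v1: V1Voxel[]
): { eccentricity: number; polarAngle: number } {
  let best = v1[0];
  let bestR = correlation(targetTimeSeries, best.timeSeries);
  for (const voxel of v1.slice(1)) {
    const r = correlation(targetTimeSeries, voxel.timeSeries);
    if (r > bestR) {
      best = voxel;
      bestR = r;
    }
  }
  return best.position;
}
```

The actual CF model fits a smooth connective field on the cortical surface rather than picking a single peak voxel, but the intuition is the same: the stimulus never needs to be known, only the covariation with V1.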
As the response is now identified in terms of inter-areal activations, rather than the correspondence between positions in the visual field and positions on the visual cortex, this method can theoretically liberate us from the previous requirements of steady fixation and controlled stimuli. Instead, subjects can freely view movies, move their eyes naturally, or even close their eyes [7]. The activity measured inside their V1 should thus yield adequate information for retinotopic mapping.

During the summer, Assoc. Prof. Sam Schwarzkopf and I conducted research to assess the quality of the retinotopic maps derived from the CF method. We used fMRI data previously collected from 25 subjects by Dr. Catherine Morgan and Prof. Steven Dakin. First, we analysed the data using the conventional pRF method. Then, we delineated the visual regions in both hemispheres of all the subjects using the SamSrf software (visual field map delineation can simply be understood as the tracing of visual area borders based on the fMRI renders). The delineated regions were later also applied to the maps generated by the CF method. After that, we compared the pRF and CF maps side by side and qualitatively analysed the similarities and differences between the two methods. After this first qualitative analysis, we recognised a few constraints with our original CF map, which was derived from a group average template (hereafter referred to as CFa), so we included another CF template based on a probabilistic prediction from cortical anatomy alone (hereafter referred to as CFb) [8]. Ultimately, we carried out statistical inference to investigate the differences between these three maps (pRF, CFa, and CFb) in terms of their coverage (the proportion of vertices in the occipital lobe that passed an R² > 0.01 threshold), their polar angle and eccentricity correlations, and their similarity (quantified by the mean Euclidean distance between pRFs in the maps).

Our Results

We began with the qualitative analysis of the data. Initially, we compared the CFa map and pRF map side by side. Figure 1 shows example maps of one subject. Here, we summarise our observations of the maps derived from the two methods. Firstly, the CFa map appears to have greater coverage than the pRF map. This is particularly true in higher visual areas such as V3B, LO, and MT. However, CF maps are generally cruder than pRF maps, especially as the pRF method represents polar angle and eccentricity with smoother gradients. For polar angle, this means that maps based on CF (with either template) would be harder to delineate. Beyond being overall cruder, the borders of V2 and V3 in the CF maps appear clearer in the dorsal (lower) areas but less noticeable in the ventral (upper) areas. Borders of other areas such as V3A, V3B, and V4 also appear much weaker in the CF maps. For eccentricity maps, CFa appears to show a reversal around the medial borders, where the peripheral edge should be, with maximum eccentricity lower than in the pRF maps. This is a statistical artefact of using a group average: because the template is based on the group average, the mapping of eccentricities beyond the group average tends to be restricted. The probabilistic template used for the CFb maps can overcome this issue.
Figure 2: Statistical analyses. Statistical tests were run to assess differences between the analyses for all vertices that passed the threshold of R² > 0.01, and separately for each visual area, V1-V3B. (A) Friedman’s analysis of variance testing the difference in coverage between the three analyses (pRF, CFa, and CFb). (B) The Euclidean distance between the pRFs in the pRF map and the CFa and CFb maps, respectively. Plots comparing the difference between the z-converted eccentricity (C) and polar angle (D) correlations between the pRF map and the CFa and CFb maps, respectively. *** p<0.001, ** p<0.01, * p<0.05.

Next, we quantified the differences between the groups in terms of coverage, polar angle correlation, eccentricity correlation, and Euclidean distance. We wanted to be sure that any differences noticed in the qualitative analysis weren't due to our own subjective judgement. The results are shown in Figure 2.

To assess the coverage differences between the three groups (Figure 2A), we conducted a non-parametric ANOVA. We found that coverage for pRF maps was significantly lower than for CF maps. This difference was observed across all vertices (the data points across the brain template) that passed the set threshold, as well as separately in all individual regions. To put it another way, we found that there are differences in map coverage between the two methods, with pRF coverage being significantly lower than CF coverage. These results agree very well with the qualitative analysis we conducted.

Next, we compared the CFa and CFb maps to the pRF map; this assumes that the pRF map constitutes something of a ground truth for the best possible map that can be obtained from this data. Comparing the correlation between the pRF polar angle estimates (Figure 2D) and those in the two CF maps using a paired t-test showed a significantly stronger correlation for CFa across all vertices and in regions such as V1, V2, V3, and V3B. The results indicate that CFa polar angle maps correlate better with the pRF map than the CFb maps do. Despite the CFb template being smoother, CFb polar angle maps lack detail and are very crude. This means that, on several occasions, polar reversals displayed on CFb are represented by large patches of polar angle reversal lacking a precise location. Meanwhile, the CFa maps are also cruder than the pRF map, but the locations of their polar reversals resemble the conventional pRF map better.

In contrast, we found a significantly stronger correlation between pRF eccentricity (Figure 2C) and CFb eccentricity across all vertices, and in individual regions such as V1, V2, and V3. This means that the CFb template correlates more closely with the pRF maps when it comes to eccentricity mapping. The better correlation found for CFb may be attributed to that template covering the full 90 degrees of eccentricity [8], while the CFa template is constrained by the 10-degree limit of the stimulation screen in the scanner. Since CFa is based on a group average, it contains a statistical artefact manifesting as an eccentricity reversal. As CFb does not impose the same constraints, its eccentricity maps no longer display reversals at the peripheral edge of the visual cortical regions, which is more consistent with the conventional pRF eccentricity mapping.

We further investigated map similarity, quantified by the mean Euclidean distance between the positions of pRFs and CFs in each map (Figure 2B). Euclidean distances were significantly larger for CFb maps across all vertices and in all individual regions. This indicates that CFb maps captured pRF positions less accurately. That is to say, the CFa maps are overall more similar to the pRF map than the CFb maps.
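For readers who prefer code to prose, the two simplest of these measures can be written down directly (a sketch with assumed data structures, not our analysis pipeline): coverage is the fraction of vertices whose model fit passes the R² threshold, and map similarity is the mean Euclidean distance between corresponding pRF positions in two maps.

```typescript
// Per-vertex estimates for one map: the goodness of fit (R^2) and the
// estimated visual field position (x, y) of that vertex's pRF.
interface VertexEstimate {
  r2: number;
  x: number;
  y: number;
}

// Coverage: the proportion of vertices whose fit passes the threshold (R^2 > 0.01 here).
function coverage(map: VertexEstimate[], threshold = 0.01): number {
  return map.filter(v => v.r2 > threshold).length / map.length;
}

// Map similarity: the mean Euclidean distance between corresponding pRF positions
// in two maps defined on the same vertices. Smaller values mean more similar maps.
function meanEuclideanDistance(a: VertexEstimate[], b: VertexEstimate[]): number {
  let total = 0;
  for (let i = 0; i < a.length; i++) {
    total += Math.hypot(a[i].x - b[i].x, a[i].y - b[i].y);
  }
  return total / a.length;
}
```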
Future Implications

The current research suggests that this new method of retinotopic mapping can open a window of opportunities. It can enable testing and research we couldn't previously do. Conventional visual field mapping studies are prone to several confounds. The prolonged fixation they require can pose problems for studying a wide range of the population. The ability to move the eyes freely is crucial for those with visual disorders such as amblyopia [9] and nystagmus (involuntary repetitive movement of the eyes), and for those with other ocular or neurological pathologies. Thus, the new method may provide robustness in the presence of eye movements or a blurred and obstructed visual image [9-10]. Taken together, the method has potential for revealing new insights about visual cortical organisation in various pathologies, as well as in healthy participants at the extremes of the human lifespan.

Another promising implication of this research is mapping human peripheral vision. Due to several factors, such as fixation and technological limitations, retinotopic mapping of the human periphery has thus far been considered difficult, or even impossible. The stimulus typically used for conventional retinotopic mapping is often very small and presented near the centre of gaze; therefore, it cannot produce a response in the periphery. The CF method permits movie watching and free eye movements, allowing subjects to explore their entire visual field rather than fixating on a single spot [7]. With free eye movements, we can now ensure that the subject's gaze covers the whole visual field, including the far periphery. Moreover, it enables researchers to use more engaging and interesting stimuli than those used in conventional pRF mapping experiments, helping to enhance participants’ motivation and improve data quality.

Figure 3: Polar angle maps. Side-by-side comparison between the right hemisphere pRF (right) and CFb (left) polar angle maps.

Retinotopic mapping with connective field modeling is relatively new, and there are not yet many studies on the topic. This is the first comprehensive comparison assessing the quality of retinotopic maps generated by the connective field modeling method. Our results show that there is still substantial room for improvement for this cutting-edge methodology, and they point towards methodological improvements. We have already begun work on a new approach: while the analysis shown here used the peak correlation with V1 to determine the predicted pRF coordinates, our new approach does the opposite. It uses the predicted pRF coordinates from the template to project the correlations into visual space, and then fits a pRF to those correlations in visual space. In addition to estimating pRF position, this approach therefore also estimates pRF size. Moreover, it frees the CF analysis from the constraint that it must be conducted separately in each cortical hemisphere. Once we test these improvements, the next step is to quantify the hypothetical robustness of this new CF map in the presence of eye movements.

References

B. A. Wandell, S. O. Dumoulin, and A. A.
Brewer, “Visual field maps in human cortex.,” vol. 56, no. 2, pp. 366–383, 2007, doi: 10.1016/j.neuron.2007.10.012. https://pubmed.ncbi.nlm.nih.gov/23684878/ S. O. Dumoulin and B. A. Wandell, “Population receptive field estimates in human visual cortex,” 2007, doi: 10.1016/j.neuroimage.2007.09.034. [Online]. Available: www.elsevier.com/locate/ynimg S. Lee, A. Papanikolaou, N. K. Logothetis, S. M. Smirnakis, and G. A. Keliris, “A new method for estimating population receptive field topography in visual cortex,” vol. 81, pp. 144–157, Nov. 2013, doi: 10.1016/J.NEUROIMAGE.2013.05.026. [Online]. Available: https://pubmed.ncbi.nlm.nih.gov/23684878/ M. Farahbakhsh et al., “A demonstration of cone function plasticity after gene therapy in achromatopsia,” p. 2020.12.16.20246710, Oct. 2021, doi: 10.1101/2020.12.16.20246710. [Online]. Available: https://www.medrxiv.org/content/10.1101/2020.12.16.20246710v2 V. K. Tailor, D. S. Schwarzkopf, and A. H. Dahlmann-Noor, “Neuroplasticity and amblyopia: Vision at the balance point,” vol. 30, no. 1, pp. 74–83, 2017, doi: 10.1097/WCO.0000000000000413. K. V. Haak et al., “Connective field modeling,” vol. 66, pp. 376–384, Feb. 2013, doi: 10.1016/J.NEUROIMAGE.2012.10.037. [Online]. Available: https://pubmed.ncbi.nlm.nih.gov/23110879/ T. Knapen, “Topographic connectivity reveals task-dependent retinotopic processing throughout the human brain,” vol. 118, no. 2, Jan. 2021, doi: 10.1073/PNAS.2017032118/-/DCSUPPLEMENTAL. [Online]. Available: https://www.pnas.org/content/118/2/e2017032118 N. C. Benson, O. H. Butt, R. Datta, P. D. Radoeva, D. H. Brainard, and G. K. Aguirre, “The retinotopic organization of striate cortex is well predicted by surface topology,” vol. 22, no. 21, pp. 2081–2085, Nov. 2012, doi: 10.1016/J.CUB.2012.09.014. [Online]. Available: https://pubmed.ncbi.nlm.nih.gov/23041195/ S. Clavagnier, S. O. Dumoulin, and R. F. Hess, “Is the Cortical Deficit in Amblyopia Due to Reduced Cortical Magnification, Loss of Neural Resolution, or Neural Disorganization?,” vol. 35, no. 44, p. 14740, Nov. 2015, doi: 10.1523/JNEUROSCI.1101-15.2015. [Online]. Available: /pmc/articles/PMC6605231/ B. Barton and A. A. Brewer, “fMRI of the rod scotoma elucidates cortical rod pathways and implications for lesion measurements,” vol. 112, no. 16, pp. 5201–5206, Apr. 2015, doi: 10.1073/PNAS.1423673112/-/DCSUPPLEMENTAL. [Online]. Available: https://www.pnas.org/content/112/16/5201

  • What makes the shark form optimal?

By Jasmine Gunton

It is commonly known that sharks have existed for a long time on Earth. They are near the top of the ocean food chain and have few predators. However, few people know that sharks appeared in the fossil record before trees [1],[2]. Sharks are thought to have existed before the formation of Saturn's rings [3]. For context, the first recognisable shark fossils appeared in the fossil record 450 million years ago, during the Ordovician period [1]. The first true tree genus, Archaeopteris, appeared 370 million years ago, during the Late Devonian [2]. Since the first sharks appeared, an inconceivable number of marine species have emerged and then died out. Some notable extinct marine predators include Thalassomedon and Mosasaurus. These were giant animals, much larger than many present-day species of shark [4],[5],[6]. So what makes the shark form optimal for surviving hundreds of millions of years of natural disasters and mass extinctions?

Photo by Colton Jones on Unsplash

Humans frequently misunderstand sharks. In film and media, sharks are commonly portrayed as indestructible killing machines, consuming every creature they encounter. Although sharks are highly specialised predators, they do have several physical features that can act as a hindrance. For example, unlike other fish, sharks cannot swim backwards due to the structure of their respiratory system [7]. In fact, certain species of shark such as the great white suffocate and die if they don’t keep moving [8]. Sharks have likely received their formidable reputation because of the occurrence of several shark attacks on humans. However, shark attacks usually occur because the shark has mistaken the human for a seal [9]. The majority of shark species prefer to hunt marine animals rather than take on a scary hairless primate. This relative indifference towards humans means that marine biologists are able to study sharks and their unique adaptations.

Often exploited by shark scientists is the behavioural phenomenon known as ‘tonic immobility’. Essentially, if one were to flip a shark so that it was floating on its back, it would enter a trance-like state similar to hypnosis. The shark would become temporarily paralysed until it managed to flip back to a normal swimming position [10]. If a large animal were able to move a shark in such a way, then the shark would be powerless to stop its attack. Dolphin species such as the orca (yes, orcas are dolphins, not whales) use this technique to their advantage [11]. The relationship between great white sharks and orcas is especially interesting. Orcas are thought to hunt and kill great white sharks, usually consuming only their livers [12]. This behaviour is likely explained by the fact that shark livers are very fatty and can contain up to 270 kilograms of meat [13]. Some studies suggest that the presence of orcas in an area drives the population of great whites away [14]. The evident domination of orcas over sharks suggests that orcas must also have existed for a very long time. However, the oceanic dolphin family has only existed for around 11 million years [15]. One tends to wonder how sharks have managed to outlive so many of their natural predators.

Since their first appearance in the Ordovician, sharks have survived five mass extinctions [16]. After each mass extinction event, the shark family diversified, filling several ecological niches [17]. The diversity of ecological niches can still be seen in modern sharks.
For example, the cookiecutter shark (Isistius brasiliensis) is an ectoparasite that feeds on the tissue of large marine animals [18]. The complete opposite to the cookiecutter shark is the whale shark (Rhincodon typus), which is 33 times larger [19],[20]. However, the whale shark mainly feeds on zooplankton through a filter, similar to an actual whale [21]. The great biodiversity and range of physical adaptations seen in sharks can explain why they have survived for so long. Another reason for sharks’ longevity can be explained by their diet. Sharks are generalist predators, meaning they have a wide range of food sources [22]. Therefore, if one prey species disappears, the shark will easily be able to find other food. Furthermore, it cannot be ignored that sharks are excellent predators. One adaptation that separates sharks from other fish species is their cartilaginous skeleton. The light cartilage tissue enables sharks to expend little energy swimming long distances [23]. Moreover, the shape of the shark and its specialised scales allow high-speed movement through the water. Indeed, the fastest known species, the mako shark (Isurus oxyrinchus), can swim up to 70 kilometres per hour [24]. This is 1.5 times higher than the fastest recorded human running speed on land [25]. Fossil evidence suggests that early sharks maintained the same tapered form as modern sharks [26]. This makes the shark form one of the most efficient for surviving in the ocean. Photo by Gerald Schömbs on Unsplash Just when you thought the shark couldn't get any cooler, it turns out the shark has a specialised organ that can sense electromagnetic fields. Through this adaptation, there arise two distinct benefits. Firstly, the shark can efficiently navigate long distances through the expansive ocean [27]. Secondly, sharks are able to sense the electromagnetic fields of their prey and therefore locate camouflaged benthic animals [28]. However, this is not the only sense that sharks rely on to detect prey. Like other fish, sharks possess a sensory organ known as a ‘lateral line’ across the middle of their torsos. This lateral line allows the shark to sense vibrations in the water created by small animals [29]. Essentially, if you are a fish in the ocean and you encounter a shark, it's game over. Unfortunately, it is not other marine animals that are the sharks’ greatest predators. Instead, it is humans to blame for the steadily declining global shark population. This is mainly due to the highly popular shark fin fishing industry [30]. Consequently, many shark species are now considered endangered, and some are on the verge of extinction [31]. This issue is especially important to resolve as sharks are extremely important to their ecosystems. One of their crucial ecological roles is controlling potentially destructive fish populations [32]. If these fish populations were not predated on, they could take over certain areas and devastate the other local marine species. Sharks further influence the spatial distribution of their prey through intimidation tactics [32]. In conclusion, sharks are both very important to the marine ecosystem and a particularly hardy species. The biological and ecological importance of sharks should be more heavily prioritised when managing the global fishing industry. References [1] G. W. Litman, “Sharks and the Origins of Vertebrate Immunity,” Scientific American, vol. 275, no. 5, pp. 67-71, Nov. 1996. [Online]. Available: https://www.jstor.org/stable/24993448 [2] B. Meyer-Berthaud, S. E. 
Scheckler, and J. Wendt, “Archaeopteris is the earliest known modern tree,” Nature, vol. 398, pp. 700-701, April. 1999. [Online]. Available: https://doi.org/10.1038/19516 [3] L. Iess et al. “Measurement and implications of Saturn’s gravity field and ring mass,” Science, vol. 364, no. 6445, pg. 2965, Jan. 2019, doi: 10.1126/science.aat2965 [4] J. P. O’Gorman, “A Small Body Sized Non-Aristonectine Elasmosaurid (Sauropterygia, Plesiosauria) from the Late Cretaceous of Patagonia with Comments on the Relationships of the Patagonian and Antarctic Elasmosaurids," Ameghiniana, vol. 53, no. 3, pp. 245-268, June. 2016. [Online]. Available: https://doi.org/10.5710/AMGH.29.11.2015.2928 [5] D. V. Grigoriev, "Giant Mosasaurus hoffmanni (Squamata, Mosasauridae) from the Late Cretaceous (Maastrichtian) of Penza, Russia" in Proceedings of the Zoological Institute RAS, 2014, pp. 148-167. [Online]. Available: https://www.zin.ru/journals/trudyzin/doc/vol_318_2/TZ_318_2_Grigoriev.pdf [6] J. E. Randall, “Size of the Great White Shark (Carcharodon)”, Science, vol. 181, no. 4096, pp. 169-170, Jul. 1973, doi: 10.1126/science.181.4095.169 [7] E. K. Ritter and M. Levine, “Bite Motivation of Sharks Reflected by the Wound Structure on Humans,” Forensic Medicine and Pathology, vol. 26, no. 2, pp. 136-140, June. 2005, doi: 10.1097/01.paf.0000164231.99750.2b [8] M. L. Kelly, S. P. Collin, J. M. Hemmi, and J. A. Lesku, “ Evidence for Sleep in Sharks and Rays: Behavioural, Physiological, and Evolutionary Considerations,” Brain, Behaviour, and Evolution, vol. 94, no. 1-4, pp. 37-50, Jan. 2020. [Online]. Available: https://doi.org/10.1159/000504123 [9] L. A. Ryan et al. “A shark's eye view: testing the ‘mistaken identity theory’ behind shark bites on humans,” Journal of the Royal Society Interface, vol. 18, no. 183, pg. 20210533, Oct. 2021. [Online]. Available: https://doi.org/10.1098/rsif.2021.0533 [10] P. S. Davie, C. E. Franklin, and G. C. Grigg, “Blood pressure and heart rate during tonic immobility in the black tipped reef shark, Carcharhinus melanoptera,” Fish Physiology and Biochemistry, vol. 12, no. 2, pp. 95-100, Feb. 1993. [Online]. Available: https://link.springer.com/content/pdf/10.1007/BF00004374.pdf [11] P. Pyle, M. J. Schramm, C. Keiper, and S. D. Anderson, “Predation on a white shark (Carcharodon carcharias) by a killer whale (Orcinus orca) and a possible case of competitive displacement,” Marine Mammal Science, vol. 15, no. 2, pp. 563-568, April. 1999. [Online]. Available: https://doi.org/10.1111/j.1748-7692.1999.tb00822.x [12] T. M. Engelbrecht, A. A. Knock, and M. J. O’Riain, “Running Scared: when predators become prey,” Ecosphere, vol. 10, no. 1, pg. E02531, Jan. 2019. [Online]. Available: https://doi.org/10.1002/ecs2.2531 [13] T. Lingham-Soliar, “Caudal fin allometry in the white shark Carcharodon carcharias: implications for locomotory performance and ecology,” Naturwissenschaften, vol. 92, pp. 231-236, Jan. 2005, doi: 10.1007/s00114-005-0614-4 [14] S. J. Jorgensen et al. “Killer whales redistribute white shark foraging pressure on seals,” Scientific Reports, vol. 9, no. 1, pg. 6153, Apr. 2019, doi: 10.1038/s41598-019-39356-2 [15] M. Murakami, C. Shimada, Y. Hikida, Y. Soeda, and H. Hirano, “Eodelphis kabatensis, a new name for the oldest true dolphin Stenella kabatensis Horikawa, 1977 (Cetacea, Odontoceti, Delphinidae), from the upper Miocene of Japan, and the phylogeny and paleobiogeography of Delphinoidea,” Journal of Vertebrate Paleontology, vol. 34, no. 3, pp. 491-511, May. 2014. [Online]. 
Available: https://doi.org/10.1080/02724634.2013.816720 [16] M. Schobben, B. Schootbrugge, and P. B. Wignall, “Interpreting the Carbon Isotope Record of Mass Extinctions,” Elements, vol. 15, no. 5, pp. 331-337, Oct. 2019. [Online]. Available: https://doi.org/10.2138/gselements.15.5.331 [17] M. Bazzi, N. E. Campione, P. E. Ahlberg, H. Blom, and B. P. Kear, “Tooth morphology elucidates shark evolution across the end-Cretaceous mass extinction,” PLOS Biology, vol. 19, no. 8, pg. E3001108, Aug. 2021. [Online]. Available: https://doi.org/10.1371/journal.pbio.3001108 [18] E. A. Widder, “A predatory use of counterillumination by the squaloid shark, Isistius brasiliensis,” Environmental Biology of Fishes, vol. 53, pp. 267-273, Nov. 1998. [Online]. Available: https://doi.org/10.1023/A:1007498915860 [19] M. Hoyos-Padilla, Y. P. Papastamatiou, J. O’Sullivan, and C. G. Lowe, “Observation of an Attack by a Cookiecutter Shark (Isistius brasiliensis) on a White Shark (Carcharodon carcharias),” Pacific Science, vol. 67, no. 1, pp. 129-134, Jan. 2013. [Online]. Available: https://doi.org/10.2984/67.1.10 [20] C. R. Mclain et al. “Sizing ocean giants: patterns of intraspecific size variation in marine megafauna,” PeerJ, vol. 3, pg. E715, Jan. 2015, doi: 10.7717/peerj.715 [21] P. J. Motta et al. “Feeding anatomy, filter-feeding rate, and diet of whale sharks Rhincodon typus during surface ram filter feeding off the Yucatan Peninsula, Mexico,” Zoology, vol. 113, no. 4, pp. 199-212, Aug. 2010, doi: 10.1016/j.zool.2009.12.001. [22] N. E. Hussey, M. A. MacNeil, M. C. Siple, B. N. Popp, S. F. J. Dudley, and A. T. Fisk, “Expanded trophic complexity among large sharks,” Food Webs, vol. 4, pp. 1-7, Sept. 2005. [Online]. Available: https://doi.org/10.1016/j.fooweb.2015.04.002 [23] M. E. Porter, C. Diaz Jr, J. J. Sturm, S. Grotmol, A. P. Summers, and J. H. Long Jr, “Built for speed: strain in the cartilaginous vertebral columns of sharks,” Zoology, vol. 117, no. 1, pp. 19-27, Feb. 2014. [Online]. Available: https://doi.org/10.1016/j.zool.2013.10.007 [24] F. Patricia, D. Guzman, B. Inigo, I. Urtzi, B. J. Maria, and S. Manu, “Morphological Characterization and Hydrodynamic Behavior of Shortfin Mako Shark (Isurus oxyrinchus) Dorsal Fin Denticles,” Journal of Bionic Engineering, vol. 16, pp. 730-741, Jul. 2019. [Online]. Available: https://doi.org/10.1007/s42235-019-0059-7 [25] M. Krzysztof and A. Mero, “A Kinematics Analysis Of Three Best 100 M Performances Ever,” Journal of Human Kinetics, vol. 36, pp. 149-160, Mar. 2013, doi: 10.2478/hukin-2013-0015 [26] P. C. Sternes and K. Shimada, “Body forms in sharks (Chondrichthyes: Elasmobranchii) and their functional, ecological, and evolutionary implications,” Zoology, vol. 140, pg. 125799, June. 2020, [Online]. Available: https://doi.org/10.1016/j.zool.2020.125799 [27] C. G. Meyer, K. N. Holland, and Y. P. Papastamatiou, “Sharks can detect changes in the geomagnetic field,” Journal of the Royal Society, Interface, vol. 2, no. 2, pp. 129-130, Mar. 2005, doi: 10.1098/rsif.2004.0021 [28] A. J. Kalmijn, “Electric and Magnetic Field Detection in Elasmobranch Fishes,” Science, vol. 218, no. 4575, pp. 916-918, Nov. 1982, doi: 10.1126/science.7134985 [29] H. Bleckmann and R. Zelick, “Lateral Line System of Fish,” Integrative Zoology, vol. 4, no. 1, pp. 13-25, Mar. 2009. [Online]. Available: https://doi.org/10.1111/j.1749-4877.2008.00131.x [30] D. S. Shiffman and N. 
Hammerschlag, “Shark conservation and management policy: a review and primer for non-specialists,” Animal Conservation, vol. 19, no. 5, pp. 401-412, Mar. 2016. [Online]. Available: https://doi.org/10.1111/acv.12265 [31] E. Bonaccorso et al. “International fisheries threaten globally endangered sharks in the Eastern Tropical Pacific Ocean: the case of the Fu Yuan Yu Leng 999 reefer vessel seized within the Galápagos Marine Reserve,” Nature, vol. 11, pg. 14959, Jul. 2021. [Online]. Available: https://doi.org/10.1038/s41598-021-94126-3 [32] G. Roff et al. “The Ecological Role of Sharks on Coral Reefs,” Trends in Ecology & Evolution, vol. 31, no. 5, pp. 395-407, May. 2016. [Online]. Available: https://doi.org/10.1016/j.tree.2016.02.014

  • Benthic-Dwelling Heroes: How Soft Sediment Creates Healthy Oceans

    By Ella Speers Marine science is a diverse branch of the life sciences that spans a broad range of disciplines from fluid dynamics to the biological web of life. While each subdiscipline within marine science is as important as one another, it is the ecological study of marine systems that focuses on living organisms and the environments in which they interact. I believe this is the most fascinating aspect of all due to the scope of life that exists beneath the surface. Ecosystem function is an imperative element of biological science across both terrestrial and aquatic realms, which is paramount to the survival and success of all the species that inhabit the ecosystem in question. It can be defined as the flow of matter and energy through biological organization, which involves primary and secondary production and decomposition [1]. Each occupying species has a particular niche in which it carries out a set of roles, each with their own specific functions. Subsequently, the loss or gain of these species and their niches alters the net effects of their ecosystem [2]. All processes and species are deeply interconnected, and are therefore essential for the functioning of the ecosystem [1]. Image by Katya Wolf from Pexels As the vast majority of our biosphere is aquatic, the seafloor (hereafter referred to as the benthoscape) comprises 70% of the earth’s surface, and therefore is one of the largest landscapes on Earth [3]. As per the ecological theory, species richness tends to increase with ecosystem heterogeneity [4], and such is true within a benthoscape. Typically, benthoscapes tend to be sediment patches of minimum relief defined by their sediment type and any abiotic features that may be present, such as sandwaves [3]. The immense richness of interstitial species can be attributed to a benthoscape’s specific framework which allows organisms at high concentrations to live in its three-dimensional structure. Across time and space, characteristics of soft-sediment communities that are of importance for global oceans range from animal-sediment relationships to disturbance-recovery and succession processes [3]. Within marine ecology, there is a diverse array of systems and communities that impact one another, yet many of these are not readily understood by the general public. I, too, am guilty of associating only well-adored pelagic swimmers such as dolphins and whales with the ocean before majoring in Marine Science. After becoming aware of the microscopic world which lay beneath the sediment, I became fascinated with this ecosystem that exists unbeknownst to us, despite being so vital in its operation. Tiny benthic-dwelling organisms (hereafter referred to as microphytobenthos) that live in the upper layers of marine sediment also play a significant role in contributing to the healthy cycling of our global oceans. The richness of these unicellular eukaryotic algae and cyanobacteria species that inhabit the surface layers of sediment means that the upper several millimeters are a zone of intense microbial activity. It is therefore under constant physical reworking [5]. The dense aggregations of microphytobenthos play an especially significant role in coastal ecosystems through their contribution to primary production, food web functioning, and sediment stability [6]. The density of these primary producers can be significantly attributed to the amount of solar irradiance, temperature, and nutrient availability. 
The reactive zone in which microphytobenthos occupy therefore represents a region of strong gradients across physical, fluid, sediment, chemical, and biological properties [5]. Microphytobenthos can inhabit a range of aquatic systems from high-energy beaches to mudflats [5]. The output of their physical sediment reconstruction (known as habitat engineering) can be understood as critical to these regional environmental dynamics, as it creates habitat heterogeneity. This in turn creates habitat opportunities for other species in the same ecosystem. Their close proximity to the sediment-water interface allows these microscopic organisms to play a key role in modulating the exchange of nutrients between the sediments and the water column [5]. Through biodeposition and bioturbation, microphytobenthos species enhance organic matter mineralisation, which is a vital element in nutrient cycling [6]. Image by Terya Elliott from Pexels Oxygen is vital in all marine ecosystems as it is a key element in metabolic processes [7]. However, as the dissipation of sunlight does not sustain the life processes of these photosynthetic species at increasing depth, the crucial turnover of oxygenated sediment does not occur. As a result, sediment is often black and anoxic. Anoxic sediment cannot sustain the same amount of life that normoxic sediments can, so this zone tends to be barren in comparison. With a growing global population, there is increased pressure on resource extraction and facilitation. Warming of the ocean can significantly limit the growth and diversity of microphytobenthic species, which will have severe implications for the global nutrient cycle [7]. Furthermore, an increase in anthropogenic activities in many coastal areas in recent decades has been proposed as the culprit for the declining trends in bottom water oxygen concentrations [8]. Microphytobenthos are autotrophs, so their productivity is directly linked to the amount of sunlight they receive. Rubbish, sedimentation, and toxic algal blooms caused by nitrogen runoffs are preventing optimum levels of sunlight from reaching the seafloor. Without normal levels of productivity, the level of nutrient cycling is severely impacted, and thus the layer of anoxic sediment increases. The loss of benthic keystone species may further remove larger pelagic species from the ecosystem as their food sources become depleted. These microscopic species are evidently crucial for our oceans, and our activities on land need to become wholly more sustainable in order to prevent the creation of anoxic habitats. If we do this, we can continue to marvel at the marine life we all love so much. References [1] Influence of benthic macrofauna community shifts on ecosystem functioning in shallow estuaries, Frontier Marine Science, Sept. 2014, doi: 10.3389/fmars.2014.00041 [2] Schulze, E. D., & Mooney, H. A, “Biological Diversity and Terrestrial Ecosystem Biogeochemistry,” in Biodiversity and Ecosystem Function. New York, Springer Science & Business Media, 2012. Available https://books.google.co.nz/books?hl=en&lr=&id=T5trCQAAQBAJ&oi=fnd&pg=PA3&dq=ecosystem+function&ots=Wazb2HcLMy&sig=mt3_flbKAWIzaTdBBK5eeWVKZXc&redir_esc=y#v=onepage&q&f=false [3] Challenges in marine, soft-sediment bethoscape ecology, Landscape Ecology, Jan. 2008, doi: 10.1007/s10980-007-9140-4 [4] Spatial heterogeneity increases the importance of species richness for an ecosystem process, Oikos, Aug. 
2009, doi: 10.1111/j.1600-0706.2009.17572.x [5] Microphytobenthos: The ecological role of the “secret garden” of unvegetated, shallow-water marine habitats. I. Distribution, abundance and primary production, Estuaries, Jun. 1996, doi: 10.2307/1352224 [6] Subtidal microphytobenthos: a secret garden stimulated by the engineer species Crepidula fornicata, Marine Ecosystem Ecology, Dec. 2018, doi: 10.3389/fmars.2018.00475 [7] The role of cyanobacteria in marine ecosystems, Russian Journal of Marine Biology, July, 2020, doi: 10.1134/S1063074020030025 [8] Marine benthic hypoxia: a review of its ecological effects and the behavioural responses of benthic macrofauna, Oceanography and Marine Biology, 1995. [Online]. Available https://www.researchgate.net/profile/Robert-Diaz-6/publication/236628341_Marine_benthic_hypoxia_A_review_of_its_ecological_effects_and_the_behavioural_response_of_benthic_macrofauna/links/02e7e526a7c717396d000000/Marine-benthic-hypoxia-A-review-of-its-ecological-effects-and-the-behavioural-response-of-benthic-macrofauna.pdf

  • The 11 lines of code which broke the internet.

By Struan Caughey

One of the founding principles of the internet was openness, which has led to incredible communities that build upon each other’s work. Even large companies such as Facebook and Google rely on other people and companies’ work so that they do not have to write their code from scratch. This system is brilliant: it allows for faster development of programs and means that people do not have to rewrite the same code. This principle is known as open-source code [1], and while it is generally accepted as an important and integral part of the internet, there are some unintended consequences. Firstly, you are relying on the code being efficient and secure. Secondly, you become reliant on the developers of that package to keep it up to date if any issues do occur. Lastly, this can create chains where packages depend on other packages, with the result that the final programmer does not know exactly what code their own program is executing.

Most programming languages contain the essential functions within their ‘standard library’, meaning that you do not have to rely on third-party authors. JavaScript, however, utilises third-party repositories instead of a comprehensive standard library [2]. One of the largest of these is called npm [3]. Developers publish their code as packages on npm, and others can then pull those packages into their own code. One contributor to this platform was Azer Koçulu, a high school graduate who taught himself how to code. In an email he sent to Quartz magazine, he stated that “I owe everything I have to the people who never gave up with the open-source philosophy” [4]. For believers in this philosophy, one of the core principles is pushing back against the commercialisation of code and instead empowering creators such as Koçulu to keep full control over their work.

Photo by Mohammad Rahmani on Unsplash

The issues started with a single project of Koçulu’s, published under a package named “kik” (unrelated to the code which would later cause further problems). The instant messaging app also named Kik, based out of Canada, decided that they too wanted to publish a package called kik; however, they could not do so because of Koçulu’s existing project [5]. Initially, Kik reached out to Koçulu to ask if he would remove his project. He refused their request, as he perceived their approach as overly aggressive. This in turn resulted in Koçulu asking for $30,000 (USD) “for the hassle of giving up with my pet project for [a] bunch of corporate dicks” [6]. Kik then reached out directly to npm, who sided with them and agreed to turn the package’s name over to them. On this, npm said: “In this case, we believe that most users who would come across a kik package, would reasonably expect it to be related to kik.com” [6].

As you would expect, this did not go down well with Koçulu, who had been an avid proponent of the open-source philosophy and of npm. He sent out an email stating that he was very disappointed, no longer wanted to be a part of npm, and wanted all of his packages registered on npm to be taken down [6]. This ended up happening, and sure enough, two days later coders around the world started getting the error message: “Npm ERR! 404 ‘left-pad’ is not in the npm directory” [4]. left-pad was an incredibly simple piece of code, written out in full below [4]. All it does is add characters to the left of a piece of text so that the length of a line stays consistent.
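The original module was published as eleven lines of JavaScript; a close TypeScript rendering of the same logic (not the verbatim npm source) looks like this:

```typescript
// Pads the input on the left with a fill character (a space by default)
// until the resulting string reaches the requested length.
function leftpad(input: string | number, len: number, ch: string = " "): string {
  let str = String(input);
  let i = -1;
  len = len - str.length;
  while (++i < len) {
    str = ch + str;
  }
  return str;
}

// The example from the text: pad "17" to five characters with "0".
console.log(leftpad("17", 5, "0")); // prints "00017"
```

Modern JavaScript now has an equivalent built-in, String.prototype.padStart: "17".padStart(5, "0") also returns "00017".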
For example, if I were to feed “17”, 5, and “0” into the code, it would print “00017”, adding padding to the left of the text [7].

It soon became evident what had occurred. This one small piece of code had been used in several other packages, which in turn were propagated across all sorts of different programs, many of which did not explicitly use this package. One of these large packages which indirectly used it was Babel, which utilised left-pad and was itself used by the likes of Facebook, Netflix, and Reddit, amongst others. With left-pad gone, Babel became unable to install. For context, despite being relatively unknown, left-pad at its peak was downloaded over 4 million times per week [7]. Ironically, things came full circle when, on March 24, 2016, Mike Roberts (head of messaging at Kik) found that his team was encountering the exact same issue due to their use of a package called LSCS, which through a long chain of dependencies relied on left-pad. In Mike’s piece on the self-publishing site Medium, he discusses the situation from Kik’s side and publishes all the emails between Kik’s patent agent, Koçulu, and npm [6].

Immediately there was a scramble for a fix. The removal of this simple package was causing errors across the globe, with people from Australia, Germany, the US, and more commenting on the left-pad npm page trying to find out what had happened. Babel quickly replaced the package, and within two hours npm restored, or “un-un-published”, the deleted left-pad code. On the restoration of the code, npm said that “Un-un-publishing is an unprecedented action that we’re taking given the severity and widespread nature of breakage, and isn’t done lightly” [4]. While in the end there were no long-term issues resulting from this action, it did show there are some significant vulnerabilities within our current systems. While dependencies are vital, to ensure the security of your code you should know what your code contains.

If you think that issues like this must have been resolved, you would be half right: npm has instituted new policies to make sure that unpublished code won’t cause widespread breakages [5]. However, the system still has other issues, such as trusting that your dependencies are secure. In November 2021, a catastrophic bug showed that this may not be the case. On November 24th 2021, a 0-day bug was found (0-day meaning 0 days’ notice was given to fix it before the exploit went public). It was given a 10/10 criticality rating and affects Log4j, a logging library used throughout the Java ecosystem [8]. The flaw, dubbed Log4Shell, immediately began being exploited [9]. On December 22, 2021, Tenable found that 10% of all assets that they assessed were vulnerable to the exploit, and yet only 70% of organisations had begun looking at whether they were vulnerable [10]. While there have been several patches, we are yet to see whether these can be navigated around. One of the big issues which lies with Log4j is again dependencies: even if the library’s maintainers patch their code, all the programs which depend on it also have to update theirs, and so on down the chain. Because of this, the issues created by Log4j could last for years; the flaw has been described by some in the industry as the “most serious (cybersecurity flaw) in decades” [11].

The world of tech is exciting and fast-paced, but as it continues to grow, new systems will have to be put in place to ensure that these kinds of domino effects cannot happen in the future.
In no other industry would we accept industry-wide vulnerabilities as something that just happens and needs to be fixed after the fact; we instead need to learn how to prevent them from occurring.

References

[1] Opensource.com, “What is open source?”, accessed 9/01/2022, https://opensource.com/resources/what-open-source
[2] Devopedia, “Standard Library”, accessed 9/01/2022, https://devopedia.org/standard-library
[3] W3Schools, “What is npm”, accessed 9/01/2022, https://www.w3schools.com/whatis/whatis_npm.asp
[4] Collins, Keith. “How one programmer broke the internet by deleting a tiny piece of code”, 28/03/2016, https://qz.com/646467/how-one-programmer-broke-the-internet-by-deleting-a-tiny-piece-of-code/
[5] Miller, Paul. “How an irate developer briefly broke JavaScript”, 24/03/2016, https://www.theverge.com/2016/3/24/11300840/how-an-irate-developer-briefly-broke-javascript
[6] Roberts, Mike. “A discussion about the breaking of the Internet”, 24/03/2016, https://medium.com/@mproberts/a-discussion-about-the-breaking-of-the-internet-3d4d2a83aa4d#.edmjtps48
[7] Mao, Steve. “left-pad”, accessed 9/01/2022, https://www.npmjs.com/package/left-pad
[8] National Vulnerability Database (NIST), “CVE-2021-44228 Detail”, accessed 9/01/2022, https://nvd.nist.gov/vuln/detail/CVE-2021-44228
[9] Tung, Liam. “Log4j flaw hunt shows how complicated the software supply chain really is”, 7/01/2022, https://www.zdnet.com/article/log4j-flaw-hunt-shows-how-complicated-the-software-supply-chain-really-is/
[10] Yoran, Amit. “One in 10 Assets Assessed Are Vulnerable to Log4Shell”, 22/12/2021, https://www.tenable.com/blog/one-in-10-assets-assessed-are-vulnerable-to-log4shell
[11] Wayt, Theo. “Why is the Log4j cybersecurity flaw the ‘most serious’ in decades?”, 20/12/2021, https://nypost.com/2021/12/20/why-is-the-log4j-cybersecurity-flaw-the-most-serious-in-decades/

  • Facilitating Friendship: The Future of Mathematics Education?

By Alicia Anderson My relationship with maths class over the years has been sinusoidal, to say the least. I really took to the patterns and logic of mathematics as a kid. I loved measuring my classmates’ arm-spans and heights to use as coordinates on a line graph to show a linear trendline. Then, somewhere in the silence of paper pages and the scribble of pens, my spark went out. Equations that used to glitter faded into a dull, anxious greyscale. While some subtopics were fun, I mostly slogged through NCEA and university maths in a disengaged haze, only because it was a requirement for my new love — physics. My mathematics journey — one in which I grit my teeth for the sake of another pursuit — is not unique. Maths has a notorious reputation for “gatekeeping” career opportunities — being necessary for finance, construction, and technology, to name a few examples — which makes the pervasiveness of low achievement and low engagement all the more troubling. In the 2019 Trends in International Mathematics and Science Study (TIMSS), the scores of New Zealand year nine students showed the most significant drop since the study began in 1994 [1]. And while the 2020 NCEA Annual Report shows improvement in NCEA Level 1 literacy attainment over the last ten years, very little has changed with regard to NCEA Level 1 numeracy attainment. Figure 1 shows a steadily increasing trendline for literacy, from 79.3% of year eleven students attaining Level 1 literacy in 2011 to 85.1% in 2020, with a range of 7.8% across the data set. For Level 1 numeracy, however, the range is only 5.1% over the same ten years, with 82.4% of year eleven students passing the numeracy requirements in 2011 and 83.6% in 2020 [2]. Many students, both in New Zealand and overseas, stop learning mathematics altogether as soon as it’s no longer compulsory [3]. For many schools in New Zealand, this is after NCEA Level 1. Most students are 16 years old by this time and are therefore no longer required by law to attend school. Numeracy attainment data at NCEA Levels 2 and 3 is skewed upwards, sitting above 90% [2]. A likely reason for this is that those who failed the compulsory NCEA Level 1 have either dropped maths, or dropped school entirely. These achievement statistics also don’t speak for those who stay in maths solely for a particular career path, for whom learning those skills feels more like having wisdom teeth removed. Figure 1: Literacy and Numeracy for NCEA Level 1. The percentage of Year 11 students attaining NCEA Level 1 literacy and numeracy by the end of each year, adapted from the NZQA 2020 annual report [2]. So although I had the persistence to continue until I was back on the positive gradient, what I couldn’t understand for the longest time was how I could love physics as much as I hated maths. If maths is so essential to physics, then what happened for a paradox like this to occur? Despite ongoing research and pleas from mathematics education academics to make significant changes in teaching practice, maths is still taught in classrooms in a very outdated, solitary manner that is no longer seen in the rest of the sciences or the humanities. The International Academy of Education has recommended collaboration through small-group learning as an effective teaching tool since at least 2000 [4]. However, classrooms across schooling levels still rely on solitary textbook exercises and worksheets from as early as year three [5].
For me, as a struggling student, this made the absence of noise the most memorable aspect of my first university mathematics tutorial. We were permitted to raise a hand for assistance, but that semester I was the only student to ever do so. The silence offered no anonymity, so, with no friends to lean on, each request for help required fresh courage. Every time the tutor talked me through a question, I felt the entire class learn how stupid I was. It reinforced my internal rhetoric that I was the dumbest student in the room; no one else needed help because they must be getting all the answers right. When I was spending more tutorial time crying in the bathroom stall than getting help from the tutor, I stopped attending tutorials. I failed that paper. More research has since investigated students who attend class but under-achieve. One journal article summarised the reasons under the acronym T.I.R.E.D: Tedium, Isolation, Rote-learning, Elitism, and Depersonalisation [6]. Varying combinations of these factors result in the limiting belief that the only people who succeed in mathematics are those who are exceptionally talented. I believe solutions targeted at isolation would have knock-on effects that would mitigate the remaining four issues. When, in a later semester, I had the polar opposite experience to that first tutorial, my performance and engagement skyrocketed. In the first session, the tutor asked who liked group work, and a fair few of us answered that we did. Having a tutor who promoted collaboration gave me a study group I could sit in lectures with, which led to a better understanding of the content, simply because it was far less stressful to receive explanations from classmates who were becoming my friends [6]. My newfound sense of belonging in maths resulted in grades of As and Bs. From these new friendships, I felt a new identity as a learner of mathematics develop within myself. This belonging continued into the following semesters, where I was now confident enough to ask questions during lectures that often had around 40 students in attendance. With the additional hurdle of classmates spectating, participation in higher-level maths requires a confidence that isn’t just about overcoming shyness: you must either be certain you are contributing intelligent answers and questions, or be unafraid of the contrary. Overcoming such feelings is not an instantaneous process, but class friendships appear important in mitigating them [3], [4], [6]. The 2019 TIMSS results were also better for students who felt they belonged in the classroom [1]. Feeling a sense of belonging through friendship with peers fosters greater participation in front of the whole class. This in turn leads to greater engagement with the class material and higher test performance, which is why it would benefit educators to prioritise facilitating such connections in their classes. Of all the school subjects, mathematics arguably evokes the most emotive response from the general population. Such strong opinions on maths are created informally through social experiences and interactions. Schools and universities are by nature socialising hubs, implying a need for teachers to set up a classroom culture in which students become familiar with each other quickly [3]. Games, when selected carefully, can also help reinforce learning outcomes and team-building [7].
The games which are most successful at building classroom belonging are those that encourage teamwork and creative problem solving, rather than rote-memorisation and speed [8]. Opportunities to collaborate on problems in groups of about four or five, and then to present worked solutions as a group, are also needed [3], as is class material that acknowledges the cultural identities in the classroom [9]. The art of learning has always been a social affair: from learning to walk, to being taught your first curse words, much to the distaste of caregivers. It turns out I didn’t magically start hating maths, then just as mysteriously start enjoying it again. As much as I liked applying maths skills to the physics experiments we were writing reports on, my true joy stemmed from tackling challenges together with my classmates as a community of learners. Put simply, physics just allowed me to do maths with some friends. References [1] RNZ, “NZ students record worst results in maths and science,” Dec. 09, 2020. https://www.rnz.co.nz/news/national/432451/nz-students-record-worst-results-in-maths-and-science. [2] A. Gray and D. G. Klinkum, “Annual Report NCEA, University Entrance and NZ Scholarship Data and Statistics 2020,” NZQA, 2021. [Online]. Available: https://www.nzqa.govt.nz/assets/About-us/Publications/stats-reports/NCEA-Annual-Report-2020.pdf?fbclid=IwAR16EDWZzYMnYDB96-6Pu-2a_oJg7hkHl5mJK1z9geW196kPNfWqdA-dNB8. [3] L. Darragh, “Constructing confidence and identities of belonging in mathematics at the transition to secondary school,” vol. 15, no. 3, pp. 215–229, Jul. 2013. [4] D. A. Grouws and K. J. Cebulla, “Improving student achievement in mathematics,” no. 4, 2000, [Online]. Available: https://www.iaoed.org/downloads/prac04e.pdf. [5] F. Walls, “‘Doing Maths;’ Children Talk About Their Classroom Experiences,” vol. 2, pp. 755–764, 2007, [Online]. Available: https://content.talisaspire.com/auckland/bundles/60dccdd21b02c94e56609a94. [6] E. Nardi and S. Steward, “Is Mathematics T.I.R.E.D? A Profile of Quiet Disaffection in the Secondary Mathematics Classroom,” vol. 29, no. 3, pp. 345–367, 2003, [Online]. Available: http://www.jstor.org.ezproxy.auckland.ac.nz/stable/1502257. [7] G. Anthony and M. Walshaw, “Effective pedagogy in mathematics,” no. 19, Aug. 2009. [8] L. Darragh, “Playing maths games for positive learner identities,” no. 1, pp. 36–42, 2021, [Online]. Available: https://doi-org.ezproxy.auckland.ac.nz/10.18296/set.0166. [9] J. Hunter and J. Miller, “Using a Culturally Responsive Approach to Develop Early Algebraic Reasoning with Young Diverse Learners,” Int. J. of Sci. and Math. Educ., vol. 20, pp. 111–131, 2022, [Online]. Available: https://doi-org.ezproxy.auckland.ac.nz/10.1007/s10763-020-10135-0.

  • The James Webb Space Telescope Time Machine

By Celina Turner It’s a question everyone asks sooner or later: how did the universe start? How are planets and stars and galaxies created? What else is out there? For the thousands of years in which humans have been conscious enough to question the origins and vastness of the universe, we have lacked the means to find definitive answers. Now, however, we are living in a time where the possibility of seeing the universe at its earliest is becoming reality. In the last century, astronomy has made great strides in our understanding of what is possible, and of what tools are needed to uncover the creation of both the universe and the structures within it. The James Webb Space Telescope (JWST) has been a twenty-year project costing $10 billion (USD), with the aim of showing us the beginning of our universe [1]. But first, let’s back up a bit to understand how it will be able to do that, and why it is a technological marvel. Image credit: NASA/Chris Gunn We know there was a beginning to the universe — otherwise, as Olbers’ Paradox points out, the sky would be perpetually, blindingly bright from infinite stars in infinite directions sending infinite rays of light towards Earth. The Big Bang theory explains how our universe was created: in a tiny fraction of a second (1/10⁴³ of a second, to be precise), the universe exploded into existence — a billion-degree hot pool of protons and neutrons and intense radiation that continuously expanded to fill a limitless space [2]. Particles collided to create elements such as hydrogen and helium, and eventually the first star blazed into existence. Because light takes time to travel, we can look back in time by looking at light originating from very distant points; for example, if you were to look at a star that is one light-year away, the light you see left that star one year ago. Looking at stars that are billions of light-years away therefore shows us what they were like billions of years ago, as their light has only just reached us. However, just as the Doppler effect changes the pitch of an ambulance’s siren as it passes you, the expansion of the universe stretches the wavelengths of the light travelling towards us — a phenomenon known as redshift. This shifts light that would otherwise be observable in the optical spectrum into the infrared or beyond. Even the high-energy radiation released in the Big Bang, which is still travelling across the universe, has had its wavelengths stretched into the microwave spectrum. This leftover radiation was found (accidentally) in 1965; now known as the Cosmic Microwave Background, its discovery is what ultimately convinced those who were on the fence about the Big Bang — that the universe was not in a steady state, and that some beginning had to exist. Image: ESA/NASA The Hubble telescope gave us a much better view into deep space, providing data that deepened our understanding of some of the intricacies of what happened after the beginning, but it simply wasn’t enough to answer these questions fully. In 1995, Bob Williams (the then director of the Space Telescope Science Institute) used his allocated time with Hubble to do something many astronomers considered a waste at the time — he pointed it at one of the darkest spots in the sky to see if it was truly dark [3].
The 100 hours of exposure time would allow the telescope to soak up whatever faint light might shine from within the black void, and indeed, there was light. Now known as the Hubble Deep Field, the resulting image changed how astrophysicists understood the evolution of galaxies. Thousands of galaxies revealed themselves from the depths of that tiny fraction of the sky. The seemingly endless collection of worlds of different ages painted a picture of the universe over millions to billions of years. A colleague of Bob Williams stated that “what Hubble succeeded in doing with the Hubble Deep Field is finding that there were galaxies at redshifts much higher than we thought” [3]. This conclusive proof of an evolving universe, however, only led to more questions about its very beginning, and about how galaxies form at all. Seeing the universe closer in time to the Big Bang would require light that not even Hubble can detect. Interestingly, the idea of a Hubble successor that could look further back in time began to form even before the launch of Hubble itself [3]. The majority of celestial bodies emit infrared radiation, but so does Earth, whose own infrared glow drowns out anything that equipment on the ground could detect [4]. Thus, any instrument built to see the cosmos in infrared would need to be off-planet, and concepts for such a telescope were already being discussed before Hubble was in orbit. With the discoveries of the Hubble Deep Field, in particular that the oldest galaxies are redshifted so far that there is no other way to view them, the need for the James Webb Space Telescope became clear. Hubble uses a mirror with approximately 4.5 m² of collecting area, and operates in the optical and ultraviolet spectrums while orbiting Earth [5]. The JWST instead has approximately 25.4 m² of collecting area, and operates in the infrared spectrum from the L2 Sun-Earth Lagrange point [5]. Dr Elizabeth Howell explains: “A Lagrange point is a location in space where the combined gravitational forces of two large bodies, such as the Earth and the sun or the Earth and the moon, equal the centrifugal force felt by a much smaller third body. The interaction of the forces creates a point of equilibrium where a spacecraft may be ‘parked’ to make observations” [6]. The L2 point keeps the Sun and the Earth in roughly the same direction from the JWST at all times, so a single sunshield can block their light and heat. With the Sun and Earth permanently behind the shield, the telescope can observe constantly, and it stays incredibly cold, which is crucial for its detectors to ensure accurate readings as they “need to be at a temperature of less than 7 kelvin to operate properly” [7]. However, this also means that if repairs are needed, as they have been for Hubble in the past, the JWST will be out of luck: it is too far away from Earth to send a servicing mission. Knowing that mistakes cannot be fixed after launch has required every component to be expected to work flawlessly on its first and only try [1]. Image: ESA/NASA The threat of having a single attempt becomes more intimidating when considering the technological requirements the JWST needs to fulfil while adhering to the limitations of launching on a rocket. In order to match the precision of the Hubble telescope, the mirror of the JWST needs to be much larger, as the light it captures has longer wavelengths and is fainter.
But with the diameter of the mirror at 6.5 m and the diameter of the Ariane 5 rocket carrying it being only 5.4 m, engineers needed to design a mirror that could fold for launch, unfold itself in space, and align each segment with incredible precision [8]. "Aligning the primary mirror segments as though they are a single large mirror means each mirror is aligned to 1/10,000th the thickness of a human hair. What's even more amazing is that the engineers and scientists working on the Webb telescope literally had to invent how to do this," says Lee Feinberg, the Webb Optical Telescope Element Manager [9]. The solution consists of 18 hexagonal mirrors, each 1.32 m wide and made of gold-plated beryllium, which unfold much like a flower blossoming [9]. Additionally, the sunshield also needs to unfold in space. It may be 22 m by 12 m in size (roughly the size of a tennis court), but it is made of only five thin layers of Kapton, which carry the responsibility of keeping the scientific equipment cool, both on the voyage to L2 and once the JWST is on station [5]. Said equipment includes NIRCam, NIRSpec, and MIRI, each of which has its own part to play. NIRCam is the primary imager and detects the shorter infrared wavelengths, between 0.6 µm and 5 µm [10]. Equipped with coronagraphs, it will be able to take photos of exoplanets without being blinded by their stars. Although there are various methods for imaging exoplanets, a coronagraph images them directly, working much like a visor that blocks the glare of the sun so you can see the road while driving. This is where NIRSpec takes over: taking spectra of exoplanets to identify their chemical composition [11]. Similar to NIRCam’s coronagraphs, NIRSpec has a microshutter system that blocks out irrelevant areas of the sky in order to reduce light pollution [11]. However, as stated, NIRCam and NIRSpec work in the near-infrared spectrum, and are therefore most useful in exoplanet research. MIRI is both an imager and a spectrograph, but detects infrared radiation with wavelengths from 5 µm to 28.3 µm [12]. These longer waves can travel through the clouds of dust scattered throughout the cosmos. MIRI will be the instrument for detecting the objects with the highest redshift. However, in order for MIRI to work accurately, its temperature needs to stay below 7 K or it begins to detect its own heat [12]. Diagram adapted from NASA: infrared sensitivity of Webb's instruments. Each piece of the JWST has required multiple decades of designing and testing. With the capability of looking at the universe at a younger age than ever before, the JWST will undeniably change the field of astrophysics. We will be able to peer at the birth of some of the first galaxies, and understand how stars and planets begin to accrue mass. When pointed at nearby stars, such as TRAPPIST-1, we will be able to see each exoplanet’s atmospheric composition and determine whether it might indeed be habitable. Possibly, if we are lucky, the JWST could identify biosignatures from life on other planets and redefine our existence. Learning about the beginning of the universe and looking for signs of life and habitable planets doesn’t simply fulfil the curiosity we innately have. This knowledge can help us understand how our own solar system and galaxy were created, how to reveal the mysteries of dark matter, whether life is rare and how it comes into existence, and could even inform concepts such as how we might terraform other planets like Mars and Venus.
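To make the instrument wavelength ranges quoted above a little more concrete, here is a small back-of-the-envelope sketch. The only physics used is the standard redshift relation (observed wavelength = rest wavelength × (1 + z)); the choice of the hydrogen-alpha line and the particular redshifts are illustrative assumptions, not values from the article’s sources.

```python
# Back-of-the-envelope: where does redshifted light land relative to the
# wavelength ranges quoted above for Webb's instruments? Illustrative only.

NIRCAM = (0.6, 5.0)    # micrometres, near-infrared camera range quoted above
MIRI   = (5.0, 28.3)   # micrometres, mid-infrared instrument range quoted above

def observed_wavelength(rest_um, z):
    """Cosmological redshift stretches a wavelength by a factor of (1 + z)."""
    return rest_um * (1 + z)

def covering_instruments(wavelength_um):
    covered = []
    if NIRCAM[0] <= wavelength_um <= NIRCAM[1]:
        covered.append("NIRCam")
    if MIRI[0] <= wavelength_um <= MIRI[1]:
        covered.append("MIRI")
    return covered or ["neither"]

# Hydrogen-alpha light is emitted at roughly 0.656 micrometres (visible red).
for z in (0, 2, 6, 15):
    lam = observed_wavelength(0.656, z)
    print(f"z = {z:>2}: observed at {lam:5.2f} um -> {', '.join(covering_instruments(lam))}")
```

Running this shows visible light from nearby objects staying within NIRCam’s range, while the same light from the most distant (highest-redshift) objects is stretched into MIRI’s territory, which is why MIRI is described above as the instrument for the highest-redshift targets.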
Any findings made by the JWST will certainly build the foundations for future missions or research studies in any range of fields. The JWST is arguably one of the most important inventions of the 21st century with the potential to forever change the way we view the universe and live within it. Launched on Christmas Day of 2021, it is now taking its month-long passage to get to the L2 point [1]. If all goes well, we can expect some incredible new discoveries throughout 2022 and onwards as it looks into the depths of our universe. References [1] D. Dobrijevic, “NASA's James Webb Space Telescope: The ultimate guide,” Space.com, 25-Dec-2021. [Online]. Available: https://www.space.com/amp/21925-james-webb-space-telescope-jwst.html. [2] A. Todd, “Story and Origins of the Universe,” ArcGIS StoryMaps, 06-Apr-2021. [Online]. Available: https://storymaps.arcgis.com/stories/2e16464e7d3549069e13a18c2689ce99. [3] N. Wolchover, “The Webb Space Telescope Will Rewrite Cosmic History. If It Works.,” Quanta Magazine, 03-Dec-2021. [Online]. Available: https://www.quantamagazine.org/why-nasas-james-webb-space-telescope-matters-so-much-20211203/. [4] A. May, “James Webb Space Telescope: Origins, Design and Mission Objectives,” LiveScience, 22-Nov-2021. [Online]. Available: https://www.livescience.com/amp/james-webb-space-telescope. [5] “Comparison: Webb vs Hubble Telescope - Webb/NASA,” NASA. [Online]. Available: https://www.jwst.nasa.gov/content/about/comparisonWebbVsHubble.html. [6] E. Howell, “Lagrange points: Parking Places in Space,” Space.com, 22-Aug-2017. [Online]. Available: https://www.space.com/amp/30302-lagrange-points.html. [7] “Cryocooler Webb/NASA,” NASA. [Online]. Available: https://webb.nasa.gov/content/about/innovations/cryocooler.html. [8] “Ariane 5,” Arianespace, 17-Feb-2021. [Online]. Available: https://www.arianespace.com/vehicle/ariane-5/. [9] “Mirrors Webb/NASA,” NASA. [Online]. Available: https://webb.nasa.gov/content/observatory/ote/mirrors/index.html. [10] “Near Infrared Camera (NIRCAM) instrument Webb/NASA,” NASA. [Online]. Available: https://jwst.nasa.gov/content/observatory/instruments/nircam.html. [11] Perception, “The James Webb Space Telescope Explained In 9 Minutes,” YouTube, 17-Jul-2021. [Online]. Available: https://youtu.be/tnbSIbsF4t4. [12] B. Kruizinga, H. Visser, J. W. Pel, K. Moddemeijer, and C. Smorenburg, “MIRI spectrometer optical design,” 2004. [Online]. Available: https://adsabs.harvard.edu/full/2004ESASP.554..263K.

  • The Dynamics of Time Perception

How much time does one spend waiting in a lifetime? This is a fascinating question to which no one seems to have a definite answer. We have perhaps spent more hours than we could ever have imagined waiting, both for events that eventually happened and for those that never did. What is curious about this question, though, is that it is rather different from asking how much time one 'feels' like one has spent waiting in a lifetime. One way to understand this is through the subjective versus objective reality debate: does the reality that one perceived equate to what actually happened? Indeed, the representation of time is subject to one's own reality. Context substantially influences one’s time perception. The passage of time is thus dependent on intrinsic contexts, such as one's concurrent emotional state [1], and extrinsic contexts, such as the rhythm of other actions (e.g., the rate of speech) and the feedback from our actions (e.g., the reactions of others). So, despite an intact and fully functional biological clock, homogeneous temporal perceptions do not seem to exist [1]. This multiplicity of factors can explain why no single experience of time is the same — days may shrink into minutes, minutes may stretch into eternity. In this literature review, we will take a deep dive into the progress of research and explore the dynamics of time perception. “On doit mettre de côté le temps unique, seuls comptent les temps multiples, ceux de l'expérience” (we must put aside the idea of a single time; all that counts are the multiple times that make up experience) ([2] as cited in [1]). In 2009, Droit-Volet and Gil [1] wrote an intriguing paper on the time-emotion paradox. The paper highlighted that an individual's emotional context is capable of greatly distorting time perception, creating discrepancies between one's own experience of time and its objective measurement. A plausible mechanism explaining this phenomenon can be understood using the internal ticking clock metaphor [1], [3], [4]. Droit-Volet and Meck [3] conceptualised that our resources (i.e., cognitive capacity) get redirected away from our internal clock as our attention is captured by a pleasant event. As a result of these resources being diverted away, 'ticks' of the clock may be missed. This causes an underestimation of duration: subjective time becomes shorter than objective reality. Conversely, an overestimation of time can be induced by increased arousal, such as during a stressful event. What these events do is increase the rate of ticks on our internal clock. The same amount of time passes by, but with a larger number of ticks in between, so the subjective experience of time is lengthened — we start to feel like the duration is longer than it actually is [1], [3]. The essence of this notion is that our internal clock is incredibly vulnerable to manipulations and distortions, giving rise to subjective time experience. The nature of time perception — its vulnerability to distortions and manipulations — is undoubtedly captivating. The body of literature surrounding time perception is continuously growing, and numerous studies have closely investigated the fragility of time perception and its relationship with multitudes of events, particularly waiting. Rankin et al. [5] conducted interesting studies that looked closely at the association between subjective time perception and distress during stressful waiting periods.
They examined undergraduate and graduate students as they waited for their exam results. The results suggest a robust association between distress and time perception. At the between-individuals level, those who reported greater levels of worry and anxiety also reported perceiving the waiting time as moving more slowly. The results also suggest intra-individual variability: for a given person, time passed most slowly at the moments when they were experiencing the most anxiety and distress. These findings are closely related to prior research, conducted by different laboratories, on the relationship between emotions and temporal perception using facial expressions [6-8], which may offer another plausible mechanism for the results of Rankin et al. [5]. The original study by Droit-Volet et al. [6] revealed an association between the overestimation of time and emotional arousal. Later, these findings were well replicated by Tipples [7], [8] and Effron et al. [9]. Tipples [7] suggests that the increased overestimation of time can indeed be linked to negative emotionality, such as angry and fearful facial expressions. These findings offer a new explanation: time perception can be explained not only by attention-based processes (such as the ticking clock metaphor mentioned previously) but also in terms of emotional arousal. That is, the modulation of emotion can influence a person's sense of time, and the fact that multiple processes can influence time perception helps explain its fragility. Rankin et al. [5] were not the only group to find a relationship between stressful events and the perception of waiting time. Recently, another study was conducted by Droit-Volet et al. to investigate time perception and Covid-19 stress [10]. The study revealed that the significant increase in boredom and sadness during lockdown was accompanied by changes in time perception. They stated that boredom and sadness were predictors of the experienced “slowing down” of time during the lockdown [10]. The negative relationship between time perception, sadness, and boredom came as no surprise: as sadness and boredom intensify, the perceived passage of time becomes slower. Indeed, these results are congruent with the previous studies linking negative emotional arousal with the slowing down of time [6], [7], [8], [9]. During a pandemic, when negative emotions such as sadness, fear, anger, and boredom are endured for an extensive period of time, a slowing down of time can take a toll on mental health and wellbeing [11]. If this is the case, is it possible for us to take advantage of what we know about time perception? If our temporal perception is truly fragile, the modulation should work both ways. Because our sense of time is subject to factors such as emotions [6], [8], [12] and attention [13], [14] that can slow down time perception, the reverse should also be true. A study by Sweeny and colleagues [15] may have offered an example of how we might achieve this; the study probes potential coping resources, such as mindfulness and flow, during COVID-19. Flow can be understood simply as the state of being 'in the zone’ [16]. It is when one is completely absorbed in what one is doing, which in turn influences one's sense of time [17]. Sure enough, flow was found to help mitigate the deleterious effects of lengthy quarantines [15].
The paper suggests that pleasant activities can fully attract a person’s attention. As they fully immerse themselves in those activities, days will feel shorter. This finding thus fits very well with the attention-based ticking clock metaphor [3], [4]; activities such as writing, gaming, and exercising that one may enjoy doing during stressful events such as lockdowns can, indeed, divert one's attention away from the actual amount of time [18]. Ultimately, engaging in flow will not only help speed up subjective time experiences, but also draw benefits to one's mental health and wellbeing. Time and perceived time are not always necessarily the same. As the existing body of literature grows, we are starting to gain much more insight into how it works and its implications. It may be safe to say that the existing models and metaphors have yet to cover a myriad of phenomena. Besides the cognitive perspective, there is so much to learn about how neurobiological processes are tied into one's sense of time. Our temporal perception is closely associated with our wellbeing in different ways. It will be interesting to see additional studies in the future as this knowledge may help improve the well-being of individuals during stressful times. References [1] S. Droit-Volet and S. Gil, “The time–emotion paradox,” vol. 364, no. 1525, pp. 1943–1953, Jul. 2009, doi: 10.1098/rstb.2009.0013. [2] H. Bergson, Durée et simultanéité. France, Paris, Presses Universitaires de France, 1968. [3] S. Droit-Volet and W. H. Meck, “How emotions colour our perception of time,” vol. 11, no. 12, pp. 504–513, 2007, doi: 10.1016/j.tics.2007.09.008. [4] S. Droit-Volet, S. Monceau, M. Berthon, P. Trahanias, and M. Maniadakis, “The explicit judgment of long durations of several minutes in everyday life: Conscious retrospective memory judgment and the role of affects?,” vol. 13, no. 4, p. e0195397, Apr. 2018, doi: 10.1371/journal.pone.0195397. [5] K. Rankin, K. Sweeny, and S. Xu, “Associations between subjective time perception and well-being during stressful waiting periods,” vol. 35, no. 4, pp. 549–559, Oct. 2019, doi: 10.1002/smi.2888. [6] S. Droit-Volet, S. Brunot, and P. M. Niedenthal, “Perception of the duration of emotional events,” vol. 18, no. 6, pp. 849–858, 2004, doi: 10.1080/02699930341000194. [7] J. Tipples, “Negative emotionality influences the effects of emotion on time perception,” vol. 8, no. 1, pp. 127–131, Feb. 2008, doi: 10.1037/1528-3542.8.1.127. [8] J. Tipples, “Increased Frustration Predicts the Experience of Time Slowing-Down: Evidence from an Experience Sampling Study,” vol. 6, Jun. 2018, doi: 10.1163/22134468-20181134. [9] D. A. Effron, P. M. Niedenthal, S. Gil, and S. Droit-Volet, “Embodied temporal perception of emotion,” vol. 6, no. 1, pp. 1–9, Feb. 2006, doi: 10.1037/1528-3542.6.1.1. [10] S. Droit-Volet et al., “Time and Covid-19 stress in the lockdown situation: Time free, «Dying» of boredom and sadness,” vol. 15, no. 8, p. e0236465, Aug. 2020, doi: 10.1371/journal.pone.0236465. [11] E. A. Holman and E. L. Grisham, “When time falls apart: The public health implications of distorted time perception in the age of COVID-19,” vol. 12, no. S1, pp. S63–S65, 2020, doi: 10.1037/tra0000756. [12] Y. Yamada and T. Kawabe, “Emotion colors time perception unconsciously,” vol. 20, no. 4, pp. 1835–1841, Dec. 2011, doi: 10.1016/j.concog.2011.06.016. [13] R. A. Block and R. P. Gruber, “Time perception, attention, and memory: A selective review,” vol. 149, pp. 129–133, Jun. 2014, doi: 10.1016/j.actpsy.2013.11.003. 
[14] D. Zakay and R. Block, “An Attentional-Gate model of Prospective Time Estimation,” pp. 167-178, Nov. 1994. [15] K. Sweeny et al., “Flow in the time of COVID-19: Findings from China,” vol. 15, no. 11, p. e0242043, 2020, doi: 10.1371/journal.pone.0242043. [16] J. Nakamura and M. Csikszentmihalyi, The concept of flow. New York, NY, US: Oxford University Press, 2002, pp. 89–105. [17] K. Cherry, “How to Achieve Flow.” https://www.verywellmind.com/what-is-flow-2794768 (accessed Dec. 23, 2021). [18] F. Nuyens, D. Kuss, O. Lopez-Fernandez, and M. Griffiths, “The Potential Interaction Between Time Perception and Gaming: A Narrative Review,” vol. 18, Oct. 2020, doi: 10.1007/s11469-019-00121-1.

  • Einstein’s Miracles Part 1: Quanta

By Caleb Todd “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement.” —Lord Kelvin, 1900 “Hold my beer.” —Albert Einstein, at some point in 1905 probably As the 19th century came to a close, there was a strong sense that physics was complete. Isaac Newton had long since formulated his three laws of motion, which described the behaviour of all physical objects. James Clerk Maxwell’s electromagnetic theory explained how light and electrically charged matter interacted. Even systems too complex for analysis from first-principles were being conquered using the tools of statistical mechanics. Sure, there were a few wrinkles to iron out, but physicists had formalisms to deal with any conceivable problem. These formalisms — collectively called the tools of classical physics — appeared to work. To many, it seemed that there was nothing really new to be discovered. However, within the first five years of the new century, all illusions of the completeness of physics were utterly dispelled. The wrinkles in classical physics instead turned out to be loose threads which, when pulled on, would unravel the entire tapestry. A series of death blows was dealt to the very foundations of how we understood the universe, and the chief executioner was a little-known patent clerk called Albert Einstein. In 1905, fresh out of his PhD, the 26-year-old Einstein published four papers that revolutionised physics. Producing any one of them was itself an extraordinary achievement; to publish all four in a single year was nothing short of miraculous. 1905 is now called Einstein’s annus mirabilis — his “miracle year” — and is arguably the most inspired single year in the history of science. There are three main topics that a university course on so-called ‘modern physics’ will cover: quantum mechanics, atomic physics, and special relativity. Quantum mechanics was established by the first of Einstein’s 1905 papers. The existence of atoms was proven by his second. His third paper founded the field of special relativity. Einstein’s fourth paper presented what is now the most famous equation on Earth: E = mc². Einstein’s work in 1905 was seminal in taking physics beyond the classical domain. A series of articles this year, of which this is the first, will discuss what each of these papers really means. I hope to convey how extraordinary they are as intellectual achievements, and how significant they have been to physics as a whole. (1) “On a Heuristic Viewpoint Concerning the Production and Transformation of Light” Annalen der Physik, June 9th, 1905 To understand why this paper was so important in the early days of quantum mechanics, we must first discuss how classical physicists understood light. Maxwell’s equations were developed to describe how electric and magnetic fields interact with each other and with charged particles. The moment Maxwell discovered that his equations predicted the existence of electromagnetic waves that travelled at the speed of light was one of the greatest moments in physics. The nature of light had finally been revealed. Young’s double-slit experiment had confirmed that light was fundamentally a wave long before [1], but now physicists knew what was ‘waving’ — the electric and magnetic fields. Imagine you and a friend are holding one end of a rope each. Now suppose you begin rapidly moving your end up and down so as to induce wave motion in the rope. How much energy you put into the rope wave can be varied continuously.
You can increase or decrease how vigorously you move the rope by any amount you choose (within the limits of your strength). Such is the case with any wave in classical physics, and such was assumed about electromagnetic waves. However, this assumption caused some issues; in particular, it was responsible for the ‘ultraviolet catastrophe’. If you assume that electromagnetic waves can transfer energy to matter in arbitrary amounts, a neat piece of mathematics demonstrates that the intensity of light being radiated at each wavelength increases vastly as you decrease the wavelength [2]. Not only does this prediction disagree with experiments (as shown in Fig. 1), it also implies that matter is radiating infinite energy, which is an impossibility. The assumption must be incorrect. Figure 1: The intensity of radiated light from an ideal blackbody. The black curve shows the prediction obtained when assuming light and matter can exchange arbitrary quantities of energy at a temperature of 5000 K. The blue, green, and red curves show the experimental measurements at 5000 K, 4000 K, and 3000 K respectively. There is a clear disagreement between the classical prediction and the true values, particularly at short wavelengths — this is referred to as the ultraviolet catastrophe. Max Planck resolved the ultraviolet catastrophe in 1901 by proposing an alternative theoretical starting point. He assumed that energy would be exchanged between light and matter only in discrete chunks — so-called “quanta” of energy [3]. In our rope analogy, this would correspond to you being able to increase the amplitude of your oscillation only in increments of, say, 10 cm. You could have 40 cm tall waves or 50 cm tall waves, but not 45 cm tall waves. An absurd assumption at face value, but it leads to correct mathematical predictions regarding the intensity of radiation from matter [3]. Planck’s proposal is now regarded as the birthplace of quantum mechanics, but Planck himself thought very little of his technique. Einstein, however, dared to consider what would happen if you took the idea of quanta seriously. Rather than supposing only that energy left and entered the electromagnetic field in discrete chunks, he proposed that light itself was separated into discrete chunks. Rather than a continuous electromagnetic wave, completely distributed throughout space, light is instead composed of localised, particle-like packets that we call photons. The transfer of energy quanta as per Planck’s work was really the absorption or emission of these photons. This is the “heuristic viewpoint concerning the production and transformation of light” of which the paper’s title speaks. On what basis, though, could Einstein make this claim? Planck’s theory required only that the exchange of energy happens discretely — for light itself to be discretised is a far stronger statement. Einstein’s approach was to show how his new theory could explain phenomena beyond Planck’s radiation intensities. There were two principal phenomena he dealt with: he showed that the entropy of a light field behaved as a gas of particles, and he used his quantum theory to explain the photoelectric effect. While the photoelectric effect is the most famous result from this paper, Einstein’s explanation of it does not necessarily require light itself to be quantised — it depends only on the exchanges of energy being quantised, as in Planck’s paper. So, let’s talk about entropy.
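Before we do, here is a quick numerical aside illustrating the ultraviolet catastrophe described above. It compares the classical (Rayleigh-Jeans) prediction with Planck's law at the 5000 K temperature used in Fig. 1, using the standard textbook formulas; the particular wavelengths printed are arbitrary choices, and this is only a sketch, not a reproduction of the figure or of the original derivations.

```python
import math

# Spectral radiance per unit wavelength (W per steradian per m^3) at temperature T.
h = 6.626e-34   # Planck's constant (J s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann's constant (J/K)

def planck(wavelength, T):
    """Planck's law: energy exchanged with matter only in quanta of size h*f."""
    return (2 * h * c**2 / wavelength**5) / (math.exp(h * c / (wavelength * k * T)) - 1)

def rayleigh_jeans(wavelength, T):
    """Classical prediction: arbitrary (continuous) energy exchange allowed."""
    return 2 * c * k * T / wavelength**4

T = 5000  # kelvin, as in Fig. 1
for nm in (100, 500, 2000):
    lam = nm * 1e-9
    ratio = rayleigh_jeans(lam, T) / planck(lam, T)
    print(f"{nm:>5} nm: classical prediction / Planck prediction = {ratio:.3g}")
```

At 2000 nm the two predictions are within a factor of a few, but at 100 nm (the ultraviolet) the classical curve overshoots Planck's by roughly eleven orders of magnitude, which is the "catastrophe" in a nutshell.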
The entropy of a system, in essence, describes how many ways that system can be configured without changing its macroscopic state. For example, you could switch the position of two atoms, but the temperature or volume (indeed, any important large-scale quantity) will be unchanged. The more configurations exist for a given state, the more likely it is that the system will be found in that state, so entropy tells you how probable different states are for a given system. For example, given a gas of helium molecules confined in a box, you can determine the probability that all of the molecules are found in the top half of the box at any given time using entropy. Einstein demonstrated a correspondence between the entropy of light and that of a gas of particles; in particular, their volume-dependence. If an ideal, low-density gas of N particles is confined in a box of volume V₀, then the probability, P, that all N particles will be simultaneously found in a sub-volume V < V₀ is P = (V/V₀)^N. Einstein showed that the entropy of a low-density light field in a box had exactly the same form as that of the ideal gas. Moreover, by comparing the two expressions he deduced that the probability of all the light being found in a sub-volume V < V₀ is P = (V/V₀)^(E/hf), where E is the total energy in the light field, h is Planck’s constant, and f is the frequency of the light (i.e. its colour, determined by its wavelength). The product hf is precisely the size of one of Planck’s quanta of energy (hence why the constant h is given his name), which, if Einstein is right about the light field itself being quantised, would make E/(hf) exactly the number of photons in the box. In other words, light in a box behaves exactly as a collection of discrete particles. Light is quantised. This result had exceptional significance in the development of quantum physics. As you may have guessed, quantum physics gets its name from the discretisations — the quantisations — which occur as a motif in the theory. One of the core features of quantum mechanics is that quantities (like energy or the amount of light) must often take on discrete values, and photons were the first quanta to be (knowingly) discovered. From this beginning, supplied by Einstein and Planck, quantum theory would rewrite virtually everything we thought we knew about physics. Indeed, quantum theory and general relativity together now form the basis for all of physics as we understand it. Quantum theory was accepted only slowly at first, but it would eventually become the most influential idea of the 20th century. This paper, Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt in its original language, won Einstein the Nobel Prize in 1921. It would prove to be his only Nobel Prize, but a strong case can be made that at least one of his other papers in 1905 deserved to win him a second. In the next edition of the UoA Scientific, we will delve into Einstein’s work on the statistical mechanics of atoms and how he finally put to rest the question of their existence — a result we now take for granted. 1. This quote is almost certainly apocryphal, but who wants to let facts get in the way of a good story? 2. You can’t prove I’m wrong. 3. This fact will be extremely significant when we come to discuss Einstein’s third paper. 4. Imagine you have a friend if needed.
References [1] T. Young, “I. The Bakerian Lecture. Experiments and Calculations Relative to Physical Optics,” Philosophical Transactions of the Royal Society of London, no. 94, pp. 1–16, 1804. [2] M. Vazquez and A. Hanslmeier, Ultraviolet radiation in the solar system, vol. 331. Springer Science & Business Media, 2005. [3] M. Planck, “On the Law of Distribution of Energy in the Normal Spectrum,” Annalen der Physik, vol. 4, no. 553, p. 1, 1901.

  • Mysteriously Massive Muon Magnetic Moment Might Mean Missing Maths

By Kevin Stitely Image from Wallpaperbetter In 2013, a huge effort was undertaken to transport a giant 15 m wide superconducting electromagnet over 5,000 kilometres from Brookhaven National Laboratory in New York to FermiLab National Accelerator Laboratory in Illinois, in the United States. The trip took 35 days of painstaking care, as even a few degrees of bending the extremely sensitive equipment would cause irreparable damage. The magnet was being moved in preparation for a series of experiments aimed at probing a fundamental particle called the muon, the electron’s more massive cousin. The magnet is used to maintain an extremely uniform magnetic field inside of which a beam of muons travels in a circle at nearly the speed of light. The experiments aim to measure a tiny wiggling motion that the particles undergo when subjected to a magnetic field, which might tell us something about the fundamental structure of matter. Fig. 1. The Muon g-2 storage-ring superconducting electromagnet arriving at FermiLab National Accelerator Laboratory. Photo from FermiLab Creative Services. According to the theory of quantum electrodynamics (QED), particles with a peculiar property called “spin” behave as tiny, tiny bar magnets, with a strength described by their so-called magnetic moment, which is often quantified in terms of their “g-factor.” Particles which possess spin will undergo a sort of spinning-top-like motion, called precession, when in the presence of a magnetic field. The magnetic moment of the particle then dictates the speed at which the particle precesses. The prediction and subsequent measurement of the g-factor of the electron became one of the first precision tests of the theory of QED. The first theoretical calculation was performed nearly a century ago by Paul Dirac, who predicted a value of exactly two [1]. Figure 2. Schematic of the Muon g-2 experiments. The calculation depends on how electrons interact with photons, quantum particles of light. Dirac used the simplest (relatively speaking) possible interaction, but as quantum theory was further developed it became clear to physicists that there was more to the story. As well as the particles already being considered, namely an electron and a photon, there are also “virtual” particles that can, briefly, appear out of the fabric of space itself to provide additional interaction pathways. Interactions with virtual particles of the so-called vacuum field cause the g-factor of particles to be ever so slightly larger than two. The part of the g-factor that is the result of these virtual particles is then g-2, which quantifies the extent of the particle’s interaction with the vacuum field. The corresponding modification to the magnetic moment is called the anomalous magnetic moment. The theoretical calculation of the anomalous magnetic moment of the electron was first carried out by Julian Schwinger in 1948 [2], and its modern refinements stand as a contender for the most accurate prediction ever performed in all of science, with over ten significant figures of agreement with experimental results. The result is engraved on Schwinger’s tombstone. The story is similar for the muon. The bare calculation without the interactions of virtual particles again yields g = 2, and the inclusion of virtual particle effects slightly increases the value. However, the muon is much more massive than the electron, and therefore interacts with virtual particles in the vacuum more strongly.
This is the key reason behind the interest in the magnetic moment of the muon specifically. Whilst the virtual particles that the electron interacts with are primarily photons, the muon features virtual interactions with a much wider class of particles, namely those responsible for the weak nuclear force: neutrinos and the W and Z bosons. These particles govern radioactive beta decay, and with it which atomic nuclei are stable. Interactions of the muon with virtual W and Z bosons cause a further increase to the anomalous magnetic moment, and make the theoretical calculations of the g-factor of the muon much trickier. Figure 3. Example Feynman diagrams of muon interactions via QED (a), weak interactions (b) and (c), and interaction with virtual hadrons (d). Illustrations of some of the most basic types of particle interactions that the muon can experience are shown in Fig. 3. The diagrams, called Feynman diagrams, show particles as lines, with interactions occurring where lines meet. The simplest virtual particle interaction in QED is shown in Fig. 3(a). Here a muon μ and an antimuon μ̄ (the muon's antiparticle) exchange a virtual photon γ before colliding and annihilating, creating another photon γ. These are the types of interactions that cause the anomalous magnetic moment in the electron. The muon, on the other hand, is also more strongly affected by interactions with particles belonging to the weak nuclear force, the W and Z bosons. Two of these possible interaction pathways are shown in Fig. 3(b) and (c). As well as interactions mediated by the weak nuclear force, there are also effects brought about by another fundamental force: the strong nuclear force. This force is associated with composite particles called hadrons, such as protons and neutrons. As shown in Fig. 3(d), the strong nuclear force contributes to the anomalous magnetic moment of the muon via the creation of virtual hadrons. The first results of the experiments were published this April [3]. The results for the muon g-factor, along with the currently accepted theoretical value [4], are: g (theory) = 2.00233183620 ± 0.00000000086 and g (experiment) = 2.00233184121 ± 0.00000000082. While the theoretical prediction and the experimental result may seem extremely close, the experiments are so incredibly precise that the observed difference here is significant. The important question that scientists are asking now is: is the difference significant enough? The difference observed here is 4.2 standard deviations, meaning that the probability of observing such an extreme result by chance is roughly 1 in 40,000. However, the currently accepted standard in particle physics to mark the discovery of new physics is a difference of 5 standard deviations, which represents a probability of about 1 in 2 million. The currently observed discrepancy between theory and experiment, which is in agreement with previously obtained experimental values from Brookhaven National Laboratory [5], is very exciting for physicists hoping that this could be an early indication of physics beyond the Standard Model. The Standard Model of particle physics is our current de facto understanding of the quantum world. It includes the quantum theory of electromagnetism (QED), the weak nuclear force, and the strong nuclear force.
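To see roughly where the "1 in 40,000" and "1 in 2 million" figures above come from, here is a short sketch. It assumes the quoted probabilities refer to the two-sided tail of a normal distribution (an assumption on my part that happens to reproduce the quoted ballpark figures), and it also evaluates the leading-order QED ("Schwinger") term mentioned above for comparison with the measured value; the particular value of the fine-structure constant is likewise my own input, not taken from the article's references.

```python
import math

def two_sided_tail(sigma):
    """Probability of a fluctuation at least `sigma` standard deviations from
    the mean, counting both tails of a normal distribution (an assumed convention)."""
    return math.erfc(sigma / math.sqrt(2))

for sigma in (4.2, 5.0):
    p = two_sided_tail(sigma)
    print(f"{sigma} sigma: p = {p:.2e} (about 1 in {1 / p:,.0f})")

# Leading-order QED correction to g (Schwinger's 1948 result), shared by the
# electron and the muon before heavier virtual particles are included.
alpha = 1 / 137.035999          # fine-structure constant (approximate)
g_leading = 2 * (1 + alpha / (2 * math.pi))
print(f"g (leading order) = {g_leading:.8f}  vs measured muon g = 2.00233184121 (quoted above)")
```

The leading-order value falls visibly short of the measured muon g-factor; the gap is precisely the room occupied by the heavier virtual particles (W and Z bosons and hadrons) discussed above, which is what makes the muon's anomalous moment so much harder to pin down theoretically than the electron's.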
The idea here is that the theoretical calculation, which includes all known interactions given by the Standard Model, could be missing either some interactions between particles already known to exist, or contributions from particles as yet unknown. Either case would give rise to a wealth of new physics and new insight into the fundamental constituents of matter. There is, however, another camp of theorists which contends that the experimental results of Brookhaven and FermiLab can indeed be explained by the Standard Model. The issue here comes down primarily to the muon's interactions with virtual hadrons, such as illustrated in Fig. 3(d). In any theoretical calculation of the interactions of fundamental particles with the vacuum field, not every interaction pathway can be accounted for, as there are actually infinitely many of them. Instead, only the most important types of interactions are considered. This approach, called perturbation theory, works excellently in the case of the electron because the more complicated the interaction pathway, the smaller its contribution. As a result, only finitely many pathways need to be calculated, and the rest can be thrown away if the role they play is so small it won't be detectable anyway. However, the situation is more complicated for the muon because the strong nuclear force, which governs how the hadrons interact, cannot be calculated in a perturbative manner. Instead, theorists resort to data-driven approaches that use results gathered from previous experiments to estimate the contributions of hadronic interactions [4]. This is the main source of uncertainty in the theoretical predictions of the muon magnetic moment. In a paper published the same day the FermiLab experimental results were unveiled [6], a team of theorists known as the BMW collaboration (so-called because of the cities most of the physicists are from: Budapest, Marseille, and Wuppertal) revealed a new theoretical calculation of the anomalous muon magnetic moment based on the Standard Model using lattice quantum field theory. Usually, quantum field theories treat spacetime as a continuum which is infinitely divisible — at least down to wherever our current theories fail. Instead, lattice quantum field theory treats spacetime as an extremely fine mesh of gridpoints, with particles only allowed to exist on the points, not in the spaces in between. This allows forces such as the strong nuclear force to be simulated on a (very large) computer in a brute-force fashion. Using this procedure, the BMW team calculated a value for the muon magnetic moment that appears to be much closer to the experimental results than the currently accepted theoretical prediction from the data-driven approach. This suggests that the currently observed discrepancy between theory and experiment could be reduced with Standard Model physics alone. In conclusion, the results as they currently stand are inconclusive. The recently divulged measurements of the muon magnetic moment at FermiLab align remarkably well with the previously established results from the experiments at Brookhaven. The results indicate a discrepancy between the experiments and the theoretical predictions offered by the Standard Model, but not to an extent that can definitively be called statistically significant by the standard set in particle physics. Nonetheless, there are high hopes that there is physics outside the Standard Model to be found in the muon.
The situation is further muddled by suggestions that, with the different computational methods offered by lattice quantum field theory, the current experimental observations can be adequately explained by the Standard Model. As of right now, the scientific community has yet to reach a consensus, and may not do so until theoretical and experimental techniques are honed over the coming decades. In either case, the stage is set for a wealth of new physics to be discovered as a result of the curiously large muon magnetic moment. References [1] P. A. M. Dirac, “The quantum theory of the electron. Part II,” Proc. R. Soc. A, vol. 118, no. 351, 1928. [2] J. S. Schwinger, “On quantum electrodynamics and the magnetic moment of the electron,” Phys. Rev., vol. 73, no. 416, 1948. [3] Muon g-2 Collaboration, “Measurement of the Positive Muon Anomalous Magnetic Moment to 0.46 ppm,” Phys. Rev. Lett., vol. 126, no. 141801, 2021. [4] T. Aoyama et al., “The anomalous magnetic moment of the muon in the Standard Model,” Phys. Rep., vol. 887, pp. 1-166, 2020. [5] Muon g-2 Collaboration, “Final report of the E821 muon anomalous magnetic moment measurement at BNL,” Phys. Rev. D, vol. 73, no. 072003, 2006. [6] S. Borsanyi et al., “Leading hadronic contribution to the muon magnetic moment from lattice QCD,” Nature, vol. 593, pp. 51-55, 2021.
