- Comorbidity in Mental Health and Medicine
By Gene Tang Photo by Joshua Fuller on Unsplash Comorbidity is a major challenge that has emerged in the fields of psychology, psychiatry, and medicine within the last few decades. While many people may not have heard the term before, the concept behind it may be quite familiar. Comorbidity is associated with adverse outcomes at every level, from the personal to the health care system. To this end, comorbidity is an issue that needs to be addressed. The term 'comorbidity' was first introduced by A. R. Feinstein, a well-known American doctor and epidemiologist, to refer to the co-occurrence of multiple mental or physical health conditions within an individual. The term became somewhat fashionable not only in psychiatry but also in general medicine [1], as its use became increasingly frequent across different fields. Comorbidity is not limited to the co-occurrence of multiple mental disorders or multiple physical disorders; simultaneous mental and physical disorders are also possible. Comorbid diseases and disorders have undoubtedly increased over the past few decades, and this trend is likely to continue in the coming years. The issue applies across many demographics; people of all ages, young or elderly, can suffer from numerous conditions at once. Norman Sartorius, a former president of the World Psychiatric Association, saw comorbidity as more of a rule than an exception [2], as it is now more prevalent than ever before. Thus, it is not at all uncommon for individuals to be diagnosed with comorbid illnesses. So what causes comorbidity? Several factors may be contributing to its increasing prevalence, and it is tricky, potentially even impossible, to pinpoint a single root cause. However, one possible cause may be linked to people's lifestyles in our contemporary world. An epidemic of unhealthy lifestyles, including changes in consumption and increased exposure to detrimental environments, may offer one explanation for this phenomenon. These lifestyle changes can lead to an increased intake of pollutants and mutagens, which in turn can increase one's immunological susceptibility to comorbid diseases and disorders. Another reason may be the successes and advances of medicine itself. Being able to prolong and sustain life without completely curing one disease could make it easier for patients to accumulate multiple illnesses simultaneously [2]. Even though the external factors mentioned above could partly explain the observed prevalence of comorbidity, the use of psychiatric classifications may explain another part of the story, particularly when it comes to mental health comorbidity. The DSM (Diagnostic and Statistical Manual of Mental Disorders) plays a central role in the theoretical debate on comorbidity [3]. The proliferation of diagnostic categories is among the arguments made to explain the emergence of psychiatric comorbidity [1]: an increase in the number of diagnostic categories may result in a higher likelihood of individuals being diagnosed with an illness and, consequently, increased comorbidity rates. Another argument points to the problem of diagnostic inflation. Frequent revisions of diagnostic criteria [5] have introduced new diagnoses and lowered, easier-to-meet thresholds, significantly increasing comorbidity rates [4]. 
The diagnostic criteria for anorexia nervosa exemplify this issue well. In DSM-5, the number of symptoms required for a patient to be diagnosed with anorexia nervosa was reduced from four to three [6]. This can be problematic because patients will not only have an increased likelihood of being diagnosed with anorexia nervosa, but potentially also higher comorbidity rates. With this in mind, we could probably agree that it is not unlikely for an individual to be diagnosed with additional disorders once diagnosed with one. Studies from different countries have found a similar statistical trend. In the US, 54% of those who met the criteria for at least one mental health disorder at some stage in their lives also met the criteria for two or more other disorders. Meanwhile, in Australia, 40% of those who met criteria for at least one disorder within a 12-month period also met the criteria for two or more disorders [7]. In New Zealand, the figure is around 37%, meaning that over a third of people with any disorder will also meet criteria for more than one [7]. Now let us look at a smaller scale, focusing on patterns of comorbidity in specific mental disorders. In NZ, of the people diagnosed with anxiety disorders, approximately 27% also suffered from a comorbid mood disorder, and 9% had a comorbid substance use disorder. Comorbidity evidently occurs quite often in anxiety disorders. In fact, social anxiety disorder (SAD), a major type of anxiety disorder, has been reported to have a considerably high comorbidity prevalence across different populations, sitting as high as 90% [8]. The high rate of comorbidity seen in SAD may partly be due to the finding that the disorder predicts the development of other disorders [9,10]. Disorders that develop subsequently to SAD, such as mood disorders (e.g., depression, bipolar disorder, and dysthymia), therefore commonly co-occur with anxiety disorders [8]. This overlap was also the most common comorbidity in the NZ population, as reported in Te Rau Hinengaro. Even though numerous studies on mental health comorbidity focus on anxiety disorders, we cannot disregard the fact that comorbidity can occur with disorders of any form. That is, comorbidity does not have to come as mental-mental comorbidity; it can also be mental-physical or physical-physical. Now that we know the potential causes of comorbidity and its prevalence in the population, we might ask ourselves: why is this a problem? Why is this important? Comorbidity is associated with a variety of adverse outcomes. First of all, overlapping symptoms or co-existing disorders may make prognosis extremely difficult. The presence of multiple disorders may create serious complications that prevent clinicians from producing accurate diagnoses and increase the rate of misdiagnosis. Secondly, comorbidity is associated with worse health outcomes. People who have comorbid diseases are more likely to experience increased disease severity. More than 59% of patients in NZ who suffered three or more disorders were classified as serious or severe cases [7]. This undoubtedly impacts mortality as well as patients' general quality of life. Another significant issue emerging from comorbidity, highlighted in several papers, is treatment difficulty [2,5,8]. The nature of co-existing diseases can result in inadequate and inappropriate treatment responses. 
When multiple diseases are comorbid, overlapping medication can be one of the problems. Some medications may restrict the effects of other medications required for a patient's additional disorders. That is, potential interactions between medications can induce unwanted side effects. For example, a drug prescribed for chronic obstructive pulmonary disease can have an antagonistic effect on diabetes treatment [11]. This shows that the medications or treatments prescribed for comorbid diseases have the potential to be inefficacious. Photograph by Robina Weermeijer on Unsplash (June 2019) Another major concern in diagnosing comorbidity is that clinicians sometimes fail to recognise, or simply overlook, the comorbidity that exists [2,8]. Norman Sartorius, the psychiatrist mentioned earlier, has made some noteworthy arguments regarding this. He argued that clinicians tend to focus on the disorders or diseases they are already familiar with. This seems to be commonly the case for mental-physical comorbidity. Non-psychiatric specialists tend to avoid making diagnoses of mental health disorders due to unfamiliarity and uncertainty about the treatments and diagnosis. Clinicians would often proceed with a single-disease treatment, expecting psychological symptoms to fade after treating the physical disease [2]. The same applies to psychiatrists: being unfamiliar with physical diseases, they might avoid conducting the examinations necessary to detect the presence of another concomitant disease. At the end of the day, comorbidity is something that will persist, and we cannot expect its rate to drop anytime soon. However, what can be done or changed is the way professionals deal with the issue. Health care should not just focus on treating one specific disease, but rather should treat the patient holistically [12]. Hence, clinicians need to be trained to become competent in treating comorbid conditions. They need to understand their responsibility in dealing with various diseases and diagnoses, even when some conditions are not their area of expertise. Non-psychiatric specialists should be able to confidently identify psychiatric disorders; likewise, psychiatrists should be able to deal with physical illnesses competently. Having said that, we still need to understand that this may not be entirely possible—not without a reorientation of medical education. Therefore, clinicians may want to consider involving other specialists in patients' diagnostic and treatment strategies. This would offer patients more accurate diagnoses as well as an assurance that comorbid diseases are not left undetected. To this end, the coordination and cooperation of professionals are essential in dealing with comorbidity. Comorbidity is a big challenge that people are often unaware of. Even if we have not heard about it before, that does not mean it is not present in the world around us. We cannot forget that there are people out there whose lives have been devastatingly affected by this problem. The seriousness of this issue should not be underestimated, whether we are health professionals or not. References [1] Maj, Mario. “‘Psychiatric comorbidity’: an artefact of current diagnostic systems?,” British Journal of Psychiatry 186, no. 3 (Jan 2005): 182-184. [2] Sartorius, Norman. 
“Comorbidity of mental and physical diseases: a main challenge for medicine of the 21st century,” Shanghai Archives of Psychiatry 25, no. 2 (Apr 2013): 68-69. [3] Van Loo, Hanna M., and Jan-Willem Romeijn. “Psychiatric comorbidity: fact or artifact?,” Theoretical Medicine and Bioethics 36, no. 1 (Feb 2015): 41-60. [4] Vella, G., M. Aragona, and D. Alliani. “The complexity of psychiatric comorbidity: a conceptual and methodological discussion,” Psychopathology 33, no. 1 (Feb 2000): 25-30. [5] Batstra, Laura, and Frances Allen. “Diagnostic Inflation: Causes and a Suggested Cure,” Journal of Nervous and Mental Disease 200, no. 6 (June 2012): 474-479. [6] American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5. Arlington: American Psychiatric Association, 2013. [7] Scott, Kate M. Te Rau Hinengaro: The New Zealand Mental Health Survey. Chapter 5: Comorbidity. Wellington: Ministry of Health, 2006. [8] Koyuncu, Ahmet, Ezgi İnce, Erhan Ertekin, and Raşit Tükel. “Comorbidity in social anxiety disorder: diagnostic and therapeutic challenges,” Drugs in Context 8 (2019): 21573. [9] Ohayon, Maurice M., and Alan F. Schatzberg. “Social phobia and depression: Prevalence and comorbidity,” Journal of Psychosomatic Research 68, no. 3 (2010): 235-243. [10] Kessler, R. C., P. Stang, H.-U. Wittchen, M. Stein, and E. E. Walters. “Lifetime co-morbidities between social phobia and mood disorders in the US National Comorbidity Survey,” Psychological Medicine 29, no. 3 (May 1999): 555-567. [11] Valderas, Jose M., Barbara Starfield, Bonnie Sibbald, Chris Salisbury, and Martin Roland. “Defining Comorbidity: Implications for Understanding Health and Health Services,” Annals of Family Medicine 7, no. 4 (July 2009): 357-363. [12] Gijsen, Ronald, Nancy Hoeymans, Francois G. Schellevis, Dirk Ruwaard, William A. Satariano, and Geertrudis A. M. van den Bos. “Causes and consequences of comorbidity: A review,” Journal of Clinical Epidemiology 54, no. 7 (July 2001): 661-674.
- Opinion: Dissonance in Attitudes between Blood Clotting in Vaccines and Oral Contraceptives
By Stella Huggins The combined oral contraceptive pill is known to cause blood clots in some people who use it. Photo from Reproductive Health Supplies Coalition on Unsplash (2019). As vaccines of all stripes begin to be distributed throughout the population, inevitable panic over side effects descends with them. New technology is always daunting if you do not understand it. The psychology behind the fear of new phenomena is complex [1], and often begins with denial and outright refusal to partake [2]. Every individual has the right to refuse anything they wish — bodily autonomy is absolutely a human right. The recent pandemic has created a fascinating development in modern science: three distinct types of vaccine [3]. Inevitably, some have unpredictable side effects, the most concerning being the blood clots associated with the Johnson & Johnson vaccine. The mechanisms through which the Johnson & Johnson vaccine acts are not new concepts. It functions by utilising adenoviruses, a common subgroup of medium-sized, double-stranded DNA viruses that infect humans, discovered in 1953 [4]. However, the technology has never before been used so widely across the population. The public has seized on the pitfalls of the vaccine with merciless scrutiny; this is a good thing in some respects. We should be wary of the scientific narrative. Scientists themselves are critical of it, constantly revising ideas once considered ultimate truth. However, an interesting observation is that the scrutiny placed on COVID-19 vaccines does not appear to extend to other medications distributed nearly as frequently in the population. To me, a striking difference in the priorities of public debate lies in who is being affected by the issues. The Johnson & Johnson vaccine has produced adverse side effects in a small minority of individuals. Blood clots, namely, have led certain sectors of society to put immense pressure on governments to discontinue its use [5]. Walking the line of caution is no easy feat, and the relative newness of the technology adds another layer of complexity to the matter. However, blood clots as a side effect of oral contraceptives are a well-documented occurrence; so where is the outcry for females taking this medication? The advent of birth control was in the 1960s [6]. It provided an incredible amount of liberation to women, uprooting the narrative that a female must have a child if she were to enjoy the same sexual liberation as men. There is an obvious imbalance in the burdens placed upon each gender when it comes to pregnancy, and technologies to alleviate this imbalance gifted us with strides towards bodily autonomy for females. The Johnson & Johnson vaccine has a 66% efficacy rate against symptomatic infection, and an 85% efficacy rate against severe COVID-19 resulting in hospitalisation. Oral contraceptives have a 99% efficacy rate with perfect use (taking the pill at the exact same time every day) and 91% when human error is factored into the equation. Blood clots have occurred in 6 of the 7 million patients receiving the Johnson & Johnson vaccine in the USA, one of the places it is being distributed [7], whilst oral contraceptives (using Ava30 ED as an example; there are numerous oral contraceptives) carry a risk of causing blood clots in about 5-7 out of every 10,000 females taking the medication, annually. 
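To put those two figures on a common scale, here is a rough back-of-the-envelope comparison in Python (a sketch using only the numbers quoted above; it deliberately ignores differences in exposure, since the vaccine is a one-off dose while the pill figure is an annual rate):

    # Blood clot figures quoted above, converted to a per-million rate.
    jj_rate = 6 / 7_000_000                        # J&J vaccine: ~6 cases in ~7 million recipients
    pill_low, pill_high = 5 / 10_000, 7 / 10_000   # Ava30 ED: 5-7 cases per 10,000 users per year

    print(f"Vaccine: {jj_rate * 1e6:.2f} clots per million vaccinated")
    print(f"Pill:    {pill_low * 1e6:.0f}-{pill_high * 1e6:.0f} clots per million users per year")
    print(f"The pill's annual rate is roughly {pill_low / jj_rate:.0f}-{pill_high / jj_rate:.0f} times higher")

Even allowing for the caveats, the size of the gap between the two rates is what makes the asymmetry in public outcry so striking.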
In some cases, the blood clot risk of an oral contraceptive is simply unknown (the estradiol + nomegestrol hormone pill Zoely, and the estradiol + dienogest hormone pill Qlaira [8]). This, to some extent, could reflect the priorities of the system: why is such a significant side-effect risk still unknown? The medications are clearly inherently different in a myriad of ways — a vaccine requires no effort from the patient in terms of administration. Oral contraceptives are clearly more established, and have more data on the long-term risks associated with these blood clots. Females are generally educated on the potential side effects of their birth control. There may be less quantifiable motives underlying a female's decision to accept the risks: the raft of side effects, social and economic gain, and bodily autonomy are all factors that play into the decision. It's a highly personal choice in both cases. But the general social attitudes towards them seem wildly different when the side effects are so similar. Of course, 100% success is an enormous task — body chemistry is ridiculously complex, and side effects are to be expected in some capacity. The curious part is the attitude towards such side effects. Birth control is distributed at a similar frequency in the population. In New Zealand, 89% of women aged 35-69 have used the pill [9]. Our vaccination rollout programme is still in progress (though it's important to note New Zealand is using Pfizer, a vaccine not associated with blood clots), the intent being to vaccinate as many people as possible and cultivate herd immunity [10]. These numbers do not pertain directly to the blood clot parallel, but instead illustrate the frequency at which vaccines and oral contraceptives are distributed in populations. It raises the question: would the standard of birth control's efficacy in relation to its side effects (which are by no means limited to blood clots) be tolerated today? Would the outcry be so outrageous, passionate and personal? Do groups who are unaffected by the immediate effects of such medications have any regard for the health of women? Both medications were produced in climates of urgency relating to a health issue. It's clear that the COVID-19 pandemic needed a far more rapid solution, but women were still dying of at-home abortions — an epidemic of sorts, though a far more socially oriented one. I am well aware that the drugs are not directly comparable, though I do urge you to consider society's overall attitudes to women's health, and compare these with the public's attitudes to the new vaccines becoming available. It appears to reflect an insidious structural problem in the healthcare system, rooted in a historically male-dominated perspective on female health. It's something to ponder, when women across the globe are accepting such a risk while passionate outcry erupts over a new medication. Part of this can be attributed to the fact that people born into an age where oral contraceptives are the norm may have less trouble accepting their risks — after all, their predecessors have coped with this subpar quality of life, how bad can it be? I can imagine that there was a degree of hesitancy when the drug was first rolled out worldwide. Again, I am by no means advocating that we blindly accept subpar vaccine safety, and erring on the side of caution is perfectly acceptable. We should absolutely not stride forward with a solution that isn't quite right. 
However, the perception of the facts of the situation needs to change. Oral contraceptives are well established and socially accepted in most parts of the world — and, as I have detailed, the standards applied to them are therefore slightly different — but maybe it's time we reconsidered these entrenched attitudes. Image from Hakan Nural on Unsplash (2020) Health psychology is difficult. Our attitudes to medication are understandably deeply personal, intertwined with our identity, our perceptions of our personal safety, the safety of our loved ones, and cultural attitudes. It's not easy to ask people to unpick these things, especially in a tumultuous period in history — in times of change, people burrow further into their previously entrenched beliefs; take the global financial crisis of 2008 as an example [11]. It's likely that a pandemic is no exception to these psychological rules. Oral contraceptives are by no means the only drugs that cause blood clots as a side effect. An analysis by Ramot, Nyska and Spectre, a detailed consolidation of all medicines known to cause blood clots, revealed a fascinating trend. A huge number of clot-inducing drugs exist; however, the vast majority are used for conditions that are life-threatening or seriously impair quality of life. Chemotherapy drugs, antipsychotics, antidepressants, pain relievers, and treatments for acute skin conditions, macular degeneration, and anemia are among the classes of medication that carry blood-clotting side effects [12]. Whilst not to degrade the physical benefits that contraception can bring (hormone and mood regulation, alleviation of adverse menstrual experiences, treatment of mild endometriosis and PCOS cases, etc.), as well as the social-societal benefits of females having increased choice in their bodily autonomy, it seems as if the costs sometimes outweigh the benefits [13,14,15]. Or, at least, the technology simply isn't being improved, due to a disregard (whether intentional or unintentional) for improvement and progress, and a satisfaction with subpar female healthcare. The priorities of our society are overwhelmingly directed towards other areas of progress, and the treatment of female-specific ailments is decidedly average. It's coming to light now, too little too late, that medical research done on men and extrapolated to women is simply not directly transferable [16]. Women are not miniature versions of men, and their biochemistries are not parallel — at the most basic level, estrogen and testosterone production in each sex are inherently different. Female healthcare has long been neglected. The profession of medicine is finally becoming more gender-balanced [17], but this does not necessarily change the structures it was built on. This is not to say that we should tear down frameworks of healthcare in a critical period, but rather that we should be acutely aware of their existence and origins. We must be cognisant of the fact that marginalised groups can easily fall under the radar in times of chaos [18,19]. The attitudes towards side effects are just one symptom of a larger ailment afflicting health perspectives. References [1] Riezler, K. (1944). The Social Psychology of Fear. American Journal of Sociology, 49(6), 489–498. https://doi.org/10.1086/219471 [2] Ropeik, D. (2013). How society should respond to the risk of vaccine rejection. Human Vaccines & Immunotherapeutics, 9(8), 1815–1818. https://doi.org/10.4161/hv.25250 [3] World Health Organization. (n.d.). Coronavirus disease (COVID-19): Vaccines. World Health Organization. 
https://www.who.int/news-room/q-a-detail/coronavirus-disease-(covid-19)-vaccines [4] Desheva, Y. (2019). Introductory chapter: Human adenoviruses. In Y. Desheva (Ed.), Adenoviruses. IntechOpen. https://doi.org/10.5772/intechopen.74757 [5] Ledford, H. (2021, April 16). COVID vaccines and blood clots: five key questions. Nature News. https://www.nature.com/articles/d41586-021-00998-w [6] Public Broadcasting Service. (2010, May 11). A brief history of the birth control pill. PBS. https://www.pbs.org/wnet/need-to-know/health/a-brief-history-of-the-birth-control-pill/480/ [7] Centers for Disease Control and Prevention. (2021, April 23). Agencies underscore confidence in vaccine's safety and effectiveness following data assessment; available data suggest potential blood clots are very rare events. https://www.cdc.gov/media/releases/2021/fda-cdc-lift-vaccine-use.html [8] Medsafe. (n.d.). Oral contraceptives and blood clots. https://www.medsafe.govt.nz/consumers/leaflets/oralcontraceptives.asp [9] Chesang, J., Richardson, A., Potter, J., & Coope, P. (2016). Prevalence of contraceptive use in New Zealand women. The New Zealand Medical Journal, 129(1444), 58–67. [10] World Health Organization. (n.d.). Vaccines and immunization: What is vaccination? https://www.who.int/news-room/q-a-detail/vaccines-and-immunization-what-is-vaccination [11] Sufi, A. (2018, September 23). Why you should blame the financial crisis for political polarization and the rise of Trump. Evonomics. https://evonomics.com/blame-financial-crisis-politics-rise-of-trump/ [12] Ramot, Y., Nyska, A., & Spectre, G. (2013). Drug-induced thrombosis: An update. Drug Safety, 36(8), 585–603. https://doi.org/10.1007/s40264-013-0054-6 [13] Martinelli, I., Battaglioli, T., & Mannucci, P. M. (2003). Pharmacogenetic aspects of the use of oral contraceptives and the risk of thrombosis. Pharmacogenetics, 13(10), 589–594. https://doi.org/10.1097/00008571-200310000-00002 [14] Coata, G., Ventura, F., Lombardini, R., Ciuffetti, G., Cosmi, E. V., & Di Renzo, G. C. (1995). Effect of low-dose oral triphasic contraceptives on blood viscosity, coagulation and lipid metabolism. Contraception, 52(3), 151–157. https://doi.org/10.1016/0010-7824(95)00148-4 [15] Rosing, J., Middeldorp, S., Curvers, J., Thomassen, M. C., Nicolaes, G. A. F., Meijers, J. C. M., Bouma, B. N., Büller, H. R., Prins, M. H., & Tans, G. (1999). Low-dose oral contraceptives and acquired resistance to activated protein C: a randomised cross-over study. The Lancet, 354(9195), 2036–2040. https://doi.org/10.1016/s0140-6736(99)06092-4 [16] Zucker, I., & Prendergast, B. J. (2020). Sex differences in pharmacokinetics predict adverse drug reactions in women. Biology of Sex Differences, 11(1). https://doi.org/10.1186/s13293-020-00308-5 [17] The Lancet. (2019, December). Gender equality in medicine: change is coming. https://www.thelancet.com/journals/langas/article/PIIS2468-1253(19)30351-6/fulltext [18] Cairns, D., Growiec, K., & de Almeida Alves, N. (2014). Another ‘Missing Middle’? The marginalised majority of tertiary-educated youth in Portugal during the economic crisis. Journal of Youth Studies, 17(8), 1046–1060. 
https://doi.org/10.1080/13676261.2013.878789 [19] Kantamneni, N. (2020). The impact of the COVID-19 pandemic on marginalized populations in the United States: A research agenda. Journal of Vocational Behavior, 119, 103439. https://doi.org/10.1016/j.jvb.2020.103439
- University Rankings: For Whom, for What, and Why?
By Alex Chapple Photo by Vasily Koloda on Unsplash In year 13, many students face one of the most important decisions of their lives: which university should they go to? You may have seen videos online called “college decision videos”, where high school students in other countries record their live reactions to finding out whether they got into the university of their dreams. Some jump for joy and hug their families, and some crumble into tears as reality hits them. Perhaps not so much in New Zealand, but in much of the world (for example Japan), society is built around an academic meritocracy where the university you study at determines much of your life after graduation. There are huge industries built around university admissions and entrance exams around the world, like tuition services for specific universities and private college counsellors. Often parents will do whatever they can to get their kids into prestigious universities, because for them it's a status symbol. This was even the subject of a Netflix documentary, “Operation Varsity Blues: The College Admissions Scandal”, in which parents bribed athletic coaches through a middleman to get recommendations to the admissions office for their kids. Universities are a place for research and higher education, but they can be just as much about prestige and status. Often this status is driven by university rankings. There are many different rankings around the world, but the three most prominent global rankings are: the Quacquarelli Symonds (QS) World University Rankings, the Times Higher Education World University Rankings, and the Academic Ranking of World Universities (also known as the Shanghai Ranking). In New Zealand, all eight universities are ranked in the top 500 globally [1]. At the time of writing, the University of Auckland is ranked 81st in the QS World University Rankings, 147th in the Times Higher Education World University Rankings, and somewhere between 201st and 300th in the Academic Ranking of World Universities. From this, we can already see that there is great variation between the rankings. There are over 20,000 universities globally [2], so a university ranked in the top 500 is already in the top 3% of universities worldwide (you have probably seen the buses around Auckland with this sort of statistic about AUT). Despite many universities being in the top 3%, we often regard only the universities ranked in the top 50 or 100 as the “worthy” universities. This is concerning, because many universities not ranked in the top 100 worldwide have excellent courses, world-leading research, and great learning environments. These rankings shine the light only on the “top” universities, driving resources and attention to them and them only, leaving other universities in the dust. This isn't beneficial for society as a whole, because the few get it all. Additionally, we've already seen that there can be large variations between the rankings. What does it really mean to be a “top” university? Which rankings, if any, can we trust? Each of the three main rankings makes its methodology public, so let's look at how they rank universities and how they differ from one another. First, the QS World University Rankings [3]. 
The ranking is broken down into 6 categories, each weighted as follows:
Academic Reputation - 40%
Citations per Faculty - 20%
Student to Faculty Ratio - 20%
Employer Reputation - 10%
International Faculty Ratio - 5%
International Student Ratio - 5%
Academic reputation is calculated from a survey sent out to over 100,000 academics around the world; it is now the largest survey of its kind. Citations per faculty are calculated from the number of citations the university's publications have gained in the past 5 years, divided by the number of academics at the institution. It is a measure of the research output of the institution, treating the number of citations research attracts as the indicator of how important and valuable that research is. The student to faculty ratio is the ratio between the number of students and the number of staff at the university; QS suggests this is a measure of the quality of teaching at the university. The international faculty and student ratios are a measure of how good the university is at attracting talent from overseas. Much of the QS World University Ranking is therefore based on institutional reputation, student to staff ratio, and citations per faculty. It can be a good indicator of how the university is perceived by others, but it is not very indicative of the experience students can expect at the university.
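To make the weighting arithmetic concrete, here is a minimal Python sketch of how a composite score like QS's is assembled from category scores (the category scores below are invented for illustration, and real rankings also normalise each indicator before weighting, which is omitted here):

    # Hypothetical category scores for one university, each on a 0-100 scale.
    scores = {
        "academic_reputation": 92.0,
        "citations_per_faculty": 75.0,
        "student_to_faculty": 60.0,
        "employer_reputation": 88.0,
        "international_faculty": 95.0,
        "international_students": 97.0,
    }

    # QS category weights, as published in the methodology above.
    weights = {
        "academic_reputation": 0.40,
        "citations_per_faculty": 0.20,
        "student_to_faculty": 0.20,
        "employer_reputation": 0.10,
        "international_faculty": 0.05,
        "international_students": 0.05,
    }

    # The overall score is simply the weighted sum of the category scores.
    overall = sum(scores[k] * weights[k] for k in weights)
    print(f"Overall score: {overall:.1f}")  # 82.2 for these invented inputs

Nudging any weight (say, moving 5% from citations to reputation) changes the overall score, and therefore the ordering of universities, without any change in the underlying data; that arbitrariness is discussed further below.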
Next, the Times Higher Education World University Rankings [4]. Their methodology is broken into 5 main categories, some of which are further broken down into smaller subcategories. The main 5 categories are weighted as follows:
Teaching - 30%
Research - 30%
Citations - 30%
International Outlook - 7.5%
Industry Income - 2.5%
Teaching is further broken down into 5 subcategories: a reputation survey (15%), staff to student ratio (4.5%), doctorate to bachelor's ratio (2.25%), doctorates awarded to academic staff ratio (6%), and institutional income (2.25%). The research category is broken down into 3 subcategories: 18% is attributed to a reputation survey, 6% to research income (which they say reflects the importance of the research being done), and 6% to research productivity. Interestingly, Times Higher Education admits that research income is a controversial metric because it can heavily depend on national policy and economic circumstances [5]. Nevertheless, they argue that it is an important metric because research income is vital for conducting world-class research. Research productivity is measured by the number of papers published per academic; Times Higher Education says this is a measure of how good the university is at getting its publications into high-quality peer-reviewed journals. Industry income is funding received from commercial sources, for example businesses commissioning research. This category describes the commercial impact of the university's research, and is unique among the three rankings. The international outlook category measures how good the university is at attracting students and academics from around the world, calculated from the international staff and student ratios as well as the number of research publications with international co-authors. The Times Higher Education ranking has a stronger focus on teaching and education than the other two, but teaching quality is quite subjective and hard to gauge from a student's perspective. Despite the strong teaching focus, the ranking is still more than 60% determined by the research output of the institution. Finally, we have the Academic Ranking of World Universities (the Shanghai Ranking) [6]. This ranking was originally created as a way for Chinese universities to see how they stack up against the global competition. As the name suggests, it is a ranking based almost solely on the academic performance and prestige of the institution. The methodology is as follows:
Number of alumni winning Nobel Prizes and Fields Medals - 10%
Number of staff winning Nobel Prizes and Fields Medals - 20%
Number of highly cited researchers in 21 broad subject categories - 20%
Number of articles published in Nature and Science - 20%
Number of articles indexed in the Science Citation Index - 20%
Per capita academic performance of the institution - 10%
The graph shows the percentage of the 15+ year old population with a tertiary degree over time. Source: https://ourworldindata.org/tertiary-education. The Academic Ranking of World Universities is heavily influenced by whether academic staff and alumni at the institution have won these prestigious awards and medals. When you look at the actual rankings, you see that they are dominated by institutions that have been around for centuries. By using Nature, Science, the Nobel Prize and the Fields Medal as indicators of high academic performance, the Shanghai Ranking itself amplifies the prestige and status of these prizes and journals. (Note: the Fields Medal is roughly the Nobel Prize equivalent for mathematics.) You may disagree with the methodology of these rankings, or with the general philosophy behind ranking institutions in this way. However, there is one undeniable fact: university rankings matter. They matter to the students who attend or want to attend these universities; they matter to the universities themselves, as they heavily determine the future of an institution; and they matter to governments, which often use these metrics to decide on the allocation of funding. Although the three rankings are transparent about how the total score is calculated, how they decide the weighting of each category is quite arbitrary. Who is to say that citations per faculty should be 20% of the overall score instead of 15%? In a survey conducted by Professor Ellen Hazelkorn at Technological University Dublin, more than half of the universities surveyed had taken strategic actions because of the rankings [7]. Sometimes the strategic decisions that universities make can have a negative influence on the teaching or learning experience of students, but the university considers this justified because being highly ranked can lead to more students, more funding, and more prestige. For many smaller and lesser-known universities, the rankings can mean life or death, as they are crucial for attracting students and getting funding from the government. It should also be noted that most of the data used for these rankings is self-reported by the universities. As you can imagine, this has caused some issues in the past, and there have been numerous reports and scandals of universities bending the statistics or outright cheating to climb up the rankings. 
In 2020, Temple University was fined $700,000 by the US Department of Education for submitting fraudulent data about its online MBA program, which had helped the course rank top in the country for several years. This year, the course is ranked 88th, tied with six other universities [8]. Back in 2015, Trinity College Dublin was accused of trying to influence academics who take part in the annual reputation surveys. Trinity College Dublin issued a statement of regret but said its intentions were in good faith [9]. There are other concerns around the legitimacy of the rankings as well. Recently, researchers at the Centre for Studies in Higher Education at the University of California, Berkeley published a paper raising concerns about conflicts of interest between the QS World University Rankings and some universities. QS has a consulting business that helps universities in various aspects of their operations [10]. Igor Chirikov, a senior researcher at the Centre for Studies in Higher Education, argues that this consulting business is inappropriately influencing the rankings of universities [11]. The study looked at 28 universities, 22 of which had collectively spent nearly three million US dollars on QS consulting services. Universities that used the consulting service frequently rose approximately 140 positions higher than they would have otherwise. Though other ranking institutions hold events for universities, their revenue is not nearly as reliant on universities as QS's is. For example, Times Higher Education makes money from advertising and through its subscription-based content. Chirikov says that this type of conflict of interest is similar to those seen in other sectors of the economy, where consultation leads to biased evaluation. So should you trust university rankings, and are they useful? The fact of the matter is that, rankings aside, there are plenty of great universities at which we can all get a great education. The differences between universities can be quite arbitrary, and whatever methodology these rankings employ, they will never be able to encapsulate the entire experience and calibre of a university in a single number. It's important for us to be aware of how universities are “ranked” and what this actually says, or doesn't say, about the quality of a university. A university is a place with vast responsibilities, from educating the next generation, to doing cutting-edge research that changes the world and the way we think, to helping other sectors of the economy grow. Universities play a huge role in our society, and for better or for worse, they will continue to make strategic decisions around the rankings. One thing is for certain: these rankings aren't going anywhere anytime soon. So maybe it's time not to take them too seriously ourselves. 
References: [1] Universities NZ, 2021, Introducing NZ’s eight universities, https://www.universitiesnz.ac.nz/universities [2] TruOwl, 2018, How many universities exist in the world?, https://truowl.com/university/how-many-universities-exist-in-the-world/ [3] QS Quacquarelli Symonds Limited, April 20, 2021, Ranking Methodology, https://www.topuniversities.com/qs-world-university-rankings/methodology [4] Duncan Ross, Times Higher Education, September 2020, Times Higher Education Ranking Methodology, https://www.timeshighereducation.com/sites/default/files/breaking_news_files/the_2021_world_university_rankings_methodology_24082020final.pdf [5] Times Higher Education, September 2019, Times Higher Education Ranking Methodology, https://www.timeshighereducation.com/world-university-rankings/world-university-rankings-2020-methodology [6] Shanghai Ranking, 2020, Shanghai Ranking Methodology, http://www.shanghairanking.com/ARWU-Methodology-2020.html [7] Hazelkorn, Ellen. (2019). University Rankings: there is room for error and "malpractice". http://doi.org/10.5281/zenodo.2592196 [8] Scott Jaschik, December 7, 2020, Education department fines Temple $700,000, https://www.insidehighered.com/admissions/article/2020/12/07/education-department-fines-temple-700000-rankings-scandal [9] Carl O’Brien, March 22, 2016, Trinity College Dublin accused of trying to sway world university rankings, https://www.irishtimes.com/news/education/trinity-college-dublin-accused-of-trying-to-sway-world-university-rankings-1.2582286 [10] Scott Jaschik, April 27, 2021, Buying Progress in Rankings?, https://www.insidehighered.com/admissions/article/2021/04/27/study-charges-qs-conflicts-interest-international-rankings [11] Chirikov, I. (2021). Does Conflict of Interest Distort Global University Rankings? UC Berkeley: Center for Studies in Higher Education. Retrieved from https://escholarship.org/uc/item/8hk672nh
- Explained: The New Era of Macs
By Struan Caughey Apple is moving from Intel x86 processors to M1 ARM processors. Photo by David Monje on Unsplash (2020). The end of 2020 saw the introduction of Apple's next generation of hardware, all powered by their in-house M1 processors [1]. They announced the chip on November 10th, and one week later it could be purchased in their MacBook Pro, MacBook Air and Mac mini computers [2]. This change was not just to an in-house processor, but to a new architecture altogether — ARM, or Advanced RISC Machines. The last time Apple went through a similar architecture change was in 2006 [3], when they shifted from PowerPC chips, which they had helped develop, to Intel's x86 design. They made this change because, at the time, Intel, and therefore x86, appeared to have the strongest roadmap ahead, which, considering the partnership lasted 15 years, appears to have been a good decision. When we look at other areas of the industry, we can see a similar shift towards ARM. The title of the world's most powerful supercomputer is currently held by Fugaku, which is based in Japan; it is the first holder of this title to run on the ARM architecture [4]. On the other end of the processing power scale we have phones: almost all mobile computer chips are ARM-based, while currently nearly all Windows PCs are x86 [5]. To introduce the concept of these architectures, we have to work from first principles. Computers and Processors A computer is an electronic device designed to store and manipulate data, usually through a system of 1’s and 0’s: binary. Computers range from what most people think of when they hear the word — a laptop or desktop — to phones, smart light bulbs, cars, and even vending machines, all of which either are or contain a computer. The powerhouse of a computer, which processes its data, is called the processor. The processor is a chip made up of billions of the simplest logical units: transistors [6]. Processors come in a range of forms, each of which is optimised for its specific tasks. For the aforementioned Fugaku supercomputer, this means maximising computational power while maintaining some level of efficiency to manage costs. For high-end home PCs and high-performance servers, the weighting is towards power at all costs. Vending machines, light bulbs, and other small items are purely about whatever is cheapest. Lastly, for phones, and to a similar extent laptops, the decision is all about power efficiency and what will draw the least charge from the battery. These goals can be pursued in many ways, such as through processor size: the smaller the transistors, the denser the processor, which can make it more powerful and more efficient, while processors with larger transistors are much cheaper to produce. While there are many ways that processors differ from each other, in this article we will be looking at architecture. Architecture Architecture is the design of the processor. It determines how data is processed within the chip, and because of this, it also shapes how data is sent and received. There are many types of architecture, each with its own benefits and drawbacks; however, they generally follow one of two designs, RISC and CISC. RISC stands for Reduced Instruction Set Computer and CISC, Complex Instruction Set Computer. We will break this down word by word. First, you have "reduced" versus "complex" — the one letter that differs between the two acronyms. This is in reference to the type of instructions the architecture can handle. 
RISC will only process simplified or reduced instructions, whereas CISC can handle more specialised and complex tasks. Next, we have "instruction". This can be anything from ADD to STORE or PROD. Essentially, instructions are directions for what the computer is to do at that moment; each one triggers a specific sequence to be carried out, such as storing a piece of data or adding two numbers. Next, we have "set", which states that there isn't just a single instruction but that these reduced or complex instructions come in a set. Below I have illustrated the different approaches these two systems take to multiplying two numbers [7].
CISC:
MULT loc1, loc2
This takes the number stored in location 1, multiplies it by the number in location 2, and saves the result in location 1. This all happens in one instruction.
RISC:
LOAD register1, loc1
LOAD register2, loc2
PROD register1, register2
STORE loc1, register1
This does the same thing as before; however, it has been reduced to the most fundamental instructions. The data can only be processed from the registers and not from the memory locations, meaning extra steps are needed to move values into the processor's registers. Registers are small storage areas on the processor itself, making them extremely quick to access, but they have very limited capacity. From this, it may seem that CISC is significantly better, needing only one step: it appears simpler to implement, and it does not tie up register space. This isn't correct, however, as the MULT instruction incorporates all the LOAD and STORE work within it. This brings out the main difference: CISC can have one instruction take multiple clock cycles, whereas a RISC instruction will only ever take one. There are genuine benefits to CISC's method, though. The lower number of instructions requires less memory to store, which makes CISC less RAM-intensive. Also, the computer's compiler, the system which converts high-level human-written code into instructions like those above, has to put in more work to fully reduce a program to RISC instructions, again using more resources. CISC must be better then… right? It's complicated. While the above points are true, they also pose some issues. Complex instructions require more complex hardware and more specialised circuitry within the chip. This may make the processor better at those specific tasks, but it leaves much of the processor unusable for more general tasks, as sections are reserved for the additional instructions. Because RISC has far fewer base instructions, its circuitry can be designed to be far more general-purpose and better optimised, resulting in smaller and more efficient processors. The simpler circuits can also be packed at higher density, meaning a same-sized processor can be more powerful. RISC essentially relies on software to do much of what CISC does in hardware.
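As a toy illustration of where the work goes in each style, here is a short Python sketch that mimics the multiply example above (the instruction behaviour and the one-cycle-per-RISC-instruction assumption are simplifications for illustration, not a model of real hardware):

    # Simulated memory and registers for the multiply example above.
    memory = {"loc1": 6, "loc2": 7}
    registers = {}

    def multiply_risc():
        # Four reduced instructions, each assumed to take one clock cycle.
        registers["r1"] = memory["loc1"]                     # LOAD register1, loc1
        registers["r2"] = memory["loc2"]                     # LOAD register2, loc2
        registers["r1"] = registers["r1"] * registers["r2"]  # PROD register1, register2
        memory["loc1"] = registers["r1"]                     # STORE loc1, register1
        return 4  # instructions issued (~4 cycles)

    def multiply_cisc():
        # One complex instruction; the load/multiply/store work still happens,
        # but inside the chip's circuitry, spread over several clock cycles.
        memory["loc1"] = memory["loc1"] * memory["loc2"]     # MULT loc1, loc2
        return 1  # one instruction issued (still ~4 cycles of work in hardware)

    print(multiply_risc(), memory["loc1"])  # 4 instructions -> loc1 is now 42
    memory["loc1"] = 6                      # reset so both versions start alike
    print(multiply_cisc(), memory["loc1"])  # 1 instruction  -> loc1 is again 42

Roughly the same work gets done either way; the difference is whether the compiler (RISC) or the processor's circuitry (CISC) is responsible for breaking the operation down.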
x86 vs ARM This brings us back to x86 vs ARM, where we can see how this all ties in. x86 was developed way back in 1978 by Intel [8]. It is based on CISC, and while it has undergone iterations, it is essentially the same as it was over 40 years ago. Intel owns x86, and currently only AMD and Intel hold licences to produce 64-bit desktop processors with the x86 architecture [9]. "64-bit" refers to the amount of data a processor can handle in one cycle; almost all modern home computers are 64-bit, though older ones may still be 32-bit. ARM processors, on the other hand, use the RISC design, and the current generation of the architecture was announced in October 2011 [10]. The ARM architecture is owned by Arm itself, which does not manufacture its own chips, instead licensing the designs out to many different chip manufacturers. Thanks to ARM's more straightforward design and greater efficiency, and phones' reliance on batteries, ARM chips are becoming increasingly dominant within the smartphone market. A similar trend is happening with smaller, cheaper devices such as Wi-Fi modems and other low-powered devices. The openness of the licensing model also helps: many more companies are able to produce these chips, generating more competition, which should lead to better-value chips with better features being developed. A close up of a silicon wafer. Photo by Laura Ockel on Unsplash (2018). Apple's Approach Apple has decided to pivot away from Intel and x86 altogether, now producing its M1 chips in-house, all built on the ARM architecture. Because of this shift, Apple has had to completely redesign many of its programs and its operating system as a whole. Not only this, but all programs made for macOS will now have to be adapted to the new architecture, which relies on third-party developers restructuring their programs entirely. There is a system called Rosetta 2 which can run x86 applications on M1 chips, and while this works surprisingly well, it cannot match a natively designed program; it is a temporary solution [11]. For these reasons, Apple's move to M1 is a massive change, much more than a standard CPU upgrade. This also explains why Apple is going for a full hard swap instead of a gradual shift or running both systems simultaneously: for as long as developers support the older x86 platform, they have to develop two apps in parallel. Because of this, Apple, as well as third-party app developers, will inevitably move away from legacy support, eventually making x86 Mac programs redundant, though we do not know how long this will take [12]. Microsoft's Approach Apple's move to ARM has led to cheaper laptops with improved battery life and greater performance [13], which begs the question: why hasn't Windows gone down the same route? The answer is that Apple has complete control over its entire system, from hardware to software, and third-party developers have to follow its lead. Microsoft, on the other hand, is only in charge of the operating system, while other companies are in charge of the hardware. This means there is far more pressure on Microsoft to maintain the current version of Windows. Interestingly, a version of Windows for ARM, called Windows RT, was released back in 2012 [14], but its lack of software led to a chicken-and-egg problem: poor software support meant few manufacturers produced computers for Windows RT, which in turn left software developers with little market and therefore little incentive to program for this version. The result was a version that is clunky and has significant compatibility issues. Emulation is working to resolve these issues [15], but this has proved a complex cycle to break without the control Apple holds. Another point that makes it difficult for Windows to shift across is backwards compatibility. Many features are still supported dating back to MS-DOS in the 1980s. For instance, a folder still cannot be named 'CON', because the name was reserved back then and the restriction remains for compatibility [16]. 
While Apple has often opted for a fresh start in each OS iteration, Microsoft has maintained these features, so shifting entirely across to ARM would require them to change this policy or rewrite much of this legacy behaviour into the updated edition. Future ARM is receiving constant improvement and attention, and unification between different platforms such as tablets, phones, and computers could reduce programming redundancy if they shared systems. Coupled with better emulation to improve compatibility, we may in the future see Windows and its hardware partners follow Apple's lead into ARM. Google looks to be pursuing a new OS called Fuchsia, which could potentially unify its Android OS and Chrome OS [17]. Chrome OS runs on both x86 and ARM natively [18]; however, if Google shifts all of its chips onto one architecture with one OS, this may no longer be required. Lastly, these are not the only two architectures. x86 is the industry standard and has been for a long time, whereas ARM is the new kid on the scene; however, there are also other options such as RISC-V, which is entirely open-source and has no licensing fees [19]. NVIDIA was looking at building future GPUs* on this system [20], but this may be in question now that they are trying to acquire ARM. RISC-V is also RISC-based, like ARM, but currently doesn't have the same industry backing. *A GPU, or graphics processing unit, is much like a CPU except that it excels at running many less intensive processes at the same time, making it ideal for graphics and games. Conclusion In this article, we have outlined why ARM versus x86 has become a live discussion with the introduction of the new Mac M1 chip. We expanded on this by outlining the difference between the two architectures, ARM being RISC and x86 CISC; broke down how the two design methods work and the benefits of each; and discussed the history, differences and licensing models of x86 and ARM. Furthering this discussion, we looked at Apple's and Microsoft's business models and how they have affected each company's relationship with its architecture. We then had a brief look into the future of computer architecture and a potential third option. This should give the reader a bit more insight into what the change in the new Macs means and why it was such a large decision for Apple to pursue. To draw a conclusion, we can see that Apple weighed the large short-term cost of changing architecture against the longer-term benefits of the swap. ARM's inherently greater efficiency, more lenient licensing and cheaper construction were too good to resist, which led Apple to end their 15-year partnership with Intel. The question now is not if, but when, Microsoft will follow suit. References [1] Apple. "Apple Event November 10, 2020", https://www.apple.com/apple-events/november-2020/. [2] Hanson, Matt. “Apple MacBook Air (M1, 2020) review”, April 2020, https://www.techradar.com/nz/reviews/apple-macbook-air-m12020. [3] Apple. “Apple to Use Intel Microprocessors Beginning in 2006”, 06/06/2005, https://www.apple.com/newsroom/2005/06/06Apple-to-Use-Intel-Microprocessors-Beginning-in-2006/. [4] TOP500. "TOP500 Expands Exaflops Capacity Amidst Low Turnover", November 2020, https://www.top500.org/lists/top500/2020/11/press-release/. [5] Bajarin, Tim. "ARM Aims to Take a Bite Out of Intel's PC Market Share", 27/07/2018, https://au.pcmag.com/processors/58283/arm-aims-to-take-a-bite-out-of-intels-pc-market-share. [6] Apple. 
“Apple unleashes M1”, 10/11/2020, https://www.apple.com/nz/newsroom/2020/11/apple-unleashes-m1/. [7] Stanford University. "RISC Architecture: RISC vs CISC", accessed 26/05/2021, https://cs.stanford.edu/people/eroberts/courses/soco/projects/risc/risccisc/. [8] Intel. “Intel’s First 16-bit Microprocessor”, accessed 26/05/2021, https://www.intel.com/content/www/us/en/history/virtual-vault/articles/the-intel-8086.html. [9] Tang, Greg. “Intel and the x86 Architecture: A Legal Perspective”, 04/01/2011, https://jolt.law.harvard.edu/digest/intel-and-the-x86-architecture-a-legal-perspective. [10] ARM. “ARM Discloses Technical Details Of The Next Version Of The ARM Architecture”, 27/10/2011, https://web.archive.org/web/20190101024118/https://www.arm.com/about/newsroom/arm-discloses-technical-details-of-the-next-version-of-the-arm-architecture.php. [11] Apple. “If you need to install Rosetta on your Mac”, 15/01/2021, https://support.apple.com/en-nz/HT211861. [12] Mah Ung, Gordon. “Why Apple's move from Intel to ARM means we should stop buying Macs”, 10/11/2020, https://www.pcworld.com/article/3563892/why-apples-move-from-intel-to-arm-means-we-should-stop-buying-macs.html. [13] Apple. “Apple M1 Chip - Apple (AM)”, accessed 26/05/2021, https://www.apple.com/am/mac/m1/. [14] Bisson, Simon. "CES: Windows to run on ARM chips, says Microsoft", 06/01/2011, https://www.zdnet.com/article/ces-windows-to-run-on-arm-chips-says-microsoft/. [15] Pulapaka, Hari. "Introducing x64 emulation in preview for Windows 10 on ARM PCs to the Windows Insider Program", 10/12/2020, https://blogs.windows.com/windows-insider/2020/12/10/introducing-x64-emulation-in-preview-for-windows-10-on-arm-pcs-to-the-windows-insider-program/. [16] Tiwari, Aditya. "Windows Doesn’t Allow You To Create Folder Named ‘CON’, PRN, NUL, etc. Here Is How You Can Still Create It", 20/02/2017, https://fossbytes.com/windows-reserved-folder-con-create/. [17] Priday, Richard. "Google's Fuchsia could replace Android and unite all devices", 07/04/2018, https://www.wired.co.uk/article/google-fuchsia-chrome-os-android-demo. [18] Hildenbrand, Jerry. "Should I buy an x86 or ARM-powered Chromebook?", 20/06/2019, https://www.androidcentral.com/should-i-buy-x86-or-arm-powered-chromebook. [19] RISC-V. "RISC-V: About", accessed 26/05/2021, https://riscv.org/about/. [20] Hashim, Shakeel. "The Nvidia-Arm deal hasn't boosted RISC-V. But it soon could.", 25/09/2020, https://www.protocol.com/risc-v-chips-arm-nvidia.
- Opinion: Skincare's Pseudoscience
By Stella Huggins
Image from Colin Lloyd on Unsplash
How's your skin feeling right now? Radiant? Hydrated? Are you giving off an effervescent glow that touches every stranger you walk past? If you are, you've probably got a great diet going, with an exercise routine to match. However, an industry preys on your worries about crow's feet, that dark scarring on your chin, and the stretch marks that your body naturally creates when your tissue grows. Skincare: the bane of every collagen cell's existence. Littered with emotive, fantastical language that touches the heart, scientific terminology that appeases the mind, and a looming lump of pseudoscience that leaves your pores wanting more, the billion-dollar industry keeps itself in operation day in, day out.
Long have I wondered if the various goops, creams, foaming washes, plant-based protein creams, acids, and face masks actually do anything to improve the condition of my largest organ. Just anecdotally, some products of course feel better than others — that $10 cleanser probably has some harmful chemicals that don't do you any favours. But the more expensive and elaborate skincare gets, the harder it becomes to pick holes in the iron-clad marketing ploys cooked up by the industry.
First, let me differentiate between dermatology and skincare. Dermatology is absolutely not a pseudoscientific practice. It is the study of the skin, involving extensive undergraduate study in a Bachelor of Medical Science. Dermatology's focus is to treat diseases of the skin. Skincare, on the other hand, is purely cosmetic. This cosmetic obsession is what I am referring to when I talk about the skincare industry.
Cosmetics are complicated. Deeply intertwined with numerous complications of the world (capitalism and misogyny, to name a few), as well as more personal matters of self-identity, dysmorphic views of your own appearance, and personal wealth, it's a dense topic to unpack. The issue is complicated further by rapidly evolving narratives in social media, not yet touched by the literature on self-image. The notion of 'self-care', and new hyper-versions of self-image that are symbiotic with modern feminism, can make the use of cosmetic products sometimes too complex to even bear thinking about. Companies know this. Marketing strategies work significantly faster than journal literature does when it comes to penetrating the public perception of a topic, making skincare one of the most insidious pseudoscience industries.
It is important here to differentiate between intrinsic and extrinsic degradation of the skin. Intrinsic degradation describes the decline in the cellular processes that regenerate tissue, as a result of normal ageing. It occurs in the absence of harmful substances, due to free radical production, normal hormonal shifts and other biological processes. It is safe to say that a definitive way to slow the effects of this process has not yet been found. Extrinsic degradation describes environmental factors or lifestyle choices that cause deterioration of the skin's condition.
Skincare in and of itself is not completely invalid. It can alleviate some negative effects; moisturising supports the skin's role in protecting the body from dehydration or desiccation [1], cleansing daily immensely reduces the risk of infections and of open wounds becoming unsightly [2], and SPF application (arguably the most important long-term product use) protects against harmful ultraviolet rays [3].
However, beyond these very basic routines lies a plethora of products that rely on carefully concocted language.
Image from Doğancan Özturan on Unsplash
Intended to convince the consumer of a product's efficacy, the marketing aims to move products whilst avoiding concrete claims that could expose companies to legal action. Capitalist models of beauty are designed to keep the consumer buying, and believing the narrative of external beauty — the idea that one's appearance should fit a certain mould, and that certain products will help you get there. It seems apparent that the issue of misleading information within the skincare industry lies mostly in the emphasis placed on products, and in a gross exaggeration of the extent to which products can bring the consumer's appearance closer to their desires.
Most societies grasp the benefits of exercise and diet, and the significance of environment, in overall health — skin is no exception. 80% of extrinsic skin health can be attributed to the quality of several factors, including UV exposure, pollution, diet, hygiene, drug use and sleep [4]. That is to say, a large majority of skin appearance is dictated by your lifestyle, and where you live.
The topical application of amino acids is an example of a popular practice in skincare. Select amino acids are able to be absorbed through the skin. However, the evidence for this was procured through in vitro experiments, generally performed on animal skin [5]. Animal testing has been a long-standing feature of the skincare industry. A common experimental design utilises the Franz diffusion chamber [6], and measures the amount of amino acids that pass through the tissue, rather than the quantities of amino acids that are retained [7]. Chemical absorption can occur through a number of pathways in the skin: intercellular routes, intracellular routes, sweat glands and hair follicles are all places where absorption can occur [8]. This is all well and good, but it means that the evidence that amino acids are retained after topical treatment is shaky.
This is not helped by the motives of researchers. Most skincare research is performed or funded by the companies themselves [9]. This should ring alarm bells in a reader's head: a clear conflict of interest is present here. It's highly unlikely that companies would engage in malpractice, altering results to fit their marketing ploys. However, conscious or unconscious bias may have an effect on how results are presented to corporate bodies, and subsequently to consumers [10]. This is a common dilemma of commercial science; when profit and margins drive results, personal influence can become increasingly confounding.
The darker end of the skincare industry is that the developing science sometimes confidently oversteps, promising safety and efficacy — when the long-term results harbour quite the opposite effects. The commercial model aids this mindset, with profit motivating the message that the treatment in question is safe. Intentionally or unintentionally, this sometimes leads to adverse outcomes for consumers [11]. Nanoparticles are a producer's dream — they do in fact increase efficacy [12]. But their newness comes with downfalls. We do not yet understand the limits to which these transportation agents can travel [13]. The effects of the interactions of carbon nanoparticles with DNA are not yet fully understood, though the mechanisms through which they interact are.
Sufficiently small nanoparticles can enter cells through the nuclear pore and potentially bind to DNA — this could inhibit replication [14]. Nanoparticles may also produce free radicals; for example, metals can interact with hydrogen peroxide (present in every cell [15]), causing its conversion to the hydroxyl radical. Titanium dioxide nanoparticles, used commonly in cosmetics, have been shown to produce excessive free radicals in the presence of both visible and ultraviolet light [16].
The heterogeneity of ageing effects also confounds a multitude of claims that products make about efficacy [17]. Ageing is highly variable among individuals — what works for some in slowing the ageing process can depend largely on the epigenetics of that individual [18]. Of course there are general rules that apply to most — SPF protection being beneficial is a prime example — but at the nitty-gritty level, we're all painstakingly individual.
Skincare is important for maintaining general health, but there's only so far that products can take you. After the basic routine of cleansing, moisturising and applying sunscreen, the rest of it rests on shaky science, carefully presented to imply fantastical results. Choose your goops wisely.
References
[1] Epidermis Hydration. (2006). In J. Serup, G. B. E. Jemec & G. L. Grove (Eds.), Handbook of Non-Invasive Methods and the Skin (2nd ed., p. 327). https://doi.org/10.3109/9781420003307-45
[2] Larson, E. (2001). Hygiene of the skin: When is clean too clean? Emerging Infectious Diseases, 7(2), 225–230. https://doi.org/10.3201/eid0702.010215
[3] Flament, F., Bazin, R., Rubert, Simonpietri, Piot, B., & Laquieze. (2013). Effect of the sun on visible clinical signs of aging in Caucasian skin. Clinical, Cosmetic and Investigational Dermatology, 221. https://doi.org/10.2147/ccid.s44686
[4] Bielach-Bazyluk, A., Zbroch, E., Mysliwiec, H., Rydzewska-Rosolowska, A., Kakareko, K., Flisiak, I., & Hryszko, T. (2021). Sirtuin 1 and skin: Implications in intrinsic and extrinsic aging—A systematic review. Cells, 10(4), 813. https://doi.org/10.3390/cells10040813
[5] Cosmetics testing FAQ. The Humane Society of the United States. (n.d.). https://www.humanesociety.org/resources/cosmetic-testing-faq
[6] Baert, B., Boonen, J., Burvenich, C., Roche, N., Stillaert, F., Blondeel, P., Van Boxclaer, J., & De Spiegeleer, B. (2010). A new discriminative criterion for the development of Franz diffusion tests for transdermal pharmaceuticals. Journal of Pharmacy & Pharmaceutical Sciences, 13(2), 218. https://doi.org/10.18433/j3ws33
[7] Myer, K., & Maibach, H. I. (2013). A dermatological view—Percutaneous penetration of amino acids. Cosmetics and Toiletries.
[8] Rodrigues, F., & Oliveira, M. B. (2016). Cell-based in vitro models for dermal permeability studies. Concepts and Models for Drug Permeability Studies, 155–167. https://doi.org/10.1016/b978-0-08-100094-6.00010-9
[9] Caulfield, T. (2015, May). The pseudoscience of beauty products. The Atlantic. Retrieved May 22, 2021, from https://www.theatlantic.com/health/archive/2015/05/the-pseudoscience-of-beauty-products/392201/
[10] Boy, J., Pandey, A. V., Emerson, J., Satterthwaite, M., Nov, O., & Bertini, E. (2017). Showing people behind data. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3025453.3025512
[11] Khan, A. D., & Alam, M. N. (2019). Cosmetics and their associated adverse effects: A review. Journal of Applied Pharmaceutical Sciences and Research, 1–6. https://doi.org/10.31069/japsr.v2i1.1
[12] Larese Filon, F., Mauro, M., Adami, G., Bovenzi, M., & Crosera, M. (2015). Nanoparticles skin absorption: New aspects for a safety profile evaluation. Regulatory Toxicology and Pharmacology, 72(2), 310–322. https://doi.org/10.1016/j.yrtph.2015.05.005
[13] Saunders, F. (n.d.). DNA damage and nanoparticles. Retrieved May 22, 2021, from https://muckrack.com/fenellasaunders
[14] Li, K., Zhao, X., Hammer, B. K., Du, S., & Chen, Y. (2013). Nanoparticles inhibit DNA replication by binding to DNA: Modeling and experimental validation. ACS Nano, 7(11), 9664–9674. https://doi.org/10.1021/nn402472k
[15] Halliwell, B., Clement, M. V., & Long, L. H. (2000). Hydrogen peroxide in the human body. FEBS Letters, 486(1), 10–13. https://doi.org/10.1016/s0014-5793(00)02197-9
[16] Bhattacharya, K., Davoren, M., Boertz, J., Schins, R. P., Hoffmann, E., & Dopp, E. (2009). Titanium dioxide nanoparticles induce oxidative stress and DNA-adduct formation but not DNA-breakage in human lung cells. Particle and Fibre Toxicology, 6(1), 17. https://doi.org/10.1186/1743-8977-6-17
[17] Cevenini, E., Invidia, L., Lescai, F., Salvioli, S., Tieri, P., Castellani, G., & Franceschi, C. (2008). Human models of aging and longevity. Expert Opinion on Biological Therapy, 8(9), 1393–1405. https://doi.org/10.1517/14712598.8.9.1393
[18] Ganceviciene, R., Liakou, A. I., Theodoridis, A., Makrantonaki, E., & Zouboulis, C. C. (2012). Skin anti-aging strategies. Dermato-Endocrinology, 4(3), 308–319. https://doi.org/10.4161/derm.22804
- Ultralight Dark Matter
By You-Rong F. Wang
Ultralight Dark Matter (ULDM), also called "Fuzzy" Dark Matter, is a class of hypothetical dark matter candidates in cosmology. It postulates that very-low-mass particles (so-called ultralight axions) from beyond the Standard Model are responsible for some observed large-scale structures of the universe, yet remain elusive at lab-accessible scales of physics. Due to the axions' low mass, ULDM behaves almost entirely according to the laws of quantum mechanics. With a typical de Broglie wavelength on the order of kiloparsecs, i.e. tens of quadrillions of kilometres, ULDM can give rise to a range of galactic-scale phenomena analogous to those you might only associate with a microscopic particle (of course, the effects here are mediated by gravity instead), such as superposition, interference and tunnelling. ULDM is therefore poised to leave behind unique signatures in a range of astrophysical systems, from stellar rotation curves in the core regions of galaxies to the origin of certain gravitational wave events.
The dynamics of ultralight axions may be described collectively as a single wavefunction evolving according to a modified Schrödinger equation. In the simplest ULDM models, one takes into consideration the gravitational potential generated by the wavefunction itself. This modification, known as the Schrödinger-Poisson system, introduces nonlinearity into our system of partial differential equations. In addition, there also exist more exotic models that hypothesise complex self-interaction terms beyond just gravity, requiring even more sophisticated numerical techniques to simulate and understand. (A toy sketch of the simplest case is given after the reading list below.)
As my first project during my PhD at Auckland Cosmology under Prof. Richard Easther, I am working on such numerical simulations, developing a scheme through which we can study the influence that Ultralight Dark Matter has on N-body particle systems. This effort offers us an opportunity to test the validity of ULDM theories and is relevant to our understanding of, for example, the interaction between supermassive black holes during galaxy mergers.
The figure shows a massive point particle's motion into a ULDM halo. The colour represents ULDM density in a plane, and the quantum "fuzziness" is evident as the mass reaches the core. This simulation was produced with Auckland Cosmology's PyUltraLight2 simulation program, and a paper with detailed discussions of such interaction models is currently in preparation.
Further Reading
[1] John Preskill, Mark B. Wise, and Frank Wilczek. Cosmology of the Invisible Axion. Phys. Lett. B 120:127–132, 1983.
[2] Lam Hui, Jeremiah P. Ostriker, Scott Tremaine, and Edward Witten. Ultralight scalars as cosmological dark matter. Phys. Rev. D, 95(4), 2017.
[3] Faber Edwards, Emily Kendall, Shaun Hotchkiss, and Richard Easther. PyUltraLight: A pseudo-spectral solver for ultralight dark matter dynamics. J. Cosmol. Astropart. Phys., 2018(10), 2018.
[4] Elisa G. M. Ferreira. Ultra-Light Dark Matter. 2020. https://arxiv.org/abs/2005.03254
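For the curious, here is the promised toy sketch of the Schrödinger-Poisson system, i ∂ψ/∂t = -(1/2)∇²ψ + Vψ with ∇²V = |ψ|², reduced to 1-D in dimensionless units and evolved with a split-step pseudo-spectral scheme. This is in the spirit of, but far simpler than, the PyUltraLight2 solver mentioned above; the grid size, box length, time step, and function names are illustrative assumptions.

    import numpy as np

    # Toy 1-D Schrodinger-Poisson evolution in dimensionless units.
    # Illustrative only; real ULDM codes work in 3-D with physical units.
    N, L, dt, steps = 256, 50.0, 1e-3, 1000
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # spectral wavenumbers

    # Initial condition: a Gaussian blob of "fuzzy" dark matter.
    psi = np.exp(-x**2).astype(complex)
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))   # normalise total mass to 1

    def potential(psi):
        """Solve the Poisson equation for the wavefunction's own gravity in Fourier space."""
        rho = np.abs(psi) ** 2
        rho_k = np.fft.fft(rho - rho.mean())         # subtract mean (periodic box)
        V_k = np.zeros_like(rho_k)
        V_k[k != 0] = -rho_k[k != 0] / k[k != 0] ** 2
        return np.real(np.fft.ifft(V_k))

    mass0 = np.sum(np.abs(psi) ** 2)
    for _ in range(steps):
        psi *= np.exp(-0.5j * dt * potential(psi))                      # half "kick"
        psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))  # "drift"
        psi *= np.exp(-0.5j * dt * potential(psi))                      # half "kick"
    mass1 = np.sum(np.abs(psi) ** 2)
    print(f"mass drift over {steps} steps: {abs(mass1 - mass0):.2e}")

The nonlinearity the article describes is visible in the loop: the potential in each "kick" step is recomputed from the current |ψ|², so the wavefunction sources the very field it moves in.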
- Cichlids – An Evolutionary Enigma
By Jasmine Gunton
Cichlids are an enigma for having evolved so many species sympatrically. Photograph by Munheer Ahmed on Unsplash (December 2019).
The concept and processes of evolution have always been highly controversial and contested amongst academics. Since the publication of Charles Darwin's famous work, 'On the Origin of Species', great advancements have been made in understanding how evolution works. However, biology is still a messy subject, with many species displaying exceptions to commonly accepted evolutionary theories. A salient example that demonstrates exceptions to the rule is the case of the freshwater cichlid fish, native to tropical America, Africa, and southern Asia. Cichlids evolved via a process known as sympatric speciation. During this process, hundreds of species of cichlid evolved from a single ancestor while still all occupying the same geographical region. From DNA analysis we can determine that over 1,650 species of cichlid [1] evolved from one ancestor in approximately 100,000 years [2]. To understand why this phenomenon is exceptional, one must have a basic understanding of the principal mechanisms and causes of evolution.
Formation of Species
Most biologists agree that a species can be defined as a population of individuals that are able to interbreed in the wild to produce viable offspring. Most commonly, new species are created when populations become physically separated. This can result from either a geographic event (e.g. the formation of a mountain range, the movement of tectonic plates, etc.) or a subset of the population migrating to a different area. The two groups of organisms are often not representative of each other and are also subjected to different selection pressures in their two different habitats. More importantly, the two populations are now reproductively isolated and are therefore not influenced by gene flow (the migration of organisms from one population to another) [3]. Over a long period of time, two new species are formed. This type of speciation is known as allopatric speciation, 'allo' meaning 'different' and 'patric' meaning 'homeland'. In the case of cichlids, there was no such geographic barrier preventing different populations of fish from interbreeding. This phenomenon has been called sympatric speciation ('sym' meaning 'same'). Due to the large number of distinct cichlid species produced, we specifically call this type of evolution 'adaptive radiation'.
Gradualism
Another once commonly accepted theory relevant to cichlid evolution is gradualism. This theory has since been contested using examples from a number of evolutionary events. Gradualism suggests that a species evolves over a very long period of time, as beneficial phenotypes rarely arise within a given population. However, a noticeable lack of transitional forms exists between the original cichlid ancestor and its descendants, indicating a rapid rate of speciation. This alternative pattern of evolution has since been named the punctuated equilibrium model. Contrary to the idea of gradualism, the punctuated equilibrium model suggests that species experience long periods of stasis, followed by short bursts of evolutionary change.
Photograph by Michael Rodock on Unsplash (January 2019)
An Alternative Hypothesis
When considering this question, one may ask whether the 1,650 species of cichlid fish can actually all just be classified as variants of a single species. This suggestion can be ruled out using a number of taxonomic techniques.
First, cichlids show an incredibly large range of morphological differences, including different colours, patterns, sizes, and mating behaviours. They have also been shown to exhibit sexual behaviour that prevents them from forming hybrid offspring. Finally, molecular analysis of individual species shows that, under the speciation continuum, the Cichlidae family can be classed as encompassing a number of truly distinct species [4].
The Current Verdict
With this information, biologists have long pondered why this lineage suddenly diverged for seemingly no reason. Currently, no other vertebrate species have been found to display this type of adaptive radiation [4]. For this reason, East African cichlids were among the first fish species to undergo extensive genome sequencing. New research has uncovered the importance of considering ecology in the process of speciation. Analysis of the DNA of several cichlid species also suggests that speciation can still occur with some level of gene flow between different populations (a toy simulation of this effect is sketched after the references below). Current hypotheses propose that the hundreds of cichlid species may have formed through competition and sexual selection. Essentially, it is thought that, to avoid competition, different cichlids would occupy slightly different areas within the same habitat, subsequently forming new ecological niches. Sexual selection on male colour patterns further prevented these populations from interbreeding. This current theory might explain how the 1,650 cichlid species evolved from just one ancestor.
Future Research
Research into cases of adaptive radiation is relatively limited in the biological sciences, and the underlying mechanisms of adaptive radiation are yet to be fully understood. Academics are therefore reluctant to label a species' evolutionary history under this category. Due to this lack of research, further analysis is required to fully understand the strange evolution of the Cichlidae family. Sympatric speciation in fish shows that scientific theories are not rigid in nature, but rather ever-changing and 'evolving' (bad pun, I know), with new theories being constantly proposed and integrated into our understanding of the world. Although greatly appreciated as one of the most revolutionary pieces of scientific literature, concepts in 'On The Origin of Species' have since been contested by members of the scientific community. The same will inevitably occur with our current conclusions about evolutionary biology. Cichlids, on the other hand, will simply keep on living their lives in rivers and lakes, completely unaware of the mystery that surrounds their history.
References
[1] Fishbase. Accessed May 20, 2021. URL: https://www.fishbase.se/Nomenclature/NominalSpeciesList.php?family=Cichlidae
[2] Brawand, David, Catherine Wagner, Yang I. Li, Milan Malinsky, Irene Keller, Shaohua Fan, Oleg Simakov et al. "The genomic substrate for adaptive radiation in African cichlid fish". Nature 513 (2014): 375–381. URL: https://www.nature.com/articles/nature13726
[3] Ellstrand, Norman & Loren Rieseberg. "When gene flow really matters: gene flow in applied evolutionary biology". Evol Appl 9 (2016): 833–836. URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4947145/
[4] Salzburger, Walter. "Understanding explosive diversification through cichlid fish genomics". Nat Rev Genet 19 (2018): 705–717. URL: https://doi.org/10.1038/s41576-018-0043-9
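As a rough illustration of the gene flow point above (our own toy example, not taken from the cited studies), the following Python sketch tracks the frequency of one allele in two populations under random genetic drift, with and without migration. All population sizes and rates are arbitrary assumptions chosen for clarity.

    import random

    # Toy model: two populations drifting apart, with optional gene flow.
    def simulate(migration_rate, generations=500, pop_size=200, seed=1):
        rng = random.Random(seed)
        p1, p2 = 0.5, 0.5  # starting allele frequencies in each population
        for _ in range(generations):
            # Random genetic drift: binomial sampling of the next generation.
            p1 = sum(rng.random() < p1 for _ in range(pop_size)) / pop_size
            p2 = sum(rng.random() < p2 for _ in range(pop_size)) / pop_size
            # Gene flow pulls the two populations back toward each other.
            p1, p2 = (p1 + migration_rate * (p2 - p1),
                      p2 + migration_rate * (p1 - p2))
        return abs(p1 - p2)  # divergence between the populations

    print("no gene flow:  divergence =", simulate(0.0))
    print("10% migration: divergence =", simulate(0.1))

With zero migration the two populations drift apart, often fixing different alleles entirely, while even modest migration each generation keeps them nearly identical; the cichlid surprise is that real speciation happened despite some gene flow.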
- The State of Quantum Computing Today
Interview with Professor Cristian Calude by Alex Chapple
IBM quantum computer - Photo credit IBM
Quantum computers are a new class of computers that have been receiving a lot of media attention in the last decade. These computers' underlying structures are entirely different from those of the modern computers we're all familiar with, which are known as "classical computers". The underlying architecture of classical computers is based on "classical physics", which is the macroscopic physics we experience in our daily lives. In contrast, quantum computers, as the name suggests, manipulate quantum behaviours to do computations. In classical computers, information is stored as bits, which take on values of either 1 or 0. Quantum computers are built on qubits (quantum bits). Instead of taking on 1 or 0 as their value, they take on both simultaneously (yes, freaky, I know). So if you have ten quantum bits, 2¹⁰ = 1,024 states are being represented at any given time (see the short sketch at the end of this interview). Because of this, quantum computers promise to be exponentially quicker at certain computing tasks and may revolutionise fields like computational biology, cryptography, quantum chemistry, quantum simulations, and more.
These computers have certainly been gaining traction in the media, most notably two years ago when Google claimed to be the first to achieve "quantum supremacy". Quantum supremacy, also known as quantum advantage, is a term coined by California Institute of Technology Professor John Preskill. It is the notion that a quantum computer can compute things that a modern classical computer cannot in a meaningful amount of time. Perhaps what drives the most media attention is the multi-billion dollar investments that companies such as Google, IBM, and Microsoft and large governments like the United States, China, and the UK are putting into research and engineering. In December 2018, the United States Congress passed the National Quantum Initiative Act, which aimed to advance quantum technologies over the next ten years by further supporting research and engineering. It seems as though governments and large tech companies around the world are betting heavily on a future filled with quantum technologies, but is the hype and media attention around quantum computing justified?
The following is a conversation I had with Professor Cristian Calude from the School of Computer Science. Professor Calude is the director of the Centre for Discrete Mathematics and Theoretical Computer Science, and a research consultant for the Quantum Computing Research Initiatives at Lockheed Martin, USA. We talked about the state of quantum computing today, where it may be heading, and why the media attention that quantum computing is getting may not be for the right reasons.
How does your research tie in with quantum computing?
I'm a mathematician and a theoretical computer scientist with interests in quantum physics and computing. All my papers in quantum areas have been done jointly with physicists (Professor Karl Svozil from Vienna is my longest collaborator) to ensure that the physics is correct. Initially we used finite automata with outputs to model quantum phenomena. For example, we've described Bell's inequalities with finite automata. (Finite automata are simple idealised machines used to recognise patterns.) In the last ten years, I was involved in two quantum projects. One was to study quantum annealing, because we've got support to use the D-Wave machines.
It makes a huge difference when you work in quantum computing whether or not you have access to a real quantum computer. (D-Wave is a quantum computing company based in Burnaby, British Columbia, Canada. They use a particular technique called quantum annealing to solve problems. The technique finds the global minimum of a particular function by manipulating a quantum system.) So you look at the machine and say: what can you do with it? Is there something useful one can do with it? How far can one push the limits of the machine?
The other project is connected with my work over many years in algorithmic information theory, a very beautiful and powerful mathematical theory of randomness. So at some stage, I said 'quantum randomness is believed to be the best form of real-life randomness, so is algorithmic information theory relevant to understanding it?' I have also been interested in quantum randomness because physicists believe that quantum randomness is perfect randomness, while mathematically there is no perfect randomness, so one should only look at degrees of randomness. We also proposed protocols for quantum random generators and proved theoretically that they are better than any pseudo-random generators. We were fortunate that a lab at the University of Queensland led by Professor Arkady Fedorov did the experiments. We published the protocols, and the physicists published the experiments. We were very interested in the experimental results because one is a theoretical protocol written on paper, and the other is an experiment that cannot be done under ideal conditions. We developed tests for assessing the quality of the quantum random bits generated in the lab and analysed to what extent the theoretical results are reflected in the experimental ones.
Why does and doesn't quantum computing deserve the media attention it gets?
Well, I am not a media expert, but I have some guesses. There are many promises about quantum computing which, assuming that the dreams come true (which I don't believe they will), would change many technologies that are used today. Encryption and security are examples. So, if you are a government or a big IT company with lots of money, you cannot afford to let the competition develop a technology which can be used against you, even if there are very few solid arguments that it will be. Google cannot accept that Microsoft can do it, and Americans cannot accept that the Chinese can do it, and vice versa. So in a word, it's driven by fear. It has escalated into a huge race, and none of the big actors are bold enough to stop. But this race cannot continue forever if critical results are not delivered. For the time being, the media is a very strong supporter of the field, because writing about this race will bring readers and, like most businesses, the media is focused on making a profit.
Yes, I agree with that. When I read stuff about quantum computing, especially if it's from less reputable news websites, it's so clear that they're selling false hope, because quantum computing is not very close to being what they think it is.
They don't know what it is in the first instance.
Can quantum computers really solve the problems that the media says they will solve?
I should probably say, first of all, that even in an ideal scenario, quantum computers can compute much less than classical computers. This is because quantum computers can compute only total functions.
If you have a function that divides two integers, X divided by Y, you have to exclude the possibility of dividing by zero. So you can't return an answer for X divided by zero; that is undefined and illegal. This kind of test, which is trivial for classical computers, cannot be performed by any quantum computer.
Let me give you a picture. Let's imagine the Pacific Ocean is the set of all mathematical problems. How many of them can be solved by classical computers? A small drop. Most of them cannot be solved with any classical computer. Of this drop, only a smaller part can be solved by quantum computing. So what's the point of quantum computing? The only justification is in this small area where quantum computers can solve problems of practical interest. If these problems could be solved with quantum computing tremendously faster than with classical computers, then the effort would be justified.
In the early 80s, the American physicist Richard Feynman and the Russian mathematician Yuri Manin came up with the idea of quantum computing. Both of them were talking about simulations, and Feynman said 'look, I have this kind of quantum system I want to simulate, and I know that I can simulate it with a classical computer, but it will take exponential time. Can I do it faster?' And Manin said that it's possible if the machine is quantum itself. But Manin noticed something even deeper. He said that to simulate a quantum system, a classical machine needs to understand a lot of quantum theory and incorporate it into the program, and this takes time to develop and run. But a quantum computer will not need this because it is already based on the same quantum principles, so it will be faster. It's a shortcut.
Quantum computing is intrinsically interdisciplinary. You have people from engineering with their cultures, businesses with their cultures, mathematicians with their own, programmers with their own, etc. It's a very young field, and it doesn't have its own sound culture.
A 128-qubit D-Wave processor. This is one of their earlier models; currently they have systems with up to 5,000 qubits. - Photo credit to D-Wave
Yes, it's an interesting field, like you said, because it's so young. There are many engineering problems, physics problems, and mathematical problems.
Yeah, lots of problems. I'm not saying that this field is not interesting or exciting. And there will be benefits, possibly not those discussed so much in the media. For example, the idea of "de-quantisation", where you take a fast quantum algorithm, find a way to rewrite it for classical computers, and obtain a much faster classical algorithm than the current ones. The hype for quantum computing is damaging because if you claim things you cannot prove or deliver, at some stage people will say, oh, that's not serious. This will be detrimental to the field.
To change the topic: there are many ways, many different architectures, for building quantum computers. Google uses superconducting qubits, others like IonQ use ion traps, and there are many other ways. I know Microsoft recently thought of giving up on their topological qubit because its engineering was too complicated.
That one is, from a mathematical point of view, the most interesting… If the engineers feel that this is beyond the capability of the current technology, maybe it's best to shelve it, maybe for 20 years, and then look at it again.
Which type do you believe is most promising or will be the most fruitful?
Well, I don't know; you cannot predict even the past, and you're asking me to predict the future. I think that quantum annealing, which is the form the D-Wave machines use, will survive many years and be fruitful, just because they take a more pragmatic attitude. Their architecture is to use thousands of weaker (less connected) qubits, as opposed to using 50 well-connected (strong) qubits like Google's. So you say they have weaker qubits: true, but this is an advantage. They use qubits that are weaker (meaning fewer problems, in particular with error correction), but strong enough to solve problems. I was amazed by the engineering decisions made by D-Wave even before we started working with them. One was to solve one single problem. It's a discrete optimisation problem, but this problem is generic, so many practical problems can be reformulated as instances of it and solved by D-Wave. This makes a controversial decision a smart choice.
My next question is: what are you most sceptical about in the field of quantum computing in general?
Well, I am sadly sceptical about the hype. I fear that it will attract people for the wrong reasons, mostly because of fashion and money. I have seen this trend before, in the late 90s, when there was a strong interest in a field called structural complexity. It attracted lots of young people, producing papers and PhDs. Many results are correct, but without meaning. And then after 15 years, nobody read those papers, and they had to switch fields.
My last question is sort of on the more optimistic side. Suppose we're in the future, and there is a perfect quantum machine that you can use. What kind of computation would you like to do on it?
Well, I would like to test the Riemann hypothesis. I read somewhere that with 2,000 perfect qubits you can prove the Riemann hypothesis.
Oh, if you find that article, please send it to me: I'm interested.
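As promised in the introduction, here is a minimal Python sketch (our illustration, not Professor Calude's) of why n-qubit states are expensive to represent classically: a general n-qubit state needs 2ⁿ complex amplitudes, so memory doubles with every added qubit. The function name and chosen values of n are assumptions for the example.

    import numpy as np

    # A general n-qubit pure state is a vector of 2**n complex amplitudes.
    def uniform_superposition(n):
        """State with every one of the 2**n basis states equally weighted."""
        dim = 2 ** n
        return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

    state = uniform_superposition(10)       # ten qubits -> 1,024 amplitudes
    print(len(state))                       # 1024
    print(np.isclose(np.sum(np.abs(state) ** 2), 1.0))  # probabilities sum to 1

    # Memory needed just to store the state vector (16 bytes per complex128):
    for n in (10, 30, 50):
        print(f"{n} qubits: {2**n * 16 / 1e9:,.6f} GB")

This exponential blow-up is the flip side of the "ten qubits, 2¹⁰ states" remark in the introduction: simulating even a modest quantum machine quickly exhausts classical memory, though, as the interview stresses, it says nothing about which problems a quantum computer can usefully solve.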
- The Next Generation of Rockets
By Struan Caughey
Photo credit to SpaceX
As we conclude the first quarter of 2021, the rocket industry has shown clear signs of going from strength to strength. The past few years have been defined by small to medium-sized rockets with a strong focus on reusability. This legacy started in 1977 with the test flights of the Enterprise space shuttle. The turnaround for the space shuttle cost approximately $450 million USD, and the shortest time to turn one around and reuse it was 54 days. This is a far cry from SpaceX's current record of 27 days at USD 15 million, set with the Falcon 9 booster B1060 on February 4th this year. We also now have players in the reusable market other than SpaceX, such as Rocket Lab, which also has fully operational rockets, as well as projects at the developmental stage from Blue Origin, the European Space Agency, I-space in China and Roscosmos, amongst others. This year, however, three rockets have frequented the news, all from drastically different businesses. They have all been in the news for various reasons; what they share, however, is that each is its company's flagship rocket.
SpaceX
First, we have the Starship from SpaceX. The company's current line-up of operational rockets comprises the Falcon 9 and the Falcon Heavy; however, neither of these has been making the news this year. Instead, Starship, the company's rocket currently in development, has been hitting the headlines. There were two successful 'hop' flights in 2020, reaching 150 m. However, the main news is the four subsequent flights, all reaching at least 10 km, three of them this year, and all of which resulted in "rapid unscheduled disassembly". These rockets have excited space enthusiasts for two reasons. First, Starship will be the first operational fully reusable rocket, existing models having some single-use components. Second, these are the rockets envisioned to take humans to Mars. Starship can take over 100 tonnes to low Earth orbit, which would put it in a class of its own; only one other rocket, the Saturn V, has ever had a similar capacity.
SpaceX's Starship rocket taking off. Photo credit to SpaceX
Rocket Lab
Next, we look to the other end of the spectrum: New Zealand. Rocket Lab is a small reusable-rocket firm operated out of Auckland but registered in the United States. This was not the company's original intention, the original plan being small, cost-effective single-use rockets; however, the Electron has now been iterated upon to become reusable like the Falcon, and in dramatic fashion: a mid-air helicopter interception is used to catch the Electron rocket on its descent. The CEO, Peter Beck, had intended that they would not venture towards the more populated part of the market, medium-sized rockets, instead sticking with their 300 kg capacity booster; this is now in question. On March 2nd, again playing into Rocket Lab's flair for the dramatic, a Hollywood-trailer-like announcement was released on YouTube in which Peter Beck literally ate his hat as he announced their new Neutron rocket, which will have an 8,000 kg payload capacity, launching the company directly into the upper end of small payload rockets and in a new direction. Like the company's Electron rocket, only its first stage will be reusable.
Neutron render - Photo credit to Rocket Lab
NASA
Lastly, there is NASA.
The most famous space agency still holds the record for the rocket with the largest payload capacity: the Saturn V, which took astronauts to the Moon. This rocket was able to send 140,000 kg into low-Earth orbit but was retired in 1973. With the current focus on renewed missions to both the Moon and Mars, there is a new need for a similar rocket. Instead of resurrecting a 50-year-old booster, NASA took steps to design the SLS, also known as the Space Launch System. This new rocket has been hitting the headlines for its recent static fire test, which showed the sheer amount of power this next-generation, interplanetary rocket has. When the SLS does go to launch, it is intended to challenge the previously stated record. This rocket represents one of the few newsworthy rockets that is still designed to be launched only once; this is to increase the ship's maximum payload size. While the Rocket Lab Neutron rocket is substantially different from the SLS, SpaceX's Starship will compete with it. The trade-off at this moment would seem to be payload size on the SLS side, with reusability and cost in favour of Starship.
This is a recap of the three most reported rockets of 2021, all in different stages of development; however, there are many more that will be going up against these boosters, as well as further iterations on existing ones to make them more advanced. There are interesting arguments on both sides of the reusability debate, with a great many benefits in both cost and speed of turnaround; however, NASA shows that there is still a very real place for the more traditional rocket with high payload requirements. Wherever the industry goes, one thing is certain: rocket science is only going up.
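Pulling the article's own figures together (all numbers as quoted above; this comparison is our own illustration), a few lines of Python make the reusability trade-off concrete:

    # Turnaround figures as quoted in the article; illustrative only.
    shuttle = {"turnaround_days": 54, "turnaround_cost_usd": 450e6}   # Space Shuttle record
    falcon9 = {"turnaround_days": 27, "turnaround_cost_usd": 15e6}    # Falcon 9 B1060 record

    cost_ratio = shuttle["turnaround_cost_usd"] / falcon9["turnaround_cost_usd"]
    time_ratio = shuttle["turnaround_days"] / falcon9["turnaround_days"]
    print(f"Falcon 9 turnaround is {cost_ratio:.0f}x cheaper and {time_ratio:.0f}x faster")

    # Payload to low Earth orbit, in kg, as stated in the article.
    payload_leo = {"Saturn V": 140_000, "Starship": 100_000, "Neutron": 8_000, "Electron": 300}
    for rocket, kg in sorted(payload_leo.items(), key=lambda item: -item[1]):
        print(f"{rocket:>9}: {kg:>7,} kg")

The two print-outs summarise the article's argument in miniature: reusability wins on cost and cadence, while the single-use SLS/Saturn V class still wins on raw payload.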
- Rights for Nature in Aotearoa
Interview with Dr Brad Coombes by Nina de Jong
Beech forest in Te Urewera, which was given legal personhood status in 2014. Photo by David Tip on Unsplash (2019)
Dr Brad Coombes is a Senior Lecturer in the School of Environment at the University of Auckland. His research focuses on indigenous peoples' participation in environmental management. He has worked on Te Tiriti o Waitangi/Treaty of Waitangi environmental claims of several iwi, including Ngāi Tūhoe and Ngāti Tūwharetoa. Brad's recent work, "Nature's rights as Indigenous rights? Mis/recognition through personhood for Te Urewera", criticises the "personhood" or "Rights for Nature" environmental management approach. This approach recognises landscapes as sentient entities, and in some cases legal people, that have their own rights. It has been employed both internationally and in Aotearoa to protect nature in a way that is intended to align more closely with indigenous values. In this interview, Brad discusses how he became involved in this work, the main shortcomings of the personhood approach to environmental management, and how we should proceed into the future.
How did you come to be a researcher in environmental management and indigenous rights?
Probably the more important story goes back to where I was raised. Kāti Māmoe used to have quite a bit of land in the South Island. When the government redirected and extended the main trunk railway line back in 1888, they compulsorily acquired a corridor right through the middle of that Kāti Māmoe land. Unfortunately, there was what looked to be a simple clerical mistake. Rather than taking 20 yards on either side of the rail, which was the legal maximum that you took for a railway, they managed to somehow take 800 yards on either side. Despite a lot of acknowledgement that it was illegal, we still couldn't make the court system give our land back. And this included Moponui, which is the maunga tapu or sacred mountain for our hapū. The railway department had no use for the forested lands on either side of the railway, including the whole of Moponui. Eventually, it gave the land to the Department of Tourist and Health Resorts to become a scenic reserve in 1912. We were left with a tiny bit of land down by the sea and a tiny bit of land up past the railway. Neither of these could be used for the purposes they were used for before. So, the idea of losing your forested rohe, including your important mountains, is definitely not foreign to me. The battle to try and get some of that back influenced a lot of my childhood. Of the 6,500 hectares lost to the railway, the tribe eventually received 420 hectares in reserves and 128 hectares as freehold land, because my grandfather just bought it back and went on to live there. The mixture of, on the one hand, land rights, and on the other hand, somebody else's vision of what conservation should be, is a personal thing for me. I was sent off to university with the idea of contributing to the fight to get some of it back.
You worked in the Urewera inquiry district, where a personhood approach for Te Urewera was taken. What was your role in that settlement?
I have been involved with Te Urewera since the year 2000. When I came to this university, the Crown Forestry Rental Trust and the Waitangi Tribunal approached me to research environmental claims that had been brought before the Tribunal. I did environmental history reports for the Gisborne Inquiry District, Te Urewera, Wairoa and Tongariro National Park.
The Tribunal gets very specific on land loss issues, but it can't afford to research everything, so it lumps together all of the environmental claims within an inquiry district and gets one or two people to research them. You basically look at every environmental issue that tangata whenua have been unhappy with since 1840! For Te Urewera, clearly, with the national park overlapping so much of their home territory, conservation management was the number one issue. They wanted to have a clear picture of how it became a national park and how it was managed, with emphasis on how a preservationist style of management alienated Tūhoe rights.
What do you think are the most important merits of a personhood/Rights for Nature approach?
It's been very hard in Te Urewera, and throughout the rest of New Zealand, for the public to see forests as anything other than forests, and to see mountains as anything other than mountains. One side of the debate has seen them solely as resources to be developed. Another side of the debate has seen them as environmental assets to be protected. And the strength of those two lobbies is so strong that any other view of forests, mountains, landscapes, rivers, is lost on the public. Personalising these spaces through personhood rights might at least prompt some discussion that may, over time, balance that debate so that it's not so dualistic. However, has it done so yet? I don't think so. Probably not at all.
The Treaty settlement process, especially where it involves national parks and conservation spaces, has been stalled. It has been fractious, and it wasn't really going anywhere. And I'm not sure I would argue this, but others certainly would, that anything that can accelerate that process is likely a good thing. I personally tend to stick to the idea that justice always takes time. If you're trying to speed up a tricky process, it's always going to backfire. But I am sympathetic to the idea that where Treaty settlement processes apply to the conservation estate, progress has been very slow, and that's doing nobody any favours. If you can find an innovative, left-field solution that people sign up to, it's considered a success.
But it's what is missing from that list of benefits that's probably more interesting. I try to keep out of the Whanganui River example, just because it's one of the few that I haven't been involved with. But I look at what's happening in the Kaipara Harbour, or the Waikato River, or the Rotorua Lakes, all areas where instead of personhood rights, a different strategy has been utilised. And I see some positive progress. The substantial difference between, say, the Waikato and Whanganui cases is not so much that one has person rights, and the other doesn't. The idea of a river ancestor was acknowledged with the Waikato case, but it wasn't made a person. The substantial difference was the investment of money. Investments and clean-up efforts have been made, with federal money coming into the local and regional scales. And Māori are being heavily involved in deciding how that money is spent. The model that seems to be working most in New Zealand is state investment in co-managed restoration. That's what's happening in Kaipara, Waikato, and the Rotorua lakes. Where is the investment at Whanganui? And you could also say, where is the collaborative decision-making that goes into it? Because the Whanganui guardians are more champions than they are actual decision-makers.
I think that's indicative – where progress is being made in decolonising freshwater management in New Zealand is not where person rights are being trialled.
Leading on from that, what are your main criticisms of the personhood approach to environmental management?
When you research your own iwi claim or another iwi claim, you get to know the claimants as people. It's very long-term research. Whenever a third party or the government presented an option for the National Park or Te Urewera, I would think about how those particular people I became quite close to would react to it. It's significant that when person rights are suggested, the reaction that comes up is always surprise. In both of the inquiry districts where personhood came up, in Te Urewera and Tongariro National Park, it was well after the research, after the bulk of the negotiations and the hearings, before the idea of personhood came up at all. It's a belated afterthought, to be honest. The earliest mention was in 2012. At that point, there were just a few idle mentions of what had happened in Ecuador and Bolivia, and was it relevant to Treaty settlements in New Zealand? And, originally, Tūhoe were one of the strongest voices saying, "Well no. That's obviously not relevant to us at all." Things changed between then and 2014, when Te Urewera was given person rights, but it's not something that Tūhoe ever demanded. And even if personhood were a good thing, constantly giving indigenous communities something that they didn't ask for, and denying them what they did ask for, will eventually cause problems. My big fear is that this will backfire because there was no Māori demand for it in the first instance.
The second criticism goes back to that duality. New Zealanders think the land is either wholly degraded or perfect, and can't quite come up with a solution for the majority of the country, which is somewhere in the middle. That's about finding an honourable, sustainable solution that balances conservation and development. And are we any closer to that, having adopted person rights as our "go to" for Treaty settlements and the conservation estate? I think we're further from it than we ever have been before. We still have that need unresolved, and we're focusing on the wrong thing by focusing on person rights.
The concern that these rights are easily manipulated has come up in many other parts of the world. When you look at the countries that have written Pachamama into their constitutions, especially Bolivia and Ecuador, it's been a horrible time of resource extractivism, especially in the petrochemical industries. That means that the award of person rights coincides completely with resource degradation, environmental loss, and the trampling of certain rights for different people. It was supposed to be done in the name of indigenous peoples, but instead it's enabled the petrochemical and mining industries to degrade indigenous territories. It's the way that these industries have framed nature: "Well, now that we're accepting it's animate, we accept that it can heal itself. So there's nothing wrong with putting a few cuts and bruises into her." It's argued in the north-eastern states of the US that since localised ordinances around person rights have come into place, the mining and fracking industries have found it easier to get around those rights compared to what was there before. And those ordinances were brought in explicitly to rein in those industries.
These issues exist along with the multitude of social justice concerns that personhood raises, like: what is an indigenous right to development in a place that's now a person? What does this do to forestall indigenous demands into the future? And the quote from one of the interviews I did about slavery: "We still want to own Te Urewera, so are we now slavers because we want to own the land?" What that stands for is how personhood might forestall any future approach to historical justice in these places. Personhood will prevent questions of ownership from being addressed properly in the future. The lack of balance between conservation and development, and the perpetuation of preservationism, is my main academic concern. We need to find a more sustainable-use approach, and personhood rights have set us back on that.
Tongariro National Park is currently involved in Treaty settlement negotiations for multiple iwi, including Ngāti Tūwharetoa. Personhood rights, similar to what was applied to Te Urewera, are being considered in these negotiations. Photo by Yulia Gadalina on Unsplash (2019).
A criticism that you have of personhood and Rights for Nature is that it is not a concept from te ao Māori. In some ways, it is shoehorned into the New Zealand context. In 2001, a prison in Northland was being built at Ngāwhā springs. It had opposition from Ngāpuhi and Ngāti Rangi, who said the taniwha Takauere would be desecrated by the prison construction. Taniwha may in some ways be analogous to a personhood concept, and in contrast to personhood, taniwha are embedded in te ao Māori. Do you think legislation centred around existing taniwha, for example, might have different consequences for conservation in Aotearoa than a personhood approach?
Well, there are a whole lot of things beyond taniwha at stake there. The pools at Ngāwhā are a wonderful community development project, and they've become a site of cultural resurgence in some ways. There was also a deal done with Top Energy, where the company would buy some of the former mined area right beside the springs and give it back to the local hapū. In return, tangata whenua accepted the extension of Top Energy's works. Tangata whenua have big aspirations for community development on that site. Even back in 2001, there was an essentialisation of the taniwha as being the only issue, when many other things were going on. Not the least being the reason for the prison being built in the first place. Around 75% of the inmates are Māori, many from local tribes. There were some Māori who were saying, "Yeah, go ahead and build it, we need to be close to those people who are going to be rehabilitated". Others were saying, "Well, this is just a kick in the guts to the concept of community, to put a prison right on what is an important place for us." I'm not downplaying the narrative of the taniwha, but it wasn't as central to the whole story as history made it out to be.
As to how it may be more relevant to the New Zealand context, I think: what is personhood? Is it part, or not part, of what you've called "te ao Māori"? I personally don't like thinking about it in those terms. I have a sneaking suspicion that the phrase "te ao Māori" is invented. There's a truckload of very different cultural perspectives going on for different iwi and hapū, and to singularise that is counterproductive.
The starting point is to say, "What's important to this tribe?" One reason why I would never rule out personhood for, say, Te Urewera or Tongariro is that for the tribes involved there, there were elements of personhood in what they do. For my own tribe, Kāti Māmoe? There's nothing about personhood for us at all. But we are widely held up to be sacrilegious, early victims of colonialism who have lost our way, whereas Tūhoe are often considered to be staunchly defensive of their culture. The "mountains marrying the mist" in Tūhoe's case, and the "I am the mountain, the mountain is me" understanding of Te Heuheu's relationship with Tongariro, suggest to me that there is something endemic about personhood for those particular tribes. So, I think too much gets lost at the "te ao Māori" scale. It's less of a cultural imposition if you look at what was specifically important to Tūhoe, and specifically important to Tūwharetoa.
Taniwha can be an important focusing aid to draw attention to otherwise hidden issues in environmental management. When State Highway One was in the works, the famous taniwha down towards Mercer came into the media. Previously, there had been no way of getting cultural values on the agenda for environmental impact reporting on building motorways. The public was so transfixed with building motorways that it was very difficult to say, "Well, what should be protected in these landscapes? Where should we spend extra money to go around significant sites?" I think personhood is no more or less real for Māori than taniwha. At times, the relevance of both has been exaggerated for Māori, and at times underplayed. But I don't think either concept and their use in resource management are any more genuine or disingenuous.
Personhood and a Rights for Nature approach in Aotearoa embody the tension between Crown governance and Māori sovereignty. Wai 262 has the potential to change the way conservation and environmental management occur in Aotearoa. Wai 262 outlines a partnership, 'in which the Crown is entitled to govern but Māori retain tino rangatiratanga (full authority) over their taonga (treasures).' Do you think this is going to be a workable vision for environmental management? [The Wai 262 claim, also known as the "Flora and Fauna Claim", was a Waitangi Tribunal claim lodged in 1991, and was one of the largest and most complex in the Waitangi Tribunal's history.]
It's notable that it's been a long time since the reports, and I don't think we're much closer to anything tangible coming out of it. Our track record speaks for itself there. The Tribunal reports do quite a good job of saying that resources are inseparable from their metaphysical properties. They make clear that we've got to start putting those metaphysical properties first, rather than ignoring them. And that is quite a fundamental shift for New Zealand. It's not that long ago that a judge in court said that Nganeko Minhinnick and her Ngāti Te Ata people's objections to the Waiuku Steel Works – the taking of freshwater from the Waikato, bringing it to the steelworks, transforming it in terms of heat and chemical pollutants, and then transferring it into the Manukau Harbour – were "a purely metaphysical objection", and that that was reason for them to be dismissed. That they had no substance because they were "purely metaphysical". To frame it like that is incredibly insulting. That was in the late seventies, early eighties.
If we can capture what's been said in the reports and research for Wai 262, and ensure that the metaphysical properties of nature are realised in courts and tribunals, that will be a great thing. I just don't quite see what the mechanism for that is. The intent is there, but it's not like the various parties of Wai 262 have actually come up with mechanisms for getting to that point.
Rotoiti, one of the lakes in the Rotorua Lakes District. In this region, personhood has not been the focus of environmental issues in Treaty settlements. Instead, there has been investment in lake restoration, and an attempt to give a larger role to Te Arawa in environmental decision-making. This approach aligns more closely with locally based leadership than personhood rights approaches do. Photo by Nicholas Rean on Unsplash (2014).
Do you have any ideas of mechanisms that might work for this vision of Wai 262?
I don't know if I should admit to being a bit of an anarchist, but academic anarchy is a little different to populist anarchy! I'm a big believer in flax-roots approaches that involve local expressions of leadership. And that's particularly relevant to the sorts of issues that personhood has been used to address, because giving a landscape personhood is not a local solution. It's actually a globalised rights discourse that's trampling on local expressions of personhood and culture. Hopefully, you caught the distinction I made earlier: there is something meaningful for Tūhoe and Tūwharetoa in their relations to mountains and forests that may make personhood look relevant. But it's not, because personhood doesn't build on what's there; it imposes on top of what is there. I would rather find solutions within the community. The relationship between metaphysical properties and physical nature is a very fine-tuned thing. It can't be understood or well managed from a distance.
In academic understandings of anarchy there's still a role for the state, but it's an enabling role. To make local decision-making and local control work in an ever-globalised world, we need the state to be actively supporting it. But the initiative has to come from below. We don't have a lot of scope for local control in New Zealand. Our particular style of doing things has always been against letting local people take control of their circumstances. A lot of the opportunities for local control have been taken away by the Think Big mentality that we've had in New Zealand for a long time, which led to the Tiwai Point smelter near Invercargill, and the Tasman Pulp and Paper Mill in Kawerau. I think probably the worst thing was the National Development Act in the eighties, which said any major development that is in the national interest can be decided in court in Wellington, rather than at the site. It meant that all the local activists were bankrupted by having to go to Wellington to protest, and most of them just dropped out because it was too expensive. New Zealand's conception of itself as a small underdog means we think we will fall behind the rest of the world unless we take a national development perspective. Local expressions of environmental interest need to flourish for many Māori interests in the environment to be realised. It's when the local gets enabled that we'll be in a position to more honourably and effectively deal with Māori environmental claims.