- Commercialisation of Research
Interview with Assoc. Professor Geoff Willmott
By Alex Chapple
Photo by Michael Longmire on Unsplash

For many young students, commercialisation of research and industrial research are not things we come across very often. And yet, this field of research is incredibly important and brings about a huge amount of innovation and entrepreneurship. If you haven't been exposed to industrial research much, it can be difficult to see how it works, who funds it, and how to get involved. The following is a conversation I had with Associate Professor Geoff Willmott from the University of Auckland. Geoff is a Principal Investigator, as well as the Deputy Director (Commercialisation and Industry Engagement) at the MacDiarmid Institute, and has a joint appointment with the Department of Physics and the School of Chemical Sciences. We talked about how industrial research works, and how you can get involved as well.

How does industry research work? And how's it different to research that is funded, for example, through the Marsden Fund?

Industry refers to companies that are making things, producing items, or selling products and services. In the course of doing that, they come across challenges or opportunities in their business that require research to figure out how to address them. So Apple wants to create an iPhone, everyone wants to buy an iPhone, and that's going to make them and their shareholders a lot of money. But in order to do that, they need to get some research done to figure out how to make a good phone. So that's kind of how industry research works: it creates value in companies. In a very important way, it's not different from any other research. If you're a scientist working on an industrial project, you will quite often follow a very similar process to the one you would follow for a more fundamental project. You have a research question that you're trying to find something out about. History is littered with these discoveries, where fundamental advances come as a result of industrial problems. A really good example would be one of the recent Nobel Prizes, awarded for the blue LED. It was a fundamental materials science problem: how do you get a blue LED material? Solving it meant you could have a white-light LED, so it was an extremely valuable problem. So in general there can be a bit of an arbitrary line between industrial research and fundamental research. Now, having said that, there's obviously some kind of difference, because otherwise we wouldn't be talking about it. So in terms of funding, a company can directly fund research themselves. They might pay, say, myself or the university to carry out research that's of interest to them. But there's also quite a large amount of government funding and quite a large number of schemes that support industrial research, so the companies are not really paying for it. Callaghan Innovation is one of the agencies in New Zealand, and they can fund PhD projects. So the money will come from the government, and it's seen as a public good that supports the company, but it's also a public good in the sense that it supports the broader economy as well. Whereas when you apply for the Marsden Fund, people will not judge you on how applicable your science is; it's all about whether this is a world-leading, game-changing academic idea. In industrial funding applications, it's about what it is going to do, who you are working with, and who's interested.
I should also mention that lots of companies, including some of the best and most innovative, employ their own R&D (research and development) staff. So many people with undergraduate/postgraduate science degrees can go on to become researchers at companies that conduct internal research.

So if a business has a problem that requires research to solve, are the research problems more physics/chemistry-based rather than engineering problems?

Often these lines we draw between departments are pretty arbitrary. Engineering schools, and the profession of engineering, may be more set up to do directly applicable things. Perhaps the chemists and physicists are more set up to think outside the box and do things that are entirely new, but that's a generalisation. But perhaps that's how it can be classified. There are plenty of commercial projects that go through physics and chemistry departments as well as engineering.

The figure shows gross domestic spending on research and development as a percentage of each country's GDP in 2017. Gross domestic spending on R&D is defined as the total expenditure on R&D carried out by all resident companies, research institutes, university and government laboratories, etc. New Zealand's R&D expenditure falls short of the OECD average. Source: data.oecd.org/rd

So I think you had a project that you were looking to commercialise. How does that work? It sounds like it's the opposite direction of what you said before.

So commercialisation is where you take the research that you're doing within the university and say, hey, we think this could be useful outside the university. It's a very interesting area, because it's where the two different worlds meet. People who are researchers, who have done research, all of a sudden come in contact with the idea of a market. You need people to want to buy your product before you've got the product. There's a lot that happens at the interface there, and that's kind of interesting. So one thing is to educate our scientists and engineers about how the commercial world works. We have an organisation called Uniservices. They're attached to the University of Auckland as what's called a technology transfer office. So if you are trying to commercialise your research as a University of Auckland researcher, then typically you go to Uniservices and talk with some of their specialists. They may say this isn't really ready for commercialisation, come back in a year when you've figured out this or that; or they might say, this is great, let's go patent it. Or they might say, this is great, this company we know could really use that technology, so you should go talk to them. The idea of protecting intellectual property, and being able to capture some of the knowledge for your own private benefit, is really important in societies. Otherwise, if you couldn't protect your intellectual property, then somebody could steal your idea and there would be no point in doing the research in the first place. In terms of dealing with intellectual property, there are different ways to go about it. People are most familiar with patents. A patent is kind of like a paper, except you write down what your idea is, and it goes to a patent office and gets judged on whether it's a good patent. Once you've got your patent, then competing companies, including those overseas, that come up with the same idea cannot commercialise it in the domains your patent covers. The other thing you can do with your patent is licensing.
You can go to another company and say, we've got this cool idea, and you can use it for X dollars a month, or something along those lines. That's quite a good model in many cases because it means you don't have to go and set up a business yourself, which can often be hard and costly. This way you can still make money from your inventions. The other way is to set up your own company, and that's what startups or spin-off companies are. More and more are emerging from universities in New Zealand. A lot of young physics, engineering, and chemistry graduates get involved in startups.

I went to the bioengineering building over the summer and there were tons of spin-off companies based in that building.

Yeah, and that's a healthy ecosystem of companies. In order to grow your company, you need capital investment, which means finding people who want to give you their money for a stake in the company. Then when the company grows, they can sell that stake and make money. So you need to find those people who believe in you to grow the company, and it's easier to find those people if you're all based in the same place or building. A good example is Silicon Valley, where a lot of the IT and electronics companies sprang out of one particular geographical location in California. And the idea was that because all these companies were rubbing shoulders with one another, and because they shared people (people moving from one company to another and cross-pollinating ways of doing things), the investors would go to Silicon Valley to find cool things to invest in.

What do you find most exciting about industry research?

I think it's exciting because you can see some kind of tangible effect of what you're doing. You can see a product go from A to B. I worked on one product where the research we did provided the sales material. So when the company was going to customers, they would show the research and the customers would say, wow, that's cool, let's buy it. So you can see that type of impact. It's also a little more dynamic. There'll be harder deadlines that are out of your control. In the academic environment you might set yourself a deadline and then, halfway to getting there, realise there's some other thing that's more interesting and change plans and so on. Industrial research can be a little bit more prescribed in terms of what you're doing, but it's also more energetic sometimes, and that makes it exciting.

How can you get involved in industry research?

One thing is to go look for jobs in these startup companies, because they might be employing people. There are also a number of R&D-heavy large companies in New Zealand that will recruit scientists. If you're looking at research projects within the university, then they might have an industrial partner too. Taking advantage of skills development is the other thing. Uniservices put on workshops, and there are things like the Velocity challenge, where young students can take their ideas and pitch them for investment. Lots of science undergraduate students will go into fourth-year honours and do a research project as part of that. So talk to your supervisor about whether there is intellectual property associated with it and whether you should be publishing it or keeping it a secret. The supervisor should have a bit of an idea about that. So it's not too early to ask around and get involved in commercial projects.
- Explained: Dark Matter
The concept of dark matter explained by Caleb Todd
Photo by Brett Ritchie on Unsplash

Chances are, you’ve heard of dark matter. Whether in popular science articles or the technobabble of your favourite sci-fi show, it pops up pretty much everywhere, and no wonder. Learning that what we think of as ‘normal’ matter only makes up 15% of the matter in the universe, and that the remaining 85% is comprised of some mysterious substance which we can’t touch or interact with in normal ways is quite the bombshell. It makes perfect sense that it would occupy so much of our science consciousness; it’s a concept at the very boundaries of what we know and understand, whose absurdity is easy to explain, and which has a catchy name and pub-quiz-style facts associated with it. Perhaps the strangest part of dark matter, though, is that you’ve probably never been told where these crazy ideas come from. Dark matter is widely known but not widely understood. That may be why you’re reading this article. The natural assumption is that anything which can stump the best physicists of our generation must be so advanced that it will be beyond the reach of anyone without a PhD. In reality, though, dark matter can be understood by anyone who’s spun something in a circle and has experienced gravity. I’m guessing that’s most of us. By the end of this article, you will understand why scientists treat an idea as ridiculous as dark matter seriously. To get there, though, we need to make sure you know enough about swinging things in circles and experiencing gravity.

Swinging Things in Circles

In 1686, Isaac Newton revolutionised physics when he laid out his three laws of motion (although we only need the first one). Newton realised that objects move in a constant direction at a constant speed unless there is a force acting on them. This can be unintuitive because you’ve never set something in motion that has kept going forever, but that’s only because forces like friction and air resistance slowly sap a moving object’s energy. If you threw a ball out in deep space, far from any gravitational pull or atmosphere, it would keep moving in the direction you threw it and at the same speed. That’s Newton’s first law in action. Conversely, if you threw the ball here on earth, its path would curve because gravity acts on it as a force. So the first law tells us that to curve an object’s path, we need a force acting on it. But we can go further: it is possible to work out precisely what force is required. Suppose you were to tie a weight to the end of a string and try to spin it around. After a bit of experimentation, you would figure out that three factors influence how hard you have to hold onto the string to keep the mass from flying off in a straight line. These are: how heavy it is, how fast you’re swinging it, and how long the string is. Without going into the details, there is an exact relationship between the force on the object, the mass of the object, its speed, and the radius of the circle in which it moves. If you know the speed, mass, and radius involved in any circular motion you observe, then you know exactly what the force is without needing to measure it. That’s everything you need to know about circular motion to understand dark matter. It’s a very straightforward idea: knowing the speed, mass, and radius of things going in a circle tells you the force acting on them. The other puzzle piece we need is gravity, which is the topic of the next section.
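For readers who want the exact relationship (standard Newtonian mechanics, not spelled out in the article), the force needed to keep a mass m moving at speed v around a circle of radius r is

F = \frac{m v^2}{r},

so doubling the speed requires four times the force, while swinging the same mass on a longer string requires less.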
Gravity

As with circular motion (and, in fact, virtually all of physics), the story of gravity starts in earnest with Isaac Newton. He was the first person to formulate a universal theory of gravity, and it was a game-changer. Astronomy was arguably the biggest field of research at that time, and Newton showed that his theory of gravity could predict the planets’ behaviours perfectly. A few generations before Newton, Johannes Kepler had observed that the planets all moved in ways that obeyed three laws of planetary motion. Kepler’s laws described the properties of celestial motion, but gave no justification for why they were so. They were observations lacking explanation. When Newton published his theory of gravity, he proved its validity by showing that Kepler’s three laws arise as natural consequences of the theory. Newtonian gravity solved arguably the most significant outstanding problem in physics at the time, so, naturally, the scientific community accepted it as correct. But the story doesn’t end there. Over time, as we made more refined measurements and Newton’s theory of gravity was put to the test, people started noticing discrepancies. Newton’s predictions did not match perfectly with what we observed out in the universe. Most famously, in 1859, the precession of Mercury’s orbit was observed to be too large to be purely Newtonian. Ad hoc justifications were proposed, but they all ultimately failed to be satisfying solutions. On the one hand, Newton’s theory of gravity had been so successful in so many areas that it was difficult to discount it. On the other hand, it had serious gaps that couldn’t be ignored. A new theory was needed. Enter Einstein.

In 1905, Einstein had his annus mirabilis, or ‘year of miracles’. Within the space of a few months, he published four papers that would revolutionise physics: one which decisively proved the existence of atoms, one on the quantum theory of light, one proposing the most famous equation ever (E = mc²), and one on his special theory of relativity, which redefined how we understood space and time. All in one year. It then took him ten more years to derive his general theory of relativity. In essence, Einstein’s general theory of relativity was his theory of gravity. It describes the force of gravity as the experience of curvature in the spacetime continuum. When a moving object’s path is bent by gravity, it’s because it is moving in a straight line through curved space, not because it’s moving in a curved line through flat space. If that doesn’t make sense, then don’t worry because it really shouldn’t. The critical point is that the general theory of relativity is a theory of gravity, and — as you may have guessed — it does not suffer the shortcomings of Newtonian gravity. The precession of Mercury’s orbit aligns perfectly with general relativity’s (GR’s) predictions. Every test thrown at GR has validated its veracity. In fact, some of the most well-known and widely-discussed areas of modern physics are direct consequences of Einstein’s theory. The observation of gravitational waves, for instance, was among the final predictions of GR to be experimentally observed. Black holes, too, arise as solutions to Einstein’s equations. In short, it can’t be beaten. Now you know much more than you need to know about gravity to grapple with dark matter. The gist is that we have a robust theory of gravity standing up to every prediction we can throw at it. Well, almost every prediction.

The Problem

Now we finally get to tie our two concepts together.
We know that objects can move in circles given the proper force, and the size of that force is entirely determined by the speed and mass of the object and the circle’s radius. We also know that general relativity is a well-established theory of gravity that can predict the gravitational forces on objects accurately. You may see where this is going. Galaxies are made up of hundreds of billions of stars orbiting around their mutual centre. They effectively undergo circular motion, with gravity acting as the force keeping them bound. Astrophysicists can measure stars’ masses, speeds, and distances from the centre of the galaxy. They can also derive the gravitational force acting on all these stars by working out the galaxy’s total mass and where that mass is distributed. So, we can calculate the force required to make the stars move in a circle, compare that to the force Einstein predicts the stars to be experiencing, and check that they are equal. As it turns out, they aren’t equal. In several different measurements over a number of years, physicists noticed that the stars orbiting in galaxies were moving too quickly. In other words, the gravitational force we know must exist significantly exceeds our expectations. Einstein’s gravitational theory failed to predict the galactic forces acting on stars correctly. There are only two possibilities: either general relativity is wrong, or galaxies contain a lot more mass than we can see.

Dark Matter

General relativity’s success makes it difficult to dismiss, which forces us to consider the second option. The proposal is this: there is a new type of matter that does not interact with light. Its only measured influence on ‘normal’ matter is through gravity. This would explain why galactic gravitational forces are greater than what we would expect; they are subsidised by unseen matter that is covertly increasing the mass in galaxies. Physicists are very original, so they named the substance which doesn’t interact with light ‘dark matter’. The conceptual route we’ve taken here to motivate its existence is only one of many. The mass of cosmological structures can be measured in many different ways, and they all point to the presence of missing matter. For example, gravitational lensing is a process where large masses (like galaxies) with strong gravitational fields bend light around them. We regularly observe gravitational lensing around clusters of galaxies, and the geometry of the lensing allows us to estimate the mass involved. Again, we find that there is more mass in galaxies than we can see. A host of different observations all support this same thesis. However, that is not the end of the story. As elusive as dark matter may be, it does have (feeble) effects on other matter. We should, in principle, be able to detect it directly. Indeed, many researchers are trying to do precisely that. As of yet, though, none have been successful. On the one hand, this is not unexpected. Any direct interaction between a dark matter particle and normal matter will be so slight that incredible experimental precision is needed. Furthermore, there is considerable ambiguity over exactly what properties dark matter would have and where we should look, meaning detection efforts have to be aimed at a broad range of candidate properties. Nonetheless, a lack of direct evidence for these particles is a valid objection to their veracity. That takes us back to the other possibility: general relativity could be wrong.
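For the numerically inclined, here is the "stars moving too quickly" comparison from above written in symbols (a standard textbook estimate using Newtonian gravity, which is an excellent approximation to general relativity on these scales; none of this notation appears in the original article). Setting the circular-motion force equal to the gravitational pull of the mass M(r) enclosed within a star's orbit gives

\frac{m v^2}{r} = \frac{G M(r)\, m}{r^2} \quad \Rightarrow \quad v(r) = \sqrt{\frac{G M(r)}{r}}.

If the visible stars and gas accounted for all of M(r), the orbital speed v(r) should fall off towards a galaxy's edge; instead, measured rotation curves stay roughly flat, which is exactly the "too fast" motion described above and implies additional unseen mass.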
An observation of gravitational lensing by the Hubble space telescope. The orange light in the centre is a large galaxy. Behind it is a blue galaxy whose light is bent around the central galaxy in all directions because of gravity, causing the illusory ring effect. The precise position and shape which the lensed image takes on depends on the gravitational field strength, which, in turn, depends on the mass of the central galaxy and where that mass is distributed. Such observations have provided evidence for the existence of dark matter.

A New Theory of Gravity

Let’s come back to Newtonian gravity. We said that it was a revolutionary theory that was taken on board because of its startling ability to predict otherwise unexplained phenomena. But, over time, discrepancies arose between observations and expectations that forced us to look elsewhere for our theory of gravity. Now let us consider general relativity. GR was a revolutionary theory that was taken on board because of its startling ability to predict otherwise unexplained phenomena. But, over time, discrepancies have arisen between observations and expectations. Sound familiar? Admittedly, dark matter is a more satisfying solution to general relativity’s problems than anything proposed for Newtonian gravity’s. However, there is a second strike against GR’s name which redoubles the doubts over it: we already know it’s wrong. This may seem surprising after I spent so much time talking about how it has passed every test thrown at it, but in saying so I was omitting an important detail: general relativity has only really been tested on particular scales. We know it works incredibly well for objects the size of planets or stars, but it may no longer apply at larger scales. Newtonian gravity worked incredibly well for objects the size of you and me, but it began to fail when considering things the size of our solar system. Testing a theory in one regime offers no guarantees of its validity in another regime.

A computer-generated image which approximates what some scientists believe dark matter would look like. This is only a theoretical conjecture, though, as we are yet to directly detect any dark matter.

The problems for general relativity get worse, though. GR is one of two theories central to modern physics; it describes things on large scales, while quantum field theory describes things on small scales. Both theories have been highly robust within their own domains, but they cannot be reconciled with each other. There is no definitive quantum theory of gravity that would unify physics under a single conceptual framework. Quantum field theory is the best supported theory in science - its predictions have been proven to a level of accuracy beyond any others’ - so we know that GR cannot be entirely correct. Having said that, GR being an imperfect theory does not imply that dark matter does not exist. To dismiss dark matter, we would first have to find a theory of gravity that explains the phenomena that made us consider dark matter in the first place. Many attempts have been made at correcting GR. Some of these attempts have successfully explained some of the phenomena, but none have explained all of them.
The single proposal of dark matter can explain a huge variety of observations, and Occam’s razor suggests that a single, simple proposition that is consistent with many observations is more likely to be correct than a large number of complex solutions which partially explain some observations. For this reason, dark matter is the more widely accepted solution to the problems which astrophysicists face. Still, until it is directly observed, the question mark over dark matter remains, and physicists will continue to debate its veracity. You can judge which side of the debate you sit on.
- Solitons
Summer research project by Caleb Todd

Last summer, I undertook a research project in the department of physics at the University of Auckland. One of the department's most active fields of research is optics, and my supervisor, Associate Professor Miro Erkintalo, specialises in nonlinear photonics, which is the study of high-intensity light. A host of rich dynamics and structures can be observed in this regime; it balances theoretical interest and practical applications and has something which everyone can engage with. Since the mathematics that underpins nonlinear light maps closely to other physical systems, it also provides a convenient testing ground for the behaviour of phenomena that manifest in all fields of physics. One such phenomenon is the soliton. Solitons are localised pulses that propagate without changing shape. A ubiquitous wave phenomenon, they were first identified by John Scott Russell when he noticed that water displaced by a boat in a canal continued to move through the channel at a constant speed and without flattening or otherwise dissipating. This is in contrast to ordinary waves, which broaden, narrow, or break, given time. For nearly a century, this discovery's importance was not fully appreciated; it was a curiosity with little to no use. However, solitons have become a hot topic in recent years, and one of their uses won the 2005 Nobel Prize in Physics. The solitons we will consider are not pulses of water, but light. If you take a stretch of optical fibre and join its ends to form a loop, you have what is known as a fibre ring resonator. Any light you send in will circulate for a long time before it is lost. If the light within the resonator is sufficiently intense, it can experience nonlinear effects that are not present with low-intensity light. For our purposes, we only need to know about one of these effects: the nonlinear refractive index. The refractive index of a material determines the speed at which light propagates through it. A nonlinear refractive index refers to a material's tendency to change its refractive index as the light's intensity changes. Pulses of light are more intense where their amplitude is greatest (the peak of the pulse). So, across the profile (i.e. shape) of the pulse, the refractive index of the material will be changing. This intensity-dependent refractive index, also known as the Kerr effect, causes nonlinear self-focusing, where the pulse is contracted due to the variation in speed across its profile. Usually, the wavelength of a pulse is constant throughout, but the Kerr effect also shifts the pulse's leading edge towards higher (blue) frequencies and the trailing edge towards lower (red) frequencies. This is depicted in the accompanying graphic (image by Emmanuel Boutet, licence: CC BY-SA 3.0). A double balance maintains a soliton's constant shape. On the one hand, driving must offset lost energy: a laser will continually send light into the resonator to compensate for the light which escapes. On the other hand, the width/shape of the soliton arises through a balance between nonlinear self-focusing and dispersive spread. Dispersion refers to the way the speed of a wave changes as the frequency of the wave changes. If higher frequencies move slower, the material is called ‘normally dispersive’, and if lower frequencies move slower, it is called ‘anomalously dispersive’.
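In symbols (standard notation that the article itself does not introduce), the Kerr effect says the refractive index grows with the optical intensity I:

n(I) = n_0 + n_2 I,

where n_0 is the ordinary refractive index and n_2 is the (very small) Kerr coefficient, so the bright centre of a pulse sees a slightly different index than its dim edges.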
In the anomalous dispersion regime, the leading (blue-shifted) edge of the pulse will move faster than the trailing (red-shifted) edge, causing the pulse to spread out. This dispersive spread compensates for nonlinear self-focusing, and the overall width of the soliton remains constant. Solitons formed by this double balance in fibre ring resonators are known as Kerr cavity solitons. They have been studied extensively by research groups around the world, including the nonlinear photonics group here in the University of Auckland's physics department. One reason they have garnered so much interest is their use in generating optical frequency combs (light made up of a series of equally spaced frequencies). These frequency combs, whose development was awarded the Nobel prize in 2005, are used as high-precision tools in spectroscopy, optical clocks, metrology, and GPS technology. One severe limitation on these solitons, though, is the requirement to operate in the anomalously dispersive regime. Referring back to the double balance, the dispersive spread can only compensate for the nonlinear self-focusing if the blue-shifted leading edge of the pulse moves faster than the red-shifted trailing edge. If we are in the normal dispersion regime, the situation is reversed, and the dispersion helps the self-focusing rather than hindering it. There is no balance. The issue is that optical fibre is only anomalously dispersive at specific wavelengths. For example, light in the visible spectrum is normally dispersive in optical fibre. So, if you want an optical frequency comb at wavelengths of light in the visible region, you won't be able to use a fibre ring resonator soliton. There is a way around this, though, which was the focus of my research project. Kerr cavity solitons can be made to exist in the normal dispersion regime by exploiting higher-order dispersive effects. Whether we are in the normal or anomalous dispersion regime is determined by the second-order dispersion coefficient. That's the coefficient on the second-derivative term in the dispersion's Taylor series, if that means anything to you*. In general, the coefficients at all orders will modify the soliton's behaviour, but the effects of coefficients beyond the second order are usually inconsequentially small. However, it is possible to operate at wavelengths where the third-order coefficient is as significant as, or even more significant than, the second-order coefficient. This is useful because the effect of third-order dispersion on a soliton is to shift its centre frequency away from the frequency of the driving laser. In particular, with sufficiently strong third-order dispersion, the soliton can be pumped in the normal regime but have an anomalous centre frequency. This allows the double balance to be restored, even with normally dispersive pumping. There are complications, though. The dispersion parameters are not the only factors in the behaviour of light within fibre ring resonators. Two other central parameters are the driving power (which we will call X) and the detuning of the driving frequency from the resonator's nearest resonant frequency (which we will call Δ). Solitons do not exist at every pair of X and Δ, and these parameters also determine their shape. In particular, larger values of Δ give rise to taller, narrower solitons, which is favourable for producing optical frequency combs because a narrower pulse comprises a broader range of frequencies.
Ideally, we would be able to increase Δ arbitrarily, but there is an upper limit in Δ for soliton existence at any given driving power. When third-order dispersion can be neglected, there are well-understood bounds on soliton existence. An approximate upper limit in Δ can be determined for any given value of X. My project was to probe the existence range of solitons when the third-order dispersion cannot be neglected.

A plot depicting the upper and lower limits of the detunings at which solitons may exist. The purple and red curves are for when third-order dispersion is not accounted for. The blue and orange points are the upper and lower limits which I found for a given, non-negligible strength of third-order dispersion. Linear fits to my data have been presented to guide the eye.

Doing this requires us to turn to computers. The canonical model of light within fibre ring resonators is an extended form of the so-called Lugiato-Lefever equation. Solitons are pulsed solutions to this equation that maintain a constant shape in time. When third-order dispersion is included in the model, it is difficult to obtain analytical results which describe soliton characteristics as Δ and X change. However, we can simulate the Lugiato-Lefever equation numerically (a minimal sketch of such a simulation is included at the end of this article), and by changing the parameters we can observe how the solitons change. In particular, you can pick values of X, Δ, and the third-order dispersion strength, generate a soliton, then increase Δ until the soliton no longer exists to find an existence upper limit in Δ. I found that third-order dispersion drastically reduces that upper bound in Δ. The graph above shows how substantial the decrease in the upper limit is, even for a moderate third-order dispersion strength. This limits the usefulness of a third-order dispersion approach to introducing normally dispersive soliton frequency combs, because large values of Δ enable more efficient energy conversion from the driving laser into the frequency comb. Nonetheless, third-order dispersion solitons are still useful tools for producing frequency combs at new wavelengths. One of solitons' most useful features is how their position and number can be precisely controlled by rapidly modulating the driving over time. Again, this is well understood when third-order dispersion is negligible, but including third-order effects complicates things substantially. The natural progression of my summer research is to investigate how third-order dispersion affects the manipulation of solitons when the driving is modulated. This is the focus of my BSc (Hons) research project. If the properties of solitons with third-order dispersion can be quantified, we will be able to reliably access soliton regimes traditionally not accessible. Kerr cavity solitons underpin a substantial range of technologies, some of whose limitations can be reduced by the promising features which these new solitons possess. I look forward to seeing how this research develops, particularly the breadth of its impact on fields of science, from biology to chemistry to astronomy. The capacity for nonlinear optical systems which support solitons to be scaled down to micrometre sizes means that whatever advances are made could easily seep into everyday use. It is very possible that future generations of ubiquitous technologies like phones and computers will rely on Kerr cavity solitons.
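For readers curious what "simulating the Lugiato-Lefever equation" looks like in practice, here is a minimal sketch of a split-step integrator for a normalised form of the equation. This is an illustration only: the parameter values, normalisation, and sign conventions below are assumptions chosen for readability, not the ones used in the project.

import numpy as np

# Normalised Lugiato-Lefever equation (one common convention):
#   d(psi)/dt = [-(1 + i*delta) + i*|psi|^2 + i*d2*(d^2/dtau^2) + d3*(d^3/dtau^3)] psi + sqrt(X)
# psi: intracavity field, tau: fast time around the resonator, t: slow time.

N = 1024
tau = np.linspace(-20.0, 20.0, N, endpoint=False)
omega = 2.0 * np.pi * np.fft.fftfreq(N, d=tau[1] - tau[0])

X, delta = 9.0, 4.0      # driving power and detuning (illustrative values)
d2, d3 = 1.0, 0.1        # second- and third-order dispersion strengths
                         # (d2 = +1 plays the role of anomalous dispersion here)

# Linear operator (loss, detuning, dispersion), diagonal in Fourier space.
# With fields expanded as exp(+i*omega*tau), d/dtau -> i*omega.
L = -(1.0 + 1j * delta) - 1j * d2 * omega**2 - 1j * d3 * omega**3

# Initial guess: low-power background plus a sech-shaped pulse.
psi = np.sqrt(X) / (1.0 + 1j * delta) + 2.0 / np.cosh(tau)

dt, steps = 1e-3, 50_000
half_linear = np.exp(L * dt / 2.0)
for _ in range(steps):
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))                   # linear half-step
    psi = psi * np.exp(1j * np.abs(psi) ** 2 * dt) + np.sqrt(X) * dt    # Kerr term + driving
    psi = np.fft.ifft(half_linear * np.fft.fft(psi))                   # linear half-step

print("Peak intracavity intensity:", np.abs(psi).max() ** 2)

Sweeping delta upwards in an outer loop and checking whether a localised pulse survives is essentially how an existence boundary like the one in the plot above can be traced out numerically.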
*If it doesn’t, don’t worry. For interest's sake, though, a Taylor series is among the most important tools in a physicist’s kit. It describes the fact that pretty much any function f(x) you’re interested in (in this case, frequency as a function of wavenumber) can be represented by a polynomial. In general, the polynomial will have an infinite number of terms, with each term of the form f^(n)(0) xⁿ/n! (the nth derivative of f at 0, multiplied by x to the power of n and divided by n factorial). However, close to x = 0 the terms with larger exponents quickly approach zero and can be safely ignored. If we are left with only a few terms, our job is greatly simplified, because polynomials are much easier to analyse in general than arbitrary functions.
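Written out in full (standard notation), the series about x = 0 is

f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\, x^n = f(0) + f'(0)\,x + \frac{f''(0)}{2}\,x^2 + \cdots,

and truncating it after the first few terms gives a polynomial approximation that is accurate near x = 0.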
- Seabird Sensory Ecology
An Interview with Ariel-Michaiah Heswell
By Louisa Ren

Ariel is a postgraduate student in Marine Biology who recently completed her BSc(Hons) project on seabird sensory ecology with her supervisors Dr Anne Gaskett and Dr Megan Frieshen. The results of this project are expected to be published soon. Currently, she is working on her PhD, where she will be studying seabird sensory ecology even further.

Tell us a bit about yourself! How did you get involved with your honours project?

I was born and raised in Brunei, which was a very diverse environment and had very diverse animals. I loved animals and conservation, and when I moved to New Zealand, I took the Bachelor's in Marine Science programme and absolutely adored animal behaviour. The marine science department did not really focus on the particular animal behaviour I wanted to study for my postgrad, so I changed my major for my PhD to Biological Sciences to study the behaviour of seabirds from a sensory perspective. My supervisor and lecturer at the time was giving a presentation on seabird sensory ecology, which I got really into. Then I did an honours project on it, looking at whether seabirds with certain types of sensory systems are more vulnerable to bycatch, and why they are more vulnerable to bycatch, and we looked at it from a sensory point of view. It did turn out that actually those seabirds that had a larger sensory system relative to their body size were more likely to be attracted to fishing vessels, which then increased mortality rate—these were amazing findings. And so I was continuing along the lines of bycatch by adding new avenues into it for conservation of seabirds, including plastics and lights. Right now for a PhD, I’m looking at different types of colours, plastics, and lights, seeing if certain seabirds are more or less likely to become attracted to the plastic. Looking at that from a sensory perspective, why are they being attracted? Is it the size of their sensory system? I’m looking at it from that point of view—I’ve just started and I’m seven months into my PhD.

That’s fascinating! What exactly is meant by seabird sensory ecology?

Ecology is the study of animals and the interactions in their environment; how they interact with things—which can be other animals, or it can be fauna and flora. Sensory is just how they view the world—like from their eyes, their hearing, their sense of smell and vision. So how they interact with their environment using these sensory systems.

Why exactly are seabirds attracted to fishing vessels?

So that question is actually really hard to answer. There can be multiple factors and variables as to why they are attracted. There could be just a sense that they're in the same area, so the fishing vessel and seabirds are competing with each other for the same prize, which is fish or squid or something like that. When the fishing vessels are in the same area as seabirds, there's an increase in interaction rate, therefore they're more likely to be attracted to the fishing vessel. However, the actual studies have shown that sometimes the seabird diverts away from its normal migratory route and foraging area to go directly towards the fishing vessels. There is also the potential that because the fishing vessels are after the same prey as the seabird, as well as the bait and the offal discharges the fishing vessel emits, the chemicals and smells resemble the same odours as the seabirds’ prey and diet.
All these good smells for the bird make them think, “I love the smells, I'm going to go towards it.” Maybe they see that as their chum—which is their diet, and so they're like, “I want to try to grab something off that.” As for why they're attracted to lights, it is possibly because some of their prey is bioluminescent and lights up, so they're attracted thinking it's food. Another possibility is the seabirds are attracted because they use celestial bodies such as the moon and the stars for navigation. They may see the light and think, “Oh, I should use that for navigation,” and then collide with the fishing vessel instead.

Were there any particular bird species that were found to be particularly attracted to fishing vessels?

Yes. So there were some studies which have been done in the Hauraki Gulf in New Zealand, where they've got MPI (Ministry for Primary Industries) reports, and they found that the black petrel was a high-risk species for being attracted towards the boat—this is different from light attraction. For the bycatch, the highest was once again black petrels and some flesh-footed shearwaters, and I believe some albatross species—I can't remember, but I think it could be the Buller's albatross or something like that. There were definitely seabirds which had less bycatch risk as well, such as the common diving petrel, the fluttering shearwater and the Buller’s shearwater. With the light attraction, it's interesting because the common diving petrel—which was caught less often in fishing gear—was actually more likely to be attracted to the lights, and they were actually caught as deckstrike when they just ram into the boat. And so common diving petrels are one of the highest, and also lots of Cook’s petrels, whereas something like the black petrel is less often attracted towards the light.

How was the research done? Was it all observational or were there experiments involved?

For the honours project looking at bycatch, I went to museums and measured the skull and the wing lengths, and a bunch of body sizes of six different types of seabird species. We had three that were more likely to be at a higher risk of bycatch, and three that had a lower risk of bycatch. I just did a whole ton of measurements of their eye socket sizes and nostril socket sizes, and did some brain scans to look at their olfactory bulbs and their optic tecta. Then I did a morphometric and sensory comparison of the different species to get a correlation of that. For the lighting experiments, I had a lot more range (because we had some permits) to do experimental designs. We went to the outer islands of the Hauraki Gulf, like the Mokohinau Islands, the Little Barrier Island, Tapanui, and Tiritiri Matangi, and we shone different types of lights into the sky; for example, red light, green light, white light, halogen fluoro, and a huge flood light. We counted the number of seabirds that we saw, and we weren't really looking at any particular species, but the species which were most likely attracted to the lights had burrows nearby.

Was there anything that surprised you from your findings?

Hm… something which surprised me, especially with the lights, is that so far, we actually haven't found any statistical significance between different colours and attraction towards lights. When we did some more research beforehand into it, it looks like it also depends on other variables, such as the location.
If you look at one specific location, there were differences between the lights, but when you combine them, there were no differences—so it's to do with the location as well as the moon phase. We definitely found significant results for the moon phase, which was that when it was a full moon and fully bright, there was less attraction towards the lights by the seabirds, whereas during a new moon, when it is completely dark, the lights were a lot brighter in comparison and more conspicuous, so they were more likely to be attracted.

Did you look into how birds see colour and whether that would affect their attraction to colours and lights?

Yes, that is a very valuable point. Unfortunately, when we want to do that sort of thing and see how the bird sees the colour, the experiment to do that would involve dissecting the eyeball and looking at their rods, cones, and oil droplets—that's only now been done for two species of seabirds. One has just been done in Hawaii, and so [that researcher] is going to publish her work very soon; and another one has been done in Australia. It's really, really hard to do because you need a freshly killed seabird in order to do it, and the ethics and permit to kill a seabird? Really difficult. Unless it's about to die (because of something else), you're not really going to be able to cut the eyeball and get everything done in time. The [researcher in Hawaii] has been doing this for a very long time so she knows what she's doing and managed to do it, but it’s just going to be really hard to do for us as we’re just starting out.

Did this lead to any new questions?

Yes. So in terms of questions of the moon phase, it also looked at why as well—most likely we would have to do the eyeball [dissection]. But also, the basic [study] we could do is just looking at their sensory size, and what the size of the visual system is. It won't help us with what the bird can actually see, but we can get an idea that if some eyeballs are bigger than the others, maybe that's related as well. It also opens an avenue of what other colours they are attracted to; for example: plastics. Are they attracted to certain colours or types of plastics? So yeah, just looking into that sort of thing as well.

What else did you do during your project?

When I did the honours project I was only able to do one CT scan sample per species, so I only got six. Because it was an honours project and was restricted on cash and time, I could only get a year to do everything, including coursework. I would like to do more of it this year, and I've actually been sending emails to museums in order to do some more CT scans. What I found very restricting with the CT scans is that if I try to go to a hospital or clinic to try to get CT scans they all ask, “Why do you want to do that?” Then they just say, “No, I'm not really interested.” And if I go to the bioengineering place it’s great because they understand the scientific research, but at the same time, I'm very restricted with the skull sizes I can scan. I can only give small samples because they can only fit a maximum of about 10 to 11 centimetres, and some seabirds are much bigger—they can have about 15 centimetres. Seabirds like the black petrel and the shearwater, the fluttering shearwater and the common diving petrel—those are really small. But bigger birds, which I want to test in comparison, can’t fit, so I am looking at other places where I can do the CT scans.
And yeah, so I found that very challenging as part of the research for the honours, and I'm probably going to find it challenging over the next six months to a year [with further research].

So you’re hoping to do more CT scans later on?

Yeah, because I would like to look at the olfactory bulb and optic tectum. There is a potential way around it—I'm going to do dissections on the brains of the seabirds. That is a possibility in case the micro CT scans fail. But I did enjoy doing the micro CT scans, because you actually get a 3D image of the brain and it's really cool to look at and do measurements.

What kind of people did you work with during this project?

I worked with museum curators, I worked with people who love to go out to remote islands and do [research]. I worked with rangers from the Auckland Council, the Department of Conservation, as well as my supervisors who know a lot about spectral and colour measurement—just a variety of different people.

What kind of impact do you hope your research will have in your field?

I'm hoping that I can do some more publications and presentations at conferences, and things that could potentially shed some more light on these topics, especially since not many people look at it from a sensory perspective. Not many studies have delved into this sort of area, especially in New Zealand and Australia. Very little is done on plastic ingestion in seabirds in New Zealand, and it's so strange, given how many seabirds we have here. Nothing has been done on plastic ingestion and seabirds—maybe the odd observation, but no one's actually done any experiments or published papers on it. It'd be good to actually give [these findings] to MPI and the primary industries (especially for fisheries) and, for example, go to cruise vessels and say, “Oh, potentially you can use these colours of lights to reduce the attraction rate of seabirds,” especially if you're passing by an island that is known to have burrows. You could dim or change the colour of the lights, or maybe discuss with cruise ships and fishing vessels, as well as with plastic companies. I haven't done any experiments on this yet, but in the next year or so I'll be looking at the plastic and potentially going to people who do the plastic side of things to suggest maybe changing to a certain colour to try and reduce consumption by seabirds and turtles and things like that.

Research aside, did you learn anything else from doing this project?

It really taught me how to manage time to that next level; I did some other projects before, and I thought, “Oh, yeah, I can manage my time alright,” but doing an honours project really teaches you that because you've got to balance your actual coursework, and other papers you're taking, as well as your research, as well as trying to find time for your friends and family. Then there’s also your own time, so you have to go find that balance. It really teaches you to find a balance for your mental health as well as your physical health—like going out, doing some exercise, hanging out with some friends as well, going back to see your parents. It also teaches you a lot more about taking control of the research, because I was very, very scared when I first started. I was like, “Oh, shoot, I don't know if I'm gonna be able to do this—what if I do it all wrong?” I felt like I needed to ask my supervisor about everything I was doing and I really needed guidance [starting off].
That was the beauty of my supervisor—she was always there to fall back on and guide me. As for my honours project, that really taught me to become more independent and I can now actually design my own [experiments]. Now for the research of this PhD project, I've actually started doing a lot more designing of other projects. I was doing some reading, and I decided that I want to do some experiments with colours and penguins. I brought forth a proposal to my supervisor and she loved it. So yeah, it teaches you how to become a bit more independent in your thinking and if you’re like me, you can start off extremely dependent and super scared and nervous about what to do but in the end, you will grow in confidence. Do you have any advice for anyone who is hoping to go into research in this field? You don't have to be a seabird lover at first—I certainly wasn't. I had no idea there were so many different types of seabirds around, like the main albatrosses, cormorants, seagulls and penguins, and then you just open a new research area. Trust me when I say seabirds will grow on you and you'll end up loving them so much. You also don't have to be someone who loves hiking or is very into getting out to the field to get into seabird research. Sometimes there are things in terms of going out to remote islands, going up and down sheer cliffs. I am personally terrified of heights, so I let some other researchers go up a sheer vertical cliff and I'll just wait down here, and that's all. You don't have to do that in order to do this seabird research because although a lot of fieldwork does involve going out to remote islands, you can also do other things like examining what colours they're attracted to, looking at their digestive tracts or something like that, and looking at their morphology; there are a variety of other things you can do if you're interested in conservation and things. Or if you really do enjoy going remotely and going up vertical cliffs, then yeah, go for it if you enjoy the thrill of that, but there's so many different pathways and avenues you can take. Just be creative, and you and your supervisor will find a project. What is your favourite seabird? I love the New Zealand storm petrel. It's such a little cutie and any white-faced storm petrel as well. If you look up a picture, you'll see why they're just so cute. When they skip the surface of the water, the little pitter patter of their feet and fluttering around is so adorable. Also the New Zealand storm petrel, which was actually thought to be extinct until either the late 90’s or early 2000’s when it was rediscovered in Little Barrier Island. Do we have many of those? [Their population] is still really small but they are definitely around as you can see them in the Hauraki Gulf just fluttering around—they're tiny and probably around the size of a dove. Just picture that little thing, skimming the surface, barely touching the surface with their feet and then hopping off again. Overall, how was your experience while doing this project? I really enjoyed the seabird research. It's definitely been a journey and adventure and it has definitely given me a greater love for seabirds, as well as for research and showing that I actually really want to do research. For anyone interested in continuing on with research and if you're into this sort of thing, it's really fun. It gives you a sense that, “Oh, I discovered that,” you know? 
I like that it gives you a sense that you're helping contribute to the scientific world, that you're actually helping your community. So you're actually helping with research and findings to understand how the world works. It's actually real and it's real fun, you'll meet so many interesting people. I've definitely met a whole ton of eccentric people and they’ve been great to work with. It's been a real fun time, definitely challenging. But overall, it's very rewarding.
- OPINION: AUSTRALIA VS TECHNOLOGY
An opinion piece by Struan Caughey

Google would be within its rights to pull out of the Australian market. This is due to the Australian Government's poorly thought-through legislation and lack of understanding of digital technology. Throughout this piece, I will be looking into the Australian Government's approach in its attempt to regulate the internet through two separate pieces of legislation. The first is the impact of the Assistance and Access Act 2018, specifically its effect on encrypted messaging. The second is the impact of the Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2021, and Facebook's and Google's responses to it.

The Assistance and Access Act 2018

On 6 December 2018, the Australian Government required companies to provide access to encrypted data on request. This specifically affected end-to-end encrypted messaging applications such as WhatsApp and Signal, as it, in essence, banned their technology. The reason cited was "national security", in response to the increasing threats of terrorist attacks across Europe and domestically. There are three major flaws with this. The first is a privacy issue, the second is the technical and security difficulty of implementing a compliant system, and the third is the ease with which the legislation can be circumvented, rendering it ineffective.

Privacy: First are the ethical issues of this approach. If two people have a private conversation in a private location, the vast majority of people would agree that both parties would feel violated if a third person was listening in, irrespective of whether it is a friend eavesdropping or a government-planted bug. There is an expectation within society that private conversations are kept private. Why should this not be true when you are having the same conversation via a digital platform? This also ignores the impact this could have on people who require such protections, such as government whistleblowers.

Technical Limitations: The second, and arguably the more compelling, point is the technical limitations of such an approach. To understand this, one needs to understand how end-to-end encryption works. We will look at a situation where person A is sending a message to person B through WhatsApp. The system works through two keys, a ‘public’ and a ‘private’ key; both keys are generated on person B’s device. The ‘public key’ is then broadcast by person B’s device and is available to everyone. This can be used by person A to encrypt the message on their device before sending. This message can now only be decrypted by the ‘private key’, which should only exist on person B’s device. This makes it nearly impossible for anyone to see the message’s content, including WhatsApp, the messaging provider. This is a robust system as it is highly secure, being virtually unhackable except by brute computational force, which would itself take thousands of years with current technology. Even if an individual’s private key did get revealed, you could never construct a large-scale attack on an app using the technology, provided that the encryption is properly implemented. For the Australian Government to have access to the messages, they would have to hold a copy of the private keys. This could be done in several ways, but all of them leave the individual’s security open to malicious actors.
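As a rough illustration of the public/private key system described above, here is a minimal sketch using the open-source PyNaCl library (an assumption for illustration only; it is not what WhatsApp or Signal actually ship). Person B generates a key pair, publishes only the public key, and only B's private key can decrypt what A sends:

# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Person B generates both keys on their own device.
bob_private = PrivateKey.generate()
bob_public = bob_private.public_key           # this part is broadcast openly

# Person A encrypts using only B's public key.
ciphertext = SealedBox(bob_public).encrypt(b"Meet at the cafe at noon")

# Only B's private key can recover the message.
plaintext = SealedBox(bob_private).decrypt(ciphertext)
print(plaintext.decode())                     # prints: Meet at the cafe at noon

# Anyone without bob_private (including the service relaying the
# ciphertext) sees only unintelligible bytes.

Real messaging apps layer much more on top (identity verification, forward secrecy, and so on), but the core point stands: without the private key the ciphertext is useless, and any mechanism that copies that key somewhere else undermines the whole scheme.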
The simplest interception approach would be to have either the Government's or the individual's phone generate the private key and then send a copy to the other; however, this immediately opens a vulnerability to bad actors who could intercept it in transit. The other alternative is that all messages are also sent to a second, government-held account, but this amounts to large-scale data collection that the public would be unlikely to support. Even supposing messages could reach the government securely, the scheme still relies on government storage being both secure and inaccessible to malicious actors. While you would hope a government would be capable of this, recent news shows it is not necessarily the case: in November 2018, Brazil’s high court launched an investigation into a hack that shut all proceedings down. This should be a warning to err on the side of caution, because the same could happen to the Australian Government.

Ineffectiveness: Any organisation that wanted encrypted channels for nefarious purposes still has options; it could code its own app, sideload an existing app, or use a VPN to obtain an existing app such as Signal from another country. Programming your own app is not overly advanced, as Signal’s code is open source. Putting the three issues (privacy, security and ineffectiveness) together, we can see how this legislation demonstrates the Australian Government’s lack of understanding of the technology it is trying to regulate.

Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2021
We now look at our second case study, the Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2021. Since its founding, the internet has been a platform on which publicly accessible information is shared freely. This legislation attacks that principle and has even brought out Tim Berners-Lee, the person credited with founding the World Wide Web, to speak on the issue. He stated that this bill “risks breaching a fundamental principle of the web by requiring payment for linking between certain content online”. To explain how Australia has brought such heated debate to the forefront, we must look at the legislation and its implications. As both Google and Facebook have been in the news over this legislation, we will look at the bill from their perspectives: the current system, the problems with the legislation, and whether there are other solutions.

The legislation’s key point is to require content distributors such as Google and Facebook to pay news sources for the sharing of their articles, since the platforms make income from ads on these searches. Without context, this doesn’t appear to be too great an issue; at first glance it looks like a good way for news organisations, which in general have struggled financially with the shift to digital, to gain a new stream of income. That is, until you look at who the primary beneficiaries of the current system are.

Pre-existing system beneficiaries: In the current system, Google and Facebook benefit from sharing these articles because it brings traffic through their services, which in turn lets them run ads on those searches and generate income. The other side is the benefit to the news publishers: being made available through these services directs traffic to their sites, where they can run their own adverts and generate revenue.
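As a rough, back-of-envelope illustration of why attackers (and governments) go after the keys rather than the mathematics, here is a small Python calculation of how long an exhaustive search of a 128-bit key space would take. The attacker throughput figure is an invented, deliberately generous assumption, purely for illustration.

```python
# Back-of-envelope estimate: exhaustive search of a 128-bit key space.
# The guessing rate below is an assumed, deliberately generous figure.
keyspace = 2 ** 128                # possible 128-bit keys
guesses_per_second = 1e18          # hypothetical: a billion billion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

years = keyspace / guesses_per_second / seconds_per_year
print(f"~{years:.2e} years to try every key")
# Roughly 1e13 years, i.e. about a thousand times the age of the universe,
# which is why the practical attack is to obtain the private key itself.
```

In other words, properly implemented end-to-end encryption cannot realistically be brute-forced, so any compliant "access" scheme has to involve copying or escrowing private keys, with all the risks described above.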
You could argue that the primary beneficiaries in this system are the news sites: Google and Facebook do not need news sites to operate, whereas for the news sites the ability to reach a global audience through these services is invaluable. Given this arrangement, the idea of Google and Facebook having to pay for what is, in their eyes, a service they provide to these news outlets makes little to no sense.

Arbitrator issues: The situation worsens with the payment mechanism the Australian Government is implementing. The two parties are meant to agree on the price the news organisation should be paid, and that is what is charged; however, if no agreement is reached, an arbitrator decides the value, and that decision cannot be appealed. This gives considerable power to the news organisations: even if they had previously been happy with the existing situation, there is no downside to demanding as high a figure as they wish, since at worst the matter simply goes to arbitration. This has already happened, with News Corp claiming it should be paid between $600 million and $1 billion USD for its content by the likes of Google and Facebook. In contrast, Google has stated that it earned only around $10 million USD in advertising revenue from news-related searches in Australia in 2019, while Facebook, asked a similar question at a Senate Committee hearing, said it earned “virtually zero” revenue.

Reaction and backlash: You can see why, in this situation, the tech companies might not be inclined to engage with a system that leaves them so few options. For this reason, I am sympathetic to Facebook removing news sites from its Australian operations and not publishing any of their links abroad, and to Google's consideration of withdrawing from the Australian market altogether. However, I feel they did not go about this in a way likely to garner public support. They did not clearly explain their reasoning, which made it appear that these corporations were using their size to force and manipulate a government, when in reality it was an attempt to demonstrate that they may not be able to operate economically as usual under the new rules. Both acted overzealously, especially Google, which led to the public outcry against them. Deciding that they do not want to work under these conditions is a perfectly legitimate response, and I would understand if they withdrew at least the most affected services from the Australian market. Had that happened, the stakeholders worst affected would have been the news corporations. Instead, Facebook and Google ended up shooting themselves in the foot: they lost public support, taking away their perceived position of control.

Revealing trade secrets: The final issue with this legislation is that it requires the tech companies to tell news corporations about any change to their algorithms that could affect the standing of their news in search results. How this will be conducted is still unknown. Google, for example, makes a multitude of changes to its algorithm daily, most of which don’t even require human input, and often even Google may not know the effect of a specific change. On top of this, Google as a business essentially is its algorithm; everything else is auxiliary. To give this information up is to give up trade secrets.
Another way: There is widespread acknowledgement that news sources are struggling, and for all the failings of this legislation, at least the Australian Government is attempting to remedy the issue. But what are the alternatives? A few ideas have been floated. One is, interestingly, a project of Google’s: a new service called Google News Showcase, which may be coming to New Zealand. Under it, Google pays news organisations and curates their articles, which are made available to users for free. This can include paywalled news sites, and it brings readers directly to the publishers' websites, increasing their traffic; the stories are later made freely available on the web. While this brings in the new stream of revenue the news corporations need, it also does not entirely undermine Google's negotiating position. As a solution, it is a sort of halfway house. There have also been suggestions of making taxpayers’ funds available to supplement news organisations. There is a case for this, as some see access to a wide array of news sources as a democratic right; however, it could be criticised for reducing the autonomy and impartiality of those sources. There may be no clean way to resolve the issue.

Conclusion: As we push further into the digital age, more of these issues will become apparent, and we have to aim for better systems of distribution that do not erode the freedom the web was founded upon. Blunt instruments such as Australia’s are not the way forward; they break existing systems and risk reducing the freedoms of individuals and companies while remaining ineffective at their objective. In the end it comes down to the individual, you the reader. If you value digital news from a particular source and are able to, subscribe or donate to them. As for end-to-end encryption, keep pressure on your local representatives and educate them; while their intentions may be good, they are legislating in areas where they lack knowledge, and that is dangerous and impacts us all. Lastly, if you have the ability to influence changes like the ones discussed here, take away just one thing: don’t look to Australia.
- The Future of Food
Our eating habits reflect our biological needs, cultural practices, and access to resources. Aotearoa is facing mounting sustainability issues, and Fonterra has recently been named the highest carbon emitter in the country after reporting over 13 million tonnes of greenhouse gas emissions to the Environmental Protection Authority. Dr Rosie Bosworth is a specialist in the future of food, with a PhD in environmental innovation and sustainable technology development. We interviewed her in 2021 on our radio segment, Tomorrow’s World, which airs on 95bFM. We decided to revive this interview in light of growing food sustainability concerns for Aotearoa and adapt it into a print article exploring the future of food. While it is well known that changing to a plant-based diet mitigates the effects of climate change in a myriad of ways, for some a stark shift to an entirely plant-based diet just isn’t feasible. So, what could diets look like in the future if the entire planet can’t go strictly vegan?

Is a vegan diet more sustainable?
Historically, humans have consumed meat to satisfy nutritional needs. Because hunting carries high risks and energy demands, intensified agricultural practices have expanded alongside urbanisation [1]. However, as wealth and resource extraction have concentrated in some regions and populations have increased globally, the type and quantity of food produced has changed dramatically. In the last four decades, global meat production through agriculture has increased by 20%, with 30% of the global land surface area used for animal production [2]. The normative practice of consuming meat as part of a daily diet has contributed to biodiversity loss and increased greenhouse gas emissions. However, it is important to remember that the consumption of any resource comes at an expense. We asked Dr Bosworth how sustainable a vegan diet is:

“It is complicated, vegan food has so many types of ‘plant based’ options – some of which are being criticised for having a large footprint themselves – like almond milk. But there are now also more and more advancements in science and biotech which mean we can even produce the same proteins as those found in animals or dairy proteins themselves, without the animals, that don’t require the use of plants as substitutes. When you’re looking at plant based milks, almond milk gets a worse reputation than other plant based milks like oat or coconut, but even when you compare almond to dairy it is markedly more environmentally friendly, especially in terms of water use.”

The idea of lab-grown food, which Dr Bosworth refers to as ‘biotech’, has been rising in popularity. Even large fast-food chains such as Burger King have released Beyond Meat® and Impossible™ Foods burgers. So how do these cell-based meat processes stack up sustainably? A life cycle assessment (LCA) considering the eutrophication, potential land use requirements, and greenhouse gas emissions of these alternative proteins compared to chicken, lamb, and beef (Fig 1) shows a better performance for cell-based meats [3]. However, the energy consumption of cell-based meat production currently exceeds all alternatives.

Cellular agriculture
[Cellular agriculture is] “Taking cells from animals and growing these actual cells outside the animal. By feeding them a carbohydrate feed stock, we don’t need all the energy source to produce that we do to grow animals over time to slaughter or raise as dairy cows.
Another really cool process that’s being advanced right now to produce dairy proteins and other molecules is precision fermentation. Precision fermentation involves programming yeast or fungi to produce the very same proteins and molecules like milk or cheese, without the animal, in large vats. Essentially, the cow is becoming an old piece of tech.”

In response to the long-term environmental degradation that traditional livestock agriculture creates, biotechnologists have conceived a new route to catering to the 21st-century human’s desire for meat: cellular agriculture. As Dr Bosworth mentions, the process essentially starts with a piece of animal tissue taken from the part of the animal we want to consume. These cells are then cultured and given, in vitro, all of the nutrients they would receive in vivo. They grow to maturity in a bioreactor (simply any manmade vessel that carries out biological processes) in much the same way an entire organism would grow in a field, and they reach the same fate such an organism would: they are harvested and processed appropriately.

There are two distinct processes included in cellular agriculture, and they are not limited to producing ‘meat’. Acellular processes can create products like milk by inserting the gene for the desired animal protein into a microorganism used as a starter culture. Production of that protein then occurs in a lab, outside of an animal, so we skip all of the upkeep of the animal and jump straight to the end result: the animal protein we desire. This is the process by which most medical insulin is made, with E. coli generally serving as the host microorganism. These engineered microorganisms do all the work for us and are markedly lower maintenance than farming an entire cow. Cellular agriculture proper, alternatively, takes specific tissue from a biopsy and grows it in a similar way, with a scaffold and nutrients. The difference is that living cells, rather than proteins, are being cultured. The main part of the meat we eat is muscle tissue, so that is where the biopsy is taken from.

Ethics
The ethics of cellular agriculture [4] could fill two entire volumes of this publication alone, so we’ll simply outline them. The pro stance argues that because we avoid raising livestock purely for their resources and inevitable slaughter, the process aids animal welfare. It’s easy to see the argument: we clearly bypass the possibility of inhumane treatment, because we don’t have a whole organism (in the traditional sense) to deal with. It also ties in neatly with the sustainability argument; by avoiding raising a whole cow, we avoid the emissions that cow creates simply by existing, and that is before we even get to the supply-chain costs of maintenance, space, land use, water consumption, and the myriad processing steps that follow the animal’s demise. The inverse of these arguments is a trickier conversation: gene editing may be perceived as bound up with the ethics of ‘playing God’, and with the implicit debate over what the definition of ‘life’ is. Of course, these cells are ‘living’, but are they sentient? And how does that change how they should be treated? Answers to these questions are value-laden, and if left unresolved they become a serious obstacle for the technology.
If people are unsure how they feel about a new technology, they either A) won’t participate or B) will actively rally against the concept. There’s little point in developing technologies such as this if they won’t be accepted and adopted by the populace. Science operates both as a knowledge-seeking exercise and as a way of catering to the needs and desires of the population; if no one uses it, it’s a dead end.

Figure 1. Comparison of the environmental impact of meat and meat analogues. Data are normalised to the impact of beef production. Eutrophication does not include data for mycoprotein. Land, emissions and energy data for mycoprotein were adapted from a 2015 LCA. Data for beef, pork, chicken and cell-based meat (CBM) were adapted from a 2015 life cycle assessment. Data for plant-based meat (PBM) were adapted from an Impossible™ Beef LCA (land, eutrophication, emissions) and a Beyond Meat® life cycle assessment (energy use). Figure adapted from Rubio et al., 2020.

Manipulating soy to mimic meat textures and tastes
“Heme (or leghaemoglobin) is a molecule found in cows but can also be bio-fermented and harvested using the same DNA found in soy root nodules. It’s what gives meat that umami aroma and meaty rich smell and taste. [This is important because the] average consumer wants a similar experience with meat burgers – not a rubbery or bland soy product. There’s a sensory experience that tofu may not give, and we need to offer the same sensory experience to get mainstream audiences to switch over.”

As an alternative to cellular agriculture, the biofermentation of heme may provide another answer to people’s rejection of plant-based alternatives. As important as taste is, it’s not the only component of the sensory experience of food. As Dr Bosworth explains, heme is found in cows and is used by meat-substitute products to recreate the umami aroma that can be so imperative to enjoying meat. Haemoglobin is the source of heme in cows, but in a food context it can be replaced, with remarkable likeness, by leghemoglobin. Leghemoglobin is found in the root nodules of soy and other legumes, where it supports nitrogen fixation as the plants grow. The two molecules are oddly similar, which is why leghemoglobin has been appropriated to mimic ‘blood’ in plant-based foods.

There are several ways to obtain heme from leghemoglobin. The most intuitive is digging up the roots of soy plants and extracting the goodness inside. However, this seems counterproductive if part of the aim is sustainability; ripping up acres of crops for their roots doesn’t quite fit. So researchers found another way to produce leghemoglobin: fermentation. Again, our tiny microorganism friends help us battle climate change. Fermentation for heme production uses genetically engineered yeast into which the gene for leghemoglobin production (in soy, this gene is LBC2) has been inserted [5]. The ancient process of fermentation then ensues, and a whole batch of yeast is created, working hard to produce leghemoglobin. It is similar to the acellular agriculture process described above. After this, it’s simply a matter of isolating the leghemoglobin produced and adding it to whatever meat substitute a company desires.
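Before moving on, a brief note on Figure 1: its caption says the data are normalised to beef. As a purely illustrative sketch of what that normalisation step involves, here is a short Python snippet in which each product's impact in a given category is divided by beef's impact in that category. The numbers are invented placeholders, not values from the cited LCAs.

```python
# Illustrative only: placeholder numbers, not real LCA results.
# Each impact category is divided by beef's value, so beef = 1.0 by definition.
impacts = {
    # product: {"land": m2 per kg, "emissions": kg CO2e per kg, "energy": MJ per kg}
    "beef":            {"land": 100.0, "emissions": 30.0, "energy": 40.0},
    "chicken":         {"land":  12.0, "emissions":  5.0, "energy": 20.0},
    "cell-based meat": {"land":   3.0, "emissions":  4.0, "energy": 60.0},
}

def normalise_to_beef(data: dict) -> dict:
    beef = data["beef"]
    return {
        product: {category: value / beef[category] for category, value in categories.items()}
        for product, categories in data.items()
    }

for product, scores in normalise_to_beef(impacts).items():
    print(product, scores)
# With these placeholder inputs, cell-based meat scores well on land and emissions
# but worse than the alternatives on energy use, mirroring the pattern described earlier.
```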
Environmental psychology and the value-action gap
When we consider what we will be serving for our University reunion dinner in twenty years’ time, we may be leaning towards in vitro meats. Although a vegan diet offers many benefits, the sensory experience and the cultural ties to food associated with feelings of satisfaction will remain [6]. When you know something tastes good, that is chemosensation at work: a chemical stimulus on a nerve ending (a taste bud) is mediated through taste and smell, and our bodies naturally like things that give us energy, such as sugars and carbohydrates [7]. We asked Dr Rosie Bosworth how future food developers take this into account:

“When we think about food, future foods don't want to consider themselves as food tech or science start ups, especially when positioning themselves for the end consumer. By and large they still consider themselves as a producer of tasty food, that is the most important bit.”

The cultural and sensory process of eating meat can be related to environmental psychology through the value-action gap [8]. Although we may be aware of the environmental and health benefits of eating less meat, stronger values such as convenience, habit, and satisfaction result in continued meat eating. A 2021 New Zealand questionnaire found that an omnivore diet was the most prevalent dietary category (94.1%), and that gender (men) and political ideology (conservatism) predicted lower probabilities of transitioning from a meat to a no-meat diet [1]. As climate concerns, food production demands and ethical tensions continue to grow, it will be interesting to see which food technologies gain mainstream traction. This is where future foods such as cell-grown meats may come out as the top dish.