Smooth Brain Society

#26. What is Artificial Intelligence?- Dr. Andrew Lensen and Amer Hussain

October 19, 2023 | Season 2, Episode 26
Guest: Dr. Andrew Lensen and Amer Hussain

Dr. Andrew Lensen and Amer Hussain from the School of Engineering and Computer Science at Victoria University of Wellington come on the podcast to talk about Artificial Intelligence (AI). They cover how it works, the different types of AI, ethics around data protection and the shortfall in the current laws to protect people when it ultimately takes over. 

Support us and reach out!
https://smoothbrainsociety.com
Instagram: @thesmoothbrainsociety
TikTok: @thesmoothbrainsociety
Twitter/X: @SmoothBrainSoc
Facebook: @thesmoothbrainsociety
Merch and all other links: Linktree
email: thesmoothbrainsociety@gmail.com


Transcript



Hey, can you hear us? How are we going? Good, thanks, how are you? Yeah, good, good. How does it feel? Thanks for joining. Yeah, happy to. I should thank Amer for convincing you. Oh, it didn't take much convincing. He's a nice boy. That's good. That's good. That's something.

But yeah, Andrew, welcome to the Smooth Brain Society. So essentially the way it goes was, I was interested in asking, or talking about, artificial intelligence, because I have no clue what it is about. And I thought, who do I know who knows anything about AI? And then all I needed to do was walk across the hallway and knock on my brother's door, because he's doing a master's in artificial intelligence. So I've got him on here, and he recommended that Dr. Andrew Lensen, and I need to specify doctor, would be one of the best people to ask to come on and talk about artificial intelligence and the ethics around it.

So Dr. Andrew Lensen, I will read out his introduction, but I do not know half these words, so he will need to explain them further in a bit. He has a PhD, is a senior lecturer in artificial intelligence at the School of Engineering and Computer Science. His core research interests center around explainable AI, genetic programming, unsupervised learning, and real-world/interdisciplinary AI in New Zealand. So I read all these things, I have no clue what they mean, and hopefully across the course of this hour, you will help me break it down. Also on the episode is my brother, Amer Hussain. I'll let him introduce himself a little bit: what he does and what his relationship to AI is.

Hello. Yeah, so I'm doing a master's in AI. My research is in the use of genetic programming for multi-target regression. In other words, simultaneously predicting multiple specific values based on a given set of information. Yeah, and most specifically using fish data. So I'm looking at information we get from fish, and based on that information, assessing the level of nutrients: proteins, carbs, fatty acids, and a select group of nutrients in those fish.

Awesome. Cool. So yeah, welcome guys. Welcome, Andrew. Thanks for coming. Can I start with a little bit of background on yourself? I gave a little bit of an introduction, but I guess that does not give the full story. So could you tell us how you got into AI, and what AI is?

Yeah, so I like to think of myself as an accidental academic, in that I never had grand visions of being a doctor of any kind or working in a university. But I was fortunate: I studied here at Vic as well, so I was fortunate to have opportunities to do things like summer scholarships, and then to do quite an interesting honours project, and then that sort of went step by step into a masters and then into a PhD, and then I sort of just stuck around because it was interesting, and then they started paying me, and so I was like, oh, you're going to pay me? I guess you want me here. No, I'm being slightly facetious, but I've sort of stumbled into this. I mean, I started off being interested in computer science and chemistry, that was my initial double major in my first year. I quickly realized that organic chemistry was hard and I didn't want to do a whole paper on it, so I went back to just computer science. And then, yeah, I've obviously done my PhD, and then did a short postdoc, and then I've been a staff member here for just over three years. And yes, really enjoying it.
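[A minimal sketch of the multi-target regression idea Amer describes: one model predicting several values at once from the same inputs. The fish measurements, targets, and numbers below are invented for illustration, not his actual data, and an off-the-shelf random forest stands in for his genetic programming approach.]

```python
# Toy multi-target regression: predict several values (pretend protein,
# fat, and carbohydrate levels) simultaneously from the same features.
# All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                      # e.g. length, weight, age, condition
W = rng.normal(size=(4, 3))
Y = X @ W + rng.normal(scale=0.1, size=(200, 3))   # 3 targets predicted at once

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, Y_tr)
print(model.predict(X_te[:2]))                     # two fish, three predictions each
```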
What is AI? That's a fun question. That's something I've been trying to teach my first years this semester. And the way I like to think about it is: AI is any computational system that has behavior, or does things, that we associate with intelligence. And so really it comes back to the definition of what you think is intelligent, right? So when something like an ATM first came out, a lot of people would say, wow, it's giving us money automatically, that's really intelligent. And so maybe back then that was kind of AI. But now we take these for granted, and with things like ChatGPT and, you know, self-driving cars, the barrier, or the level, of what is and isn't AI is constantly moving. And so it's a bit of a loose definition, but I would say: any computational system that displays intelligent behavior.

Cool. But I guess, because you said ATMs and stuff, there are certain things which people might have considered AI which probably aren't, right? So like a calculator or something: because the human mind can't really calculate that way, you think it's very intelligent, but the human mind can't do maths the way a computer or a calculator does maths. Does that kind of factor into AI as well?

Yeah. I mean, we have different levels of AI, if you will. So we have things like narrow or weak AI, where we have AI that's very good at very specific tasks. So for example, you could have an AI algorithm that can do your tax return. And that is something that, as you allude to, even a human can't necessarily do. You know, I can kind of do my own tax return because I'm a bit nerdy, but most people wouldn't want to do their own tax return, or need to. And so we already have situations where AI is allowing us to do things that we can't or otherwise wouldn't do. And so that idea of intelligence, I think, is more around things that we associate with experts, for example. So you could hire an accountant to do your tax return, or you could get AI Tax Agent 101 to do it for you. And so, yeah, it's a moving goalpost, I think, but it comes back to that core definition of what you think is intelligent. And it is very vague, and I think that's part of why it's become such a buzzword as well, because it's very easy to sell something as intelligent.

Well, yeah, so you often hear AI with the term machine learning; those two tend to go hand in hand. Could you elaborate a little bit on whether there are any differences between the two, or are they a subgroup of each other?

Yeah, yeah. So you're right, machine learning is a very common term as well. And I think, yes, machine learning is a subcategory of AI, essentially. And when we talk about machine learning, we really mean algorithms that learn from data. So you give a machine learning algorithm a big set of data, for example the whole internet, in the case of things like ChatGPT, or a billion images, in the case of some of these image recognition algorithms. You give it a bunch of data and it will train on that data: it takes the data and then tries to learn to give you the right answer. So if I say to you, here's a hundred photos of cats and a hundred photos of dogs, then your machine learning algorithm will basically take those and, one by one, try and predict the right answer. And every time it gets it wrong, it will change something about how it operates to try and fix that mistake.
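[A toy version of the training loop Andrew describes: guess, compare to the label, and nudge the model whenever it's wrong. The two "photo features" are invented stand-ins; real systems learn from pixels.]

```python
# Perceptron-style loop: every time the model gets an example wrong,
# it changes something about how it operates (its weights) to fix the mistake.
import numpy as np

rng = np.random.default_rng(1)
labels = (rng.random(100) < 0.5).astype(int)           # 1 = dog, 0 = cat
X = rng.normal(size=(100, 2)) + labels[:, None] * 1.5  # made-up features per photo

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                                    # several passes over the data
    for xi, yi in zip(X, labels):
        guess = int(w @ xi + b > 0)
        if guess != yi:                                # wrong answer:
            w += lr * (yi - guess) * xi                # adjust the weights
            b += lr * (yi - guess)

accuracy = ((X @ w + b > 0).astype(int) == labels).mean()
print(f"training accuracy: {accuracy:.0%}")
```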
So yeah, machine learning is sort of a subtype, in that it's very data-driven, whereas AI is a more general concept and can be anything intelligent that's artificial.

Since AI is a subset of machine, sorry, machine learning is a subset of AI, and machine learning is essentially using a lot of data and teaching something over time to get better, what other forms are there in AI then? Why is this just a subset? What are the other things?

Yeah, yeah. So, for example, there's a thing in Europe called the Human Brain Project, where essentially a bunch of researchers had quite an ambitious goal of trying to make an artificial brain that would act very much like a human brain, but was entirely driven by computers. And so that would be a form of intelligence, in that we know our brain demonstrates intelligence, that's how we're talking here today. But that's not machine learning, because it's not about trying to predict outcomes from data; it's more about trying to recreate or simulate intelligence. And so it can be a little bit of a subtle distinction sometimes, but you can think of cases where you don't necessarily have a bunch of data you're trying to process, but you're more trying to model intelligence. And so that's something that cognitive scientists are obviously quite interested in, and there's some people even here at Vic who are looking at whether you can do that and what it would look like.

Would you say, a few years ago Google developed that system that could play Go, AlphaGo? Because you can't really imagine it was a process where they gave the system a whole bunch of games and it just learned on that information. It probably learned the rules of the game, right? Would that be a form of non-machine-learning AI?

Yeah, I think again it gets a little bit blurry, you know, definitions are always changing. But yeah, when Google was producing AlphaGo, which is, I guess, the AI model that is beating champions at Go now, they used a lot of what we call reinforcement learning, which is a type of machine learning, in that they basically simulate a whole bunch of games, and then the AlphaGo model tries to learn strategies to win at those games. And so it will play a game, and then at the end it will get either a reward, so for example, I won this game in 17 turns, that's good, or maybe a penalty: I lost this game, my strategy wasn't so good. And so I think primarily they're using this idea of reinforcement learning, which is a type of machine learning, but is not quite as data-driven perhaps as we would traditionally say, because it's not really a big database, right? It's more simulating games, playing games, and then getting feedback. So, yeah, the lines do blur a little bit there.

Okay, cool, that kind of makes sense. Can I ask, considering you said you love teaching this to first years, what is the hardest part of trying to explain AI to first years?

What's the hardest part? Yes. I mean, I don't know if there is one hardest part. I actually find it a really enjoyable challenge, because our first-year AI course has no programming requirements, no maths prerequisite, no maths background required. And so you're really forced to try and figure out the essence of these topics you're talking about, and explain them in a way that, you know, does away with all the technical details.
And I find, you know, things like using lots of analogies, for example, or even demonstrations in class. In one of my first lectures, I had a photo of my dog and a photo of ChatGPT, and I was like, okay, let's figure out which of these is intelligent and why. And so I think examples like that are really, really important. And it takes quite a bit of thinking beforehand, sort of, how would I explain this to, you know, first-year Andrew, rather than the computer science prof, in a way that makes sense but also doesn't trivialize it too much. Because I don't want to just say, you know, here's this fancy thing, don't worry too much about it; you want to give them some idea of what's going on. And so it's been a really fun challenge, and it's actually forced me to understand concepts better myself as well. And so I really like that part of it: it's not just how you teach students, but it's also clarifying those thoughts in your own mind around, what are the fundamental principles here?

Yeah, I concur, because I'm tutoring the course which Andrew is running, and you have a wide variety of students coming from different majors. You can't really use a singular explanation for each of them. Other than the aspects of some being visual learners, some liking to read more, it's more that the same analogy won't fit everyone. You're going to have to find different ways to explain this idea of how the images are trained on, or how the information is learned, how the AI model tends to improve over time. And yeah, it's quite fun actually, because it just helps build my foundation on the subject as well.

Cool. So then can I ask both of you guys, what are some of the key concepts which you would first target if you wanted to teach someone like myself, who has no experience in AI?

Yeah. So I always, well, I say always, it's the first time we've run the course, but I very much started the course with a discussion on what intelligence is. Right? Because to me, if you're going to do a course in AI, then you should first of all have some idea what intelligence is, because otherwise how do you know what artificial intelligence is? And so some of the discussion we've already had today, I think, is a big part of that. And then from there we go into quite a lot of the fundamentals of machine learning, and I think that is because it is really the center, or the beating heart, of AI today. So when we see things like ChatGPT, or these text-to-image things like DALL-E 2 and DALL-E 3 and Midjourney, you know, these image generation tools, they're all using generative AI, and that at its heart is using machine learning. So if you want to understand modern AI, so to speak, you need to know machine learning. And so, different concepts, even just concepts like, what does data look like? What is this idea of tabular data versus text data versus image data? What does it mean to have data that is labeled versus unlabeled? So for example, I gave before the example of cats and dogs, right? So if I give you photos of cats and photos of dogs, and I tell you these are the cats and these are the dogs, that allows you to do what we call supervised learning. So you can use those labels to tell your algorithm whether or not it's giving you the right answers.
Whereas if you don't have those labels, you have to do a thing called unsupervised learning, which is a whole different type of approach. And so I think these fundamental ideas can get you quite a long way in terms of understanding what is being used out there. And I think they can be quite empowering, because you can kind of go, yeah, I can kind of see how ChatGPT came to be. And for first years, that's kind of a cool thing, right? I mean, it's not necessarily what they would expect coming in. And so we really try and focus on giving them those tools to understand the overall principles, and then obviously at second and third year you can get more into the technical details. But Amer, what about you? What have I missed?

Well, not much really, I think you just covered all the bases there. Actually, one of the words you mentioned leads up nicely into some of your research, which is unsupervised learning. I think it might be a good little concept to discuss, because that's actually something we see quite often day to day on the internet. Could you explain to us a bit more about your research in unsupervised learning, but first, what unsupervised learning is?

Yeah, for sure. So when we say unsupervised learning, that is essentially when we have data that has no labels. So if I, for example, take your Netflix viewing history: I know that, you know, Amer, you've been watching Treasure Island for the last couple of weeks, and I can see you've been watching every night, two or three hours a night, and I can sort of build up this profile.

Misinformation right there. This is misinformation.

I said it on a podcast, it must be true. Um, yeah, so I can build up this sort of, you know, I have some data, maybe it doesn't represent Amer's viewing history, maybe it's someone else's, or whatever our data is. But the point is there are no labels on it. So I don't know what I actually want my AI or my machine learning algorithm to predict, or what the output should be. To contrast that, with something like, if I said I want to predict house prices: then we know how many bedrooms a house has, how many bathrooms it has, where its location is, and we know what it last sold for, and so we could build a model to try and predict the price based on that. Whereas for unlabeled data, or unsupervised learning, we don't have any of those labels. And so what we're doing there is things like trying to recommend Amer his next show to watch, based on what other people like him have been watching. So if he's been watching Treasure Island a whole bunch, then maybe he would like to also watch Keeping Up With The Kardashians next, because we know that people who watch Treasure Island often will watch Keeping Up With The Kardashians. And so that would be a form of unsupervised learning, because we're not saying Amer is this type of person, this is a label; we're saying, what do other people like, and what are their behaviors? And so that would be something like a recommender system. And that's actually a lot of what places like Facebook, Meta more generally, and Google, and Twitter, or X I suppose it is now, are doing, right? They're looking at people like you, what their behavior is, and then using that to help suggest things to you.

So it's just kind of pattern recognition, in a way.
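[A toy sketch of the "people like you" recommender idea just described: count which shows co-occur across viewing histories, then suggest the unwatched show that most often appears alongside yours. The histories are invented.]

```python
# Count co-occurrences of shows across viewing histories, then recommend
# the show most often watched alongside what this person already watches.
from collections import Counter
from itertools import combinations

histories = [
    {"Treasure Island", "Keeping Up With The Kardashians"},
    {"Treasure Island", "Keeping Up With The Kardashians", "MasterChef"},
    {"Treasure Island", "Keeping Up With The Kardashians"},
    {"MasterChef"},
]

co_occurs = Counter()
for history in histories:
    for a, b in combinations(sorted(history), 2):
        co_occurs[(a, b)] += 1
        co_occurs[(b, a)] += 1

def recommend(watched):
    scores = Counter()
    for (a, b), count in co_occurs.items():
        if a in watched and b not in watched:
            scores[b] += count
    return scores.most_common(1)

print(recommend({"Treasure Island"}))
# -> [('Keeping Up With The Kardashians', 3)]
```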
It's being like, oh yeah, people do this and this and this, then they should in theory do this.

Yeah, yeah, pretty much. You're trying to find patterns within that data, right? Because you don't have any labels, and so what you want to know is, how is this data represented? What's the structure of it? And so another form of unsupervised learning, and one that I did some work on in my PhD, was around clustering. So if I have, say, I don't know, 10,000 people, I can, for example, cluster or group them into maybe three or five or ten clusters. And then the people within each of those clusters are all sort of related to each other, as far as the data goes, with people in different clusters unrelated. And so something like that is used quite often in things like marketing. So Amazon can group people together who have similar purchase histories, and then they can maybe make an ad campaign just for that group of people. And so yeah, it's this idea of trying to find patterns or structure in your data.

Cool. So, with clustering and all, you need to start off with some data points to begin with, right? So you can't start this out of nothing. So I'm assuming your algorithms get stronger the more data you put in, in all of these; in all these types of AI, you need to start off with something to create more. But then from that point, is there an issue with the data you use itself? So for example, you said using people's search histories or whatever: is there an issue with using that information without someone actually giving it to you?

Oh, totally. I mean, data privacy is an incredibly contentious issue, and I think this year more than ever, with the disputes we've seen in Hollywood and the disputes we've seen even more generally in the art community, right? So things like ChatGPT have been trained on basically the internet, and there are a lot of questions there about, hang on, have they asked permission to use this data? If you write a blog post, or if you host a podcast, do you mind if OpenAI or Google or whoever is using that data to train their model? Should they be asking for permission? Should they be giving you some money to do that? That's a massive legal question and also a massive ethical question. Legally speaking, it's looking like maybe they might not have to, because it's not necessarily a copyright issue: they're not copying, they're sort of learning from it. But there are a lot of lawsuits going on around that at the moment, especially in the US, and especially in Hollywood. And then ethically speaking, it's a much more subtle question, I think, because some people will say, I posted this online, I put it out there for people to read and to use however they want, I don't expect anything back. Now others will say, hang on, no, I posted this for humans to read, and this is a piece of creative writing, maybe it's my poetry; I don't want someone else taking that and then selling my creativity and using it for their own good. And so there's a very big question here, and depending on who you talk to, you'll get very different perspectives as well.

Do you think the stuff produced by AI, then, is original? Like stuff produced from ChatGPT: in your opinion, do you think it's original, or do you think it's just regurgitated work in a different way? So, is it original?
It's often original in the sense that it has never existed before. So if ChatGPT generates a 200-word paragraph, probably that's never been written before. Sometimes it might have been, but most of the time it hasn't. So I guess technically it's original, but again, it always comes back to definitions, right? What is originality? And I think that's very much a question like, what is creativity? And I like to think about it sometimes that when you write something, if I tell you to write a 200-word paragraph on, I don't know, your opinions on Game of Thrones, for example, right?

Oh yeah, I can't remember that.

Yeah, yeah, so only 200 words. When you write that, you're not writing it just from some inherent pre-existing knowledge in your brain. You've read things about Game of Thrones, you've watched the episodes, you've probably read some blog posts or some synopses. You have all this information that you, yourself as a human, have sort of trained on. And so when you write, are you writing an original piece of work, or have you just sort of trained on someone else's? And so that's quite an interesting discussion to have sometimes. Of course, there's a difference in scale, because how fast you learn and how you learn is very different to things like ChatGPT. But if you follow that argument, then you can see how, especially legally speaking, some of these things...

Yeah, yeah. It's definitely an interesting question, because to my understanding of how ChatGPT works, it just sort of uses the words and sentence structures that it knows, and on a probabilistic scale of words which associate with each other, it forms sentences based on appropriate grammar structures, based on appropriate sentence structures. So how far off is that from how we as humans use words in a language? Because we tend not to use words we don't know; we use words which we already know, and more often than not use them in the sentence structure or context that is most applicable. And ChatGPT operates on a similar sort of dynamic. Because of the way it works, it won't always give the same sentence if you ask it a question; it can quite often have formed that sentence in different ways, because it has a small element of randomness to it, which allows it to exert that generative quality, or that creativity aspect of it.

Yeah, so ChatGPT and humans, I guess, can get to very similar results in some ways. So I mean, one of the reasons ChatGPT is so successful is because it kind of passes that test of, this looks like something a human could have written. I mean, myself, having used ChatGPT a bunch now, I can kind of tell when something's written by ChatGPT, because it's very wordy, and it uses certain disclaimers at the end, and it has a very distinctive sort of writing style to it. But that being said, it sort of passes that test of looking human. But it's interesting, because it uses a very different approach to what we do, right? So ChatGPT and models like it are basically just predicting the next word. And so every time it spits out a word, it's going, okay, what are the previous words? What have we said before? And then that all goes into its big network, and it says, okay, here is the next word. And as Amer said, that is slightly random: it will have maybe the top 10 words, and then it will pick one of those with some amount of randomness, which makes it seem more creative or more human-like.
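[A toy version of that "top 10 words, pick one with some randomness" step. The candidate words and probabilities are made up; a real model recomputes them from everything said so far, after every word.]

```python
# Top-k sampling: keep the k most probable next words, then make a
# weighted random pick, which is where the "creative" variation comes from.
import random

def next_word(candidates, k=3):
    top_k = sorted(candidates, key=lambda wp: wp[1], reverse=True)[:k]
    words, weights = zip(*top_k)
    return random.choices(words, weights=weights)[0]

# Hypothetical model output: (word, probability) pairs for the next position.
candidates = [("dog", 0.35), ("cat", 0.30), ("ran", 0.15),
              ("banana", 0.12), ("the", 0.08)]

sentence = ["The"]
for _ in range(4):
    sentence.append(next_word(candidates))   # a real model would re-score here
print(" ".join(sentence))                    # e.g. "The dog cat dog ran"
```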
Whereas when we write things down, we don't, at least I hope we don't, sit down and go, okay, what is the next word in the sentence? If we wrote like that, it would be a very weird result you'd get. It's that sort of game you can play with your kids where you're trying to make a sentence and you each have to choose one word at a time, and it ends up being, you know, talking about butts and rainbows and all sorts of things, right? And so as humans, we tend to be much better at doing things like abstraction: we start off with some high-level idea or concept, and then we break it down into smaller parts, and for each of those parts we then figure out, how are we going to say this, and how do we structure that, and where does it fit into our writing? So we almost go the opposite way: we start at quite a high level and work our way down, whereas ChatGPT is just going along one word at a time. But I mean, it's kind of a means to an end, in that you get a very similar outcome, or at least an outcome from ChatGPT that is good enough to do a lot of the things that we do when we write.

Yeah. You said means to an end, but are there any models which you know of, or AI models which you're working with, which do it the same way humans do, like go from top to bottom? Or is that not really possible?

Yeah, there certainly is quite a lot of research using more of that sort of approach, where you start off with concepts and then try and break them down into sub-concepts. And in some ways, that's almost more one of these examples of AI rather than machine learning, perhaps, because this idea of, for example, learning ontologies or learning hierarchies, and all these sorts of things that we know humans do, or even scientists do, are things you can also get computers to do. But by and large, a lot of the recent progress has been in these more iterative, next-word approaches for language models. It happens that that's the approach that has worked really well, and so now this is the approach everyone has sort of jumped on the bandwagon of, because it's new and exciting and successful. But that's not to say that these other approaches aren't able to give us more representative forms of intelligence that perhaps might eventually align more with how we as humans think.

Interesting. Should we move forward? I listed out a number of things in your introduction; could we go through them and talk about each one a little bit? So the first thing which I said was explainable AI. What is explainable AI? You've been explaining AI to me, but I assume if the term is there, it means something different.

Yeah. So often when we talk about modern AI, as you said, we're talking about things like ChatGPT, and the models driving ChatGPT are just absolutely enormous. So we're talking about something that has 1.7 billion numbers in it, and each of these numbers will be some, you know, real-valued number, so maybe it's 4.973 or whatever it is. And you have 1.7 billion of those, and when it gives you an output, it's basically doing one big calculation using all those numbers. And so what that means is you really have no idea what it's actually doing inside.
If I put something into one of these models and it gives me an output, I can say, oh, that's a nice output, but I don't know how it got to that output. I mean, I can maybe have some intuition based on how it was designed, or I can do some very high-level analysis, but most of the time I have very little idea. Explainable AI, on the other hand, is much more focused on, okay, how do we make models that we can understand? So if I put in your details for a bank loan and the model says, no, no bank loan for you, I mean, you want to know why that is; you want to know how the decision was made. And so if you use something like an explainable AI system, then maybe it can say, oh, your bank loan was denied because your income was below this threshold, or because you don't have a high enough deposit, rather than the same, you know, "no loan for you".

A nice sort of analogy I like to use when it comes to these two types of AI, so regular black-box AI versus explainable AI: a black-box model would be something along the lines of, you see a cow going into a beef jerky factory, and the next thing you have is beef jerky. You do not know what happens inside the factory, and I'm pretty sure not many people would want to know what's going on in that factory for that beef jerky to come out. So explainable AI is just sort of our doorway into the factory, to have a look at what exactly is going on, as grotesque as it might be. So that's the analogy I tend to use when trying to explain what this kind of AI does and why explainable AI is important.

It's a good one. It's great for a vegan documentary as well. But okay, so if we're doing that, apart from the benefit of seeing what's going on, are there other benefits to it as well? Can you then alter what's happening inside because of it, versus the black box where you don't know what's going on at all?

Yeah, I mean, that's a really salient point, right? And so there's almost a scientific discovery point to it as well. If you want to make a really good model, and you can't really understand what it's doing inside, then it's much harder to try and improve it or to make it more accurate, those sorts of things. And so from a scientific perspective, it's very appealing to have models where you can go inside and go, oh look, it's obviously done this thing wrong, how can I change my algorithm to try and fix it? And so I do a lot of work with, for example, people in ecology and in law and other disciplines, and if I give them some massive black-box model that gives very accurate results, they'll go, that's nice, good job, Andrew, but they won't really trust it and they won't really want to use it. Because, first of all, they can't validate what it's doing, and second of all, as an ecologist, you're not interested just in getting the best results; you want to know why you get those results. And so explainable AI as a research tool, I think, is also really interesting, and an often overlooked benefit. So yes, it's a good one to bring up.
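[A minimal sketch of the explainable bank-loan idea: a model made of readable rules, so a decline comes with reasons rather than a bare "no loan for you". The fields and thresholds are invented for illustration.]

```python
# Rule-based, inherently explainable decision: every outcome can be
# traced back to the specific rules that fired.
def loan_decision(income, deposit, requested):
    reasons = []
    if income < 0.25 * requested:      # hypothetical affordability rule
        reasons.append("income below required threshold")
    if deposit < 0.20 * requested:     # hypothetical deposit rule
        reasons.append("deposit below 20% of the requested amount")
    approved = not reasons
    return approved, reasons or ["all criteria met"]

print(loan_decision(income=45_000, deposit=30_000, requested=400_000))
# -> (False, ['income below required threshold',
#             'deposit below 20% of the requested amount'])
```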
Sorry to interrupt, but there's a nice sort of dichotomy, because the current project I'm doing my research for is composed of three different teams: Vic has the AI team, we have a biochem team, and a team from the pisciculture industry. And we asked both parties what they prefer: do they prefer the models to be explainable, or would they rather just have good results? And the biochem team, who want to know what's going on in the model, had a vehement yes, whereas the industry people just said, oh, we don't care, we just want good results, and if something goes wrong, we will point to you. Because that's how the industry works: they just want the results, because they don't really care what's going on inside. So it's about finding the balance, because current AI techniques tend to range from being very explainable but not as good, to stuff like deep learning, which ChatGPT uses, which has, as Andrew explained earlier, billions of parameters and is very tough to understand, but will give you very good results.

So because of that, is not knowing a kind of concern, that the AIs will become sentient? Or can they make their own code, or do they just follow whatever code you tell them? We're getting into the sci-fi part now. It was bound to happen.

Yeah, I mean, I don't think of that as a direct outcome of explainable AI, in terms of necessarily having an effect on whether or not things become sentient. I think that's more of a wider discussion. I mean, there's still a very open question there about whether or not it is possible for these models to become sentient, you know. And I hate to do this again, but again, it's going to come back to your definition of sentience, because we all know that even philosophers and psychologists will debate all day around what sentience actually means. I'm sure this is something you've found in your area, right? And I think what people are really concerned about is having these AI models that are in the real world and are able to do things better than anyone else, or to basically take over the world, right? So the big scary thing is, we'll accidentally create some AI system and it will escape from the lab or whatever it is, and run around the world and kill everyone and take over, and that's the end of humankind. I don't think we're anywhere near that point, and you could debate all day around whether it's even possible to get to that point. And so I tend to, and this is sort of an easy out for me, but I tend to focus a lot more on the more immediate threats, around things like data privacy, data sovereignty, what the social benefits versus risks of these algorithms are. Because I think if we can have a debate around AI replacing jobs, or AI being used in ways that will make existing inequities worse, then we're starting the discussion that will need to be had before we even start talking about sentience. Because when we decide what we want AI to do in our society, I think we'll probably find, actually no, we don't want to create these sentient models. And so then we will need to talk about things like regulations and other ways to try and prevent that from happening. Again, not that I think it's a sure thing that it will happen, but it tends to be better to prepare for the very unlikely worst case rather than go, ah, that would probably be fine.

This reminds me of, I think there was a news article a couple of years ago where some scientists had discovered how to create an embryo from dinosaur bones, or something from dinosaur DNA. And the first comment under it was, there are specifically movies telling us why this is a bad idea.

Yeah, a bit similar. Can we then talk about your work around data privacy and security and all those issues?
Because I think that's very interesting. So yeah, kind of, what do you do around it? Let's just see where you go, freeform.

Yeah, so I haven't done that much directly on the data privacy part of it, but I am quite interested in using things like explainable AI as a tool for understanding how decisions are made with people's data, if that makes sense. And so we see that central government, for example, is very interested in using AI to make their processing more efficient and to automate things. I mean, we know that ACC makes really big use of predictive modeling, or AI. If you go to a doctor for a reasonably normal injury, like you cut your leg on the trampoline or something, you often get a text before you leave your doctor's office saying, yep, your claim's been approved. And that's a predictive model, an AI system, that is making that decision. And so to me the interesting question is then, how can we understand how it made that decision? And I think that itself is a data privacy, or even just a data sovereignty, issue, right? Because if there's a system using your data, you may want to know what data it's using, and, as we said before, how it's using it to reach that decision. And so if you get denied, can it tell you why you don't get your ACC claim? To be clear, ACC only uses it to do fast approvals, and so if it's something that wouldn't get approved by that system, it goes to a human for manual review. And so they have a safeguard there. But even there, there's perhaps a bias, in that if you don't get a fast approval, maybe you're less likely to get approved in the long run. And so, yeah, I've gone on a bit of a tangent here, but to me, I guess, that's the interesting part: how we can build these systems to be trustworthy, especially in these more governmental applications, where the government has a duty to do right by its citizens. If a company is using data that you've given them in a way you're not happy with, you can kind of stop using their product. But if the government's doing it, it's a bit harder to opt out of that sort of system.

Sorry, Andrew, this is a bit of a tangent, but your ACC example brought up this question for me. Because I imagine most of the modeling which government agencies do would be statistics-based, statistical modeling, how would you differentiate between statistical modeling and AI modeling?

Yeah, so I think it's sort of a natural evolution of what government is already doing. I mean, I feel like government was one of the first places to start to use things like business rules, you know, sort of if-then statements. For example, using the ACC example: if your injury claim is less than $200, approve it. You know, that's sort of statistical modeling. And it's quite natural that they're going to take that modeling and then, probably very slowly, because governments move very slowly, and I shouldn't disparage the government, they're very hard-working people, evolve it into more machine learning, you know, more sort of advanced or more modern AI approaches. And so a lot of these concerns that we're talking about are ones that have been around for quite a while, but I think it's just that modern AI, as we know, is much more highly performing, can operate much more quickly and at scale, and is also inherently more of a black box.
So a statistical model, or a sort of business rule model where you can look at the rules, versus a neural network model, where it has learned some rules but you can't see them. And so I think we're just seeing these concerns grow in significance these days.

Do you think, because you said, well, with a private company you can just not provide your data to the private company, do you think governments need to get involved? Because a lot of private companies, I feel, are really big, and in many ways you kind of need them these days, for communication and so on and so forth. So how, as someone who works in AI, do you recommend countering that?

Yeah, for sure. We definitely need stronger legislation, particularly around data. I mean, we have a Privacy Act that is almost up to date with the internet, not quite, they've almost got up to date with the internet, and that was, you know, a few years ago, right? And so our privacy laws are actually pretty good compared to a lot of the world, but are still quite behind where they ideally would be. And so yes, I think there is a lot of work to be done in that space. For example, do you have a right to, well, actually the Act as it stands says that you have a right to know how your data is used. But I don't know how well that's been tested in terms of, can you demand an explanation of how this AI model used your data? And if you ask the government with an Official Information Act request or something, I don't quite know how they would respond, because if they're using one of these neural network models, they might not be able to explain it. And so I think there's a lot of questions there, and a lot of work to be done in terms of modernizing the legislation. And I guess another angle to it is that we know these AI models, especially these very big ones, are from overseas companies, primarily the US, and Europe a little bit as well. And so when we think about AI being used in New Zealand, there are lots of questions for government around, how do we make sure these companies aren't just profiting off our data? How do we make sure these AI systems are designed for New Zealand? A Western model trained in the US is probably not very aware of Māori data sovereignty or tikanga. And so there's a lot of work to be done in that space as well, because it's certainly here already, right? And the impact is going to keep growing. And so I would like to see more discourse in that area, and then hopefully after the election we'll start to see an AI strategy or something coming out.

Your fellow lecturer on the first-year AI course, Ali, I believe he mentioned something along the lines of, in terms of where AI research is, we're probably at a stage similar to that of the Renaissance, when science and tech were developing. So if we use that analogy and say that AI is currently at that sort of standpoint of the 1600s, 1700s, how do you think that should play a role in the moulding of this legislation, to make sure we future-proof it better than previous laws and bills for, say, the internet or cybersecurity?

Yeah, writing law is really hard, and we've seen a lot of examples where law has been rushed through in reaction to certain events, and then ends up being very lacking, or needing a lot of further work.
And so to me, my approach would be to look more at those very fundamental aspects, things like data as a human right. Because if you think about it from that angle, you can make a set of laws that is more likely to be effective for a longer period of time. You know, if I say, all right, any algorithm used in New Zealand on someone's personal data, and I guess you'd have to define personal data, right, but maybe things like age, ethnicity, gender, those sorts of things: any algorithm that uses that data has to be able to explain to that person how it was used and how the outcome was produced. You could tidy that up a little bit, but if you had a piece of law like that, then it would work not only for AI, but also for these other more statistical models or business rule models that we see already. And so I would hope, and I would expect, that it would work for a longer period of time. So rather than having a law saying, okay, ChatGPT is bad and they have to pay us this much money, which in a year or two is probably not really relevant anymore, it's more about those underlying principles, almost like human rights.

Yeah. On the same lines, then, you did mention a little bit about one of the biggest fears being the impact on the workforce, and how AI might lead to further inequalities. We're seeing it a little bit with the writers' strike and so on in Hollywood right now. How much of a threat do you think AI in its current form is, and do we need specific laws to protect against this in the future as well?

Yeah, it's a really tough question. In some ways, I would have expected more disruption to have happened by now. I mean, when ChatGPT was released, almost a year ago now, it was pretty obvious that this technology could do, to a reasonable level, a lot of things that existing jobs were doing. So things like people running blogs, and people in marketing, even social media posts, a lot of that could be automated. And I'm kind of surprised, but definitely sort of glad, that it hasn't been as quick a transition as I expected, and that we still have people in these jobs. And in some sense, you need those people to validate and fine-tune these models and make sure they're giving the right output. But that being said, I do expect those sorts of jobs, especially, to change significantly in the next few years still. I think companies are already working on how to integrate this technology into their pipelines, or into their processes, and the more that happens, the more we'll see fewer of these roles available, and it will be less around people writing marketing posts or social media posts, and more around many fewer people validating those posts or tweaking those posts. And so to me, there is certainly a concern here, in that this transition could be very fast, and that could be quite scary. People like to talk about the Industrial Revolution as an analogy, but the Industrial Revolution happened over a much longer period of time, because building factories takes quite a lot of time, because you have to physically build them, right? Whereas with AI, you just have to download a model, get someone to write a bit of code, stick some things together, and then you've got something working. And so when people talk about things like universal basic income, I think this is perhaps another example of why something like that is a good idea.
Because we don't know exactly where this technology is going in the next five years, and having more social security and these things in place in case we need them is a much safer approach than saying, we'll see what happens, it'll probably be fine.

What do you say to people, so I've got friends who work in, you know, the creative side, like advertising and so on, what do you say to people who, one, are pretty mad at ChatGPT and various AI models, ChatGPT obviously being the most common one, and who also say that it cannot actually replace true artists or true art?

Yeah, I mean, I would be mad as well. And I kind of am mad, because I don't think we've seen government, or even society more generally, sticking up for these people. Right? As a society, we've always sort of undervalued the creative field. And, we'll get a bit political here, but I think that's because we're built on capitalism, right? And so we're driven by profit, and so things like the creative industries sort of have to scrape by, and we don't see the support for them that they probably really deserve. And so yeah, what are these people going to do if they are losing work to things like ChatGPT or to image generation tools? And it's not to say that these tools are better than them, by no means are they. I mean, the creative industry is still much better at communicating with people, understanding what they want, and then building something that actually has a human element to it. And so I think there's always going to be a role for that, much like there's still a role for painting even though we have photography. But it's going to be smaller, because if you want a stock photo for your social media post, or if you want a blog post announcing your new software or your new product, companies are going to choose the large language model that costs a few dollars, rather than the creative professional who costs maybe $150. And so what's my message to them? I mean, I don't want to have to tell them to retrain or get another job, because I don't think that's what they should have to do. But I can't help but be confronted by the fact that it is going to be a smaller workforce, and it's going to be even more competitive, and that's just kind of a depressing thing. And I think it really raises bigger questions about how we as a society value people and value their contributions.

Yeah. Just on the creativity aspect: I'm not an expert in the arts, but the differences, from the lens of an artist, between classical pieces of art and modern art are stark. And from my perspective, if I trained something akin to ChatGPT, but for art, on images from, say, before the 1600s, no matter how creative it is, it probably would never be able to come up with what the modern art movement came up with, which is quite different. It would be able to make plenty of pieces which are inspired by the art of that period, but not be able to make something completely new. So one way to look at it could be that it could also breed some new evolution in the creative world, of doing something new. But at the same time, it is, as Andrew said, a very, very difficult period, and there's no guarantee of how this all will play out.
Yeah, I think there's always going to be a role for artists, because there's always going to be a desire to have art that is driven by human experience and human emotion. You see it when you look at artwork: you talk about artwork from the 1600s, and you learn about what the creator was going through in their life at that time, and how their mental disorder linked to their creative art style. You know, it has all these human elements to it. But I think the economic value of things like digital art, for example, is going to suffer, because it's just so easy to automate, and companies who just want a pretty picture for their press release are not going to see that value, financially speaking. And I think that is where the impact on jobs and careers is going to be an issue. As a hobby, and even for some artists who are still going to be very famous and very successful, that's not going away; but it's that bulk of the creative work, the sort of contracted work.

It sounds like, I wouldn't say a grim, but a sad picture, until legislation and all is done and, as a society, we change our priorities a little bit. So let's move on to something slightly nicer. In your bio, you say you have a number of collaborations under AI for good. So let's talk about that, let's talk about the good parts of AI now. What are some of these collaborations?

Yeah, so I think one of the really nice things about working at a university is I get the opportunity to set my own research agenda, to some extent. If I start doing, you know, jazz piano or something, they might pull me up on it, but if I'm doing something in the area, I can sort of do what I want. And I try to take advantage of that, and try and explore ways that we can use this technology to actually improve issues in our society, AI for Good being a movement, an older movement now, around how we use this in a way that is more or less for good purposes. One of my favorite collaborations is with Dr. Rachel Shaw, in ecology, and she and I are working on using AI to automatically identify individual kākā at Zealandia. So if I put a camera in a nest box and Bob the kākā comes along, I take a photo of him and I can say, oh look, that's Bob, I saw him five days ago in Wilton's Bush, and now I know that Bob's gone from Wilton to Zealandia. And so you learn a whole bunch of things, for example around their nesting patterns, around whether or not they commute, because apparently birds commute to and from Zealandia, I guess it makes sense, right, if you have a nest set up there. And even things like population dynamics: do the same birds hang out together? And things like how much the birds' appearance changes over time. So there's a whole bunch of questions we can answer here that without this technology would be pretty much impossible to answer, because kākā, in Wellington at least, are so prevalent now that they are mostly unbanded. It used to be that when a kākā was born in New Zealand, the researchers would put a series of bands on its legs, and then you would be able to identify them based on those bands. But now there are so many of them that the majority don't have those bands, and usually it's some poor postgraduate student sitting in the field looking at the bands and writing them down.
It's quite expensive, and so this technology would allow us to basically find out a whole bunch more information, and to tackle things like some of the issues we've seen recently where kākā are starting to try and feed out of bait stations, poison stations. Kākā are very curious and they like to try and stick their heads in, and that can end very badly. And if we can understand whether it's the same birds, or whether it's in a certain location, you know, there's all this information we can gather that would lead to a lot of really interesting ecological findings and also conservation advantages. So that's just one of the examples of things that I work on, which I think it's pretty hard to find a bad part to. I mean, yeah, maybe we don't have as many students sitting in the field, but then hopefully those students can spend more time analyzing data and looking at results and finding interesting behaviors, that sort of thing.

What would be the hardest part of this process? Would it be the data collection, or would it be applying the AI method?

It's definitely applying it. So this whole research direction is based on, hypothesis is the wrong word, because we have confirmed it to some extent, but we believe that kākā have quite distinctive beaks, because they are parrots, and when you look at their beaks you can see quite different, sometimes reasonably subtle, but certainly real differences in things like beak textures, the shape of the beak, the angles of the beak, all of these things. And so we know that kākā look different, but what we don't know is, for example, how different they look. Is it minor variations? Is it always big variations? And even things like, do their beaks look the same every single year? So if I look at a bird now and the same bird in five years, will its beak be the same, or will it have changed completely? And so we know that we can distinguish them within maybe a single season, so maybe one summer, but we don't yet know, can we do that every summer? And sort of counterintuitively, the more birds that we get pictures of, the harder it will get, because there are more and more beaks and they're more likely to start to be similar. And so there's a big question there about how we actually use this technology to detect different birds very accurately. And so I think this is another really good example of where using AI for an innovative interdisciplinary application then raises new computer science, or new AI, challenges. And so even though we're applying it to this area, there's this whole question of how we then change our network, or change our model, to be more accurate on this type of data. And so you can also learn quite a lot of fundamental machine learning, or do fundamental machine learning research, at the same time.

That was very well explained. And also, I think my head just, well, expanded quite a bit, so I'm trying to think of what to ask next. I am assuming that what you're doing with kākā, you can essentially do with any bird or any animal, whatever, if you set it up; you could kind of repeat the process, per se.

It depends a little bit. We think it would apply to other parrots, so for example the kea, the mountain parrot in New Zealand, is quite similar to kākā in that regard. There's actually not that much research in this area. A lot of people do it on things like giraffes, who obviously have quite distinctive markings, or elephants. There's a lot going on in Africa with African animals.
That was very well explained. And also, I think my head just, well, expanded quite a bit, so I'm trying to think of what to ask next. I'm assuming that what you're doing with kākā you can essentially do with any bird or any animal, if you set it up; you could kind of repeat the process. It depends a little bit. We think it would apply to other parrots. So for example the kea, the mountain parrot in New Zealand, is quite similar to kākā in that regard. There's actually not that much research in this area. A lot of people do it on things like giraffes, who obviously have quite distinctive markings, or elephants; there's a lot going on in Africa with African animals. I think researchers want to go to Africa, so they do it on African animals. But other animals, like a blackbird, could you do it on a blackbird? Maybe, but I don't know; whenever I look at a blackbird, they all look pretty similar. And maybe that's me being speciesist, but we do need to have these sort of visual differences in order for the machine learning to find them, right? And so if you or I can't tell the difference between them, then it's going to be quite hard, or at least harder, for a model to do that. That's really cool, that's pretty interesting. So in that case, and this is probably the last question I'll ask, since we've been recording for about an hour and a bit: for something like a blackbird, which is pretty hard to tell apart, or for things where human vision cannot identify individuals, which makes building a model really hard, how would you create models for those things? What would the steps be? Yeah, so you'd have to look at other approaches. One of the really classic approaches in ecology is to use things like PIT tags, which are basically little tags attached to animals that emit a radio frequency that identifies them. But again, you run into all those same problems of scale: if you have lots of birds, you can't possibly tag them all. One of my colleagues, Stephen Marsland in the School of Maths, is actually looking at using audio recordings to try and identify kiwi, because kiwi have very distinctive songs, right? That's part of the reason that they are able to pair up for life. So a male and female kiwi will tend to have a, what's the word, monogamous relationship. I don't actually know much about the sexuality of kiwi, haven't learned that one yet, but they tend to pair for life, or at least for a long period of time. And part of the reason they do that is because they recognize each other's calls; you often hear them calling to each other in the night. So based on that, we can perhaps identify kiwi from audio recordings. You can look at other types of data: rather than looking at photos of kiwi, which is probably hard because they're nocturnal, you can look at their audio recordings, and maybe that will give you the information to build these algorithms. So you have to think about: what is the thing that I, or an expert, can use to identify these animals? And based on that, it tells you perhaps where you should start using machine learning.
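As a hypothetical illustration of that audio route, the sketch below summarizes each recording as a set of MFCC statistics (a crude acoustic fingerprint) and assigns a new call to the nearest known caller. The file names are invented, and real bioacoustic pipelines, such as the AviaNZ birdsong software from Marsland's group, are considerably more sophisticated.

```python
# Hypothetical sketch of call-based identification (invented file names).
import numpy as np
import librosa

def call_signature(path: str) -> np.ndarray:
    """Summarize a recording as its mean MFCCs, a crude per-bird voiceprint."""
    audio, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Calls from birds we've already identified (made-up recordings).
known = {
    "kiwi_A": call_signature("kiwi_a_call.wav"),
    "kiwi_B": call_signature("kiwi_b_call.wav"),
}

query = call_signature("night_recording.wav")

# Nearest known caller by Euclidean distance between MFCC summaries.
name = min(known, key=lambda k: np.linalg.norm(known[k] - query))
print("closest match:", name)
```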
Awesome. I know we've been recording for about an hour and a bit, and I really did want to get into genetic programming and all the other stuff, but we'll probably let it be for now; for the listeners, we'll have you on another time to talk about all the other stuff, hopefully post-election, and see if something new has come out. But I'll ask you: what are the next steps in terms of what your research is looking at? What are your goals? Where do you see yourself going with AI? Yeah, so I'm really keen to scale up some of this interdisciplinary research. I mentioned the AI and ecology one, but there's also some work in AI and law I'm quite interested in, around using things like ChatGPT, or large language models, to help with access to justice: giving people legal recommendations or resources they can look at. And so for me, I'm really trying to scale up those collaborations, and part of that is finding good students, and part of that is also finding money. So I'm in that stage of having to write grants and say, please, look, we're doing cool stuff, give us some money. But of course the explainable AI is always a part of that as well. I tend to choose collaborations that are interesting to me, and they tend to be ones where I want to understand more about the model and the data and the problem. Even in that kākā research, we're looking at explainable models, because, as I said towards the start, my ecologist collaborator is not going to trust me, or trust a model, if she can't understand it to some extent. And I'm kind of like that too; I don't want to trust some black-box model. So being able to take all these different interests I have and combine them into research projects is really fun and exciting. I just hope to keep going and take these things to the next level, where they can have more of an impact on New Zealand, or be used more in practice. On the interdisciplinary applications aspect, what do you think would be easier? As of right now, if there is a discipline that wants to apply AI tools, it's usually up to the person with the AI background to get up to speed with that discipline. I'm quite new to using AI for interdisciplinary applications, but it just sort of feels like the AI people have to, well, increase their brain folds and get a better gauge of the domain they're trying to apply the AI tool in, as opposed to the party from the other discipline understanding the equivalent amount of AI. So how much more of a role do you think the collaborating discipline should play in developing the AI tool? Yeah, one of the things I've really quite enjoyed is that Rachel, my collaborator in ecology, has learned so much about AI just from our collaborations that these days, sometimes when we meet with a student, she can be like, no, no, your model seems to be overfitted, have you tried this? And I'm sitting there like, am I even needed anymore? And so I really like this, because both parties hopefully are learning something. Again, one of the nice things about being in academia is you never really stop learning. For a while that's kind of frustrating, because, like, I don't want to be a student forever, but when you realize it can be fun as well, and you can become a bit of an ecologist or a bit of a lawyer, I think that can be really empowering. And here's a little plug for you: we've got our new AI major at Vic, right? And one of the things I really tell students interested in that is: don't do just AI. Do AI and something else, because all of the jobs and interesting problems in AI going forward are going to be much more around how you use it for particular problems or particular applications. If you can have some expertise in both, that's really, really ideal. Or even if you're doing a psychology major or a health science major, you can do a minor in AI or just do a first-year AI paper. It's 15 points, it should be interesting, and it can give you enough of a fundamental baseline knowledge that you can then have these conversations with people in AI.
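The "your model seems to be overfitted" diagnosis Rachel makes in those meetings is usually read off a gap between training and validation performance. A generic sketch of that check, on synthetic data rather than anything from the project:

```python
# Generic overfitting check: compare training vs. validation accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# An unconstrained decision tree can memorize the training set.
model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("train accuracy:     ", model.score(X_tr, y_tr))  # close to 1.0
print("validation accuracy:", model.score(X_va, y_va))  # noticeably lower: overfitting
```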
And then, as Amer said, you're sort of meeting them halfway a little bit. So essentially you suggest that we keep them in the dark and preserve our jobs? For now, maybe. Yeah, I don't think it will work in the long term. That's good, I'll clip that, I'll use it as a plug. I'll send it to Vic so they can put it in their ads. Oh, I did hear this one thing in the news the other day, since you were talking about AI and law: someone tried to defend themselves by asking questions on ChatGPT, but they weren't allowed to? Somewhere in America or something, was that true? Yeah, yeah. There was a lawyer who got caught out using ChatGPT to write their cases for the courtroom, and as far as I can tell it just gave some really stupid answers, and the judge was rather unimpressed. The lawyer claimed ignorance, said, oh, I didn't realize it wasn't allowed. But I mean, students use that excuse sometimes and it doesn't convince me, so I don't think it's going to convince a judge. And I think in the legal profession, where so much of it, at least in the courtroom, is based on making convincing arguments, things ChatGPT is not good at, you're not going to replace lawyers yet. But yeah, that was a particularly facepalm example of someone who does not know what the technology is actually doing. That was awesome, thanks so much. To end, we're going to ask you a few questions. Amer, I want you to answer these as well, so I guess you guys can alternate your answers. They're just real quick, two things, just to end the last five minutes, kind of a wind-down. So whenever you're ready, I guess you guys can pick which order you want to go in. But: summer or winter? Oh, that's a hard one. Winter. Sorry, summer. Alright, outdoors or indoors? Outdoors. Mm, outdoors. Movies or TV shows? TV shows. Alright, if your life was a TV show, what genre would it be? Horror. I'd like some elaboration on that, Andrew, I'm curious now. Yeah, oh no, I like horror TV shows, and so I guess if my life had to be a TV show, I'd want it to be a good one. That would be interesting. But I just realized now that it's probably not such a good experience for me. Unless I was the villain. Maybe I'll be the villain. There you go, sounds good. As an AI person, you probably are the villain for a few people. Yeah, yeah. Probably a slice-of-life comedy for me. Would it be in anime form, or would it be... I wish it were in anime form. Oh, it'd be so much more interesting. Alright, well, I'll give you this then: what superpower would you guys like to have? Invincibility. Invisibility? Invincibility, you can't get hurt. Along similar lines, permeability, so the ability to go through stuff. You guys have given the most different answers; most people usually just say flying or teleportation or something. I like it. I liked someone's answer: being able to find a parking spot at the university. That's unrealistic. It's more likely that you'll be permeable than find a parking spot. Yeah. Cats or dogs? I have a cat and a dog. I can't; it's like choosing a favorite child. But you know your favorite. You can refuse, but you know. I refuse, it's offensive. I can't decide either. I'd say dogs are far more entertaining, but I'd say my spirit animal is a domestic cat. Is a hot dog a sandwich? No.
We've had this debate so often. I still say it's a taco. Has to be a taco. Yeah, is a taco a sandwich? I think a taco is a taco, and I think a hot dog is a taco. It's about your definitions, Andrew. What's a sandwich? You're putting it to an AI model here, so it's the same thing. So a Subway sub is a taco, because again, it's got bread, or carbs, on three sides. Sushi is a burrito, or a burrito is sushi, whichever way you want to look at it. And empanadas and samosas. These are all true statements. Alright, what is your least favorite type of music? Uh, country. EDM for me, I just don't get that noise. Alright, last few. What is the worst thing you've paid money for? I don't know, I haven't spent much money recently. I got some really soggy chips from Burger King that weren't very good, but ask me next time, I'll have to think a bit more long-term. Yeah, fair enough. Controversial: something from Auntie Mina's. No. Weren't you going to say your degree? I haven't paid for it yet; technically the government has, I've got to pay off the loan. Okay, but speaking of your degree, what is something ridiculous that someone has tricked you into doing or believing? I'm not good at these quick-fire questions. Amer, you go, I'll think. There are just so many things that the host has tricked me into believing as an irresponsible older brother. Oh, I've got a good one. My dad convinced me that when you stir a mug of Milo, if you stir backwards it will undo it and make it separate again. And I believed it for like five years. Oh, did you never try to figure it out? I did, you just told me I wasn't doing it right, like you have to go at the exact same rate you started with. God, that's good. Alright. If you were not doing this, your current job, what would you be doing? Uh, probably working for some big tech company, making way too much money. Yeah. How come you chose this then, as opposed to working for a big tech company? Oh, that's a whole podcast in itself, I think. Surely we'll have him on again at this point. Yeah, 100%. I definitely know a cohort of potential listeners who would be very curious about that. First-year AI students. Yeah, well, someone needs to teach them that they can make big money. That's true. Yeah, you're making multiple billionaires; if you're not one, it's fine. And they don't even come back and give me any of it. Honestly. You need to write it into the degree contract when you sign up. Yeah, that's a good idea. Talk to my lawyers; make sure they don't use ChatGPT. Alright, brother, what about you? A synchronized swimmer, maybe? I'm not sure, I'm fully uncoordinated outside of playing cricket. So, maybe a connoisseur of food and media? Unemployed. Exactly, ChatGPT is taking my creativity away. In that universe. Thanks, guys. Right, the very last question: if you could give us one piece of advice to live by, what would it be? Don't do drugs. Actually, no: only do fun drugs that aren't too addictive. Don't do crack or, like, meth, but a bit of weed, a bit of ecstasy. Sounds good, I'm ready. Want to top that, if you want to end it on that? I can't top that. That's actually a good plug for our episode on psychedelics, isn't it? Yeah, that's good: for psychedelics, get the right mushrooms. No, don't get the wrong mushrooms. Yeah, Andrew, if you want, there's a whole episode on the wrong and the right types of mushrooms on this podcast, which you can listen to if you want to do some professional research.
Good. Alright, awesome, thanks guys so much. Thanks everyone for listening. Till next time; remember, do the fun kinds of drugs. Bye. See ya.

Introductions
What is Artificial Intelligence?
Machine Learning and Types of AI
Explaining AI to beginners
Ethical Issues of AI and is it original?
Explainable AI vs black box AI
AI in the real world: data sovereignty and ethics
AI in the research world
Ending Questions