Smooth Brain Society

#47. Bayesian Statistics in Psychology Research - Dr. Taylor Winter

Smooth Brain Society Season 2 Episode 47

Bayesian statistics allows prior information about a population to be combined with the current experimental sample to create stronger inferences. Dr. Taylor Winter, Senior Lecturer in Mathematics and Statistics at the University of Canterbury, uses Bayesian methods to investigate a range of societal and group factors (social psychology).

Dr. Winter takes us through some of the basic ideas behind Bayesian statistics and how it differs from traditional methods of hypothesis testing in research. We discuss examples from his work on authoritarianism and social identity theory, as well as learn the differences between his time working in industry versus academia. Lastly, we discuss his culture-focused projects, including Dungeons and Dragons and how Māori culture can manifest behavioural change.


Support the show

Support us and reach out!
https://smoothbrainsociety.com
https://www.patreon.com/SmoothBrainSociety

Instagram: @thesmoothbrainsociety
TikTok: @thesmoothbrainsociety
Twitter/X: @SmoothBrainSoc
Facebook: @thesmoothbrainsociety
Merch and all other links: Linktree
email: thesmoothbrainsociety@gmail.com


Welcome everybody to the Smooth Brain Society. Today, we're talking to Dr. Taylor Winter, who is a senior lecturer in mathematics and statistics at the University of Canterbury. He specializes in Bayesian models and structural equation modeling, and uses them to investigate a range of societal and group factors. So social psychology aspects. So at the group level, he investigates the sense of self and social identity theory. And I got him on because I thought it was a really interesting conversation to have, where you can kind of see how the worlds of mathematics and statistics align with psychology and how you can use them for psych research, both at a group level and also at individual levels. Thanks Taylor for coming on. Welcome. Yeah, thank you. And thank you for having me on. Yeah, as you mentioned, I work now at the University of Canterbury, and, more importantly, whakapapa down to Ngāi Tahu down Southland as well. So that's what brings me into some of my other research, which is more in a Māori sphere. Social identity theory seems to be my jam at the moment, but I've bounced around quite a bit. I think to say that I'm an expert in anything is a bit amiss. Really, I just follow my heart, which gives me this really eclectic sort of research portfolio that's also been incredibly interesting, and I've been quite privileged to be able to jump around a few different corporates and work with a few different really cool people as well. I think that's really good, because we can then jump around through your various research and we can give everybody a flavor of all the eclectic work you've done. Awesome. And as always, for listeners of the show, we have a co-host on who usually has no idea about the topic. Today, we have Amer back on. He was a co-host, I think it was last year, right? You haven't been on for about a year, so it's been a while. So thank you for coming back. No longer a student anymore. No. Went from the godforsaken world of academia to the even more godforsaken world of capitalism. So, yeah. But hey, the idea that you can ignore that our universities are a neoliberal construct is a bit profound, but okay. I mean, it's a start. It's the gateway into the system, isn't it? Yeah, exactly. It's where we bring you into the capitalist world. Yeah. All right. But because this podcast is more about science and research, as opposed to, I guess, when you get to capitalism and social constructs, it does come under social identity theory one way or another. But Taylor, can we start off with you giving us a bit of background into yourself? You said you jumped around a few places. You said you whakapapa to the south. But I know you did your PhD at Victoria University of Wellington, in Wellington. So, yeah, if you could give people a bit of a background into your work and we can go from there. Yeah, well, I mean, really picking up from our brief foray into capitalism, actually, the reason I ended up in a maths and stats school was because I spent some time in industry. So if we go right back, I actually started my undergrad at Otago University and went right the way through to masters. And my masters was actually in cognitive neuroscience, so doing MRI and not anything to do with what I currently research. And after I finished my master's, I knew I wanted to be in research. I loved the post-grad experience. I didn't know what I wanted to do for a PhD.
And I cannot remember who told me, but someone who is very sagely and who I should probably have remembered was like, if you don't know what you want to do for a PhD, then don't do a PhD; you can come back to it. So I ended up going into work. I worked at Stats NZ for about six years up in Wellington, and after a couple of years there I was like, yeah, I think I have a better idea of what I want to do. And it wasn't so much a topic that I had in mind, but the type of things that I wanted to learn from a PhD, the type of skill sets that I wanted to hone. So I wanted a PhD where I could focus in on particular skills that are quite hard to develop independently or purely through your work, which was structural equation modeling and Bayesian statistics, which I'd started picking up from my work at Stats NZ. And there were really awesome Bayesian statisticians there that were super helpful in pointing me in the right way and teaching me. And then that's when I started my PhD whilst working at Stats NZ and started collaborating in a few different directions. I'd say I actually started my PhD in gut microbiome research. Oh, yeah. And then I published one paper, maybe two papers, in that area. And then I realized that it was a scam. So I wasn't really happy with the field. I can talk a bit about that. A scam's a little bit of a harsh word. I was about to say, calling gut microbiome a scam... but I can see what you mean, as someone who's literally just written a proposal for a gut microbiome project. Yeah, I can see what you mean. Don't snitch. Yeah, yeah, yeah. Don't peek under the curtain. We'll just close that can again. But yeah, actually, so once I did my PAM, and I forget, what does PAM stand for? Project Assessment Meeting or something? Yeah, so after your 12 months of doing research, you have to present what you're going to do for the rest of your thesis. And I did my entire PAM on gut microbiome. And I already knew at that point that I wanted to change into another line of research on authoritarianism. And I hadn't told my supervisor Paul. So I did finish doing my PAM, I got approval to do gut microbiome research for the rest of it, and then I met with Paul, my PhD supervisor, and I was like, by the way, I don't want to do any of this anymore. I actually want to do this other line of research that I find super interesting. And for people that would know Paul, he was incredibly supportive and just laughed it off and thought it was more humorous than anything and jumped on board and supported that new line of research. So that was on authoritarianism, and it was looking at COVID-19, which, of course... that's not to say that I would have found it anyway. So I finished my PhD and managed to swing straight into a role down at the University of Canterbury, which was very, very, very fortunate and just absurd timing more than anything. They were looking for someone that had experience in industry that was willing to move back into academia, which is not very common. And at the time it was during COVID, so they had some issues hiring people from overseas and having them come into the country to start their roles. So they weren't accepting overseas applicants. So the pool of applicants was a lot smaller, and it was already a small pool of people that they wanted to bring in from industry. So I was very, very lucky.
So as far as academia goes, as you might know, you take the job where you can find it. You don't get picky with these things. But it's actually been awesome. Working in maths and stats surrounds me with people that can certainly challenge my thinking and my technical ability, as well as being a pool or a wealth of knowledge just down the hallway that I can go and talk with, and teaching into those papers was obviously challenging for a psychologist, which keeps you on your toes as well. But that's the very long history as to how I find myself where I am now. But yeah, I mean, if you want me to keep the ramble going, I can circle back to the gut microbiome stuff just to do some backpedaling if you'd like. All right. Should we let him do some backpedaling? Oh, yeah, yeah, yeah, yeah. Who doesn't want to hear more about the microbiomes? Yeah. For me, I'm quite curious as to how you found that transition from, as you said, psychology and then microbiome research, right? And then you also have this maths and statistical sphere of knowledge. How well do they complement each other? How difficult was the transition going from one to the other? Yeah, so a bit of a hot take, I guess, is that your base statistics are quite fundamental to most of psychology, I would say. So if you have a strong foundation in statistics, then that is going to be able to carry you through many different parts of many different disciplines within psychology. So I felt that the statistics has always been able to support me and can transition between those different aspects. I think the research process and how you approach your research can also carry across different areas. Where you come unstuck is by changing. And I don't think that this means you should never change. I also think if you've found your niche and you love it, then you should stay there for sure. But you shouldn't be too scared to change into a different field that interests you and apply those skills that you've developed to a new area. The biggest problem is that you now have to read up on an entirely new field, which, coming from gut microbiome research, was already quite difficult because you're reading across a broad interdisciplinary field. You're reading articles around physiology, immunology, then the psychology stuff, the neuroscience stuff, and you leave those bodies of literature behind and move over to something else. So there is that part of it. And I think that's the only part that you're essentially writing off in that respect. So, yeah. And so another question I had was along the lines of, you spent some time in industry, and I imagine there's a lot of work around doing research analysis and such. So what were the differences you found doing research in industry versus doing research in academia? Yeah, that's a really good question. I think, coming from a more statistical background, and I think this is true of a lot of students that I've seen that have left psychology and moved into industry, you're sort of like, is this statistics, what we're doing? Because I think that a lot of the information that you're providing for stakeholders, be it ministers or the public, is all descriptive. So there's a lot of descriptive information, descriptive statistics, sorry, and there's not a lot of more complex hypothesis testing as we would typically see within research.
That's not to say that you shouldn't do that. I think it's really important to do the analysis to support the trends and the patterns that you're identifying in the data when you release those top-line statistics that are just descriptives. But I think that's one of the main differences: the main body of statistics that you're putting out is descriptive. Most of the technical knowledge that you need, we don't actually cover within our undergraduate and postgraduate psychology programs. So things like sampling methodology for surveys, weighting methodologies, and understanding bias, variance and uncertainty within our statistics are not very well covered. And I think that some of the best people I've seen that can communicate statistics and research in industry are people that can communicate uncertainty really well to a lay audience. That's something that can be pretty mind-boggling, even for technical folk. Maybe I should put you on the spot here and ask you to communicate uncertainty to us. As someone who comes from a cognitive psychology background, I know statistics. I've done a bit of teaching in research methods and stuff. But like you said, it's not a lot. We don't do a lot of weighting work or anything like that. So if you could give us a general idea of uncertainty in statistics, that would be nice. So, yeah, what I mean by uncertainty at the level that I was just talking about: say that I'm reporting a trend on the unemployment rate and I'm saying unemployment has gone down, right? Well, if unemployment's gone from 4.2% to 4%, what's the margin of error around those two statistics? And is that actually a conclusion that I can make from that time series? When we go out and ask a subset of the national population, in this case with unemployment, we're asking 40,000 people per quarter in New Zealand whether or not they're unemployed. The unemployment rate within those 40,000 isn't going to match perfectly with the true unemployment rate, because there is a true unemployment rate for the whole country, and we're trying to infer from our sample what that true population parameter or statistic is. The difference between those is something that we're never going to know fully. But what we can start to understand is that level of grey, the uncertainty, the margin of error around our sample statistic, which can allow us to determine whether or not we're in the right ballpark of the true population parameter. That's how I would think about it in that context. In a more research-focused context, I think it's like p-values, right? Who can actually define what a p-value is? And I think that's one of the reasons why frequentist statistics are so silly: it's quite backward. I think understanding uncertainty is something that most people can start to think about and grasp. But when you try and flip all the logic by which we generate our understanding around uncertainty, which is what you do with frequentist statistics, then it just starts to melt minds, which is completely unnecessary, right? So like the p-value being the probability of observing the data that you got under the assumption that the null hypothesis is true. And it's like, but I don't care about the null hypothesis. I just want to know the probability that my hypothesis is true. Why am I thinking about everything ass backwards?
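To put a rough number on the margin-of-error idea in the unemployment example above, here is a minimal Python sketch. The 40,000-person quarterly sample and the 4% rate come from the conversation; the simple-random-sampling formula and the 1.96 normal approximation are simplifying assumptions, since a real labour force survey uses weighting and a complex design.

```python
import math

# Sketch of the sampling uncertainty described above: a sample proportion
# carries a margin of error around the unknown "true" population rate.
n = 40_000      # people surveyed per quarter (figure from the example above)
p_hat = 0.04    # observed unemployment rate in the sample

# Standard error of a proportion under simple random sampling. Real labour
# force surveys use weighting and complex designs, so treat this as a
# simplified lower bound on the true uncertainty.
se = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se  # approximate 95% margin of error

print(f"estimate: {p_hat:.1%} +/- {margin:.2%}")
# Roughly 4.0% +/- 0.19%, so a move from 4.2% to 4.0% sits close to the edge
# of what this simplified margin of error can distinguish.
```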
And that's, I think, where Bayesian statistics, which does have its own complications, can be quite good, because we actually start flipping that around and saying, okay, what's the probability that our hypothesis is true? And that becomes a lot easier to communicate, albeit that you're taking on a lot of that technical complexity yourself. Yeah, well, maybe that would be a good place to jump in: if you could give us a bit of an introduction into what Bayesian statistics are, what Bayesian methodology is, then we can probably talk about your work from there and get into research. Yeah, and could you also highlight what frequentist statistics are? Is that more like traditional statistics, or what are those as well? Yeah, I should say null hypothesis significance testing is what I'm talking about. So your typical generic statistical tests, where ultimately the fate of your hypothesis rides on the p-value. So in the reverse of that, with Bayesian statistics, well, I guess the best way to put it is that with null hypothesis significance testing, we're determining the probability of our data given a hypothesis. And then we flip that around with Bayesian statistics and we get the probability of our hypothesis given some data. We step away from using a p-value, so instead of saying what's the probability of the current observed data sample given that our null hypothesis is true, we instead say what's the probability that our hypothesis is true, which makes it a little bit more logical. That's, in essence, the main thing that I think is quite different, and flipping everything around can take quite a bit of time to get your head around. The other part of Bayesian statistics that makes it really awesome is that you can actually include prior knowledge about what you think your effect would look like. At a high level, it's pretty easy to say, oh, we include prior knowledge on what we think our effect looks like and we put that into the model, and it's like, okay, I can follow that. But what that actually looks like when the rubber hits the road, I think, is a completely different story. And I think that becomes quite long; that's sort of an entire couple of lectures, or a lecture series. Yeah, I can imagine. Which is sort of the annoying thing with Bayesian statistics, is that it can be a little bit impenetrable. But ultimately, and I assume I might lose some people here, for the people that are familiar with regression analysis: essentially, instead of having a single coefficient, my coefficient is represented by a distribution instead of a single point estimate. And then I can say, I think that my coefficient is somewhere in this normal distribution with a mean of this and a standard deviation of this. And then I can go and collect some data and I can see where that data fell within my normal distribution. And I can update or combine those two distributions to get a new posterior distribution that says, this is where we think the coefficient lies, given where we thought it would be versus where we observed it in our collected data. So, for example, if an arbitrary predictor has a coefficient of four, instead of saying four, you say, oh, but it's very unlikely that it'll be less than two or greater than five. Yeah, that's exactly it.
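As a minimal sketch of the coefficient-as-a-distribution idea just described, the conjugate normal-normal case below combines a prior on a regression coefficient with what the data say about it. The numbers loosely echo the "coefficient of four" example but are otherwise hypothetical, and real models of this kind would normally be fitted with an MCMC sampler rather than this closed-form shortcut.

```python
import numpy as np

# Conjugate normal-normal update for a single regression coefficient.
# Prior belief: the coefficient is around 4, and it is very unlikely to be
# below ~2 or above ~5 (hypothetical numbers echoing the example above).
prior_mean, prior_sd = 4.0, 0.75

# "Likelihood": what the collected data alone say about the coefficient,
# summarized as an estimate and its standard error (also hypothetical).
data_est, data_se = 3.2, 0.8

# Precision-weighted combination (precision = 1 / variance).
prior_prec = 1 / prior_sd**2
data_prec = 1 / data_se**2
post_prec = prior_prec + data_prec

post_mean = (prior_mean * prior_prec + data_est * data_prec) / post_prec
post_sd = np.sqrt(1 / post_prec)

print(f"posterior: mean = {post_mean:.2f}, sd = {post_sd:.2f}")
# The posterior mean lands between the prior (4.0) and the data (3.2),
# pulled toward whichever is more precise, and the posterior sd is narrower
# than either input: the "stronger inference" from combining the two.
```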
And we can use any distribution to represent that which makes most sense to us. It could be that we think the parameter is going to be very close to zero and it's not going to be negative, so we might want to use some sort of distribution that would bring us close to, like, a negative binomial distribution or something that would give us a positive coefficient that's really close to zero, or something like that. Or, most typically, you might just use a normal distribution. Yeah. I fear that a lot of people, after hearing the words regression, binomial distribution and so on, might be having sort of panic attacks. Yeah. We'll sprinkle that in for the nerds. We'll sprinkle that in for the nerds and then we can bring it back down to earth with interesting stuff. Yeah. Maybe we could jump into that. Because you said social identity theory is your jam at the moment, could you tell us what it is? And then maybe we can combine your Bayesian knowledge and your work in that area, and we can probably run with a few examples that might help understand things. Yeah. Well, I mean, social identity, like I said, that's my jam now, and that's the field that I'm moving into at the moment. I'd say I've been actively involved pushing into that area for about a year, so I'm still very fresh, still trying to get my head around a lot of the field, because it's a very in-depth field. But the basic rundown is that a lot of people in psychology would be familiar with the idea of social identity theory from, I think it's Tajfel, is how you would pronounce it, from the 80s, maybe even a little bit earlier. It's this idea that, as opposed to personality psychology, where you're looking at how an individual's sense of self might manifest, in social identity theory we're looking at how someone's identity manifests as a product of the groups that they associate with or identify with. And interestingly, that has been a fairly fundamental part of psychology and social psychology that has ticked along. And in recent years, the last couple of decades, I guess now, there's been a group of researchers at the University of Queensland that have really started picking this up, and they have produced their own standards of practice around interventions, Groups 4 Health, or the social identity approach. This is Alex Haslam and Jolanda Jetten, I'd put in there, that come to mind, Niklas Steffens, Tegan Cruwys; they're all absolutely amazing researchers that I've been following. And might I add that as I've moved into this, and as I've had questions and been starting to put together different projects around it, those researchers have been super helpful and very collaborative and awesome people to work with, always keen to help point you in the right direction and give you guidance. Just to clarify, because you ended up doing your PhD in authoritarianism: I'm assuming that, from a social identity theory perspective, someone who's acting sort of authoritatively would be doing so as a result of all their group interactions, as opposed to something innate which has led them to tend toward authoritarianism. Is that kind of the difference? I think so.
I think, well, I mean, current theory around authoritarianism is where you look at the individual as having a set of attitudes, or attitudinal clusters as they're called in the literature, that combine to form authoritarianism as more of an attitude than a personality trait. I think that is stepping away from what a lot of the literature had focused on, which was looking at it purely as: this person is authoritarian, that is how they are, and that's how they always will be. Whereas I was looking at it more as an attitude and saying, well, actually, maybe there's some predisposition to how authoritarian someone might behave, but the amount to which that authoritarian state is exhibited is based on a group of social factors, which mainly pertain to threat to the ingroup. That's, in a nutshell, what I was looking at for my PhD. So were the only factors that were looked at just the external factors in the group, or were there any innate traits taken into consideration as well, or any specific traits which were excluded just to refine what you're looking at? Yeah, so with authoritarianism, like I said, we're measuring one construct, which comprises three factors that form authoritarianism within an individual. And part of that, if someone has a high level of authoritarian aggression, part of that is going to be just their baseline level of how authoritarian they might be, which might have manifested developmentally. And then there's their current environment and the stress and the threats that they perceive towards the groups that they associate with. As you can imagine, in COVID-19 there was this massive societal threat to the group, or to the country, and in order to respond to that we had to increase the amount of social cohesion or connectedness within society in terms of how we react as a group. We had to all fall in line behind a single leader and follow a specific, rigorous set of rules in order to make sure that we survived, essentially. And it was about measuring that reaction that we saw in COVID-19. The interesting thing, one of the things in this research that flips it around, is that when we talk about authoritarianism in this area of research, we're not actually talking about authoritarian dictators. We're talking about authoritarian followers, not the leaders. That's a different thing, and that's more like social dominance orientation than authoritarianism, in terms of how we measure it and how we talk about it. But what we were talking about was authoritarian followers, and we were talking about the fact that it was actually a good thing: that having very high social cohesion, and people sacrificing their own autonomy to follow that of a leader, a group leader such as the New Zealand government, was actually a very good thing in order to address the societal threat of COVID-19. Which, yeah, makes it a little bit weird in terms of how you might conventionally think about authoritarianism and how it's a bad thing. And certainly that's the case, right? Authoritarianism, as we currently understand it, was born out of research immediately after World War Two, where we were like, okay, Nazis are bad. How can we start to understand how this type of behavior and this nastiness within a society can manifest? And that was where they settled on this idea of authoritarianism and trying to hone in on it as a personality.
And that was before we had even really gone down the rabbit hole of attitude-based research, which wasn't until much, much later in time, after Adorno. Adorno was like 1950. And then it wasn't until more in the 80s that you got Bob Altemeyer coming along and starting to do a lot more around authoritarianism. But always through that line of research, it had that undercurrent stemming from World War II. And it has always had that significant right-wing focus, which is another thing that I think has sort of marred the line of research on authoritarianism: we only look at it with a right-wing focus when, as a matter of fact, you can have left-wing authoritarians, authoritarian followers and leaders. It's not something that's unique to a particular political ideology, but certainly it is overrepresented within those who would identify as right wing. You bring up a very interesting point, which I was thinking of because you said that it has more to do with followers. You could then argue that people who are very staunchly supportive of any one political party, who are almost ride or die for that party... The USA would be a very good example, because there's a two-party system there and you have the Republicans and Democrats. There's an X percentage of the population which is always going to vote Republican, and then there's this X percent which are going to vote Democrat no matter what. Do you think that they both would manifest the same levels of authoritarianism if you were to measure them on those sorts of personality traits? Yeah. So I think that in US samples, where most modern authoritarian measures were developed, so they tend to work really well with US samples, you do find that there's overrepresentation within Republicans, but there's still people that would identify as Democrats that would be seen as authoritarian or would have high levels of authoritarianism. And certainly that varies a lot by the different factors within authoritarianism, which was another line of my research: actually saying that sometimes we need to look at the actual factors underneath, because although the Republicans could be very high in authoritarian aggression, that is to say they want to severely punish people that try and threaten the group, the in-group, it could be that Democrats have high levels of authoritarian submission: they're willing to follow or sacrifice their own autonomy in order to follow the group leader a lot more readily than, say, the Republican Party members, or something like that, just as a hypothetical. Could that in a sense be extrapolated to certain cultures across the world? Like, there are more collectivist cultures which are willing to sacrifice a bit more for the communal good than others which are more individualistic, and sort of that clash? Yeah, I think so. And that has been a thought that's crossed my mind, and it's so hard because you have to try and figure out how to measure the exact same construct in an entirely different culture. And one of the problems, and one of the reasons why I believe that there's a high level of Republican, not Republican specifically, but right-wing followers that show high levels of authoritarianism, is because the scales are all loaded ideologically towards a certain political stance.
It has things like certain questions about religion, and so on. In the ACT, so this is the Aggression-Conservatism-Traditionalism scale, which is the most commonly used scale within authoritarianism research, it has questions around, I think it's like, sex before marriage; it has questions on general religious freedom, off the top of my head. So these are immediately things that would prime people politically, I think, in their responses. So even if we suss that out in the English scale, if we wanted to translate that to Japanese, I think it would be nigh on impossible, and certainly the research, when I was looking into it, suggested that there are some issues. I was looking specifically at Japanese culture and how in Japan we could try and have an authoritarianism scale that we could compare cross-culturally, and they couldn't even identify the same three-factor construct within Japanese society. They could only come up with two factors, which I think could largely just be due to the translation and the actual items themselves. It might have to be something where you really have to hit the drawing board to try and come up with something that works cross-culturally for that specific context. Sorry, to answer your question, I would be very interested in comparing, because I think that different societies and cultures would exhibit different levels across these three factors. Yeah. But then, like you said, three factors and Japan only showing two, conceptually there might only be two factors there. We know certain languages have words for things which do not exist in English, certain concepts and stuff, right? So I assume that if certain concepts do not exist in English, why would the things which we decided on in English, or in America, because they were made in America, necessarily show up well in Japan or India or even in New Zealand? Maybe it does not manifest the same way in Māori as it does in Pākehā populations. Yeah, definitely. That's another option. But the reality is that we just don't know. We don't know which it is. We don't know if it's that the scale is not translating properly or that the concepts in the scale aren't translating properly. We just don't know. So it's definitely an area of literature that would be very ripe for future research. For me, I'm incredibly skeptical, so whenever I see an analysis that doesn't align with what I'm expecting to see, or something like that, I'll just assume that I'm an idiot and that I've done something wrong. If there are two factors instead of three, that's where I start. I'm like, okay, there's a reason for this in the data, some explanation around methodology or this, that, or the other thing that could explain what we've observed, before going down the route of specific constructs not translating, which is entirely possible. I lived in Japan for a couple of years and I had to park my sarcastic side for an entire two years because it turns out that doesn't translate well. That's not a thing in Japan. So, you mentioned that there are three factors, but only two could actually be sort of patterned in Japanese culture.
So, of these three factors, and this is a hypothetical, if we were to look at the New Zealand Pākehā population, how well do you think these three factors would translate into the populations here? So, I mean, I have done studies on New Zealand samples. I think three of the four studies in my thesis, and then, of course, a bunch of studies outside of my thesis that I published, were on New Zealand samples. So I can tell you, yeah, we do find the three constructs. We do find the same thing, and Labour supporters typically don't show it as much: there are not as many authoritarians represented within the Labour Party as there are in the National Party. But the three constructs do tend to hold up in Little America, New Zealand. Were you expecting, when you decided to try it out in the Japanese cultural groups, were you expecting all three factors to be accessible, or were you not sure whether that would translate well? Yeah, so that wasn't my research. That was a study that I read from the literature. So I was going in blind. I was like, what's being done in Japan that we could look at to understand what's happening in some of the research that we've studied in Western countries? The reason was purely that it's a non-Western culture that I have some understanding of, so it would be a bit easier for me to understand the cross-cultural differences. Not that I understand it ultimately well, but I mean, I think Japanese culture is very interesting in that, historically, if you think of the Tokugawa period, they spent like 150 years completely isolated, which caused more extensive homogeneity across their culture and society than what you might have seen in other countries that were more open across that period. And that can allow us to do cross-cultural research that can, I think, be quite insightful, if you can manage to get the measures right. That's the big thing, right? Managing to get the measures right. So, because we're talking about this and I want to link it back to your Bayesian statistics background: how would your approach to looking at cross-cultural research differ compared to, I guess, the traditional way we look at cross-cultural research? Where you're like, okay, we have this idea that probably these two cultures might be different on the scale, and then you measure them on the scale, and you do an analysis, and you control for various biases and things, and you look for the p-value of whether there's a significant difference or not. How would your approach differ? Yeah, so for me, if I've already done an analysis and I've found some effect and I wanted to understand how that effect might hold across two different countries, say, for example, then I would use that first study that I've conducted to form priors, Bayesian priors, that would inform the analysis in my follow-up, and that would significantly increase the statistical power of my analysis. But the other advantage to having that more advanced statistical power is that I can actually look at my prior, and the likelihood or what I observed in the data, and the posterior, and I can compare those three distributions. And I can say, okay, did my prior understanding actually make sense?
I think that when you talk about priors in Bayesian statistics, a lot of people are like, oh, so you can just make up your analysis. It's like, well, you can with frequentist statistics. And the difference between frequentist and Bayesian in this context is that with Bayesian statistics, it's transparent. I think that if you're going to publish with priors, you show all three distributions. And if you can see that, it's like, you have your prior distribution over here, sorry, I'm not sure where my camera is cutting off, but out yonder you have your prior distribution, and then you have your observed data over in the other ballpark, and then your posterior is being dragged off towards your prior. You would say, well, that prior is incredibly dumb. It wasn't even close to the data that you observed. If you can't explain that, then you have to change your prior. That makes no sense. So I love that transparency, and I love that it gives us that extra level of nuance as to, like, how has this data challenged our thinking or not? So that's the difference in my approach when I take a Bayesian lens to the research that I'm doing. Then, for the actual statistics that I use, what happens is you get a posterior distribution, which is what you use for your analysis. It'll give you, instead of a 95% confidence interval, a 95% credible interval, which is slightly different. But ultimately what we have is a distribution of estimates of what we think the parameter is. And we can look at that distribution and we can say, of that distribution of samples of estimates, how many of them are greater than zero? 98%. Well, then we're 98% sure that our effect is above zero. I think, so, as part of my AI courses I have done a little bit of probability work, but for some reason that's my weakest area. It's sort of like a chemist who doesn't understand the significance of precise ratios in baking. So I've just got no clue. So, as a refresher, how would you define a posterior, a prior and a likelihood, these three key components of Bayesian equations? Yeah, so to keep it pretty simple, the likelihood is the data that we've observed, and if you did a traditional statistical test, then the result of that would be exactly what your likelihood is. Okay. Okay. So that part, in a sense, that part's the same. Sticking to your comparing-cultures example, we've gone and collected data and we've compared and we've found something, some X thing, or this thing is related or whatever. So that's our likelihood. Yeah, exactly. So say I do a t-test to find a difference between the two countries that I was looking at. I have a t-test and I have a p-value, and my likelihood would be the exact same distribution and p-value that I would get with a traditional frequentist analysis. And then my prior is a distribution that would look like data that I've collected, but it's my prior thought of what the data would look like. So for that part you have not collected any data; you've designed it based on, from my knowledge of what I have seen in previous literature, I expect to see whatever distribution, or whatever. Yes, correct. All right. That's your prior. Then what I can do is combine those together. And I use sampling and some simulation-y type magic in order to do that. And I come up with a posterior distribution. So the posterior is a function of the prior and likelihood.
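Here is a small sketch of how the posterior summaries described above are read off posterior samples: the 95% credible interval and the "probability the effect is above zero". The draws are simulated from a normal distribution purely for illustration; in a real analysis they would come from the sampler.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend these are posterior draws for an effect of interest. In practice
# they would come out of an MCMC sampler; here they are simulated from a
# normal distribution purely for illustration.
posterior = rng.normal(loc=0.35, scale=0.18, size=10_000)

# 95% credible interval: the central 95% of the posterior distribution.
lo, hi = np.percentile(posterior, [2.5, 97.5])

# Probability that the effect is greater than zero, read straight off the
# posterior samples (the quantity contrasted above with the p-value's
# "probability of the data given the null").
p_positive = (posterior > 0).mean()

print(f"95% credible interval: [{lo:.2f}, {hi:.2f}]")
print(f"P(effect > 0) = {p_positive:.1%}")
```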
So you can see that I could come up with a prior and say, I am completely sure that I'm going to find a massive effect, and then I observe data saying, I don't think there's any effect, and my posterior would say there's an effect. Effectively, I've forced my analysis to tell me that there's significance. Yeah, that's what I was thinking. If it's combining the likelihood and the prior, you could basically combine it in such a way that you always see something in the posterior, right? Or are there checks for this? Yeah. Well, yeah, that's exactly what I would say: you have checks for it. And when you report these analyses, you show the three distributions. So if you have your likelihood and your prior, and then your posterior somewhere in the middle and they're not overlapping, you would say, this is ridiculous. This doesn't make sense. So you need to show those distributions and defend the priors that you've selected in order to use informed Bayesian priors appropriately. And that transparency, I think, is a very good thing, and the insight that you get by challenging what you thought you would find versus what you did find is also very insightful. But yeah, it is an evolving field, so there's no sort of standard way to report these analyses in traditional journal articles, where you have to fit within certain word counts and you can only use so many figures and things like that. Sometimes your supplementary material can blow out quite crazily, justifying some of these things and doing things like sensitivity analysis, so trying different priors just to show the amount of influence that your prior might be having on your data. So, just trying to think of it through a simple example: say we're flipping a coin, but let's imagine a world where we have never flipped any coins before. In that sort of scenario, what would our prior be? Would we just logically deduce that, okay, by the looks of it, there's only two ways it can go, it can go on one side or the other, so it's got to be 50-50? Or is it just assumed to be null or void? How would you deduce the prior there? That's the thing. So there's different... More broadly, beyond that one example, there's a number of different ways that you can generate a prior. You can have a non-informative prior, in that you say, we think anything's possible. You can have weakly informative priors where, say you're trying to estimate people's height and your predictor is weight, well, you could have a weakly informative prior on that and say, we don't think weight is going to be less than zero and we don't think it's going to be more than 300 kilos, right? That wouldn't make sense. So we can actually hone in our analysis as to what would be logical in terms of expected relationships. Sorry, that's a bad example, because I actually used the measured observation of weight as opposed to the coefficient. We don't think that if weight increases by one kilo, height is going to increase by one meter, right? It'll be less than a meter, and it's probably not going to be zero. So we can start to come up with pretty logical estimates or priors to try and rein in our analysis, which increases statistical power in some cases.
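For the never-flipped-coin question, one common textbook choice (not necessarily what Dr. Winter would reach for) is a Beta prior on the probability of heads, where Beta(1, 1) is the flat, non-informative "anything's possible" prior. The sketch below updates it with made-up flips using the standard Beta-Binomial conjugate result.

```python
from scipy import stats

# Prior on the probability of heads for a coin we have never flipped.
# Beta(1, 1) is flat over [0, 1]: a non-informative "anything's possible" prior.
prior_a, prior_b = 1, 1

# Hypothetical data: 10 flips, 7 heads.
heads, tails = 7, 3

# Beta-Binomial conjugacy: the posterior is Beta(a + heads, b + tails).
posterior = stats.beta(prior_a + heads, prior_b + tails)

print(f"posterior mean P(heads): {posterior.mean():.2f}")
print(f"95% credible interval:   {posterior.ppf(0.025):.2f} to {posterior.ppf(0.975):.2f}")

# A weakly informative alternative ("we expect something near fair") could be
# Beta(5, 5); swapping it in and rerunning is exactly the kind of prior
# sensitivity check discussed later in the conversation.
```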
With weakly informative priors, it's more about efficiency in your sampling and reining in your parameter space than actually increasing statistical power. To go the next step beyond weakly informative priors, that's where it comes to informative priors. It becomes a bit more arcane, I guess, because on one hand, you could talk to a bunch of experts, right? So say, for the height versus weight example, we could go and talk to a bunch of dietitians or doctors and try and understand, okay, how much should we expect height to increase relative to weight? What makes sense? We talk to a bunch of experts. So that's expert-elicited priors. Or we can look at people who have done studies before, and we can look at what their coefficients were and what their confidence intervals were, and we can start to piece together what we might want to have for a prior. And typically you would use that as a starting point, and you would increase or flatten out those distributions that you observed from the previous literature to account for some variance between their study and yours, because you also want your data to be more important than your prior. So you want to make sure that you weaken the prior enough that the data is going to have the last say, I guess, on the posterior distribution. And yeah, taking it to the other side now, to your likelihood, so the data which you have: can't you just make a prior which exactly fits with that? Because, you see, I see a difference, and therefore I only take in information which sort of supports this, and therefore whatever comes out in the posterior analysis is exactly what I thought was going to happen, because that's what my data says. So isn't that another risk? Yeah, well, I mean, it's a risk. Well, what you're doing is you're fabricating your results. And I think that that's what's cool about Bayesian analysis: you submit that for review to a journal, and then it goes to me for review, and I look at it and I'm like, how did you come up with this prior? It makes no sense. Where did the prior come from? Justify it. Show me what it looks like relative to the other distributions. And if you can't do that, then you're getting a fat reject. I think that's quite malicious data manipulation, and if you were to do that in a frequentist paradigm, well, that's actually hidden. You know, if you're making up data or something like that, then there's no way for me to know. That's where it comes down to trust in the integrity of the people that submit the article to the journal. So, sort of relating this to your work on authoritarianism: in this case, your prior would be the results from the models from the US studies, as well as some from the Japanese studies. Yeah, so actually, if I look at where I've used informative priors in authoritarianism research, one good example would be a study that was longitudinal. So we studied, I think, about 1600 people, a community sample in Aotearoa, New Zealand, and we looked at their levels of authoritarianism. And then we followed them up. That was at alert level four and three, so high alert, high COVID restrictions and lockdowns. And then we did a follow-up at alert level one, which is essentially back to normal.
And we showed that, longitudinally, people's authoritarianism decreased, and our explanation, our interpretation of that, was that the threat of COVID-19 had dissipated, so therefore authoritarian personalities or attitudes had diminished accordingly. So we used the effects that we observed in that longitudinal study to conduct an experimental study where we exposed one group of people to a prime that threatened them about COVID-19 and said COVID-19 is really bad and it's going to get you. And then we had the controls, which said COVID-19 is not a problem, don't worry about it, move on. So that was a control and an experimental manipulation, and we were able to use the longitudinal study as a prior on the experimental effect in our experiment, which was good because we were collecting that data, we were paying participants for that experimental study. So then it becomes a compromise of resource, and we were able to increase our statistical power and get more value from our data collection through the use of informative priors. Nice. So I'm just going to try, just to gauge my understanding of this, I'm going to try and come up with a little example. Say we are trying to assess the population size of a country. So you've got some data for, say, a four-year span: in 2008 you've got the population, in 2012 you've got the population, and you've seen how much that's changed. And now you've got an estimated count of what the population is. So from that prior and this sort of observation, you can deduce how good the estimate you currently have is, right? So if you've got your prior from the 2008 to 2012 growth, and now say we're in 2018, what we found in 2018, how likely is it that this is the correct number, based on that prior and this likelihood? Yeah, so you could do that. You could use that as a prior. But if it was the same sample or the same longitudinal data set, then you could actually just throw it into a single analysis and have an effect for time, and that would probably have more statistical power than separating it out and having it as a prior. So if at all you can just increase the observed data, then that would be your first protocol. And as soon as it makes sense, or rather as soon as it doesn't make sense, to include it in your sample, then you'd start moving towards priors. Because one thing which you said earlier that caught my attention was you talking about how you need to show your working, and for a reviewer to be like, okay, this Bayesian model makes sense, this is correct, you've clearly accounted for all the sorts of data manipulation one could do. Considering your Bayesian understanding came from working out in industry with Stats New Zealand, do they have the same checks and balances, or can countries, political parties, whatever, not have as stringent a restriction and sort of manipulate what they want to some extent? Yeah, I would say that we were very robust when I was working at Stats New Zealand, mainly because we had a large talent pool of people who were very good with Bayesian statistics. I'm not counting myself among those people. These people, they're the atua of Bayesian statistics in my eyes, who taught me. So between them, I think that there was quite a lot of scrutiny and a lot of checks and balances put in place and a lot of sensitivity analysis.
So running the analysis with a range of different priors to make sure that the prior is behaving as you believe it should was really important. And I think that's less the case in academia. One reason is that that pure breadth of analysis would be a technical document that would exist within the organization in industry, but it's not a technical document that you would produce within academia, and certainly not one that would typically make it through to submission. That's one issue. The other issue within academia with Bayesian statistics is that reviewers don't really know what they're looking at. They don't know about Bayesian analysis and what they should or shouldn't be looking for. And that was one thing I found really hard, because as I was learning Bayesian statistics in the context that I wanted to implement it within my field of psychology, basically, like I said before, I would just use non-informative priors and say it could be anything, because I was scared that a reviewer was going to say that I just made up the data and then I'd get rejected. So I took the most conservative approach I possibly could, and then, as I gained confidence and understanding with these models, I started to do things like weakly informative priors and see how that goes. And then I would submit an article using weakly informative priors, and I'd get a reviewer that's like, I thought the advantage of Bayesian statistics was that you could include prior knowledge, why didn't you do that? So I'm like, okay, this reviewer is actually complaining that I didn't use informative priors. Maybe I should be giving this a try and I should just push it. And that's how I sort of pushed the boundaries and tried to understand where I should be going. It's typically hit and miss with reviewers, just because it's a new area. It's not that reviewers are idiots, because obviously I'm a reviewer myself and I hope I'm not too much of an idiot. But it's just that it's a new area of statistics that not a lot of people know about, which is sort of dangerous, because I would certainly like it if my reviewers knew a lot more about Bayesian statistics and could challenge my thinking, because that gives me an extra level of assurance in my own work. But yeah, it's a tricky new area. And I think that there's more checks and balances in industry, and it's more understood in industry relative to psychology. Obviously, in maths and stats we have Bayesian statisticians, but in the field of psychology it's less common. It's not as greatly understood. We use more like Bayes factors and other things that get spat out at you, not much in the informative space. Yeah, that's very interesting, because I would have thought, you know, considering most of these sorts of models are developed at some university somewhere, generally speaking, and then they're brought into industry, I would have thought that there'd be better checks and balances in academia than in the industries themselves. But the way you're saying it, I guess it depends on which field you're looking at, because I assume if you put something with Bayesian statistics in a stats paper, in a mathematics paper, you're more likely to get scrutinized than you are in a psychology paper, just because of the knowledge that's around it.
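A rough sketch of the prior sensitivity analysis mentioned above: rerun the same analysis under several priors, from an informative prior based on a previous study, through a deliberately flattened version of it, to an effectively flat prior, and check how far the posterior moves. The numbers and the conjugate normal shortcut are purely illustrative assumptions.

```python
import numpy as np

def posterior_normal(prior_mean, prior_sd, data_est, data_se):
    """Conjugate normal-normal posterior for a single coefficient."""
    w_prior, w_data = 1 / prior_sd**2, 1 / data_se**2
    mean = (prior_mean * w_prior + data_est * w_data) / (w_prior + w_data)
    sd = np.sqrt(1 / (w_prior + w_data))
    return mean, sd

# Hypothetical summary of what the current study's data say on their own.
data_est, data_se = 0.30, 0.12

# The same data under progressively weaker priors, including a deliberately
# flattened version of a previous study's estimate.
priors = {
    "informative (previous study)": (0.45, 0.10),
    "weakened previous study":      (0.45, 0.25),  # sd inflated so the data dominate
    "weakly informative":           (0.00, 1.00),
    "effectively flat":             (0.00, 100.0),
}

for label, (m, s) in priors.items():
    post_m, post_s = posterior_normal(m, s, data_est, data_se)
    print(f"{label:30s} posterior mean = {post_m:.2f}, sd = {post_s:.2f}")

# If the posterior barely moves across these rows, the data are doing the
# talking; if it swings with the prior, the prior needs defending (or more data).
```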
I think that, just generally, the state of statistics in psychology is pretty cooked, because people know enough to be dangerous, right? And nothing in what we do, nothing in our research, incentivizes us to be good with statistics. You know, if you can get a p-value less than 0.05, then you can submit your article, and that is the extent of statistical knowledge you need to be aware of. And it's hugely problematic. It can be quite tricky, especially if I'm looking at doing statistical analysis. I would think I know a bit more about statistics than the average psychologist, but I would spend significantly longer doing statistical analysis than they might, because I'm doing extra checks, or I'm using a more advanced approach, or whatever the case may be. And that's sort of a tough pill to swallow: that ultimately you could be holding yourself to a higher standard, spending more time, decreasing the likelihood that you find spurious significant results, which we're not at all incentivized to do. But I personally found that very annoying about academia. This is where we can touch back on the capitalism statement we made at the start: you're really incentivized to publish, and the only way you're actually going to publish is by showing effects. And some people don't realize how important it is to show things where, look, there was not an effect here. And then a lot of information actually gets lost. There's more incentive to sort of manipulate your experiments to show an effect, and therefore those will get published, and you need those publications because that's how you're going to get money. Otherwise you're not going to earn, you'll be out of a job. So there's a whole negative impact which that has. Yeah. There's one question I did have about statistical modeling in academia versus industry. When you're in academia, you naturally make a fair number of assumptions when you are building your model, and people who are familiar with a basic linear regression model will know about the error term at the end, which is supposed to encapsulate every error known to mankind, or humankind, represented in that one single variable. How does this variable's influence change when you are modeling in industry, but also how is this error term adjusted in Bayesian regression compared to traditional statistical modeling? Yeah, so I think that, you know, industry is not... I wouldn't put it on a pedestal. I would say that there are some advantages, and like I said, we're more thorough in some cases than others. But ultimately, when you move more into the private sector, you have the same problem that you see in academia, in that the client is paying for a certain result. And if you can support that result, then the certainty of it is sort of a moot point. So there are certainly some consultancies kicking around Wellington that have produced some nigh-on criminal statistical analysis for Crown clients, because that's what the client was looking for and they gave them what they wanted. So, yeah, I guess I should just caveat that and say that industry, I wouldn't put on a pedestal, but we do care about error a lot more.
And especially within Crown, I think we are quite careful, because we need to make sure that whatever advice we're giving to ministers, whether or not they want to throw it out and work on vibes and feels is up to them, but we have to make sure that whatever information we do provide to them is sound and that they can stand on it. If you think about GDP, a minister standing up in Parliament doesn't need to know how GDP is created or how it's calculated, but they need to have the trust that when they say it went down, it did go down. So that's the level of credibility and confidence that we need to give our ministers when we're working in Crown. So we have to understand what that error term's doing and what all the little gremlins are within it. And the understanding of that error term might be spread across multiple teams, as opposed to being within a single individual like it might be in academia, but it's definitely understood and the knowledge is there. With Bayesian statistics, though, I mean, that's sort of a separate question altogether, because you can actually start modeling error terms: what you think the error would look like and how that should be captured, and whether it's a bias or a variance. And you can actually put priors on those things or parameterize them in pretty fun ways. But yeah, that's sort of a different thing. So Bayesian modeling just provides a little bit more flexibility with what you can do with the error term compared to just a basic linear regression model? In my observation, it's been a little bit easier to model around it, but yes, more flexibility. But also remember that when we have the posterior distribution, the combination of our prior and likelihood, it's a distribution. It's not a point estimate with a confidence interval. It's a distribution. So we can really start to understand how the parameter can be represented within our observations and prior knowledge with respect to the population. One thing which I think I've got a fundamental understanding of is probabilistic machine learning. And based on this conversation, I'm just realizing that we tend to have to start with no prior, so just, I'm sorry, what was the term you used, some non-informative prior. And then, as the model trains, the result from the first model becomes the prior, and as you keep training, the prior is adjusted as the model grows. But that also allows the flexibility of introducing an informed prior, so that if you want to train a machine learning model with an informed prior, you have the option. Yeah, and definitely my observation has been that within AI or machine learning research, data science, Bayesian statistics has been growing in popularity, because one of the issues with machine learning models is that with things like conventional statistics, we're essentially imposing what rules we have around the relationships in the data, whereas with machine learning, or different machine learning approaches, we are essentially shoveling our data into a model and saying, tell us what the rules are that can associate these different variables to predict an outcome, or whatever the case may be. So because of that, and because of the multidimensionality that you see more in machine learning, so more parameters than what you typically might handle within conventional statistics, you need an absolute buttload of data.
Instead of having a sample of 100 or 200 people, you might need hundreds of thousands of people in order to train your model. But if we know very well what some of these relationships are, then we can actually use informative priors to limit our parameter space, and we can train machine learning or AI models on much smaller data sets than we might otherwise have needed, which is more cost efficient for one, more resource efficient, and it allows you to have quicker turnarounds for clients. I imagine it also could be, in some ways, a bit more ethical, because you're not requiring a large amount of data. You're not attempting to breach anything or use data you're not authorized to use; you're creating a prior from what already exists, and from there you can work with the data you are authorized to use, rather than trying to get past those hurdles to obtain a ridiculously large data set. Yeah, exactly. I mean, if you're talking about research, if you're wanting to do piloting and things like that using smaller samples, I think it's far more than just ethical. It's not just ethical in the sense that we're having less of an impact and burdening people less by asking them to give us data, assuming that we're studying people. So one part is the ethical component of being more burdensome on respondents or participants than we need to be, and by using informative priors we can reduce that. We can get pilots off the ground pretty easily as well, pilot studies where we're pretty sure of something, to support things like funding applications or whatever. But the other thing is realizing that, if we're talking about academia, most of our research funding is taxpayers' dollars. I think we sometimes underestimate what our fiscal responsibility is around spending that money within our university academic context. If we don't need to gather as big a data set as we otherwise would, and there are better, more cost-effective ways of doing it, then we should be looking at that. And naturally, you should also point out the fact that the priors could contain biases, right? Yeah, they could do. And that's why, like I said, it's particularly good that you can have quite well-informed priors for things like a pilot study, and that tells you whether or not you should be putting investment into more significant data collection efforts. Certainly, and this is purely anecdotal, I don't actually have anything beyond the anecdote, but I was hearing about people that had been talking with pharmaceutical companies where they were using informative priors, because it's so incredibly costly to be researching drugs. Certainly they can't bring a drug through all clinical testing using Bayesian priors, but in the early stages they can quickly identify whether or not a drug is worth pursuing further, using informative priors to reduce the amount of spend they're investing into drugs that might not land where they want them to land. Nice. Just because we've recorded for about an hour and fifteen, maybe we spend the last 10 minutes highlighting some of the work you're currently doing, because apart from authoritarianism and a little bit of bagging on the gut microbiome, we've mostly just used examples. So could you talk about some of your current work which you think is really interesting?
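[Editor's note: before moving on to the current projects, a quick numerical sketch of the informative-prior point above. All names and numbers here are hypothetical, not from the episode; it is a standard conjugate Normal update showing how an informative prior tightens the posterior from a small pilot sample.]

import numpy as np

# A hypothetical pilot study with only n = 15 participants measuring some effect.
# The "true" effect of 0.5 and all numbers here are illustrative only.
rng = np.random.default_rng(1)
n, true_effect, noise_sd = 15, 0.5, 1.0
data = rng.normal(true_effect, noise_sd, size=n)

def posterior_for_mean(data, prior_mean, prior_sd, noise_sd):
    """Conjugate Normal-Normal update with known noise sd: returns the posterior
    mean and sd of the effect under a Normal(prior_mean, prior_sd) prior."""
    prior_prec = 1.0 / prior_sd ** 2           # precision = 1 / variance
    data_prec = len(data) / noise_sd ** 2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * data.mean())
    return post_mean, float(np.sqrt(post_var))

# Vague prior: we claim to know almost nothing going in.
print("vague prior:      ", posterior_for_mean(data, 0.0, 10.0, noise_sd))

# Informative prior: earlier work suggests the effect sits near 0.5 (sd 0.2).
# The posterior is noticeably tighter, so the same small sample buys more certainty.
print("informative prior:", posterior_for_mean(data, 0.5, 0.2, noise_sd))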
Yeah, I'll let you choose what you'd like to talk about. Sure. Well, I'll talk about the gut microbiome thing and start there. One study we did actually was this cool study with Tamlyn Connor from Otago University. She's a nutrition researcher; health psychologist is a better way to put it. We got a whole bunch of blood tests from about a thousand people and we were able to look at inflammation, and we were using inflammation as a proxy for gut microbiome health, which has significant limitations. Basically that was stemming off this idea of the leaky gut hypothesis, the idea that your gut permeability changes when the microbiome is not in the correct balance, and therefore you get these pro-inflammatory things leaking through your gut and manifesting as changes in your bloodstream, and also getting up to the brain and causing you to feel more crap because you have that systemic inflammation in your body. The thing we found in that research, we did find that there was an association, that people with higher levels of inflammation did feel a bit more stink. But what we found crazy, crazy, crazy was that women who were taking the oral contraceptive pill had ten times the amount of inflammation as anything diet related. People could have the worst diet and it would not impact on their... Sorry, my grandma's just coming in to get her magazine. It would not have the same impact on your gut microbiome and inflammation as taking the oral contraceptive pill, which was wild. That's insane. Yeah. And depending on what type of pill it was, the brand, there were actually differences between the brands and how much inflammation they caused as well. And that was one of the things that sort of put me off doing gut microbiome research, because I was like, well, of all the different things that could be affecting your mental well-being, gut microbiome is pretty far down the list in terms of how readily we can pose an intervention on it. And certainly things like having a cleaner diet would make you feel better, and often it does make you feel better, but that's more of a nutritional argument than an actual microbiome argument. I just thought, okay, well, if you sleep better, if you exercise, then that is going to increase your wellbeing a thousandfold more than anything microbiome related. So do I really want to continue down this line of research when it's such a small effect and there could be other avenues to explore that could be more productive or manifest a greater sense of wellbeing, both in me and in participants? So that was the microbiome research that I'd done there. And that's what pushed me towards authoritarianism, when I went down that route. We talked about some of the studies that I did there, where we did longitudinal studies looking at changes in authoritarianism, and we did some experimental stuff there. And it was when I actually completed my PhD that one of the examiners was like, well, couldn't you just explain this with social identity theory? And then I was like, damn, my entire thesis is a lie, and I could explain this entire process with social identity theory. And one of the things that I started looking at there was the social identity approach, like I spoke about, with some of those smart cookies over in Australia at the moment.
The effects that we were seeing there are amazing. First of all, in terms of the negative effects, we find that things like loneliness have a similar effect on your well-being as moderate levels of smoking. It can be just as damaging to your health as smoking, which is wild. And that's why we sometimes talk about loneliness as a pandemic. And then when we look at some of the interventions being done that have a social identity focus, we see that the social identity approaches being used can be comparable to cognitive behavior therapy and sometimes even better. Some of the research there shows that when people experience a social identity approach, Groups for Health, versus cognitive behavior therapy, at six-month follow-up the level of loneliness in the people that had the social identity based approach would be lower than in those that just experienced cognitive behavior therapy. So when I was reading some of that research I was like, this is interesting, this is really cool stuff, and that's something that motivated me to research that area more, just because I see so much potential. I know you're not a clinical person at all, but I just wanted to ask because it was very interesting. What would a social identity approach look like compared to a cognitive therapy approach? Would you have any idea? You don't need to go into detail, of course. Yeah, well, it's really group based. That's basically the essence of it: it's more group based, and it's about attachment to identity and the norms that form around that identity, so the typical behaviors and beliefs of those within the group, and the connection that arises from that is therapeutically relevant. So if you're looking at some of the research that we will be doing, I have a PhD student at the moment, Courtney Matthews, and she is looking at something that looks very similar to these Groups for Health interventions, which is Dungeons and Dragons, where you get a group of people that come together, talking about the same thing, sharing norms and values and beliefs, and manifesting that sense of social connection that can be quite therapeutic and help bolster and maintain a high sense of mental well-being. So that's some of the research that I'm quite interested in, and it's starting up pretty much next month, which is when we actually start collection. Alongside our neoliberal study as well. That must be some fun experiments to run. Yeah, it's very resource intensive, but we're really looking forward to gathering that data. Have you got a set of adventures you're going to use as your defined environment for... Well, funny you say that. Sorry, I'm swinging around on my chair. It's because I whakapapa Māori, and I've been really fortunate in later life that I've been exposed to a lot of people that have really strong cultural knowledge. Some people from the northern iwi, such as Ririwai Fox, who you had on here, and Fin, he's been on here as well. Yep. But then also I've had the opportunity to work with a lot of people in my own iwi, Ngāi Tahu, down here, and learn from them. But that's not an opportunity that a lot of people are afforded. So one of the things that had been on my mind is how can we combine these social identity approaches with this undercurrent of cultural reconnection, in a way that's not really confrontational, that has a sort of low barrier to entry, right?
Because, with some of this idea, if you look at Ririwai's work, he'll talk about cultural embeddedness, and he talks about how your ethnicity brings you to the door of the marae, but it's your embeddedness that happens when you walk through. Sometimes it can be quite difficult for people to take that step through. They don't feel that they have the knowledge that they need, which is not to say that you actually need that knowledge, but of course there's this expectation, or this stigma on individuals, that they need to have a certain level of Māori knowledge to engage within their community or with the iwi. So how can we use Dungeons and Dragons, and have a campaign that's based around Pūrākau, or Māori mythology, that can help teach cultural values and beliefs and practices in a way that's fun, engaging, and approachable? So, yeah. As to which campaigns we're doing, we're hoping that we'll be able to start with some basic modules, just to try and understand the basics of how it's all working from a psychological perspective and within a social identity framework. And then we're hoping to move towards including that cultural component as well, of cultural reconnection, and have pūrākau-based interventions. So I've got a two-part question. How do you choose the participants for the experiment, and can you participate in them virtually? Yeah. So we will definitely be advertising for participants. In the first study, all we want to do is confirm our suspicion that those who play or participate in Dungeons and Dragons see that grouping, or that identity, as a part of their own identity, and that they draw on those experiences in their everyday lives. We just want to say, okay, are we on the right track here? Study two is going to be the longitudinal part, and that's where we're going to say, let's take people who have never played Dungeons and Dragons and follow them over time and watch the manifestation of that identity. So yeah, that could be where we do a bit of a ring-around for those that are keen. Oh, that sounds exciting. The hardest part is just getting DMs and making sure that we compensate everyone. Oh, I know the perfect person who would be keen to be a DM. One of the former producers of this show, Alex, runs his own podcast, which is all about dice-based role-playing games. Okay. Yeah. And it's funny how even Liza Bowden, when she came on, she's a statistics researcher up in Auckland, she also used Dungeons and Dragons and the 20-sided dice as the example for explaining p-values and probabilities to us. So I don't know how, but role-playing games seem to be a feature on this show somehow. We keep coming back to them. Yeah, yeah, yeah. Well, I came into it cold, so I get given a reading list by my PhD student. Courtney will tell me what I need to watch. And I needed to watch Critical Role on Amazon Prime, which I watched through, which was pretty cool. And yeah, the different podcasts that I need to pay attention to as well. What about that episode from the TV sitcom Community? They have a very critically acclaimed episode on Dungeons and Dragons too, actually. It definitely explored the social dynamics within the group playing it as the game progressed. So yeah, we can add, not to your reading list, but to your viewing list. Yeah, I should have said that's my viewing list. Yeah, watch those two Community episodes, they're really good. Definitely. Maybe it can be a source of memes for my future presentations. Of course.
There you go. Yeah. All right. I think on that note, and because we've been recording for about an hour and a half, it's probably time to end it there, although we could keep talking for ages. Thank you so much, Taylor, for coming on. One last thing which we ask of all our guests: if you had one piece of advice to give all the listeners, what would it be? I think, for those that are in psychology, don't be scared to change up what you're researching, which is really what I've done. If you find something interesting, then follow your nose, and don't be scared to jump into different fields and to collaborate with others and work with others and learn new things within academia. And for those that aren't in psychology, I think the same thing. When collaborating with others, whether it be in different sports or whatever the case may be, think about it in terms of working with others, not just in terms of research. If you're talking about it in a non-research context, when you're working with others you also learn about the norms and values of those groups and people. So if you don't have a group, then find one. And if you have one, say, for me, I go bouldering now, rock climbing, or if you go to badminton, or if you play Dungeons and Dragons, think about how that group influences your everyday thinking and beliefs, and what norms you take away from that group that you hold dear. And I think really analyzing that is going to be really insightful and open a lot of people's minds as to how influential the different groups are in their everyday life. Awesome. Thank you so much for that. Thank you. Any final thoughts from you, Amer? No, no. It'll be exciting to see when those flyers come up for the second experiment. Still waiting on ethics; it's been five weeks in ethics at the moment. It does ruin everything, doesn't it? Yeah, no comment. All right. Thank you so much, guys. Thank you, everyone, for listening. And yeah, until the next episode, take care. Bye. Okay. Go for dead.
