Let’s Talk Teaching

Game Plan: Transforming Maths Teaching through Strategic Play

Episode Summary

Game-based learning isn’t about making maths fun - it’s about making it stick. From strategy and reflection to rich investigations, we explore how games can reshape how students think, reason and engage with maths learning.

Episode Notes

For many teachers, games have long been a classroom staple - but their impact goes far beyond surface-level engagement. When designed and used with intent, game-based learning can transform mathematical thinking, improve classroom dynamics, and create lasting connections to content.

In this episode, Dr James Russo shares what makes a truly effective maths game, unpacking six key principles from his research and the importance of 'games into investigations'. Primary school teacher Matt Hale brings the classroom perspective, highlighting the joy, depth and unexpected power of games that promote strategic thinking, collaborative dialogue and differentiated learning.

You’ll also hear how games like Fact or Fiction, Multiple Mysteries, and Choc Chip Cookies are sparking genuine mathematical thinking - and why the simplicity of cards, dice and paper often wins over digital solutions.

In this episode, you’ll learn:

Topics we explore:

(00:00) Introduction to Game-Based Learning 
(02:00) Game-Based Learning vs Gamification
(03:27) Games That Built Deeper Thinking
(04:33) Stickiness and Replayability 
(05:51) Favourite Maths Games
(09:47) Six Principles of Effective Maths Games
(11:04) Turning Games into Investigations
(13:02) Deepening Learning by Changing Rules
(15:03) Games for Assessment 
(17:12) Sneaky Learning and Student Confidence
(18:05) Managing Competition in the Classroom
(19:47) Six Key Principles Explained
(26:40) Love Maths Website and Simple Games
(27:52) Non-Digital vs. Digital Games
(31:38) New International Research into Games
(33:36) How to Start Using Games in Teaching


Resources: 

Related research:


Special Guests: 

Dr James Russo
Senior Lecturer, School of Curriculum, Teaching and Inclusive Education
Faculty of Education, Monash University

Matt Hale
Teacher
St Patrick’s Primary School, Mentone

If you’re enjoying Let’s Talk Teaching, don’t forget to subscribe, rate and review! You can follow us on Instagram, X and Facebook, and share your thoughts on the show by using the hashtag #letstalkteachingpodcast. 

If you’re interested in hearing more about the short courses, undergraduate and postgraduate study options that Monash Education offers, please visit our website.

We are grateful for the support of Monash University’s Faculty of Education in producing this podcast.

Episode Transcription

[00:00:00] Rebecca Cooper: This podcast is recorded on the land of the Bunurong people of the Eastern Kulin Nation. We'd like to pay our respect to elders past and present, and acknowledge that this land was stolen and never ceded. Welcome to Let's Talk Teaching, the podcast created by teachers, for teachers.

[00:00:19] Dr Jo Blannin: Some real critical thinking that has to go into where does generative AI actually have a space?

[00:00:24] And where should it sit in a teacher's role? Because the teacher is still the pedagogical expert and the teacher is the one who knows the child, knows how they fit into the classroom and knows how they interact and they work and they learn best

[00:00:39] Rebecca Cooper: AI in education. It's a topic that incites plenty of excitement, uncertainty, and debate.

[00:00:46] While concerns around cheating and plagiarism often dominate discussions, AI has the potential to offer so much more.

[00:00:53] Dr Jo Blannin: Teachers are looking to generative AI to help them with what they are calling drudge work, [00:01:00] and so they're trying to use generative AI to plug that gap.

[00:01:03] Miguel Regalo: If we did not talk about generative AI, the students would be using it regardless, and it would be a disservice to us and to them

[00:01:13] If we didn't dedicate some brain power to figuring out how we'll adopt this as part of our practice.

[00:01:20] Rebecca Cooper: I'm Associate Professor Rebecca Cooper, Assistant Dean of Initial Teacher Education at Monash University's Faculty of Education. Each episode we engage with education experts and alumni to explore real challenges and innovations in the classroom, providing valuable insights that can be applied to your own teaching practice.

[00:01:41] Joining me today to chat about how we can best use this technology in learning is Dr Jo Blannin, who researches how technology continues to shape education, and Miguel Regalo, who works with colleagues at his school to implement AI in the classroom. Miguel, you work really closely with [00:02:00] teachers navigating AI.

[00:02:02] What are the really big questions and concerns that educators have got at the moment?

[00:02:07] Miguel Regalo: There's a variety of concerns, and speaking from my context, I have a range of colleagues who are pro AI who acknowledge the power that it has as a tool to support teaching and learning, but know nothing about how it works and what it can do.

[00:02:25] Uh, specifically speaking, and then I have on the other end colleagues who are a bit more skeptical about whether or not this tool actually might be a hindrance towards student progress and might be a hindrance towards their pedagogical choices. So the big questions there are, um, what is it and how can we use it effectively?

[00:02:47] In ways that are ethical and responsible.

[00:02:51] Rebecca Cooper: Yeah. Okay. So Jo, what is it?

[00:02:52] Dr Jo Blannin: What is AI? Okay, well, I'll just boil that down. I think you could probably do a lifetime's worth of study to answer that [00:03:00] question. So what we're talking about is generative AI. And so artificial intelligence is something that people have, um, seen in sci-fi movies, read about in books, seen in Star Trek and all of those things for generations now.

[00:03:15] But what we're talking about with AI has its roots in the 1950s, and it is a type of computer science that is built on algorithms - that is, computer programs that follow a series of steps to achieve a specific goal. So a recipe is an algorithm. You follow the instructions, you get a certain output. That's an algorithm.

[00:03:36] And AI, artificial intelligence, is a series of these algorithms, increasingly more and more complex, that together can generate something, an output. And so from the 1950s all the way through, we have been seeing different types of artificial intelligence. Now, the one that we talk about today is generative AI, because it lets [00:04:00] us generate something.

[00:04:01] And I hesitate to say it generates something new, because that's a very difficult frame to put around it - because what's new? But what it does is it takes that idea of algorithms that we've had since the first computers back in the 1800s - and those were women; they were called computers, and they were just women doing mathematical calculations - all the way up to today, working out what these outputs should be based on a set of rules.

[00:04:28] Based on these algorithms. And so when you think about the big companies out there, like Google and OpenAI and Microsoft, they're all building these large language models. Essentially, you can think of it like a giant database, and that database has a ton of rules built in. And those rules tell the computer what word is most likely to come next in a certain sequence of words.

[00:04:52] And that's what we mean by generative AI. So it generates some text. That's the most kind of basic [00:05:00] understanding: there's a set of rules, and there's a big sequence of data that we've used to make those rules - a large language model is what that data set is called. And then the algorithm, the program, has generated a pattern, and then it creates something that we think is new to us, and it creates it at the end, and that's why it's called generative AI.
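Jo's description of a large language model - a big store of data plus rules that say which word is most likely to come next - can be sketched in a few lines of Python. This is a deliberately tiny illustration (a bigram model over a made-up sentence, with invented names like `next_word`), not how production models actually work, but the shape is the same: count patterns in the data, then generate by repeatedly predicting the next word.

```python
from collections import Counter, defaultdict

# Toy training data. A real large language model learns from vastly
# more text, but the idea is the same: learn patterns from examples.
corpus = "the cat sat on the mat the cat sat on the fish".split()

# Count which word follows each word (a "bigram" model) - this is the
# "giant database plus rules" in miniature.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    # The "rule": pick the word most likely to come next.
    return following[word].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
word = "the"
output = [word]
for _ in range(4):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # -> the cat sat on the
```

Real models replace these simple counts with learned probabilities over huge vocabularies and long contexts, but "predict the next word from what came before" is still the core loop.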

[00:05:22] So it's quite a specific type of artificial intelligence. Now, that's what kind of all the hype is about at the moment. And that's where, uh, teachers and all of us academics are really interested and excited, in varying degrees. Because what it generates depends on what's sitting inside that giant database.

[00:05:45] And so that giant database, that giant large language model, is one kind of aspect of what is generated. The other aspect is those rules that are created to tell the, uh, computer what parts of the [00:06:00] large language model to draw on, and then what to generate from it. So all of those fit together into this computer science model, uh, that is called generative AI.

[00:06:10] So that's kind of - does that clarify a little bit what generative AI is?

[00:06:15] Rebecca Cooper: It does for me. Miguel, how about you?

Miguel Regalo: It does for me too.

Rebecca Cooper: Excellent. So, thinking about that sort of definition and that understanding, what is it being used for in classrooms?

[00:06:26] Miguel Regalo: Well, the work of teaching is so complex. There's the teaching work that

[00:06:32] involves interaction with students. And so when we think about it from that perspective, we can use AI - or teachers, at least in my context, use AI - to help prepare lesson plans. So curriculum interpretation, curriculum translation, differentiation of content. Trying to figure out how can one point in a syllabus be differentiated for a variety of learners at different levels?

[00:06:58] What kind of formative assessments [00:07:00] might be brainstormed from this particular point of content, for example? There's also then the teacher-facing teacher work, when it comes to how might we, uh, synthesize these meeting notes in a really meaningful way, so that when faculties come together afterwards, or different working groups come together, we can have an effective summary of that meeting. Or,

[00:07:22] if we want to draft an email to a parent, how might AI help us in sounding more responsive, more professional, or to strike the right tone - and everything else in between?

[00:07:34] Rebecca Cooper: Yep. Okay. Fantastic. So, thinking back to something you talked about before, the notion of ethical use - are there spaces and places where AI just shouldn't go in education, in your opinion?

[00:07:51] Miguel Regalo: I think when student safety is at risk in the use of AI, that's definitely an area, from my [00:08:00] perspective, that teachers need to think more critically around - how they use it and what it's used for. To draw on Jo's explanation, I'm imagining that these big companies have various positions on the privacy of the data that is put into these systems, and so anything that might involve sensitive information with student wellbeing or with regards to confidentiality -

[00:08:26] I think that's definitely an area where we would love a lot more guidance and a lot more, I guess, direction on navigating those ethical troubles, those problems.

[00:08:36] Dr Jo Blannin: Yeah, it's a really good point. So there is real critical thinking that has to go into where does generative AI actually have a space?

[00:08:43] Where should it sit in a teacher's role? Because the teacher is still the pedagogical expert, and the teacher is the one who knows the child, knows how they fit into the classroom, and knows how they interact and how they work and learn best. So generative AI can offer some [00:09:00] supports, like you said - like differentiating, taking a year seven piece of reading and making it more complex or more simple based on that child's needs.

[00:09:09] But it's the teacher who has to make that decision. Mm. When you're looking at things like asking it to analyze data - one of the features that's been released in the last six months by one of the big companies is that it's enhanced its ability to analyze data - what we have to think about with teachers is: should we be taking our spreadsheet of class names and all of the kids' scores and uploading it and saying, analyze the data? And should we be just relying on it to do it for us?

[00:09:37] Because what it doesn't do is it doesn't understand that that child got zero on that day because they were home 'cause their dog died. And it doesn't understand that that child was on a trip around Australia and so missed out. And so it doesn't have those contextual things. Those are very basic ideas.

[00:09:55] It also doesn't have the ability to make really clear [00:10:00] implications from the data based on your specific context. So it can help with some of that stuff. So there's those interpretive ideas that we have to think about. And then, as you rightly mentioned, the privacy. So when we're working with children, we obviously have a duty of care to them around their privacy.

[00:10:18] And students don't own the right to their privacy - the parents do. And so that's why they all sign a form at the beginning of the year that gives us, you know: yes, you can sign them up for Gmail, or you can give them a school email account, or whatever. And so we can't be just then putting children's names into large language models, into generative AI, and using it, because that's not in line with the privacy.

[00:10:41] So there's, there's lots to think about. Right.

[00:10:43] Miguel Regalo: Definitely. And I'd also like to add to that by saying, whilst we can have more, perhaps universal, perspectives on the ethics of AI use in teaching and learning, I also appreciate how what is ethical and acceptable or unacceptable has to be [00:11:00] context based, and how in one school context they might say:

[00:11:04] Students, go for it. Use AI in these creative ways. Perhaps cite them if you are including that work inside assessment tasks, for example. Whereas in another school context, that would be deemed perhaps unacceptable or not within the scope of what a school is prepared to do.

[00:11:21] Rebecca Cooper: So if schools are making those sort of decisions and starting to work with ai, both with teachers and students.

[00:11:29] Are there some structures or frameworks or just things that they can take into consideration when they are making those sorts of decisions?

[00:11:35] Dr Jo Blannin: Yeah, so that's a really good point. So there's quite a few. So UNESCO has a position paper it's put out on the use of generative AI in education, which is a really broad kind of stroke, but it has some really interesting

[00:11:47] definitions of terms that can be useful. And then at kind of the national level, we have, um, an AI framework that gives you kind of four or five pillars that you can think about as a [00:12:00] school. So, you know, privacy is one of them. Data security is another. So, things to think about in your use. And then, depending on which system you're in, the Department of Education also has their own

[00:12:11] guidelines and rules. They actually have rules now that say, yes, you can use it for this, and no, you can't use it for that. So there are frameworks depending on which system you're working in. We've currently got an Australian Research Council grant, and we're going into schools and looking at how teachers are using AI.

[00:12:31] And one of the questions we're asking is exactly what you were saying: you know, how are your teachers using it? And what are you thinking about - what's appropriate and what's not appropriate? And exactly as you were saying, it's totally context driven. We've got some schools saying it's totally fine to write, um, an email to a parent using generative AI, and then another school down the road saying, absolutely in no case should you write to a parent using generative AI.

[00:12:57] And it's entirely contextually driven. It's about understanding the [00:13:00] parent population, and what's acceptable in that space and in that context.

[00:13:05] Rebecca Cooper: So, Jo, in your work, what other examples have you got of AI use in schools and in classrooms?

[00:13:11] Dr Jo Blannin: So we're looking particularly at how teachers use AI. So we're not looking at what children are doing in the classroom.

[00:13:17] So from the teacher perspective, they're using it for planning, they're using it for administrative tasks, and they're using it for, um, unpacking ideas and concepts. So if they're looking at a curriculum area - as you know, in Victoria we have a teaching shortage, so we have a large number of teachers teaching out of field, or back in a field they haven't been in for a long time.

[00:13:42] And so they're using generative AI to explore concepts, to refresh themselves, or to relearn, or to learn from scratch. They're using generative AI to unpack ideas and say, you know, if I need to teach the concept of friction in science, what are the top five things I should be thinking about?

[00:13:59] And it's not [00:14:00] planning the lesson, but it's helping them unpack where they should go first. One of the interesting things we're finding is that there's this idea of drudge work in schools - the idea that there is a lot of work that has to happen, that is valued at certain levels in the school, but it takes a lot of time and energy in a classroom, right?

[00:14:20] You know what I'm talking about? So often it's paperwork, or often it's, um, repetitive tasks, and teachers are looking to generative AI to help them with what they are calling drudge work. Um, and so they're trying to use generative AI to plug that gap. Not always successfully, I have to say.

[00:14:40] Okay. So that's an interesting thing, where people were saying 12 months ago, AI will reduce workload. We're still in our initial phase, but what we're discovering - what we think we're discovering - is that using generative AI actually increases workload initially, while people figure out how [00:15:00] to use it and answer all these complex questions that we're talking about today. And as they grapple with those and then come to a stable kind of point about how and when and where we use generative AI, then it might reduce workload.

[00:15:12] Okay. But there's a bit of a hill to conquer first. Is it hard to learn to use it? Not the actual use, but all of the situations around it. So: when should I use it? Is it okay to use it? How do I critique the output? How do I know which one of these large language models is better for which situations? So each of these companies has a different database, a different large language model, and they've drawn on different data sets.

[00:15:37] So what are my options there? So you've got Claude, you've got, uh, Microsoft Copilot, you've got ChatGPT - they're probably the big three - and Google Gemini. Yep. I think most schools in Victoria have access to that one as well now. And so if you put the same question into each of those, you'll get different responses.

[00:15:55] So, things like: Claude is much better for doing, um, anything to do with [00:16:00] computers and software.

[00:16:01] Miguel Regalo: Right. Okay.

[00:16:01] Dr Jo Blannin: Anything to do with foreign languages is much better in ChatGPT at the moment. But this changes all the time, doesn't it? Mm-hmm. It gets updated and then it changes. So, understanding that there's different outputs from all of those - there's this critical awareness that needs to happen.

[00:16:15] Okay. So it's not the physical opening the website, typing it in, getting an answer - that bit's fairly straightforward. But it's the contextual understanding of: I need to plan for next Tuesday's year two maths class. I've heard AI can help me. How can it help me? Should it help me? And how will I make use of what it gives me?

[00:16:35] Rebecca Cooper: So Miguel, how have you been managing some of this in your role and in your school?

[00:16:40] Miguel Regalo: I think when it comes to managing insights and perspectives across a range of different positions, it's really important, in my experience, to remember that AI is a tool, and as with all tools, it serves a purpose. [00:17:00] One of the ways that I've started thinking about leading other staff in AI adoption, and how we navigate the space, has been through the TPACK framework - technological, pedagogical and content knowledge - and it allows us to reflect: is there an area, uh, that is a pedagogical choice, that I could seek AI assistance or support with?

[00:17:24] Or, as Jo alluded to earlier, is there a dot point in a study design, for instance, that could be explained using an analogy that AI could generate for me? So looking at AI use through those different lenses allows us to be really mindful around what we use it for. I also know that, as with any technology that is introduced in schools - whether it's the smart boards that came in quite a while ago, or, throughout COVID, how to use Microsoft Teams or Google Classroom, for instance, [00:18:00] even though that's just an LMS -

[00:18:02] there's always going to be a variety of skill levels and a variety of enthusiasm levels. And I think effective leadership in this space is effective leadership that's applicable in all kinds of school change, which is leading with empathy and recognizing that our staff need to be on board with how to use these tools, because that's what our students are going to be needing from us moving forward.

[00:18:31] I was in a meeting where we discussed how, because of how fast AI - and gen AI in particular - is changing, with all these updates and, um, more data being included in these large language models, if we did not talk about generative AI, the students would be using it regardless, and it would be a disservice to us and to them if we didn't dedicate some brain power to figuring out how we'll adopt this as part of our [00:19:00] practice.

[00:19:00] Rebecca Cooper: So are, are students using it at your school?

[00:19:03] Miguel Regalo: Students are using it at our school, uh, for a variety of things. From asking ChatGPT, for instance, or Quizlet, to develop flashcards to help with revision and recall, all the way to: I have this assignment that's coming up, I've written this paragraph - can Google Gemini point out where I might have some grammatical flaws and where I can articulate myself better?

[00:19:29] Or: there's this 20-page reading that we've got to do before Wednesday - can AI give me a summary so I can expedite that process? So there's quite a variety of uses from students. And of course, in the extracurricular space: can you help me with an image of a dog next to an avocado, or what have you, right?

[00:19:49] Rebecca Cooper: So Miguel, as a teacher, how do you think about assessment when it comes to AI?

[00:19:55] Miguel Regalo: Well, I wanna acknowledge there are genuine concerns about academic [00:20:00] integrity and equity in the classroom when some students are using AI to enhance their learning, or the outcomes or the outputs that they're producing. The concerns have to do with authentication, I think. And it starts with, as teachers, knowing our students well: when they have given us a piece of work that we're unsure about, does that align with previous data that we've collected about this student?

[00:20:27] Does this triangulate with other forms of knowledge about who these learners are? But I'd also like to say that, because of gen AI and its capacities, it allows us now as teachers to start reconceptualizing what it is that we want to measure. So whereas in the past we might have had concerns - and well-founded concerns - about student fluency, for example,

[00:20:54] students can now put their writing in and say, make this sound better - even though they could type in better prompts. [00:21:00] Um, whereas now our focus might be around critical thinking. So if that's the thing that AI can't tell us about, perhaps that opens up a world of how do we assess this. Then, what is it that matters most, and what can AI then look after?

[00:21:18] If we've developed a multiple choice quiz and we can get it to self-mark or auto-mark, then that's something that we can expedite, and we can focus on the other skills - whether it's ethical thinking in the classroom, and how we go about assessing that in ways that students can't cheat or plagiarize or fake through the use of AI.

[00:21:38] But what do you think, Joe?

[00:21:39] Dr Jo Blannin: You're exactly right. It comes down to what you're trying to measure. And it's similar to how, in about 2010, Google became location sensitive. So before that, when you typed something into Google, you got the whole world's responses - if you can think back that far. But once we got to kind of 2010, when you put [00:22:00] something in, the top responses in Google were the local responses. That really changed how things worked in the classroom, because it meant that students could put something in and get a reply that would actually answer for their context locally.

[00:22:14] So we would have meetings in schools about: my child can no longer list the capital cities of all of the African countries. And we'd say, well, that's okay - maybe they're not rehearsing and learning those things, but they do actually understand the deep sociocultural, political context of those places.

[00:22:34] Right? So we shifted what we were learning - from rote learning those things, which we offloaded to Google - and we learned other things instead that we found had more educational value. And what you're saying is a very similar thing, and that's what we're finding in our research: what the schools are telling us is that we really are gonna have to start thinking about what we're valuing and what we're measuring in schools.

We do know that plagiarism detectors [00:23:00] don't work. And one of the points you made was that you've gotta look at what the students have done previously. Now, in three to four years, there will be no previously, because the children in schools will always have had access to generative AI. So even that as a strategy is gonna be aged out.

[00:23:20] And at that point, in three or four years, the prediction is that everything generated by AI will be drawing on things from the internet - but most of the internet content will be created by AI. So the machine will be feeding the machine. And then we'll be assessing students' work that's drawing from the machine that was fed by the machine.

[00:23:39] So what are we actually measuring and valuing? So we have to kind of think future-proofing for these kinds of assessments. And we have to start shifting our thinking away from: I'm going to teach them about this period of history, and then I'm gonna get them to, uh, write an essay that tells me they've remembered the things I told them - because that is now something [00:24:00] that is easily offloaded to technology.

[00:24:02] What we need them to do is apply critical thinking and all those other skills you were thinking about, and present their understanding and their knowledge - and their emotional connection, perhaps - in other ways to the learning, so that we can identify that it's a personal expression of their knowledge.

[00:24:19] Rebecca Cooper: So we've really gotta think about the tasks we're setting. Absolutely. We've really gotta think about what we are valuing in terms of the knowledge that we are looking for students to express in those tasks. Mm-hmm. And then we've gotta think carefully about how well we know our students - for now - so that we are able to think about

the issue of plagiarism and cheating. But Jo, from what I'm hearing, this is an issue that's still yet to really -

[00:24:46] Dr Jo Blannin: That's right. Yeah. We're still, you know, on a zero-to-infinity line for generative AI. We're still at number one. So we don't quite know where that line's taking us. And I think one of the most important things Miguel said was the multiple points of [00:25:00] data.

[00:25:00] Compare it to multiple sources. So if you have only ever assessed a child's knowledge through written tasks, for example, that's not gonna be sufficient any longer. You're gonna need to have other ways - whether it's through drawing, or through, um, physical expression, or whether it's through talking to them - yeah, physically talking, standing up, debates, all of those kinds of things.

[00:25:25] So multiple forms of assessment because it's going to be increasingly difficult. So we cannot rely on plagiarism checkers. We can't rely on hoping that technology goes away because it's really not going anywhere.

[00:25:39] Rebecca Cooper: So, Jo, where are you going next with AI? We've talked about the fact that it's changing really rapidly.

[00:25:46] What's next for you?

[00:25:47] Dr Jo Blannin: I guess it's actually what's now and what's next. So, we've talked a lot about generative AI being something that creates text. It can also, as you mentioned, create [00:26:00] images. It can now create video very easily. And it creates audio increasingly well - not great, but increasingly well.

[00:26:06] But one of the exciting things is, if we take off that front end of creating an audio piece or a text, we still have these really cool algorithms in the back - these computer science kind of things that sit behind it. These algorithms, and what we can do with those on their own, is actually where all the power is gonna come from in the future.

[00:26:27] So one of my research projects, with a team of academics here at Monash - and Dr Joel Moore is our academic lead on that - is a project called Atlas. And that stands for Authentic Teaching and Learning Scenarios. And what we've done is we've said: these large language models can mimic, create, and respond in a human-like way.

[00:26:53] They're not human, but they can respond in a human-like way. We saw a real need in medicine, in [00:27:00] health, in business, and in education for some connections between the real world experience of interpersonal development and skills. So for example, in education, practicing having a parent teacher meeting, which is very hard to do when you're a student teacher.

[00:27:15] And in medicine, practicing giving bad news in a medical clinic as a GP. So how do you practice doing those in an environment where the risk is fairly low? You get to rehearse. And so we've taken those generative AI algorithms attached to those large language models, and we've built a platform where you can actually talk to a human-like avatar.

[00:27:39] You talk to your screen, and whatever you say to it - you can say, hello, thank you for coming along to talk about Miguel, I wanna tell you how Miguel's doing in maths today - it will respond to you. It takes on the persona of Miguel's parents; we've generated the personas, working with schools to develop all these different personas.

[00:27:56] And they'll come back and they'll say, well, I think [00:28:00] Miguel's doing okay, but I really wanna know how he did in that science test last week - can you tell me this? And if you upset them, if you do the wrong thing, they'll respond. And you can feel what it feels like to get into a difficult conversation with those parents, and you can feel what it feels like to bring that conversation back to a positive space.

[00:28:19] Or you can stop and you can start again. Same with the medical space - you can do that. You can do it for a business negotiation. You can do it for law. We've got one for law where you've got a client coming to you as a lawyer, and they've committed some offense, and you have to work with them on that.

[00:28:34] So we've got this platform, and that's all been created because we have these large language models that can mimic human interaction. And so one of the exciting things with that is we can use the camera: as you're talking to this human-like persona, the camera's reading your voice and your, um, face and body language.

[00:28:56] And at the end it'll generate a report, and it'll say, you know, you didn't make eye [00:29:00] contact once, or do you know that you looked down the entire time? Or do you know that your tone of voice was really monotone? It didn't seem like you were engaged with that child's learning. And because culturally these things are very different, and we're working with student teachers to work in Australian schools, we can help them rehearse these skills.

[00:29:19] And that's because we have these large language models that it can work from. So we don't need it to be generating text; we can build from these really interesting computer science algorithms in the back end. Taking that kind of idea, stripping off the front end that just makes text or images, and using those underlying algorithms, I think, is really exciting to see where that will go.
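[Editor's note: the platform Jo describes is not public, but the persona technique she outlines, where a system prompt puts a chat-style language model into character and the running conversation is replayed each turn, can be sketched in a few lines. All names here (PERSONA, build_messages) are hypothetical illustrations, not the actual product.]

```python
# Hypothetical sketch of a persona-driven role-play avatar.
# A system prompt fixes the character; the full history is resent each
# turn, which is what lets the "parent" remember being upset earlier.

PERSONA = (
    "You are Miguel's parent at a parent-teacher meeting. "
    "You are polite but anxious about Miguel's progress, and you "
    "push back if the teacher seems dismissive."
)

def build_messages(history, teacher_turn):
    """Assemble the message list a chat-style LLM would receive this turn."""
    messages = [{"role": "system", "content": PERSONA}]
    messages.extend(history)  # prior turns, alternating user/assistant roles
    messages.append({"role": "user", "content": teacher_turn})
    return messages

# Example turn: two prior exchanges plus the teacher's new utterance.
history = [
    {"role": "user", "content": "Thanks for coming in to talk about Miguel."},
    {"role": "assistant",
     "content": "I think Miguel's doing okay, but how was the science test?"},
]
msgs = build_messages(history, "He did well. Let's focus on maths today.")
```

In a real system, `msgs` would be passed to an LLM chat endpoint and the reply appended to `history`; the speech, avatar rendering and body-language analysis described in the episode sit in separate layers on top of this loop.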

[00:29:41] Rebecca Cooper: Some really exciting stuff. But taking us back to the classroom, if there are teachers out there listening tonight who are maybe a little bit dubious of all of this, but also maybe a little bit curious to see what could work and what could happen if they gave it a try, where would they [00:30:00] start? Miguel, I'll start with you.

[00:30:02] Miguel Regalo: Start with the students. Yeah. When you think of AI not as a replacement for you but as an extension of you, something that can be that critical friend, then when you have a problem of practice in front of you, whether it's a group of students who just aren't grasping a particular concept or you're questioning the validity of your assessment tool and whether it's capturing what it needs to capture, asking it for feedback and support allows you to be, firstly, an autonomous professional, because you can make a better-informed decision.

[00:30:43] Perhaps this tool that I've developed, whether it's a short-answer assessment task or a multiple-choice task, includes a bit of bias that I wasn't aware of, and I can switch some of those questions or answers around. So I think when it [00:31:00] comes to starting, it starts not with some external problem out in the world,

[00:31:07] but back with the students in front of us and any problems of practice we encounter through our day-to-day work. Or, I guess, if teachers are hesitant to adopt AI in a student-facing capacity, there's another way in: figure out what the drudge work is and see what they could automate.

[00:31:28] I think in many ways AI can help us grow our professional knowledge, but it can also help support our professional wellbeing, given the finite amount of time we have in our day. Is there something we would like to try to automate? Perhaps that's the thing we dip our toe into when it comes to using those gen AI platforms.

[00:31:51] Dr Jo Blannin: Mm.

[00:31:52] Rebecca Cooper: Joe, what do you think?

[00:31:53] Dr Jo Blannin: Yeah, I think all of those ideas are fabulous. There are two things I would suggest. Start with where you're [00:32:00] at with your students, and also start from a critical space: with students, you can do things like trying to convince it that man has never landed on the moon, or that the haggis is actually a rodent that lives in Scotland, not a meal.

[00:32:13] Having them work out the fallacies that are built in, and then holding really critical discussions, is a great way to start, because they begin from a place of control over the generative AI. And then for your own individual growth, a bit of shameless promotion: we do have a short course that runs over three weeks here,

[00:32:32] in the Faculty of Education, that I've put together, and it covers a lot of the things we've talked about tonight. I find it really exciting to work with teachers, so that one is self-directed and all online, and it's designed for teachers who haven't got a lot of time but have some real interest,

[00:32:49] and who are in that place you were talking about, Miguel, of "I really should know more, but I don't know where to start."

[00:32:54] Rebecca Cooper: Well, thank you both so much for joining me to talk all things AI in education. Thank you so [00:33:00] much. Thank you. As AI continues to evolve, the key for educators is curiosity: looking beyond what we already know to uncover new possibilities.

[00:33:11] By exploring innovative ways to integrate AI meaningfully into the classroom, we can reshape how we teach and learn. The next decade is going to be an interesting one. We've included a wealth of practical resources in our show notes to support your teaching journey. Be sure to check them out. If you're enjoying the show,

[00:33:32] don't forget to subscribe, rate and review, and follow us on Instagram at Monash Education, on X at Monash Education, and on Facebook at Education Monash. Tell us what you thought of today's episode using the hashtag Let's Talk Teaching Podcast. We are grateful for the support of Monash University's Faculty of Education in producing this podcast.

[00:33:56] For more information on short courses and undergraduate and postgraduate [00:34:00] study options, head to monash.edu.au/education/learn more. Thanks again for listening to Let's Talk Teaching.