Episode 10: Simplifying simulation, with Sidney Icarus

Unpacking the essential elements of simulation and its relationship to sensemaking and play.

Games and simulations have a lot in common. They both give players a low-risk, engaging way to develop a deeper understanding of a system. But they don’t always have to be immersive 3D experiences. We welcome our friend Sidney Icarus, a facilitator, game designer, and former air combat professional, to discuss the practical ins and outs of how simulations factor into learning and decision-making processes.

Love what you’re hearing? Buy us a cuppa!


When you make a small donation to our ko-fi, it makes a big difference to our ability to keep making free-to-play games.

Links to resources

Sid’s personal blog
http://SidneyIcarus.wordpress.com

XCOM video game
https://xcom.com/

Buckminster Fuller’s World Game
https://www.bfi.org/about-fuller/big-ideas/world-game/

Model United Nations
https://en.wikipedia.org/wiki/Model_United_Nations

“Split S” scene from Top Gun
https://www.youtube.com/watch?v=f5MOgcpUdQI

Using Self-Designed Role-Playing Games and a Multi-Agent System to Empower a Local Decision-Making Process for Land Use Management: The SelfCormas Experiment in Senegal
https://www.jasss.org/6/3/5.html

Citizens Jury
https://participedia.net/method/155

Apocalypse World tabletop role-playing game / genre
https://en.wikipedia.org/wiki/Apocalypse_World

Capgemini Accelerated Solutions Environment (ASE)
https://www.capgemini.com/au-en/operating-model/accelerated-solutions-environment-ase/

Decaying Orbit game by Sidney Icarus, on Kickstarter
https://www.kickstarter.com/projects/storybrewers/littlebox-journeys-story-games-to-take-you-far-from-home

Transcript

Hailey: [00:00:00] This is Amble the podcast where we take a disciplined wander through the borderland between ways of working and games. I am one of your hosts, Hailey Cooperrider.

Jason: And I’m Jason Tampake.

Hailey: And Jase we’ve got our nerd pants on today. We are very excited because we have our guest, Sidney Icarus, they /them. Welcome Sidney to the podcast.

Sidney: G’day. Hey, thank you. It’s a pleasure to be here.

Hailey: Very good to have you. So we’re excited because we actually met Sid through our fellow Ambler, Logan, who’s been on the podcast before. Logan was like, “Hey, there’s this person, they’re a TTRPG game designer, and they’re saying they want to get into facilitation. What do I tell them?”

And we’re like, “tell them to listen to the podcast!” And Sid did listen to the podcast and was excited and had lots of great thoughts and we got to talking and, wow, [00:01:00] I think they downplayed, like ‘I’m thinking of getting into facilitation’. Cause they actually have like a huge amount of experience facilitating workshops in very important sectors like defense and health.

And interestingly have used a lot of tools like simulations in those workshops. But also because they are such an experienced game designer and a long time TTRPG gaming enthusiast and look at all of it through an academic lens. ‘An aspiring descriptivist’ I think you said Sid. They bring this kind of mindset that felt really aligned to analyzing how RPGs work both in the fully for fun setting as well as in the applied setting.

And so we thought, yeah, let’s do that. You know, Jason and I talked about models and simulations in episode 4, but it was pretty abstract; we didn’t have a lot of practical experience, whereas Sid has done it in context with professionals in these sectors. And we’re going to use those examples and stories as the grounding for our usual amble [00:02:00] through the subject matter.

So you can find more about Sid and their work at SidneyIcarus.wordpress.com. That link will be in the notes below. And I think I covered everything. Sid, is there anything else you want to add about yourself?

Sidney: Just, I like to be clear about where I’m coming from. When I say that I’m an RPG descriptivist, I’m talking in contrast to prescriptivism there. The idea is that I’m not here to talk about the way that things ‘should be done’, or about best-case scenarios, and make value judgements on them. I’m here to sort of talk about the way things are done and what I have seen in the way that people do respond to that in the culture.

Jason: Nice!

Hailey: Yeah, thanks. It’s a fantastic, helpful distinction, and one that I think applies in a number of fields and that people can benefit from. So I think we’ve set up the idea. Usually what we do at the start is share, or invite our guest to share, a game that they’ve played or that they’re excited about that [00:03:00] maybe has some relationship to the topic at hand. I think Sid you had a couple that you were going to bring into the picture.

Sidney: Well, I really want to talk about XCOM and specifically XCOM 2 and the two recent releases. And the reason for that is I checked my Steam the other day and I’ve got like two and a half thousand hours between those two games. I really enjoy them, and I think that they are going to speak to a lot of what we’ll go through later.

Hailey: So I’ve never played XCOM, but if it’s on Steam presumably it’s a video game. So, give us the basics.

Sidney: XCOM is a video game, as you say, which is about an alien invasion. The world is not prepared for it. And so in the first XCOM government sort of falls apart and it’s up to this small organization called XCOM, X-C-O-M, to defend the planet. And you do that by creating a little band of soldiers and those soldiers go out and do missions and you take them through the tactical steps of it.

So this soldier moves to the high ground and [00:04:00] takes the aim action, and then will later shoot an alien. This soldier moves behind heavy cover and throws a grenade. Those really, really low tier tactical decisions. And then once you’ve blown up all the aliens and you’ve gathered up all their equipment and you’ve rescued the scientist, you go back to your big base and your like helicarrier in the sky, and you use that scientist to research new technology and that new technology becomes better armor or new weapons. Or like you can recruit new soldiers and then those soldiers specialize.

And so it has these two tiers of play, these two layers of play. One of which is a really tactical, nitty gritty, personal decision-making. All of your soldiers are named and get randomized and get nicknames and stuff as they go through. But you can do what I do, which is name them after all your friends. And then you’re like, “oh Jase and Coops are out there on this mission. Oh, no! Coops got shot. Now I’ve got to like, get someone over there to [00:05:00] rescue or give her the med kit” and it creates that really personal level.

And then you go back up to these really strategic level decisions and you’re like, “oh, I’ve only got so many alien alloys. Can I make new weapons? Can I contact the resistance fighters in this new area?” And it’s a brilliant, brilliant game in my experience because it does what games do.

Like, Amble’s core idea about games is that they’re emotionally engaging, that they get people to care about things in different ways. And in XCOM, I could just be a strategic leader of a military force and send the troops off and whatever. But because I get to do the tactical stuff as well, because I get to meet those soldiers and they get to have bonds and connections with each other and they get to give each other nicknames, then I care about them more. And that gives the strategic layer more impact.

Hailey: I love that. Because I know so many simulation tools about the real world, especially about business or political decision-making, or [00:06:00] Model UN or Buckminster Fuller’s World Game. These things that people get excited about still kind of make you the manager on high, overseeing the total territory at a very general level.

And that’s something that you can learn from. But it misses an opportunity for where most people are in the world, where the decision-making power is much more local. It’s much more about the small group or the squad that they’re part of trying to make something happen.

You know, Amble is a squad. There’s five of us. We’re not managing a multinational corporation. So simulating that doesn’t necessarily help us. So I love a game that combines and connects the two perspectives.

Sidney: Mm. Hmm. And I think that is part of what makes it stick with you, what makes it hang around. Games where you get to play Model UN and choose what you do about distributing resources after an earthquake are an interesting academic exercise. But games, role-playing games, [00:07:00] computer games, anything where you get to go and meet people and read the stories and hear the stories of the people who were impacted by that earthquake or whatever, that’s where that emotional resonance comes from. And I think that’s a really important part of simulation. Emotional resonance is a very important part of simulation that sometimes gets left off the table.

Hailey: Yeah. So imagine if our politicians could have a policy simulator at that general level, at that macro level. But then they also went in and played quests within the world that their policy was intended to create-

Sidney: Exactly.

Hailey: -you know, living as the people who have to live through that.

Sidney: Yeah.

Jason: Just a quick question; does that trade-off between what you’re experiencing at a granular local level, versus the sort of strategic stuff, ever actively come into conflict? As a player, does that tension between the two layers ever come in? The idea being that, you know what, we really need this particular alloy from a mission and our [00:08:00] casualty rate’s going to be high. I don’t want to lose Jase or Coops on the mission. Does that sort of decision making ever come into play, and how do you as a player resolve that sort of thing?

Sidney: So that’s a really good question, because so much of the game is about supportive loops, right? About like these loops that work with each other, where it’s like, “oh, you need supplies. So you make weapons to go and get those supplies and you use the weapons to get them. And then when you’ve got them, then they help you make more weapons.” And so very self-reinforcing loops.
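
[NOTE: A minimal sketch of the self-reinforcing loop Sid describes, where each cycle’s output feeds the next cycle’s input. The costs and recovery rates are invented purely to show the loop compounding; this is an illustration, not anything taken from XCOM itself.]

# Self-reinforcing ("supportive") loop: spend supplies on weapons, use weapons
# to recover even more supplies, repeat. All rates here are made up.
supplies = 10.0
weapons = 0.0
for cycle in range(1, 6):
    built = supplies // 2              # each weapon costs 2 supplies (invented)
    supplies -= built * 2
    weapons += built
    supplies += weapons * 3            # each weapon recovers 3 supplies per mission (invented)
    print(f"cycle {cycle}: weapons={weapons:.0f}, supplies={supplies:.0f}")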

But you’re right. There are these moments where at the narrowest level, the decision of, “this is going to be a hard mission. Do I just send all of my specialists who are all really experienced or do I use this as an opportunity to train up someone new as well?”

But then, “oh, that new person is like a burden, they’ve got to be carried through that mission”. And so there comes a lot of conflict there. And then the second degree of conflict- and look, when we talk about getting our [00:09:00] nerd pants on, I run a heavily modded XCOM 2, with like a bunch of extra stuff in there. In a lot of the mods, there are missions where the decision of whether to like pursue a secondary objective or extract is one that exists in there.

And so there’s moments when someone gets captured and then the mission pops up. Do you want to launch a rescue mission? It’s danger high, and there’s these known baddies on there, and you look at it and you think, “well, to be honest Jase, you know, I don’t love all of the skill decisions I made for you. And, you know, we’ve already got someone else who’s good with grenades. So maybe, maybe we just don’t with that one”. And then the conflict on top of that is that, strategically, time is a resource. And so every choice is a lost opportunity and all that sort of stuff.

Jason: Yeah. The only reason I raise it Sid is because I guess our intention is to enable people to make better decisions by using games or using models in general to help inform that kind of decision. There’s often this [00:10:00] perception in the environment that leaders are those who can step away from the frontline sort of messiness and just hold themselves at a really strategic level because there’s pragmatic trade-offs that need to be made that require that sort of remove. If you were actually exposing yourself emotionally to decisions on the frontline all the time, you know, and this happens, I’m not just saying in a military context, I’m saying in a social context, if you’re constantly exposed to the cost of poverty in a particular zone, that can be very, very emotionally taxing.

Therefore there’s a sort of perception that you need to be able to step away and take a strategic view in order to make decisions that count. And it’s interesting that XCOM, from your description, gives us access to both and somehow having access to both of those sorts of lenses really makes that experience a lot more valuable.

Sidney: Yeah, I certainly, I certainly don’t disagree that that is a good, beneficial thing to have in our strategic leaders. What I would say is we always want that to be intentional. We never want them to be [00:11:00] ignorant of those emotional and pragmatic trade-offs. We never want our generals to actually think they’re playing with toy soldiers. Even when that is what they’re doing in terms of how they’re displaying information to themselves. So yeah. I think that it is incredibly important for strategic thinkers to be able to step back, but there needs to be some portion of their mind that is always occupied by the fact that yeah, these models do engage with real people and real worlds.

Hailey: And so Sid you have experience working in the military context. Maybe you could give us a bit of background on that and then you can go ahead right into introducing the example exercise that you’ve done, that we’ve talked about. Where you’ve actually got leaders playing with toys. Yeah. Cause we’re really keen to hear about how this actually goes. So go into as much detail as you like.

Sidney: So, I’m an ex air combat expert, no longer. And just for like clarity here, even though I worked in environments where I held a security clearance and that sort of stuff, nothing I’m talking about here is going to be above [00:12:00] the unclassified level. So, you know, no numbers and no details.

But I was an air combat expert working in headquarters. And my role in headquarters was to be the tactical link for strategic thinkers. So when air assets were required for global operations, chief of joint operations would say, “we want to achieve this effect”. And my job was to understand the tactical nuance of it and to say things like, “well, that aircraft can’t actually carry that much, but if we get a different type of aircraft in then we’ll only need one of them”. Or, “you know, it’s a high threat environment, so I’d actually rather the smaller aircraft, but we’re going to use two of them to do it”. And work on things like that.

I was doing that through the early two thousands- or, sorry, through the early 2010s- which meant that I was there for the loss over the Pacific Ocean. I was there for MH17’s shootdown over Ukraine. And I was part of the team for planning the aid drop on Mount Sinjar when the Yazidi population was stranded up there by ISIS. And then the end of my [00:13:00] career was planning and reviewing bomb drops in Iraq. And so that was, that’s kind of the scope in which I existed.

And so there was an exercise that we ran where we did it as what we called an ABC exercise, where in the morning we were testing our thinkers. And so we would have the chiefs of certain divisions in a room and they would be given an intel dump and then they would have to make decisions as to what they were going to do with assets. And then for the B part of the exercise, we would send those orders off to squadrons and say, “go and fly this exercise and like, let us know what your results are”. And then for C we would take the output from the squadrons into the intelligence centers, and then they would create the new dump for the next day. So it ran over the course of a week where we just rolled this constant testing of strategic planning, validated by tactical [00:14:00] exercise and then reinforced by intelligence profiling.

Hailey: So they would literally go and do it with the actual planes. So we’re not talking about models or simulations here.

Sidney: Well no, because it’s still a model or a simulation. So I want to talk about fidelity as a concept, is that…?

Jason: Yeah, yeah, go there, go there. Talk to us about fidelity.

Sidney: So there’s several ways that I can simulate an air war with you Hailey.

Jason: Sorry, just to give context to maybe some of our listeners Sid, just before we go into it. This notion of fidelity: there’s a tension in any space that uses models. How much fidelity do you need in that model in order for the person using it, or making decisions based on it, to make good decisions, or for it to be useful?

And, you know, the $60 billion trade-off is, if we had all the time in the world and all of the resources, we could make an exact replica of the scenario and test in that sandbox. The truth is we’re often [00:15:00] bound by time or by resource constraints, such that we can only deliver a particular type of fidelity within a given amount of time.

So, yeah, I’d be interested to learn about your experience with fidelity and how that works.

Sidney: Yeah. So I think we’re throwing a word like ‘fidelity’ around a lot. Like if I were to ask you what does fidelity mean to you? What is it?

Jason: Ah-

Hailey: I mean…

Jason: -love these questions.

Hailey: I usually go for the etymology, right? And so it has to do with sort of faithfulness, right? Or like an accuracy or a truth to some objective reality that we presume is out there. So fidelity is measured on a scale of high to low. And high fidelity implies it’s closer to the original.

So I think most people are familiar with the idea of fidelity from sound, you know, a Hi-Fi stereo system and that’s like, well, how close is it to if I had the band here in my living room. But then of course, well, if they’re in your living room in your particular acoustics, it’s like, so how close is it to me being at a [00:16:00] concert, I guess.

And you start to see some of the fuzziness just in thinking about it there. That’s how I think about it.

Sidney: And then the opposite of that is low-fi, which is ‘low-fi beats to chill-

Hailey: to study by.

Sidney: -to. Yeah, to study by.

Hailey: Which I’m not sure is that kind of fidelity, it just seems to imply a more sparse acoustic landscape as much as anything.

Jason: Yeah, traditional lo-fi in the early noughties, as you know, we’re putting our nerd music hats on, meant less bandwidth, or the file size was smaller. So the amount of detail that you could get into the sound profile was inevitably less.

Sid, what’s your view on how you approach fidelity in this space? Because I’m aligned with Hails, I agree with that. But I’d also like to be explicit in saying that, aside from reality itself, I don’t think perception lends itself particularly well to this either. Any sort of model is going to be flawed because we still have to engage with it.

Sidney: Yeah. Yeah. And in the same way, reality is flawed, right? Cause we still have to run it through our own biases and our own, our own filters. Yeah.

So the one thing I want to do with [00:17:00] those two is link the two together. Jase, you mentioned resourcing. Hailey, you mentioned the closeness or the proximity to objective truth. Putting those together; lo-fi stuff, low fidelity simulation tends to be further from objective truth, but also tends to be incredibly cheap and quick to produce. High fidelity stuff; more expensive, theoretically closer to truth. The question is, what do you want to achieve?

And so an example of a low fidelity simulation is… Hell, like we could do one, right now! I could say to you, “Hailey you’re flying at 30,000 feet. On your nose is an enemy aircraft. What do you do, Hailey?”

Hailey: Um, I perform a split S maneuver!

Sidney: Great. So yeah, you perform a split S, you’re not attempting to radar a-

Jason: What is this split S maneuver sorry?

Hailey: That’s a Top Gun reference. I have no idea what it-

Jason: Yeah.

Sidney: This is the thing that I love about low fidelity simulation. It doesn’t matter what a split S maneuver is, what matters is, what are we trying to [00:18:00] achieve? Right. And so, I know that that maneuver is intended to get outside of a targeting volume. And so I can just, I can say to her, “Okay so you perform your split S maneuver. You look at your radar warning receiver, you are not currently targeted. The aircraft is still tracking towards you, but because of your split S you have built some distance. What do you do now?”

And we can just keep doing that. We can just keep talking through that. All I’m doing is, we talked about loops before. Loops in games are feedback systems. Action, reaction, action, reaction, action reaction. And all I need to do to create a low fidelity simulation is create space for action and then provide a reaction to that. And that’s all I need to do. I can do that through voice. I can do that by doing the hands thing. I’m so glad we’re in a visual medium, like podcasting. You know the top gun thing of, if you’ve ever seen fighter pilots in a bar you’ve seen low fidelity simulation because that’s how they talk to each other.

Hailey: So each of their hands represents a plane in the simulation and they’re kind of showing with their hands to each other, what the two planes do.

Sidney: [00:19:00] Yeah, they’re like “we came in and then I went up into a one circle fight, but he turned down into a two circle fight, and all of a sudden we were like, we had to figure out how we’re going to resolve that dog fight”. And that is really, really low fidelity simulation.

It’s very, very, very far from reality. There’s no scale. Like hands are not the size of planes. There’s no- but what it is, is communicating a very specific concept. And the benefit to low fidelity simulation like that is that because it occurs in such an abstract form, there’s so much stuff you don’t have to worry about. You don’t have to worry about speed or pace or, you know, I can say to you Jase; split S maneuver, your radar warning receivers clear, what do you do? And you can get back to me a week later and be like here’s what I do next. And then we can go back to it.
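
[NOTE: A minimal sketch of the action-reaction loop Sid is describing, in Python. The prompts, the “facilitator” function, and the canned reactions are all invented for illustration; the point is only that low fidelity simulation needs nothing more than space for an action and a narrated reaction to it.]

# Low-fidelity simulation as a bare action-reaction loop: the "facilitator"
# supplies a narrated reaction to each free-text action. No physics, no scale,
# no timing pressure; everything here is illustrative.

def facilitator_reaction(situation: str, action: str) -> str:
    """Stand-in for the human facilitator: map an action to a narrated reaction."""
    if "split s" in action.lower():
        return ("You are not currently targeted. The bandit is still tracking "
                "towards you, but you have built some distance.")
    return f"The situation develops ({situation}). Your move has no clear effect."

def run_low_fidelity_exercise(turns: int = 3) -> None:
    situation = "enemy aircraft on your nose at 30,000 feet"
    for _ in range(turns):
        print("SITUATION:", situation)
        action = input("What do you do? ")
        situation = facilitator_reaction(situation, action)

if __name__ == "__main__":
    run_low_fidelity_exercise()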

High fidelity simulation is you and I are in aircraft together. I’m flying at you. You see me on your radar. Now, it’s not real. It’s this, this is, this is where I get back to [00:20:00] Hailey when you’re like-

Jason: I’m still terrified Sid! That’s-

Sidney: Well you should be. How did you get into this aircraft?

Jason: That’s right. What am I doing flying a plane?

Sidney: So yeah, you’re um, the booster seat is strapping me. So, so Hailey, when you said originally like, “oh, so this isn’t simulation then, cause you’re actually flying aircraft”. Like I’m not going to launch a missile at Jase, it’s still simulated. Right. So high-fidelity is, yeah, we put aircraft in the air. Those aircraft can target each other. They can lock each other. They can push a button on their throttle, which shoots a missile theoretically. And what that will do is the computer will track based on your position, based on directions, based on the math of missile intercept. And it’ll go through those calculations. And then 20 seconds later, it’ll come back up and say, ‘you hit’ or ‘you miss’. And you can program things in there, like a random dice roll. Sometimes missiles just miss. Sometimes these things happen, like it’s failure to fuse or whatever.

And so that is a really high fidelity type of simulation because we’re adding all that stuff back [00:21:00] in. We’re adding in the pace of the fight, which means decisions need to be made immediately. We’re adding in the muscle memory. We’re adding in the scale, we’re adding in the math and the physics of how a missile flies. And sometimes we’re even adding in things like how much fuel do you have or, how energy expensive is it for you to climb a few thousand feet before you get into a fight? And what benefit does that provide to you?
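
[NOTE: A toy sketch of the hit-or-miss resolution Sid describes, in Python. The hit-probability curve and the 5% reliability failure (“sometimes missiles just miss”, e.g. failure to fuse) are invented placeholders, not real missile kinematics or any actual training system.]

import math
import random

def hit_probability(range_km: float, closing_speed_ms: float) -> float:
    """Invented placeholder curve: closer, faster-closing shots are more likely to hit."""
    range_factor = math.exp(-range_km / 30.0)            # decays with range
    closure_factor = min(1.0, closing_speed_ms / 600.0)  # saturates at high closure
    return 0.9 * range_factor * closure_factor

def resolve_shot(range_km: float, closing_speed_ms: float) -> str:
    if random.random() < 0.05:                           # random reliability failure
        return "MISS (weapon malfunction)"
    p = hit_probability(range_km, closing_speed_ms)
    return "HIT" if random.random() < p else "MISS"

print(resolve_shot(range_km=15.0, closing_speed_ms=500.0))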

The intention behind fidelity, and the reason that simulation is really important, is they’re answering different questions. A low fidelity simulation is usually answering questions like ‘what’ and ‘why’, and a lot of conceptual questions. ‘How do we go into a fight?’ ‘What maneuvers might be appropriate here?’ ‘What tactics are we going to initially start with?’

High fidelity simulation is really about getting down to those nitty gritty [00:22:00] ‘hows’, like ‘how do we fly this maneuver?’ ‘How fast do I have to go to escape the range gate of your missile?’ ‘What flare pattern will defeat your sensors?’ All of that stuff is much, much more detailed. And this kind of breaks into the concept that I mentioned to you off air about, like, testing to learn versus testing to prove. In a low fidelity simulation, you can’t prove anything because there’s no reality to measure it against.

If we’re talking and I say, “you’ve got an enemy on your nose, it’s like, I don’t know, 30 miles away. Which is, you know, it’s there, what are you going to do about it?” You know, like “I perform a split S”. And I’m like, “great, yep, you escape its thing. What now?” “Uh, oh, I guess I turn back in and target and shoot it.” And I’m like, “yeah. All right. That works. Yeah. All right. And you shoot me down. Cool.”

That has not proven anything about that tactic because there’s no reality to that adversity. There’s no reality [00:23:00] to the challenge that that’s up against. But what it is good for is the really junior pilot, who wants to understand concepts as they’re unfolding in slow time.

So then the high fidelity simulation is really beneficial for proving things, because you can be like, “no, you didn’t defeat the maneuver. You didn’t defeat the missile. I have a computer here that shows us that when you flew that direction…” “Well, I’d fly it better.” “No, you can’t, like, I’ve seen you do it. You just got up in the plane and you flew it and you failed and you got shot down. That is a failure of tactic.” High fidelity can prove that, but what it can’t do is help us learn, because it’s moving too fast. And I can’t make you a good air combat tactician by putting you in a plane that’s going the speed of sound and putting you through an air combat phase, because it’s moving too fast for you to be able to consider impacts and make decisions. [00:24:00]

Hailey: That makes sense. Yeah, because high fidelity, in terms of how it tests the actual player, the pilot in this case, really is testing reaction time and decision-making on the spot. And so that’s not something you put a beginner through. It’s kind of useless. It’s something you put an expert through.

This test to learn versus test to prove distinction is interesting because I think we’ve talked about this before. If experts know what good looks like, or you have some access to objective truth in this case, the computer model which can kind of accurately simulate what happens with the missile and between the two actually flying planes, then you can do the test to prove step.

But we’ve talked about situations where we’re trying to create a policy or we’re thinking about a complex system, which is, you know, far more complex than two airplanes dogfighting, as complex as that is. Where we don’t actually necessarily have a sense of objective truth. We don’t feel like we even have access to what the real [00:25:00] outcome is going to be. But we still might use a simulation as a way to explore that complexity together. And so we’re sort of learning and proving at the same time, or, yeah. Do you have any examples from your experience that are closer to that kind of situation?

Sidney: Yeah. So it is a spectrum. Learning and proving is a spectrum, high fidelity to low fidelity is a spectrum. And as I said before, I’m a descriptivist. My intent here is not to say you have to start low fidelity or you have to start high fidelity or whatever. So in terms of simulations where we’re learning and proving at the same time, I think really good examples, uh, from, hmm…

Hailey: We could look at that example that we looked at together, from Senegal. Do you folks remember?

Sidney: Oh yeah, yeah, yeah. Let’s let’s talk to that. Cause that is one that is really interesting in terms of how they developed what they were trying to prove-

Hailey: Do you want to introduce it?

Sidney: -of developing. I don’t actually have it up at the moment, so I’m going to quickly [00:26:00] bring it up.

Hailey: Here I’ll chat it through to you as well.

Sidney: Beautiful.

Jason: Yeah, I think this is fascinating, because one of the things it’s highlighting is the tension in contexts that really count, i.e. enabling a combat pilot to effectively exercise tactics that are going to mean the difference between life and death, or crashing and not crashing, or achieving a mission and not. High fidelity is really important because those are the sorts of decisions that are going to have to be made in real time.

The challenge is, it’s awfully expensive to build all those things. You can’t actually put them in real situations. So you’re using really high fidelity simulation tools to do this. But in those sorts of spaces, the kinds of outcomes are largely known. Like the physics of missile flight, how a particular plane responds, what kind of range a particular radar sequence might have, et cetera. So we can build models that [00:27:00] do that. But when we’re moving into a scenario, and I imagine this is sort of what plays out in the ABC sequencing that you were talking about earlier, where we’re not really sure what the tactical environment is going to be, and therefore we can’t model it at any sort of realistic fidelity, there’s this sort of tension that people have to play through around how much information is enough and how much modeling we need to do in order to feel comfortable that we can act.

And in real world scenarios, in policy scenarios, and I’m sure it’s the case for military interventions, et cetera, sometimes all that information is not known, so things can’t be made at a really high fidelity level. So there’s this sort of process of how you bring useful decision-making into a context that has enough information to act reasonably, versus having to wait six months to generate the information required, because you don’t have six months to wait. Yeah, I’d be [00:28:00] interested to learn more about that ABC sequence, but I think if we get into the Senegal example that might set up sort of this idea of an environment where people are co-designing the type of modeling landscape or the type of information that’s brought in, in order to allow for reasonable decision-making.

Sidney: Yeah, no, I super agree.

Hailey: I can kick it off with a really basic introduction to it. So this is an article that I stumbled on ages ago in my early investigations into this space. And it’s by a group who were working in Senegal with local stakeholders there. The lead author is Patrick D’Aquino and we’ll put a link into the comments. And the policy that they’re trying to formulate with the local stakeholders is land use and land management. And the stakeholders in play that are represented in this model, or in a simulation, are farmers, who want to use the land for agriculture; breeders, who want to use the land to [00:29:00] raise livestock; fishers, who are just trying to catch fish there; and also migratory birds. So a non-human stakeholder.

And they brought together all of the knowledge that they had about this particular region in Senegal, so spatial data and weather data and everything that they knew, and they combined that with a multi-agent model using something called CORMAS. And they were trying to, in a sense, discover the optimal policy through which all of these stakeholders could share this region and not step on each other’s toes.

And the way they did that- and I know there was some scientific modeling that went into it- but what they’re featuring in this article is this process where they gathered representative stakeholders in the room from all these different groups. I don’t know if they had actual migratory birds there, but maybe somebody representing the knowledge of them and their perspective. And they worked with those people to design what they call a role-playing game [00:30:00] that modeled moving through the months of the year and the changing seasons, and all of the different ways that these different stakeholders wanted to use the land, and allowed them to model different potential policies or rules that they would agree on and abide by.

And in doing that, they found that it led to a better outcome in terms of a deeper understanding among those stakeholders, and also, I think, increased trust and increased willingness to share, because they could see that there was a possible world, a possible future, in which everybody benefited appropriately from the same scarce resource of the land.

Did that represent more or less your reading of the article?

Sidney: Yeah. I just want to highlight one really good line in it, where it talks about their model, and it is a good part of the introduction and discussion about fidelity: ‘The risk lies in the difficulty of reaching a model sufficiently close to the complex nature of reality without producing a device that is so intricate that it is no longer suitable to use directly.’ [00:31:00] That should actually be the title of this episode. It would break the character limit of every podcatcher that you use.

Hailey: Yeah, yeah, fantastic. And I mean, you know, the people that they’re working with were very knowledgeable people. Very savvy about their world, but weren’t necessarily people working with computers or computer models, or with a lot of experience in that field. The images they have of them playing this game are using pen and paper game pieces. They’re sitting in a mundane room, there’s not a sort of supercomputer in the corner crunching things while they’re doing this. And so that was part of their constraint, right? Being directly usable doesn’t mean that scientists experienced with these tools can use it; it means people who have never used a spatial model before can use it.

Sidney: Yeah. I think I want to challenge the idea that there wasn’t a supercomputer in there, because I think there was. So this is what we call endogenous design. Endogenous design, or I think they call it [00:32:00] self-satisfaction criteria, or is that something that I’ve written down for myself? The important thing is that endogenous design is design that comes from within the game, from within the simulation itself. And they do call it self-satisfaction criteria. They actually got the specialists together, the farmers together, and said, like, how do you get points as a farmer? What gives you a report? And-

Jason: What are your goals? What are-

Sidney: What are your goals? Exactly. Yeah. So they did that needs analysis. And I would say that that is putting the supercomputer in the room. These people were experts in their field, and you say to a breeder, how do we calculate success? And they said, well, we need access to fresh water no more than 800 meters from grasslands, and we should be no more than 800 meters from stations, and they should flood twice a year, or every year, but for no more than 20 days. And that is a very high degree of expertise, but a low degree of complexity.
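
[NOTE: A small sketch of what encoding one of those self-satisfaction criteria could look like, using the thresholds quoted in the conversation (800 metres, flooding for no more than 20 days). The data structure and function names are invented for illustration and are not taken from the CORMAS / SelfCormas tooling.]

from dataclasses import dataclass

@dataclass
class Parcel:
    dist_to_fresh_water_m: float
    dist_to_station_m: float
    flooded_this_year: bool
    flood_days: int

def breeder_satisfied(parcel: Parcel) -> bool:
    """Endogenous success criterion, roughly as stated by the breeders themselves."""
    return (
        parcel.dist_to_fresh_water_m <= 800
        and parcel.dist_to_station_m <= 800
        and parcel.flooded_this_year
        and parcel.flood_days <= 20
    )

print(breeder_satisfied(Parcel(650, 400, True, 15)))   # True
print(breeder_satisfied(Parcel(650, 400, True, 40)))   # False: flooded too long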

Jason: And there’s a sense in which the criteria that are meaningful for the farmer or [00:33:00] for a breeder are actually importing all of the relevant high fidelity modeling criteria, because those are the bits that count. Like the no more than 800 meters from grazing land and a source of fresh water, and aha! Or am I reading that incorrectly? Because this is interesting. It goes to the heart of the engagement. We’re acknowledging that the experience and expertise of people who are immersed in these worlds are in a sense the supercomputers that we’re bringing in, which, I mean, I think, you know, from a human centered design perspective is pretty characteristic of that sort of space.

I guess the challenge is, though, that while that experience is known to the individual, how do you get it extracted into a shared model? Like, one that everyone can agree on, that this is how the relevant dynamics work, and these are the things that are important. And making that decision.

So, and here’s what I’d be interested in from your perspective. You’ve worked in spaces where [00:34:00] the amount of relevant information that’s brought in is actually really serious. There’s not enough of it there, and you’re dealing with real world consequences. But you don’t have the time to import everything.

Yeah, every piece of data. And in a sense, I think this is probably, for some of our listeners- I’m not trying to say somehow that people who work in the facilitation space or game designers are responsible for life and death scenarios, but there is a sense in which the design decision is the same.

How do we create a context in which there’s enough prompting to enable meaningful action reaction without spending so much time and crunch that we’re actually trying to replicate the entire environment itself.

Sidney: Yeah, the Senegal example speaks to iterative design and debate where that’s how they unpacked that. They got a bunch of people together and they had the debate and the unfolding. Which is really important because I’m sure we have all been in meetings where there have [00:35:00] been debates that were ineffective. But because of the structure they put around the debates, the term they use, I love, I love quoting from stuff, ‘every participant himself acknowledged this analysis, even the park officials and farmers. Hence debates are no longer fruitlessly attempting to place responsibility and guilt, but instead trying to uncover the means for handling these matters.’

Jason: A hundred percent.

Sidney: And that is, I think, a really important part of it. People tend to become exhausted by long game sessions inherently and in terms of attention spans and stuff like that. So iterative design is really a strong way of doing this. We’re getting people together, solve a problem. Talk about why it did and didn’t work, go back, solve the problem again, talk about why it did and didn’t work, and be able to like break that up with structure and motivation, I guess.

Hailey: There’s a piece in here in the conclusions that I think really speaks to our discussion about fidelity as well as this discussion about iteration that I think is really compelling. In 4.1, [00:36:00] if you’re looking at it and they’re talking about how developing the role playing game in conjunction with the modeling tools was crucial to the process because they say it would have been physically impossible without computer simulation to play out the different scenarios selected by the stakeholders, and to observe their multiple impacts over sufficiently long time intervals.

So without the simulation tools, if someone goes, “well, what if we do this? What if we do that?”, that ‘what if’ questioning, which is so crucial to people arriving at new possible futures, means you have to recreate a new game on the fly out of paper and make up the different rules, and you just can’t do that in a timely manner. But if you just need to sort of dial the sliders up and down on your computer model and then play that out, then you can actually start to see that. And then they go on to say that it was important that they talked eventually more in terms of dynamics than in terms of just ‘if then’, right?

Yeah. You’ve gone right on the nose. Because if they [00:37:00] see, okay, we’ve run the model a hundred times. It’s not that we can do A and then B will always happen. It’s more that if we do A, B tends to happen. And so that is a way more robust, usable set of understandings than trying to always just make a chain of events happen if you do them in the right order.
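
[NOTE: A toy illustration of the “B tends to happen” point: run a stochastic model of a policy many times and report a frequency rather than a single predicted outcome. The policy, its effect size, and the randomness are all invented for illustration.]

import random

def run_once(policy_a: bool) -> bool:
    """One simulated run; returns True if outcome B occurs. Invented dynamics."""
    chance_of_b = 0.30
    if policy_a:
        chance_of_b += 0.35   # policy A raises the chance of B; it doesn't guarantee it
    return random.random() < chance_of_b

def tendency(policy_a: bool, runs: int = 100) -> float:
    return sum(run_once(policy_a) for _ in range(runs)) / runs

print(f"B happens in ~{tendency(True):.0%} of runs with policy A, "
      f"~{tendency(False):.0%} without")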

And I think that that paragraph is so rich. The last thing I’ll add is that because the players, the subject matter experts, participated in the making of the model, they no longer conceived of the model as a magic black box capable of seeing into the future, which can also be very suspicious. Right? If someone comes along and says there is this black box that sees into the future, people know not to believe that, or not to trust it, if they didn’t help make it. And so yeah, all of that sort of added up to this beautiful shared understanding that, yeah, like you said, moves from blame through to a shared mental model and a shared pathway forward. [00:38:00]

Sidney: Has Amble discussed like black box design and like what that term means?

Hailey: Probably not in the way you’re thinking. I think it came up a little bit, but yeah, please go.

Sidney: So, black box design just means that there are inputs and there are outputs, but you kind of don’t understand how one gets to the other. It is considered a box that is opaque, and therefore you can only view its inputs and outputs. A really good example of it is the YouTube algorithm. You know, people make videos and then videos are recommended, and no one really knows how or why they are; they’ve just made a lot of guesses. But the thing that I want to talk about with that black box system, and why it’s important to remove it, is a concept called ‘robustness to distributional shift’.

Wow. That is…

Hailey: I’m really going to need to break that one down.

Jason: Great!

Sidney: That is a wanky term. Okay. ‘Robustness to distributional shift’ is a term that’s borrowed from artificial intelligence design. And it’s based on an idea called ‘the tea problem’, which is ‘I can make a robot that makes me a cup of tea, [00:39:00] but that doesn’t mean the robot can make a cup of tea’. Because as soon as it goes to your kitchen Jase, you have a different kitchen layout, which means-

Okay, I’m going to use the word trivial- and every AI designer out there, I’m not saying it’s, you know- it is a trivial task to create a robot that can make tea in one kitchen. It is a very difficult task to make a robot that can make tea in any kitchen. And the difference is I can teach a robot to reach three meters past the door, 20 centimeters up at 10 degrees, to find where I keep my mugs. But I can’t teach it to identify where mugs are usually kept in a kitchen, like culturally, where do we keep mugs? And then, what does a mug look like? Or maybe you’re one of those people that don’t use a mug, like you use a cup without a handle. So now you’ve got to identify different, yeah. So robustness to distributional shift is the idea that it keeps performing to a similar standard under a change of environment, a shift in the distribution.

Jason: So across different [00:40:00] contexts, the model continues to perform particularly well.

Sidney: Exactly, exactly. So where this model is really good, as you said Hailey, and where I would say everyone who is creating simulations should be thinking, is: how robust is it to distributional shift? If we change the sliders and ask it to work on a global scale, or if we change the sliders and ask it to work for a doubling of population, or to work over 20 years instead of three years, does it work?

And that’s a really good thing to know about your simulation. Because if it’s not robust, then you just need to understand that you’re only answering one very specific question. You’re not giving yourself very broad rules to live by. And so tying back in, Jason, to the question about like, how does this work in policy; models that are incredibly robust to distributional shift are what’s helpful.
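
[NOTE: A sketch of what “changing the sliders” to test robustness could look like: re-run whatever simulation you have across shifted settings (doubled population, a 20-year horizon, a different scale) and check whether its answer drifts. The placeholder model, the shift values, and the drift tolerance are all invented for illustration.]

def simulate(population: int, years: int, scale: str) -> float:
    """Placeholder model: returns some outcome metric for a given setting.
    Invented dynamics; note it ignores `scale`, which is exactly the kind of
    blind spot a shift test can surface."""
    return 0.8 - 0.00001 * population - 0.01 * years

baseline = {"population": 10_000, "years": 3, "scale": "district"}
shifted_settings = [
    {"population": 20_000, "years": 3, "scale": "district"},   # doubled population
    {"population": 10_000, "years": 20, "scale": "district"},  # 20 years instead of 3
    {"population": 10_000, "years": 3, "scale": "global"},     # different scale
]

baseline_result = simulate(**baseline)
for setting in shifted_settings:
    drift = abs(simulate(**setting) - baseline_result)
    verdict = "robust" if drift < 0.1 else "NOT robust"        # arbitrary tolerance
    print(setting, verdict, f"(drift={drift:.2f})")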

A really good example of this is like a citizens jury, right? [00:41:00] You get a group of people in for a citizens jury, they’re given some concepts, they’re given some blank-page report process. The idea is you get a bunch of people in who aren’t experts in a field, but live in a community, and then you get them to produce a report that goes into your stakeholders. Or into your decision makers, sorry, they are stakeholders. Good sampling that gets you a good group of people is what makes that robust to distributional shift. The idea being: if I had 10 people in this room, do I get similar answers to what I would if I had every person in this community?

And so if what you do is select everyone who is called Gary, you get six Garys in a room. You do not have a very robust model. Because Gary is going to be incredibly different to everyone else in the community. But if you get a nice spread, you get good sampling data, you get a good handful of people that are representative of [00:42:00] your community. Then all of a sudden that response that you get from them is very similar to what you would have if you had double the people, triple the people, everyone in the community, in that one room.
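
[NOTE: A small demonstration of the sampling point: a random sample of ten tends to land much closer to the community-wide answer than six systematically skewed “Garys”. The community data and the Gary bias are fabricated purely for illustration.]

import random

random.seed(1)

# Fabricated community: each person's level of support for a proposal, 0..1.
community = [random.random() for _ in range(10_000)]
garys = [min(1.0, s + 0.4) for s in community[:6]]   # six Garys, who all skew one way

population_mean = sum(community) / len(community)
random_sample = random.sample(community, 10)
sample_mean = sum(random_sample) / len(random_sample)
gary_mean = sum(garys) / len(garys)

print(f"whole community: {population_mean:.2f}")
print(f"random sample of 10: {sample_mean:.2f}")
print(f"six Garys: {gary_mean:.2f}")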

Jason: Yeah. I mean that distinctly speaks to the common heuristic that’s often used in large group facilitation work, or in collaborative engagement with communities, that you do want requisite variety in your participant group, in the folks that you’re bringing in, to make sure that you’re covering all of the perspectives that need to be represented. That sort of makes sense as a technique for making sure that your model is consistent for the diversity of folk that are going to be in a particular response. Hailey?

Hailey: Yeah, I just want to take a few steps back to recap this territory of the ‘citizens jury’ [NOTE: Hailey mistakenly says “journey” but means “jury”], because I think it’s going to be a fruitful example to move into the stuff that we want to talk about. So ‘citizens jury’ is a community engagement technique that you use if you’re trying to involve [00:43:00] stakeholders in the community in a policymaking process. So it might be something like, “look, we have a problem in this community. There are too many traffic accidents in residential areas. Too many people getting injured on their way to school, that sort of thing. We need to do something about that.”

And rather than just asking the civil engineers what they think should happen, we recognize, and you know, not that they couldn’t come up with a good solution, but we recognize that this is as much a social problem, that there may be factors at play here that we can’t see, that we really want to involve everyone in the community, and also that there are behaviors that will need to change; people will need to act differently, most likely, in order for a solution to work. So we’re going to deputize this group of people from the community with the sort of right and responsibility to create a proposed solution, which will be sent back to the decision makers to potentially implement.

[00:44:00] And that’s often an interesting point around power dynamics and stuff, which is maybe out of scope for this, about how much will that be fully respected or taken on board and how much audience is it actually gonna get by decision makers? How much are they bound to the outcomes of the citizen jury. And a lot of the vibrancy and effectiveness of a citizen jury really depends on that step.

And then another important piece of citizen juries is yeah, are they representative. So do they have the full cross section. Another term for this is sortition; making decisions by way of representative groups. So again, what are all the demographics we kind of need to cover off and can we find 150 people who then among them represent a reality.

But then once you’re in that space, how do you hold those people through the process of arriving at what they think the preferred outcome is. And so when we’re talking about their model and we started to say, “oh, and their model is appropriate or diverse and robust”, I think we were implying in a sense, their shared mental model or the shared understanding that they kind of [00:45:00] arrive at through the practice through whatever actually happens in that citizen jury space. The metaphor of a jury is appropriate because they’re often expected to be in a room together for an extended period of time. Usually a few days. Maybe if it was a big enough issue and there was enough political will we might be able to get those people in the room for a week if we thought we needed that.

And, yeah, so to sort of tie this into what we’re talking about today, how might we actually upgrade the citizen jury through some of these low fidelity, high fidelity simulation tools that we’ve been talking about? What does that look like? You know, how do we get it out of the air combat theater and into the local community policy?

Jason: A hundred percent.

Sidney: Yeah. So this is where, like, I think that’s such an amazing question. I think if we could solve that between the three of us right now, I think we’d…

Hailey: We can gesture at the possibility and inspire others. Yeah.

Sidney: Absolutely. I think that at best we can vaguely point our binoculars.

Jason: And [00:46:00] as gamers, it’s the same dynamic, the same responsibility we’re taking on board all the time for our players. Especially in the TTRPG kind of space, where as the GM or the referee we’re making active decisions all the time about how much context, how many rules, how do we enable our players to do the stuff that they want to do so that they have enough of a shared model?

Those sorts of things we actually are negotiating all the time. And there are different styles of GMing, different styles of refereeing, all this sort of stuff, but it all exists. And people are practicing this kind of trade-off all the time. And it’s really interesting to me to explore it, because it’s a very similar set of decisions that are being made.

Hailey: Well, and what that reminds me of, Jase, is that it’s rarely the case that there’s a clear solution or framework or methodology; it’s usually more a basket of capabilities. Right. And so things like understanding action-reaction loops and how you as a facilitator draw those out, understanding what it is to be [00:47:00] a, what’s the term, you know, that you wanted to use, like the fair referee…

Sidney: Oh yeah. A constructive-

Hailey: -adversary, how to play that role, you know?

Sidney: Yeah.

Jason: Let’s go there. What is a constructive adversary? We’re talking GMs and you know, like…

Sidney: So, a constructive adversary is working with someone to achieve their goals while working against them in the fiction of your simulation. So an example of that is, if I’m the GM, I am controlling the goblins who are throwing rocks at you from above the hill. And I’m theoretically adversarial to you, but as a good GM, I have goals that are, if not shared, at least broadly coincident with yours. Which is, you know, I’m here to see the characters be challenged and I’m here to find a closing loop to the story of ‘where are the missing children going from the village?’ or whatever.

Another example of it is, in fact, I have a really good story about this. This one went to Twitter, so I don’t know if either of you saw it on there. But when I was doing [00:48:00] air combat, there was a guy who flew the baddies.

Jason: You’re talking about a previous job that, you know, this is in air combat simulation.

Sidney: Yes. Yeah. When I was doing air combat simulation, there was a guy who flew as the baddie who would get very, like, bored. And when he got bored, he would step outside the lines of being a constructive adversary and he’d be like, “oh, I’m going to just defeat their tactics”. And that doesn’t help anyone, because what we had was a very junior pilot up there who was trying to learn the processes, and prove to himself the processes, of what a squadron leader or a flight leader should do with four aircraft, and he was very new to it. And so, like, of course he’s not gonna be able to respond to dynamic situations. He’s not at that point yet. But this baddie, he was like, “well, you know what, I’m going to be interesting. And like pitch down from 35,000 feet down to 2000 feet, get under the radar horizon,” like do all this sort of stuff. Do a split S maneuver, inverted.

And then, you know, he gets the kill and he’s like,” yeah, hunter three kill here. [00:49:00] Like, aren’t I awesome. Check out how awesome I am at being a pilot.” But in reality, that doesn’t achieve anything in terms of the model, in terms of the simulation. It doesn’t answer the questions that we set out to answer.

And so being a constructive adversary is really important in any simulation because all of these simulations- I think this is something we haven’t really said explicitly, but we’ve been talking around- all of these simulations require some form of conflict, some form of challenge. In the Senegal example, it explicitly pits them against each other through points. And to get those points, they need access to resources, and those resources are limited. So there is inherent conflict. In XCOM, mission time is on the clock and soldiers get injured, and that inherently provides conflict. In Apocalypse World, there is always scarcity and there are no status quos. So it’s there in all of these situations that we’re talking about, and it’s because human stories are [00:50:00] built on different types of conflict, or overcoming conflict.

I don’t want to talk too broadly and take up too much time on that. But I think that it is important, when we’re building these simulations, whether it’s a citizens jury or an endogenous, calculable system like the Senegal project, or policy development for really anything, that there is something there that challenges the planning and says, ‘Okay, but now this. Is it dynamic enough to respond to this situation?’ In the military these are called injects, as in, you inject a problem. So-

Jason: If I can offer something just for our listeners, and Hailey, I’d be interested in your perspective, but what seems to be the parallel, for those of you who have ever worked in the Accelerated Solutions Environment, Capgemini’s Accelerated Solutions Environment, is the focus rounds of stretch scenarios, where you might have a group of business leaders come up with a particular solution, and then you put it through a stretch scenario.

Can you do it with half of the resource? Can you do it in a quarter of the [00:51:00] time? What happens if we get policy shift? What happens if our strategic direction changes? Is this the sort of thing that you’re-? Okay.

Sidney: Exactly, yeah.

So like, my example is you put 13 aircraft in the air, the first one goes onto the refueler, breaks their probe off in the refueler, and all of a sudden you’re down 80,000 pounds of fuel. Like, what now? It’s a valid question, what now?

And I say valid because that has happened to me in reality. These stretch scenarios exist. And so that’s how we can build robustness as facilitators. We can figure out good injects, figure out good stretch scenarios and they need to be-

Jason: -Constructive!

Sidney: God it’s important! They need to be so constructive. They need to be, you are playing the constructive adversary. If I go in and say, “Jason, Hailey, you built this really, really good air combat plan. Can I ruin it?” Of course I can. Especially if I’m playing the universe, I just say “all of your aircraft are broken. What now? All of your bomb fixes are broken. Like that’s it, no carry weapons.”
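
[NOTE: A sketch of the inject / stretch-scenario idea: take a plan, apply a bounded perturbation, and re-check whether the objective is still achievable. The quantities, feasibility thresholds, and injects are invented for illustration; the point is that a constructive inject still leaves the plan something to respond to, unlike “all of your aircraft are broken”.]

from dataclasses import dataclass, replace

@dataclass
class Plan:
    aircraft: int
    fuel_lbs: float
    time_hours: float

def feasible(plan: Plan) -> bool:
    """Invented feasibility check for the mission objective."""
    return plan.aircraft >= 8 and plan.fuel_lbs >= 350_000 and plan.time_hours >= 4

baseline = Plan(aircraft=13, fuel_lbs=400_000, time_hours=8)

constructive_injects = {
    "refueler probe breaks, lose 80,000 lb of fuel": replace(baseline, fuel_lbs=baseline.fuel_lbs - 80_000),
    "half the time window": replace(baseline, time_hours=baseline.time_hours / 2),
    "two aircraft unserviceable": replace(baseline, aircraft=baseline.aircraft - 2),
}

for name, stressed in constructive_injects.items():
    outcome = "plan still works" if feasible(stressed) else "plan breaks: what now?"
    print(f"{name} -> {outcome}")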

Jason: Sid, this is fascinating because it does speak to that [00:52:00] responsibility. Because there’s a massive normativity there. As a constructive adversary, in a gaming scenario as a constructive GM, it’s important that we challenge our players or that we’re injecting enough reality into something for people to make meaningful decisions.

I guess the weight that it puts on the designer is, how much of that needs to be in there. What is conceivably constructive? At what point is this challenging our players versus, making them terrified? What’s too easy? What is a challenge that’s too easy for them, and they’re not really going to engage with it cause they just roll a few dice and surmount the challenge. And this is a very normative element of game adjudication or context creation.

I guess what I’m trying to get at is that it’s an important feature. You mentioned how important it is to be a constructive adversary, and how conscious we need to be if we are responsible for doing this sort of work and engaging well, or effectively, or creating game contexts that are valuable for people. [00:53:00] And because of that, there’s a real responsibility. Like, what are the things that we could think about?

What I’m hearing from your real-world experience is things like being consciously aware of the sorts of feedback loops that you’re providing, in order to give that sort of action and reaction. Being highly aware of the satisfaction criteria for various people within the context of the game. Providing enough challenge by being a good constructive adversary. All these things are sort of guides to action when trying to create a good game or a good simulation. Yeah, it’s fascinating.

Hailey: I have a way to frame this up, Jase, that’ll maybe also bring us to a satisfying conclusion. Let’s imagine we’ve got our citizens jury. We’ve got a relatively ideal, receptive decision-maker environment, tick. We’ve got our selection of participants and some rad facilitators and a decent but realistic budget. You know, I’m being a constructive adversary here and saying your budget’s modest: not [00:54:00] tiny, but not huge either.

What would we do? What would be on our wishlist of things to bring into that space if we wanted to elevate the citizen jury? Maybe we can just use the example of reducing residential traffic accidents in Fitzroy North, where Jase and I both live. Or maybe it’s metro Melbourne, where we all live.

What would you bring in? What would we need to make that scenario work based on everything we’ve discussed today?

Sidney: So I think first and foremost, the one that I want to speak to- and I want to speak to it because I think it’s the one that we are all the most invested in- is you want a good facilitator in the room. You want someone who can identify engagement, can respond to it, and can facilitate engagement. Someone who understands process and is going to help people through the modeling is, I think, critical.

Hailey: I’d say a whole facilitation team that’s had many, many days to plan for this exercise. Yeah, as a minimum.

Sidney: Absolutely. Absolutely. Absolutely. And like that’s core in a way that we [00:55:00] get to identify as three people who have professional interests in facilitation.

Jason: Yeah, and part of our challenge in this sort of space- and one of the deep interests of certainly Hailey and myself, though we haven’t talked about this explicitly, Sidney- is the idea that facilitation environments are often found in very, very specific contexts.

So in your context of the military, where it really counts to give emerging top gun pilots the ability to learn how to engage in effective air combat, you find really good modeling tools and really good facilitators like yourself who act as constructive adversaries to generate good outcomes. In the contexts that Hailey and I have worked in before, the big business end of town, you have capabilities dedicated to convening these sorts of conversations. Our challenge is: how do we begin identifying explicitly the sorts of tools and [00:56:00] capabilities that can become more accessible to people in the community? And you know, I think the Senegal example is a good one, but again, that was reasonably well funded.

Hailey: Well, yeah, and we don’t know just how much these particular researchers are very special people- very rare in terms of their dedication and capability- or just how much funding they were able to access to make this happen. But I do know that the multi-agent system modeling software they used is, I think, open source, or at least purchasable off the shelf. It’s not deeply proprietary. Like, these are things that can be done. I’ve been on some projects where people were making sophisticated models, and I know some folks who do modeling science; it’s not necessarily completely arcane and inaccessible, you know. So that would be on our wishlist, the next place I wanted to go.
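
(Again just to make it concrete: the sketch below is a generic multi-agent toy in Python, not the actual toolkit the Senegal researchers used, and every name and number in it is invented. It only illustrates the kind of dynamic this sort of modeling software captures- several agents drawing on a shared, limited resource, with the outcomes feeding back as points.)

    # Illustrative only: agents competing for a shared, limited resource.
    import random

    class Herder:
        def __init__(self, name, herd_size):
            self.name = name
            self.herd_size = herd_size
            self.points = 0

        def demand(self):
            return self.herd_size  # say, one unit of pasture per animal

    def run_season(agents, pasture_units):
        random.shuffle(agents)  # who reaches the resource first varies
        for agent in agents:
            taken = min(agent.demand(), pasture_units)
            pasture_units -= taken
            agent.points += taken  # fed animals become points
        return pasture_units

    agents = [Herder("A", 30), Herder("B", 45), Herder("C", 25)]
    leftover = run_season(agents, pasture_units=80)  # total demand (100) exceeds supply (80)
    for a in sorted(agents, key=lambda a: a.name):
        print(a.name, "scored", a.points)
    print("unused pasture:", leftover)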

Jason: Yeah, a hundred percent.

Hailey: I know that the City of Melbourne uses sophisticated traffic modeling software that a group here makes. Whenever they want to put a new highway in or test a new development, they can plug it into this system.

So this stuff [00:57:00] exists, right? So we’ve got a facilitation team in the room. We’ve also got our modeling team in the room, and we’ve also got our game design team in the room, you know? So not just the facilitation- that effective hosting and thinking about the process as a whole- but also the people who can, as they did in Senegal, design a role-playing game in step with the stakeholders that integrates the model but also speaks to the larger goals of the process.
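
(One last sketch, since the idea of a game that integrates the model is easy to show in miniature. Nothing below corresponds to a real traffic model or a real jury process; the collect_decisions hook and the percentages are stand-ins. It just shows the loop: each round, in-character decisions go into the model, and the model’s outcomes come back to the players before the next round.)

    # Illustrative only: a round-by-round loop wiring player decisions to a toy model.
    def toy_traffic_model(decisions, state):
        """Stand-in for a real traffic model: each calming measure chosen by
        the players shaves an invented percentage off the accident count."""
        reduction = 0.05 * len(decisions.get("calming_measures", []))
        state["accidents_per_year"] *= (1 - min(reduction, 0.5))
        return state

    def run_session(rounds, collect_decisions):
        state = {"accidents_per_year": 120.0}  # invented baseline
        for round_no in range(1, rounds + 1):
            decisions = collect_decisions(round_no, state)  # players act in character
            state = toy_traffic_model(decisions, state)     # choices become outcomes
            print(f"Round {round_no}: ~{state['accidents_per_year']:.0f} accidents/year")
        return state

    # Scripted choices standing in for a live jury.
    scripted = {
        1: {"calming_measures": ["speed humps"]},
        2: {"calming_measures": ["speed humps", "40km/h zone"]},
        3: {"calming_measures": []},
    }
    run_session(3, lambda round_no, state: scripted[round_no])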

Sidney: And I think being iterative is something that’s really key to that. One thing that having them all in the room does is it inherently makes it iterative, because you can turn to the modeling team and say, “Look, what we’re finding is that your model isn’t providing the results that we want, that we expect, that we find to be real.”

Jason: I see it. I can, I can hear it from the participants. Yeah. Yeah.

Sidney: Exactly. Yeah. Yeah. So the participants say, “Well, this doesn’t make sense to me. Like, I don’t think that’s right.” And once people lose faith in the model, they lose engagement, and that’s it. You’ve lost that input. [00:58:00]

Hailey: And that may be because yeah, something was missed and we need to have a modeling specialist who’s open to that- that they missed something. Or it may be because actually, no, the facts are counterintuitive and we need to have a modeling specialist and a facilitator who together are able to create the conditions to bring the stakeholders along into that elevated understanding. Right?

Sidney: Right. Right. Which is that constructive path of the constructive adversary as well, yeah. God, this is so great. I love this. Sorry, I’m so glad I’m on this two-hour podcast. Welcome to act two. Ah…

Jason: Sidney, I reckon that’s the thing; you’re probably right there. The thing that I’m stoked on is just how deep this sort of investigation can be, and how rich this field is. And hearing from your experience how genuine the crossover between gaming and facilitation is- there is a deep, deep crossover there that we get to keep exploring.

Are you going to come back on the podcast again, Sidney at some point? Please?

Sidney: Yeah. Yeah. Well, now I’ve got this, [00:59:00] ah, I got this leverage. I’ll be like doubling my fee…

Hailey: Luckily two times zero is zero. So.

Sidney: Perfect.

Jason: This is awesome, Sidney.

Hailey: Yeah, I am very thrilled, ’cause it feels like we really went on a proper amble, hiking through some brush and over some craggy cliffs, and then arrived at this outlook. There’s the vista we’ve been looking for at the end there. Which is a great place to end, knowing that there’s somewhere new to go.

Sidney: Yeah, I think this really shows us that this isn’t a solved problem. This isn’t something that we’ve got in a filing cabinet ready to go. As engagement facilitation professionals- including those people who are GMs who are really committed to it- this is still a thing that we’re learning.

Hailey: And there are hidden gems like this example we’ve been talking about, which is more than a decade old already. It’s been happening all around the world, in these sorts of pockets. And the potential is for it to become less obscure, less basement-dwelling. Kind of similar to how Dungeons and [01:00:00] Dragons has really come out as this global phenomenon, and now everyone’s heard about it, you know. What’s the potential for this mode of community engagement and facilitation? That’s what is really exciting me.

All right. Well, yeah, like Jase said, we’ll have to have you back. Meanwhile, thanks to our listeners for listening. As always, you can find the podcast itself at amble.studio/podcast.

And if you so choose, and you’d like to buy us a coffee for our efforts, we would love that; it helps us keep making free content. You can find that on that page. And you can find the podcast in any podcast app of your choice. If you can’t, please let us know. You can let us know via the website or on Twitter at @TheAmbleStudio. We also have a presence on LinkedIn. So if you want to follow along and get updates, know when the latest episode is going to be released or when new games are being announced, or anything else we’re doing, that’s where you can find us. And we would love to hear [01:01:00] from you and hear about your ideas. So, bye for now.