OrgDev with Distinction

Evidence Based Practice: An Introduction with David Wilkinson, Oxford Review - OrgDev Episode 19

Dani Bacon and Garin Rouch Season 2 Episode 19

We'd love to hear from you so send us a message!

How do we take an evidence-based approach to organizational dynamics and leadership? Join us as we sit down with David Wilkinson, Editor-in-Chief of the Oxford Review, to explore the myths that pervade organisational life. From misconceptions about diversity, equality, and inclusion to the intricacies of leadership styles and decision-making processes, David offers a wealth of knowledge and insights. This engaging conversation is filled with thought-provoking discussions and practical takeaways that underscore the significance of evidence-based practices in driving organizational success. Tune in for a stimulating exploration of leadership and organizational effectiveness in today's ever-evolving landscape.

Dr. David Wilkinson
Editor in Chief, Oxford Review

David Wilkinson serves as the Editor-in-Chief of the Oxford Review, recognized globally as an authority on navigating ambiguity and fostering emotional resilience. With an extensive background in academia, David imparts his expertise at prestigious institutions such as the University of Oxford, Cardiff University, and Oxford Brookes University School of Business. His consultancy and executive coaching services have left a mark on diverse organizations including Schroders and Royal Mail, while his impact extends to governmental entities worldwide, including the UK, US, and Saudi Arabia.

Thanks for listening!

Distinction is an evidence-based Organisation Development & Design Consultancy designed to support modern, progressive organisations to bring out the best in their people and their teams through training, consulting, and coaching.

Our professional and highly skilled consultants focus on delivering engaging, results-focused and flexible solutions that help our clients achieve their business objectives.

Find out more at https://distinction.live/how-we-can-help/

💡 Stay Connected:
Looking for a consistent source of leadership & OD tips? Subscribe to our weekly newsletter by clicking the link below and receive valuable leadership tips directly in your inbox:
https://distinction.live/keep-in-touch

We'd love to connect with you on LinkedIn:
linkedin.com/in/danibacon478
https://www.linkedin.com/in/garinrouch/

Speaker 0: Welcome to the OrgDev Podcast. So did you know that 70% of change initiatives fail, and that students retain only 5% of information from lectures while retaining 90% when they teach others? These and other, can I say, bullshit myths? Yep. Are spouted with great certainty and little challenge in organizations across the world.

And much of this is down to bad theories based on intuition and backed up by sketchy research. There is so much quality research out there, but it's stuck behind paywalls, forcing many people to rely on sources like the Harvard Business Review, company surveys, and McKinsey. Luckily for all of us, one man is here to save us from ourselves and is on a mission to help us do and be better. So David Wilkinson is the editor-in-chief of The Oxford Review. The Oxford Review is dedicated to putting the best possible quality research and insights into our hands, but in an accessible and summarized way.

Now, full disclosure for this episode: Dani and I are fully signed-up members of the Oxford Review, aren't we? And it actually informs our work on a daily and weekly basis, doesn't it?

Speaker 1: It does. Absolutely.

Speaker 0: So David has an incredibly interesting and distinguished career. He's been a lecturer at the University of Oxford for 29 years and has lectured at other universities, including Oxford Brookes. He's also been involved in financial services leadership and management consultancy at Schroders, and he's been head of professional development at Cranfield University. And, fascinatingly, he started his career as a police officer and actually did 2 years in counterterrorism in Northern Ireland, but I'm sure that's probably another episode we can invite him back on to talk about as well. And his experience is underpinned by extensive professional and academic qualifications, and he has a PhD on the primary mechanisms of organizational culturalization.

Is that right? That's cultural learning, isn't it?

Speaker 2: Culturalization. Yeah.

Speaker 0: Thank you. We're recording this on a Monday morning, so once the caffeine kicks in, I can say that. But that's cultural learning, for those of us who aren't familiar with the term. And he's also published the book The Ambiguity Advantage, which is a fascinating dive into how great leaders can use it.

Brilliant. So welcome, David. Thank you so much for joining us.

Speaker 1: Lovely to have you here. So to kick us off, why don't you tell us a bit more about the Oxford Review? What does your role as editor-in-chief involve?

Speaker 2: Let me first explain what the Oxford Review is there for. So I set it up, and it was an accident, like all good things. Yeah. I hadn't intended to set this up at all. I was doing something else.

I was in the middle of writing a textbook and I took a sabbatical. But as an academic, you also do quite a lot of consultancy work in organizations, so I had to contact people to say, look, I'm not working for the next 6 months. I've gotta finish this thing. The publishers were beating me over the head.

It'd been, like, years, and I hadn't done it. But I also realized that as a consultant, if you're not in people's view in organizations, they've got very short memories and, you know, 3 weeks later they go, David who? So I needed to keep myself front of mind and present with them. So I just put up a little email list and sent out a load of emails basically saying, look, I'm away for the next 6 months. I'll be back.

I haven't gone. And then every couple of weeks, I'd just send out an email saying, because it was a textbook, I've found these bits of interesting research that I think you'd be interested in, because most of the people who employed me in these organizations were leaders and people in HR positions and the like. And I got a little bit of traction. I'd get the odd email back saying, well, that's really interesting. Thank you.

And that was it. So 9 years ago, just before Christmas, I kinda looked in the bank account and realized I had to get back to some useful work. And I sent an email out on Christmas Eve just saying, I'm back in the New Year. If you've got any interesting projects for me, you know what I do. I do the uncertainty thing, you know, da da da.

And at the bottom, I just put this little PS: I'll stop sending these emails to you every couple of weeks, to clear your inbox. I didn't think any more of it. And I did the usual man shopping on Christmas Eve, went into town in Oxford, and tried to get all the things that they'd be marking down on Boxing Day. That is really good.

I mean, the pressure's on at that moment because you've got to get the Christmas presents. Anyway, so I then met some friends in a restaurant in town, but I was there early. So I grabbed a mulled wine and looked at my phone, and I thought I'll just go through it. And there were, like, 300 emails in my inbox. And I thought, out-of-office replies, everybody's off to Christmas parties and everything because it's Christmas Eve.

And it wasn't. It was all these people going, don't stop the emails. We love them. I was like, what? This is how sad I am.

The day after Christmas Day, so Boxing Day, I sent out another email just saying, yeah, okay, but I've gotta eat. Are you prepared to pay? The Oxford Review was set up on January 2nd, 9 years ago, with 200 subscribers.

We didn't have a website or anything for, well, another 3 or 4 months, and it's just grown and grown and grown. And the main idea of it really is, as you were saying, Garin, about getting the research out. There's masses of really good research that organizations, leaders, and managers could use, but it's locked away in journals, in academic papers. Academics don't exactly write for mere mortals. I've always been on this kind of quest to write in a way that people understand anyway.

So those two things just came together in that moment, and I thought, right. Okay. Let's do it. And it's just grown. We've got all sorts of organizations and consultants and all sorts that are now members.

We do a lot with leadership development programs. Quite often, what they'll do is they'll put their leaders onto the Oxford Review as members.

Speaker 1: And how many papers do you look through to produce the Oxford Review? Because you produce, like, a journal each month, don't you, with a kind of summary of key papers?

Speaker 2: Right. So the areas we look at, we confine ourselves to anything to do with people and organizations. Now, more generally, outside of that, if you just look across all of the research, to do with medicine, engineering, whatever it happens to be, right across, there are roughly about 150,000 papers published every month, and that increases at a rate of about 9% per year. Right?

Wow. So that's a lot. Now in our area, which is the people and organization piece, the leadership, management, organizational development, work psychology, decision making, all those kinds of things, that equates to roughly somewhere between 300 and 400 papers being published every 24 hours. Yeah.

Okay. So it's more like trying to hold back the deluge than anything else. So we're not short of material. What we've got really good at is, well, we've got a set of criteria. I think this is the best way of explaining it.

So we've got this set of criteria that we use that helps us strip through that mass. The first criterion is that it's quality research. So, you know, it's not just somebody's ideas that fell out of the shower in the morning, and there's a lot of that being published. The second big criterion for us is that it's of interest to our members, and as you know, we keep quite close contact with our members, and they tell us what kind of projects they're working on, what they're thinking about, what they're doing. So that informs us as to what we're looking for.

So we've got this kind of algorithm that goes off looking for certain keywords and things. I'm really keen on keeping people up to date with the research, so they're not going off theories and things that are from 30 years ago. And so the vast majority of research briefings that we send out are based on papers that have only just been published in the last few days or weeks. And then the last thing is that the research is practically useful.

So you've gotta be able to look at it, and it's not high-level theoretical stuff that people are squinting at and going, interesting, but so what, what am I gonna do with it? You know, people must be able to see a direct relevance. So that helps us narrow it down. On a daily basis, once our algorithm goes through it and we strip through it, I probably look at, I don't know, 40, 50 papers a day. And out of that, I'll maybe choose two, and then we'll write those up.

And then with the research briefings, it's not just about that particular piece of research. As you know, in the research briefing, what we do is we then expand out and say, look, these are the other interesting papers around this area as well. So it's a wider piece. So, yeah, I would reckon for any one research briefing that we write, we've probably looked at 20 or 30 studies for that one paper.

Speaker 0: Can I ask one question just for me? This is a question I've always wanted to have answered. Because I see academic publications that come out, what is the right way to read a paper?

So if something comes out, you've got the introduction, you've got the findings, the conclusion. Do you read it in an order, like, to sift out what you need? What's the secret sauce to this?

Speaker 2: Yeah. So the title, usually. Not always, but usually the title's a good indicator. It'll tell you where it's going. The journal's important too, because the journal, you know... I'm just looking.

I've got a paper in front of me actually that's from The Leadership Quarterly. You know, it's pretty obvious what that is. So if I'm looking for stuff on leadership, I know which journals to go to. Anyway, so there's the title, then there's the abstract, and that will give you a quick overview. And usually, the bit of the abstract that I'm interested in is the first couple of sentences and the last couple of sentences.

So the first couple of sentences set the context. They tell you what this is about, and there will be something in there. There'll be a sentence in the abstract somewhere saying, this is what the study was focusing on. And then the last couple of sentences, hopefully, will tell you what they found. Hopefully, because a lot of researchers don't like to give the punchline away.

Like a thriller, then. It is. Yeah. So, in fact, let me just see what this one's saying here. I mean, I don't even know why it's popped up during the podcast.

It's just kind of come up. So this is about the context deficit. I haven't read this, right? This has only just popped up: the context deficit in leadership research.

So I'm quite interested in this anyway. So there's something about, you know, leadership research. So: complementary evidence from narrative literature reviews, meta-analyses. So I'm looking at the quality of the research now. Higher-order things are better in terms of generalizability, I suppose.

So things like a systematic review, a meta-analysis, and a lit review, they bring the research together. Systematic reviews are much more systematic about it, but I know now that what I'm looking at here is something like a synoptic piece, I suppose, which is handy. And then if I go to the last sentence or so, we'll just go to the last sentence. I think we need to go back to school. Improved contextual appreciation can be facilitated by consulting contextual success stories, embracing... and so on.

Right. Okay. I don't need to go much further. I'm starting to understand what they're getting at here: they're saying quite a lot of the research seems to be taken out of context and that they're not putting it back in. What they're trying to do is be too generalizable.

I get that, right. Now, from an Oxford Review point of view, I'll have a quick squint through it and see if there's anything I can pull out that's of practical usefulness to, probably, leadership developers looking at this. So I then go straight down to the findings and have a look at what was going on with the findings. Yeah. And basically, what it's saying, actually, just looking at that, actually, this is interesting.

What it seems to be saying is a lot of leadership theory and research doesn't mention the context that the organization was in, that the research was done in. So, you know, is the organization successful? What kind of sector is it in? Is it failing? All of those things are actually important contextual factors.

Now that's fine. It probably wouldn't make it into an Oxford Review research briefing because it's not of practical use; it's on the research side of things. But it's something for, you know, one of my students, or I put it aside and it may crop up in another one. I'd just say, okay, there is a criticism about leadership research.

So, sorry, going back to just answer your question again: title, abstract, first couple of sentences, last couple of sentences, and then I go down to the conclusions and the findings at the bottom. And then if I'm interested from there on in and I'm gonna do something with it, I'll then start going back up. I'll have a look at the method.

I'm interested in, you know, is it a big study, or was it just a survey of 3 people in some kind of out-of-the-way place, and it's just not very valid.

Speaker 0: I'm so glad I asked. And I've also become even more appreciative of the work you're doing. Because there is so much out there, isn't there? It's like having that experienced eye upon it, isn't it? Whereas the layperson may not notice these things.

Speaker 1: How do we excite more people about evidence-based practice? Because you've got some people who are almost evangelical about it. We need to do it. And then you've got some people who just aren't engaging with the idea at all. How do we convert them?

How do we get more people to get involved in using evidence?

Speaker 2: A really good question, and it's one that I'm constantly grappling with. I think one of the problems with evidence-based anything is that the people who are driving it, sorry, my colleagues are gonna kill me, is that academics drive them, and they run them, and they run them from an academic point of view. And it's kind of the wrong way around.

So when you look at the kinds of theories, and I did this analysis, I've got a blog post about it, around: what are the things that take hold? What are the ideas that take hold? The memes, if you like. Right?

And when you look at them, they're story-led. There's a story, there's a context, and they're short stories usually. You know, you just look at the news, YouTube, you know, there are these little short stories. And it's a story that resonates with somebody. Well, academic research doesn't start there.

What it does is it presents all this stuff, and it presents it in a language that's not very accessible. And it's actually the language of disambiguation. The whole point of academic language is to remove as much uncertainty as possible, to make sure we know exactly what we're talking about, exactly what we're measuring, exactly how we're viewing it, and how we're not viewing it. So there's always language around trying to disambiguate this thing. But what it does by doing that is it removes the human element, the story, the thing that captures it.

So you look at that thing that you were talking about, the 70% thing, and I did a video about it that went viral a while ago. You know, this idea that 70% of change fails. Right? So I went off and had a look at it. Well, there's a story behind that.

It turns out that, a, that's not true, but why is it not true? How did we get to that place where everybody believes this kind of figure? And it's ended up in academic papers as just a kind of truism that that's the case. But when I actually tracked it back, I went right back and looked at it. What I discovered was it was a paper that Kotter had written, who's famous in organizational development circles, leadership circles, and things like that.

He doesn't mention it. Right? Basically, all he said was something like, quite a lot of organizational change events fail. But he didn't define what he meant by failed. Like, failed completely, or it didn't quite reach everything that we wanted it to reach?

There was no definition. But what happened was the editor of the Harvard Business Review took hold of this and wrote a little editorial introduction to it in which he said about 70% fail.

Speaker 0: It was him. Yeah. Oh my.

Speaker 2: And then everybody's quoting the Kotter paper, but it wasn't the Kotter paper that says it. It was this little introduction. So it was an interpretation by a non-academic, and then it just took off. And there's actually 3 or 4 steps in this process. But when you start to then look at how memes spread, they spread from the opposite end.

So that thing that I was talking about, generalizability. Yeah. What research is trying to do is find out some kind of wholesome truths about the world that we can apply to lots of different situations. Or that we can say, look, this is true. Use this.

It works. And there's a whole load of methods that we use in order to do that. But it's like a pyramid. There's always stuff at the bottom, and then there are layers of stripping away of stuff to get to this little piece of truth. But when you start to have a look at how stories spread and rumors spread and memes spread, it's the other way around.

It's like an inverted pyramid. It's all contextual stuff. It's the story about what happened to one person. Now, from a research point of view, we're not interested in what happened to one person, because it may be subjective. But it still matters, because actually, it may be the start of something.

You may see it in one person and then you go, oh, hang on a minute, I wonder if other people are experiencing this. And then you start to find quite a lot, and then you find an awful lot. So it can be the start, but it's not the endpoint of the research. So it's like we invert the pyramid.

So stories start out with all this context, and that's how we link in. There's a really interesting piece of research actually about empathy, that we tend not to empathize at scale. You know, if, like, a 100,000,000 people are suffering from a drought, we find it really hard to connect with it. But if we have a story of one person who's really struggling with this, and, you know, you look at charities, the way charities present their stories. They don't talk about a 100,000,000 people.

They pick a person, and they take some photographs. They go and visit them. You know, there's a video of them struggling, because we can connect with one person. So the stories, the thing that connects us as human beings, and research are almost at opposite ends of the same pyramid. And I think that's part of the problem with evidence-based practice.

It's driven by researchers who are harping on about the research all the time and minimizing the experience of the individuals and the individual stories. And one of the criticisms that I've got of evidence-based practice is that it's actually based on a medical model. Virtually all of the research that's been done to do with evidence-based practice comes out of medicine. And I'll give you why this is important. So in terms of, you know, you want to know that the doctor knows what they're talking about.

You know, if they're gonna prescribe something to you or do an operation, that it's grounded in good evidence, that they're not just going, we'll open you up and see what we find. But organizations are all about things like experimentation and failing fast. We can't fail fast with a human body. You want to be certain there, but organizations are completely different. They're in a moving marketplace.

They don't have time to wait and to read lots and lots of research. They need to do experimentation. But some of that can be informed by the research, and it's that appreciation. And I think the term evidence-based, and I haven't found a better one, but maybe there is one out there. So if anybody's got any ideas, that'd be really useful.

But I think there's a way of communicating this idea of using research with experience and experimentation to help organizations and people in organizations be wiser actually and to do better things.

Speaker 0: And I guess, like, when you say something like the 70% thing, it feels right. Like, you know, if you think about it, you then sort of layer over your own experience. I'd probably say most change initiatives struggle to do what they say they're gonna do as well. So does that, again, sort of feed the myth? Exactly.

And it does feel right because it is right.

Speaker 2: You know, when you set off, you see... so there's 2 things going on here. We've got this idea of organizational development that if we plan it, it's going to work. Well, hang on a minute.

Speaker 0: And we're not disagreeing with you when you say that.

Speaker 2: Hold on a minute. Organizations are populated with people. They're messy things. You know, you can put as many plans into place as you want. You know, in the military, and I haven't mentioned this before, actually.

So before I was a police officer, I was in the military for 4 years. And, you know, there's this whole thing in the military: no plan ever survives first contact with the enemy. And it's the same. You know, we think that, and consultants do this, they're selling something that's a certainty in an uncertain space.

You don't know what's gonna happen when it makes contact with people. You don't know what's gonna happen in the marketplace because the marketplace is constantly shifting. People are constantly shifting. But we have this idea with organizational change that we do this list of things in this order, and this will happen, in a changing space, with human beings. Like, hello?

Like, they're difficult things. You know? They call them soft skills, but the hardest bit is the people, because we're variable. We have moods, and we have ideas, and we have conflicts. And some people in that department hate people in this department, or love people in that department, or only talk to those people.

But this idea of organizational development and change as a boxed-up solution, with human beings, in a moving environment, it's just nonsense, but we don't recognize it in the rhetoric. And it's when the rhetoric meets the reality. And it is nonsense anyway to say 70% fail. What does that mean?

It's like it wasn't how we thought it was when we started. Well, yeah.

Speaker 1: Were they even able to define what the change was that they were trying to make at the beginning?

Speaker 2: Yes. Yeah. And there's this whole thing in the literature called strategic drift. There's this idea that organizations create their strategy.

Right? And they create their strategy on their understanding of the world as it is now and their perspective on how it is now. So, how I perceive things right here, right now. We then create a plan for the next 5 years. As we start marching out that plan, the rest of the world, firstly, hasn't read it and isn't gonna comply anyway.

It's just gonna go, yeah, right. And then all of these other things are happening in the background. We've got technological change, you know. You just look at the march of AI and generative AI.

And society's changing, and the marketplace is changing, and people's needs are shifting constantly. And yet we've created this plan that we think is gonna last for 5 years. It's just nuts. And this is where the whole tolerance of uncertainty and ambiguity comes in. My area of research is that when you start to have a look at organizations, just take the very name.

You know, an organization is organized. What organizations are doing is trying to create certainty. So they're trying to create certainty inside the organization through a structure. They've got a structure that is fairly solid. You know, you come in in the morning, you know what you're gonna do.

You know, if you're in sales, you're in sales. They're not gonna pull you out and say, actually, somebody's gone sick in finance, would you mind coming and doing some spreadsheeting for us? That'd be a disaster. So we've got the structure, we've got the systems, policies, procedures, processes, all of these things through which organizations are trying to create some form of stasis, some form of certainty within the organization.

Because actually what they're trying to do is have a certainty of output, because the customers want to know what they're going to buy. Makes sense. You know, if you're buying a BMW, you want a certain quality and you want it to last. You don't want it, like, you know, there's the whole thing in the seventies about British Leyland vehicles. You know, if you got a Friday afternoon vehicle, it'd probably last about 2 weeks.

You know, you had no idea what the quality was gonna be. So organizations at a kind of meta level are about creating certainty in the organization to have a certainty of output, in products and services, and as much as possible in profit as well. But, and this is the kicker, they're trying to do it in a changing world, and they don't recognize that. And that's the problem. And that's the problem with a lot of theory.

That's a problem with a lot of evidence-based practice, and a lot of the way that we work within organizations doesn't recognize that kind of constant shift and change that's going on anyway, and the whole planning thing, although we're starting to get smarter planning protocols and things like that. So some of the research that I've done, for example, what we've discovered is there's a range of people in terms of their tolerance for uncertainty. So people who tend to have high tolerance, they like uncertainty. They like operating in situations of chaos and things like that.

That's kind of where I come from, with the emergency services, the military; they operate in situations of uncertainty. And then down at the other end, we have quite a lot of people, the majority of the population, who actually don't like uncertainty very much. They like to know what they're doing, and they like certainty, and they work better like that. And then organizations do this really flip-floppy thing without realizing what they're doing. They have all of these systems, structures, policies, procedures, processes.

They're trying to tie things down and create more certainty. And at the same time, what they're doing is exhorting everybody to be more creative and better with change. And people go, hang on a minute. You want both? Well, how does that work?

We just get a lot of confusion. And so my thesis is that actually organizations need to recognize the people who like certainty, because when you think about it, the day job is predicated on those people doing their job in the way that the organization wants them to. That's where the money comes from. And then there is this small group of people who like chaos, who like experimenting and don't like certainty, who are really good with change. And actually, they're the people who spot the change way before anybody else, largely.

Now, what organizations are not very good at, and there's a term for this, it's called organizational ambidexterity. It's this idea that organizations need to get good at the day job and do the day job. That's the certainty stuff and following things and bringing the money in. But they also need to be looking towards the future, watching what's coming over the horizon, and innovating at the same time in order to create a competitive advantage.

Mhmm. Right? But they need both, and they need both at the same time. And we know, and there's a lot of research around this, that very few people are good at both at the same time. There is a group of people who sit in the middle of all of this, who are not quite the chaos-junkie type people and not the certainty people.

They're kind of in the middle, and they're very good at seeing both sides and being able to communicate with both sides. But this means that the leadership need to understand this idea of organizational ambidexterity so that they're then managing and leading the organization in both areas. And it's not as dichotomous as that, because there are communications between these two sides. That's the whole organization. It's certainty in a changing environment.

And how do we do that? How do we do that well? That's a leadership problem. And one of the issues is often leaders are just caught in the day-to-day stuff, and they don't get that dynamic nature of organizations whilst they're trying to create certainty.

And so my thesis is: stop hassling the people who like certainty. They're doing the day job. They're bringing the money in. They're the people who are doing the stuff. Not so much leave them alone.

They need developing and helping, but they don't need to be at the other end. There's a group of people who are really good with that. And then there's a group of people in the middle who are really good at mediating that. So when change comes around, we can do it in a much more sympathetic way for the organization, so the organization's shifting on a more continual basis.

And every now and again, something big like COVID comes along and we need a sudden quick change. And that's when you start to really have to listen to the chaos junkies, I suppose.

Speaker 1: So interesting. We've got so many questions. Like I say, Garin's got lots of questions. Just before we leave evidence-based practice then: if I'm an HR manager or a change manager, where do I start?

What are the kind of 2 or 3 simple things I can do to start taking a more evidence-based approach to my work?

Speaker 2: It's connecting with proper research. That's one part of it. Yep. The other part of it is understanding what's going on. So it's kind of collecting evidence.

Because whatever we do, we're only seeing things from our particular perspective. One of the things that The Big Short shows, and we know this about good entrepreneurs, is that the way entrepreneurs actually solve problems is they don't solve problems. And that sounds a bit weird. Right? What they do is they start collecting evidence, and they work from the assumption that they don't know what's going on.

And they start asking people: what do you see? What are you seeing? What are you seeing? And they're trying to work out what's going on from the assumption that they themselves don't know. And one of the problems is organizations tend to promote the idea of knowing.

If I'm a manager or a leader, I should know what's going on. But how can you? You can't know everything, and you certainly won't know what's going on out in the market because it's horrible and messy and, you know... So organizations, whether it's HR or whatever it is, really need that kind of: what is actually happening here? Not what I think is happening. What is everybody else seeing? And staying in touch with the world in that way, which, anyway, is one of the issues that AI has, because it's not in touch with the world.

Speaker 0: So in relation to AI, what are some of the issues there? Because that is starting to inform practice in organizations. Dani and I were at a change management conference last week, and we spoke to a number of HR people that have started to write policy about it, about how organizations should use it as well.

Speaker 1: Or should not.

Speaker 0: Or should not. Yes. What is it that we need to be mindful of? What's the research starting to show?

Speaker 2: I think the first thing is that people need to recognize there are lots of different types of AI. Right? So when you say AI, most people talk about, like, ChatGPT, generative AI. And the next thing to know is what generative AI is. So the majority that most people come across is generative AI.

Now, all generative AI is doing is being a predictive large language model. And, unfortunately, we've got this idea of artificial intelligence. The way that I put it is: it's not intelligence, and it's not intelligent. It's a model of intelligence. And there's a big difference there.

It's just one model of intelligence. And it's a pretty limited model of intelligence for a lot of reasons. So the first reason is that you, me, we're in touch with the whole world. And we're in touch with the whole world not just in terms of living and being in the world. We're in touch with the whole world through a whole series of senses and mechanisms.

So there's obviously our brain and logic and the way that we think, the way that we perceive things. But there's also our affect, and what that means is our emotions. So the vast majority of our decisions are emotional first. There's a lot of evidence for that. We see it in fMRI scans and things.

Our decisions emanate first in the emotional regions of the brain and the body. And thinking of the brain just as the brain, i.e. the thing that's in our head, is also a misnomer. We know that there are neurons in the stomach, in the heart.

You know, we are a system, and you can't separate out one from the other. So the affect talks about the emotions, but our emotional responses are based on our values and our beliefs. So when we come into contact with another person, the first thing we're doing, and it's a constant thing that we're doing, is evaluating. It's a constant process. And we're evaluating through all of our experiences, all of our values, all of our beliefs, everything that we know about the world, and we're doing it in comparison.

So this whole uncertainty thing: when we come into a situation that we've never come across and we're a bit uncertain about what's going on, what we start to do is search our memory banks for the nearest comparator. And then what we do is we draw comparisons, and we work out how scary this is first. And then there's a whole series of other evaluations. It's like a cascade that's going on.

Now, all of that, when you think about all of your senses and your values, your beliefs, and everything else that you're interpreting the world through, as well as being informed by listening to other people, being informed by society. You know? You just look at the way that other people influence us. It's massive. Even down to silly little things like going on to Amazon, for example, and saying, am I gonna buy this?

And it's only got 4 two-star reviews. We are really, really socially cued in. We are social animals. We learn socially. It's what's called the social mediation of learning.

Everything that we do is interpreted in a social way and not just an individual way. AI is not even connected to the world. Maybe connected to the Internet, but that's not the world. And there are no values. There are no beliefs.

There are no emotions. So actually, what we've got is this model of intelligence that hasn't got all of the faculties that we've got and isn't connected to this vast social network that stretches across everything. And this is one of the reasons why we write research briefings, why we put research down on paper, in research papers: because it's part of the society of researchers, so that we can see what other people are thinking and criticize it or agree with it, whatever it happens to be. So the first thing is really understanding what we've got here. What is this artificial intelligence? What is it giving us?

So the next thing is that it is generative. What it's doing is a probabilistic thing: what word is likely to follow this last word? That's all it's doing. And it's good at it. It's really clever, but it fools us.

So we ask a question, and it gives us an answer. And there was a study published just 4 weeks ago now. What they did was they looked at the rhetoric of ChatGPT and how it's using its language, and they analyzed it. And what they found was, when you start to have a look at all of the research on manipulation and persuasion, it's that language. And it breaks down into three kinds of appeals that go right back to Aristotle, actually.

This idea of logos, pathos, and ethos: logical appeals, emotional appeals, and appeals to character. And ChatGPT has all of them. And then when you see where it's gathering its data from... So I looked at it, and there's a whole thing called paradox theory, which isn't a theory and isn't a paradox. Right? It's kinda caught on. It's a meme.

And it's a meme in the research world, but it's wrong. Right? And there are 2 papers that set this off. So if you go on to ChatGPT and ask it, what is paradox theory? It'll give you a whole thing.

And you'll say, where have you got that from? What are your references? And it'll reference these 2 papers by Lewis and Smith. But it's Lewis and Smith that made the mistake right at the beginning. Their definition of paradox is wrong.

It's not a paradox at all. And as a result of that, what's happened is this whole kind of clump. So it's based its answer on 2 papers. If you go into Claude 3, if you go into Bard or the rest of them, if they're going to Scholar or something else, they'll do 8 papers. So how much of the actual research is that actually representative of?

I can tell you what it is. It's 0.0028% of all of the research around paradox theory. Now, if you knew that as a context, would you still trust it? And the answer is, well, no. You'd think this is ridiculous. But it doesn't tell you that.

All it does is put a little sign at the bottom saying, we may be wrong. And this is one of the problems: the language is false. It's the rhetoric of persuasion, which is an issue. And when you actually start to dig down into what it's basing it on, it's very little.

Speaker 0: Wow. And if we start to build a dependency on it as well, then it just compounds it, doesn't it?

Speaker 2: Yeah. Well, there's a thing called a truth bias. Now this is another paper that was published this year, 2024. So we have a truth bias. If somebody tells us something, unless we have a context that suggests that we shouldn't believe this, we tend to believe what people say.

Right? And there's an index of this between 0 and 1. And humans usually sit on the truth index at round about 0.6 to 0.65. So it's over half, but it's not quite everything. So, you know, if somebody tells you something and it doesn't quite chime, or you look at them and think, you know, you lying swine, or something like that, all of that is context.

So one of the studies that was done was looking at statements, police statements. If the person knew that it was a witness statement, they were much more likely to believe it than if it was a defendant's statement. So the context is: there's somebody here who's been nabbed for this. Right? Therefore, I'm gonna be more skeptical.

So humans sit on this continuum between 0 and 1 at about 0.65, which is kind of a little bit skeptical, but I tend to believe most of what is told to me. AI? 0.99.

Speaker 0: Oh, wow. Okay. It's got

Speaker 2: a huge truth bias, and it's way more than ours. Way more discerning than it because it has nothing to depend it can't look at our body language. It can't listen to intonation. The context doesn't apply unless you give it the context. We're already evaluating the context all the time.

Speaker 0: So it's really important to sort of develop critical thinking really now, isn't it, as well?

Speaker 0: One of the things that you've been talking about quite a lot recently, and Dani and I were talking about it, is functional stupidity. Which, as a term in itself, is probably one of the best things we've heard in quite a while. What is functional stupidity, and why do we need to be aware of it in organizations?

Speaker 2: In essence, functional stupidity is this idea that we kind of collude to dumb things down. So you're in the organization, and you know this isn't the right thing to do, but everybody's agreeing with it. And it's gonna be too much effort, too many people to convince. You know, so we just go, yeah. Okay.

Fine. Just get on with it. And this idea that it works, and it works within the organization, but there's lots of people squinting at it going, really? And so you get this thing that's known as functional stupidity going on. There's a little bit more to it.

And, again, it's a social thing that's happening. So there's a thing called the Abilene paradox. The idea is there's this family. It's a Sunday morning. They're out in the Midwest in America.

It's a really hot day. Everybody's just, you know, there are flies buzzing around and everybody's bored. Somebody comes in and says, should we go to Abilene? Abilene's the local town. Let's go to Abilene. And they've got nothing else to do, so eventually everybody agrees.

Nobody particularly wants to go to Abilene. Not one of them. They're quite happy just lying on the sofa watching the flies buzz around the center of the room. But, because they hadn't had a better idea, they all go. And they all go, and they have a miserable time.

And then they say, who on earth suggested this? Well, you all agreed. And it's what's become known as the Abilene paradox. And it's a similar kind of thing. It's this kind of idea of functional stupidity in organizations, where we end up agreeing to things that we think, this is naff, or, I don't like this, but the system itself, the culture itself, drags us down into a kind of dumbed-down version of something.

And everybody is looking at it thinking, this is nuts, but we all end up agreeing to it. So it's the same kind of process.

Speaker 0: And these are often sort of highly intelligent adults engaging in this who should know better. So what checks and balances can we put into organizations to address this? Because, like you said, it's potentially a systemic thing, isn't it?

Speaker 2: Yeah. It is. And it's a social thing. And I think that it's important.

So there's this whole idea of voice in organizations, that people can speak up and that you want to promote that. And it's getting to a place where people can put their hand up and say, hang on a minute, I've got a problem. And that they're treated equitably for doing that. Because quite often, and I've seen this in organizations, once these things get going, it's kinda hard to put a stop to them, because everybody kinda falls into line.

And, you know, you're seen as the troublemaker, and we don't want to be seen as the troublemaker. So nobody wants to be seen as a troublemaker, so everybody agrees. And it's like we need a role within the organization that, I suppose, shows a mirror to the organization. And when you think about it, this is what happened in the courts of kings and queens years ago in the Middle Ages: the jester. The jester was there to poke fun at people and show a mirror to them.

And, I hadn't realized this until a little while ago, jesters were protected, so you couldn't hurt, damage, or kill the jester. Right? On pain of the wrath of the king. And the jester was there basically to show the absurdities of the court.

Alright? So they could laugh at themselves. But there was a real message in there of, like, you know, this is ridiculous, or you're ridiculous, or what you've just done is ridiculous. That's the counter side to it. And there was a group that I was involved in years ago that had this wild card system. They were making decisions, and these were really high-stakes decisions this board was making.

But every person had a wild card, and it was reset every 24 hours, which meant that they could block anything. No questions. No nothing. If they thought that it was wrong, they could block it for 24 hours.

And they could just lay it down, and there were no questions asked. Everybody would just walk away at that point and then start to think about it. At first when I saw it, I wasn't sure, but then when I started to think about it, I thought, actually, this is really clever, because there's no comeback. There's no argument. There's nothing.

Once it's laid, that's it. There is nothing else. And you've got to wait for 24 hours before you can come back, and it enables people to start thinking again and slow down again. And I think in organizations we need more of that, the ability of anybody, and it doesn't matter who it is, whether it's the cleaner, you know, to come in and go, hang on a minute, this is ridiculous here.

You know, stop. Just stop for 24 hours and think. And no comeback. It really makes a difference. And, you know, personally, I think all organizations should have an organizational jester. I'm not quite sure how long they'd last these days.

Speaker 0: We've got chief happiness officers. So, like... That's slightly different, though.

Speaker 2: This is kind of a chief misery officer.

Speaker 1: Okay. So just to switch gears slightly. There's a lot of talk about the importance of diversity in our organizations and in leadership teams. What are organizations getting wrong about diversity, from the research?

Speaker 2: I think one of the issues is that the term diversity isn't defined. It's just this amorphous theme, diversity. And usually when people think about it, they think about something like color of skin or something like that. But there are lots and lots of different types of diversity within organizations. You know, obviously there's cultural diversity, there's ethnic diversity, but, particularly at the moment, I'm interested in age diversity.

There are all sorts of different types of diversity. Now, when you start to have a look at the research, a lot of the rhetoric around why diversity is good for organizations is that it promotes and breeds innovation and creativity and different perspectives on things. Well, it does, but certain types of diversity are better at creating innovation than other types, and certain types of diversity can actually go too far and create problems. So what the research is showing is that things like age diversity, for example, tend to create greater innovation and creativity within organizations. When we get to ethnic diversity or cultural diversity, firstly, that's less clear.

There are studies that are showing that, but there are also studies showing that it's not as significant as we think. And you can have too great a diversity. So if you can imagine that you bring a team together from 10 different countries, and this is an extreme example, I'm just using it as an example, and say their English isn't that great, or they're all just speaking their own language. You've got diversity, but nobody can communicate. Or their expectations of the communication are different.

Or it's very hard for them to come together. So in that kind of circumstance, and I know it's an extreme example, you've got too much diversity, because people just can't communicate, and they're not taking the time to communicate. So there's this kind of sweet spot in diversity. It's not just: more is better. And it's the same with lots of things.

You know, there's a thing going on at the moment that I'm involved in around psychological safety. Right? There's this kind of narrative that psychological safety is good. Really?

Always? So if you're completely safe, totally safe, and nothing can happen to you... We know, for example, that creativity and innovation require tension, and that there are tensions and conflicts. And it's those tensions and conflicts that bring about creativity, the need for change, and those kinds of things. There's also a bit of a misnomer in the whole psychological safety thing. Psychological safety actually isn't an individual characteristic.

It's a group characteristic.

Speaker 1: Yeah.

Speaker 2: The whole idea is that the group feels safe enough to be able to voice exactly what we've been talking about just before. That the people in the organization feel safe enough to be able to say what they need to say, to be innovative, to experiment, and not feel that they're going to be ridiculed or something like that. So it wasn't meant to be this idea of psychological safety for an individual. Right? But, just like diversity, it can go too far.

You know, if I'm too comfortable and the money's just coming in, where's the motivation coming from to do more? Like, you've really got to be intrinsically motivated to keep going because there's another goal. But most people are just gonna put their feet up and go, oh, this is cool. I can't be sacked.

They're just gonna keep paying me. It's like, you know

Speaker 0: Because things like social loafing are a thing in the research, aren't they? Oh, yeah. Yeah. Yeah. Yeah.

Speaker 2: It's a big thing. So, yeah, there's a lot around social loafing and cognitive laziness, and it's tension that actually moves us out of these things into trying to solve problems. Human beings are problem solvers. We're happiest when we've got a problem to solve.

We just don't want a problem to solve that is so intimidating or feels so threatening that it actually makes us want to run away. Mhmm. So there's a happy balance here: I don't need to be terrified, because if I'm terrified, I'm not operating at maximum. But I still need some tension. I need something that gets me thinking, you know, oh, there's a real problem here.

And how do I solve that? That's tension. So when we think about what real psychological safety is going to be, it's like total psychological safety, total diversity. It's a nonsense.

Speaker 0: From your career so far, and it's been a very extensive career, you've talked about time in the military and the police and academia, as well as working as a consultant. It's a difficult question, and you'll probably have 2 or 3 actually, but what's the biggest lesson or lessons you've learned in your career so far?

Speaker 2: The biggest one is just to be able to say to yourself, but also create a culture of being able to say: we don't know what's going on here. I think that's the biggest. If organizations can just move to that, if boards can just move to that place of, we don't know what's going on here, we assume we think we know, but we don't really know, we need to do a bit of research, we need to do a bit of investigation and experimentation, that I think will transform organizations more than anything.

It's that feeling that leaders and managers particularly have, but everybody has it in all organizations.

Speaker 0: Well, we collude to a certain extent, don't we? Like, we kinda agree together unconsciously that, yes, there is certainty and... absolutely.

Speaker 2: Exactly. This is the functional stupidity thing. It's like, they're the boss, they must know, so we're gonna ask them and they'll tell us what to do. Well, they're probably as clueless in this situation as we are. They're just not gonna admit to it, and they've got better over the years at being certain, or pretending to be certain, or whatever it happens to be. So I think that's the... Yeah.

I think that's the biggest thing: just saying, actually, I think I know what's going on, but I probably don't, so I need to go out there. And then I think the second thing is this experimentation. Running little experiments, trying new things. One of the problems, and this also came out of the research that I did a few years ago, is this idea of developing resilience within organizations and what that actually means.

When you think about organizations and everything that we've talked about, this idea of the way that they structure things, you know, the systems, structures, policies, procedures, processes. Right? When you think about what that's actually doing, they're heuristics for decision making. Each of those things is making a decision for you so you don't have to make it. So that you come in and you know what you're doing.

You're in sales today. You know, if you're in a one-person startup, you've gotta do everything. But in an organization, every single one of these, every time there's a new policy out, what it's doing is saying: this is how to think. It's stripping away a decision-making process, and it's doing it for every type of decision. Now, when we start to think about the kinds of decisions that actually need to be made in organizations, and this is why we get this problem in organizations of people not wanting to make a decision and bouncing it upwards, it's that they won't take responsibility.

They're not used to running little experiments, and they're not used to failing, because failure is punished in organizations. So we get this kind of blame culture and all the rest of it. If we then start to think about decisions in organizations and start to think about the cost of those decisions, we can do something a bit cleverer than this. So I've kind of separated them out. There are what we call critical decisions.

So a critical decision is: if you get this decision wrong, you know, you're betting the house here. You could lose the whole business. And the thing there is that no one person should ever make a critical decision in an organization. So you just look at the collapse of Lehman Brothers. The amount of money that was being bet there, and they put that decision in the hands of 1 person.

Big mistake. Because, as we know, one person making the decision... Right? So they're the top ones. The second group are the expensive decisions.

If you get them wrong, they're not gonna bring the building down. They're not gonna end the business. But they're gonna cost a lot. And the same rule applies: no one person should have the responsibility for that decision.

There should be more people involved. We should be asking people, working out what's going on, because, you know, a couple of those could bring the house down. And then under that, we have all these other decisions, the day-to-day decisions that are neither expensive nor critical. And we keep applying policies and procedures to those like we do the others. Right?

And the trouble is that what it's doing is removing decision making from people, and they get used to not making decisions, or not wanting to make a decision in case they get it wrong. And if they get it wrong and it's a failure, then what?

Speaker 1: Yeah. It's like a learned helplessness, isn't it? Yeah.

Speaker 2: Well, that's how learned helplessness starts.

Speaker 1: Yeah.

Speaker 2: And so it's really about identifying all these cheap decisions and letting them fail, because they're

Speaker 0: gonna learn. How do we learn without failing? The anxiety, though, the anxiety. I can see it with the managers already. But I'm like... Exactly.

Speaker 2: So it's this process of, like, not castigating people for trying things out. You know, that's where innovation comes from. It's the first step of something. We go, oh, okay. It's creating a kind of culture where those failures don't matter.

What's the problem? Why put a policy in place? If it's a failure, we just learn from it. Let's move on.

Speaker 0: Brilliant. Well, David, I just wanna say a huge thank you. Some of the things I've really enjoyed in the conversation are the fact that, you know, there is this incredible academic research out there, but you've been able to translate it and make it hugely practical. And it's also been really nice as well: we've dealt in different areas, and all of a sudden, you've brought them to life as well.

So some of the things I'm taking away are things like, well, a, I can now learn how to read a journal paper in less than an hour. You talked about strategic drift, a tolerance for uncertainty, you know, just really acknowledging how our organizations are sort of operating at the edge of chaos. And I really like the idea of wild cards, the idea that we can actually take a time out, because we don't take enough time to think. So before we act on that action-orientated need to make a decision, we can take a step back and then apply logic as well.

Dani, what's stood out for you?

Speaker 1: Oh, so many things, but if I pick 3: maybe the kind of simple steps to taking a more evidence-based approach. So, you know, the 2 simple things: commit to finding proper research to inform your practice, and then collect evidence, don't assume you know the answer, and bring in lots of different perspectives. I love the point you made about AI and the fact that it's connected to the Internet, and that's not the same thing as being connected to the world. I think it's really easy to forget that. And then I love the idea of a court jester, an organizational court jester.

So having somebody who's, you know, gonna hold a mirror up and say, you know, just look at what's going on here. So I think that's sometimes the role we play as OD consultants.

Speaker 0: outfit. Without the outfit. Yeah. And, David, if people wanna reach out to you and contact you, what's the best way for people to reach you?

Speaker 2: Yeah. Certainly. I think the best... well, LinkedIn. David Wilkinson and the Oxford Review are on LinkedIn, or you can just go to oxford-review.com. There's a contact thing on there.

Speaker 0: Brilliant. And if you wanna reach the Oxford Review, the links will be in the show notes, whether you listen to this on audio or on YouTube as well. David, a huge thank you. On behalf of, like, so many professionals that rely on you, we're just extremely grateful that you're out there applying such expertise. So thank you for all your efforts, and may it long continue.

Speaker 2: Thanks, Garin. Thanks, Dani. I really enjoyed this.

Speaker 1: Fabulous. Thanks, David.
