OrgDev with Distinction

Understanding Complexity with Dave Snowden - OrgDev Episode 92

Dani Bacon and Garin Rouch Season 6 Episode 92


We'd love to hear from you so send us a message!

How to apply complexity thinking in your organisation
If we can’t reliably predict what will happen in our organisations, can’t control outcomes in any simple way, and can’t even agree on what’s true – how do we lead?

That’s the territory explored in this conversation with Dave Snowden. We move beyond tidy change models and into the realities of uncertainty, distributed knowledge and contested meaning. This episode challenges the instinct to simplify too quickly, offering a more grounded way to think about judgement, intervention and action in complex systems.

From Cynefin to estuarine mapping, the discussion examines what it really means to make sense of the present before attempting to shape the future — and why leadership in complexity is less about control and more about creating the conditions for coherence to emerge.

Wish you had a handy recap of the episode? So did we.

That’s why each week in our Next Step to Better newsletter, we’re sharing From Pod to Practice – a 2-page visual summary of each episode designed to help you take the learning from the podcast and into your work.

You’ll get:
■ Key insights from the episode
■ A reflection prompt
■ A suggested action

Sign up now to get From Pod to Practice delivered to your inbox each week: https://distinction.live/keep-in-touch/


About Us

We’re Dani and Garin – Organisation Development (OD) practitioners who help leaders and people professionals tackle the messiness of organisational life. We focus on building leadership capability, strengthening team effectiveness, and designing practical, systemic development programmes that help you deliver on your team and organisational goals. We also offer coaching to support individual growth and change.

Find out more at www.distinction.live

We'd love to connect with you on LinkedIn:
linkedin.com/in/danibacon478
https://www.linkedin.com/in/garinrouch


(00:00) Hi, welcome to the OrgDev podcast. So if we can't predict with any certainty what will happen in our organizations, we can't control them and we can't agree on what's true, how do we lead? That's the kind of question Dave Snowden has spent his career exploring, from Cynefin to estuarine mapping. His work challenges how we think about change, complexity and what it really means to make sense of the world around us.

(00:22) Dave is the creator of the Cynefin framework and originated the design of SenseMaker, the world's first distributed ethnography tool, and a new way of weaving risk and strategy together called RAS. He's a lead author of Managing Complexity in Times of Crisis: A Field Guide for Decision Makers, and he divides his time between two roles: founder and chief scientific officer of the Cynefin Company, and founder and director of the Cynefin Centre. His work is international in nature and covers government and industry, looking at complex issues

(00:51) relating to strategy and organizational decision-making. He's pioneered a science-based approach to organizations, drawing on anthropology, neuroscience, and complex adaptive systems. Dave holds positions as an extraordinary professor at the universities of Pretoria and Stellenbosch, as well as visiting professor at the University of Hull.

(01:09) The Cynefin Company, formerly known as Cognitive Edge, was founded in 2005 by Dave Snowden. Dave has had a fascinating journey, starting out studying theoretical physics and philosophy, which has helped shape his approach. And interestingly, he was also a C-level leader at IBM, probably one of the most interesting organizations there is to study, which gives him fascinating insights into how organizations work.

(01:30) So Dave, thank you so much for joining us. We're literally the last people, or the second-from-last call, before you go on holiday today, so thank you so much for making time. >> It's great to have you with us, Dave. Thank you. So Garin's kind of outlined what you do, but just bring that to life a bit more.

(01:50) Tell us a bit about the roles you have and what that involves. >> Okay, so we're a small company and we want to keep it small. We're an action research group, but I think the unique aspect of what we do is we totally reject the idea of using cases to create practice, which dominates management science, and there are a couple of reasons for that.

(02:09) One is, and this is my physics background talking, no social scientist ever has enough data to form any valid conclusion anyway, which doesn't mean there isn't value in it, but you never have enough data. And the other problem, and this isn't most social science, but what you see in management science, is that a consultant or an academic will go and interview 10 or 15 companies.

They'll believe what they're told, which is really stupid. Every time I put ethnographers in to check what executives say about their company, we get completely different results. But they believe what they're told, and then they assume causality from qualities they can identify in those cases, which are actually emergent properties.

They're something which has emerged, but they think they have causality. This is called the confusion of correlation with causation. And they create really easy recipes. So yeah, here are 15 cases, here's a recipe: do these things and you too will be successful. It's the foundation of nearly all the big consultancies.

(03:04) Now, there are many problems with that. One is that you can't trust what you get told, and you don't get enough data, but there are also two main issues. One is the confusion of correlation with causation. So for example, if any country wants to increase the number of Nobel prizes it wins, it doesn't need an educational system.

All it needs to do is increase dark chocolate consumption, because dark chocolate consumption per head of population directly correlates with Nobel prizes per head of population for the last 50 years. It's a bigger data set than I've ever seen any management consultancy report. So go and eat chocolate. The one which I think is causal is that peaks in attempts to commit suicide by drowning correlate with the release of Nicolas Cage movies.
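The chocolate example is easy to reproduce in miniature: any two series that merely trend in the same direction over time will show a strong Pearson correlation with no causal link at all. A small illustrative sketch (the data here is synthetic, invented for the demonstration, not the real chocolate or Nobel figures):

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

random.seed(1)
years = range(50)
# Two causally unrelated series that both happen to trend upward over time.
chocolate = [2.0 + 0.05 * t + random.gauss(0, 0.1) for t in years]
nobels = [1.0 + 0.08 * t + random.gauss(0, 0.5) for t in years]

r = pearson(chocolate, nobels)
print(f"r = {r:.2f}")  # strongly correlated, zero causation
```

The shared upward trend alone drives the correlation close to 1, which is exactly why "X correlates with Y over 50 years" is such weak evidence of causation.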

But I can see a reason for that, right? >> I can vouch for that. >> Go on to the spurious correlations site and you'll see it. The other big problem, and this is a massive issue in organizational development, you can see it in behavioral approaches, which are becoming increasingly discredited, is they confuse an emergent property with something which has causal aspects.

(04:03) So for example, you notice that innovative people are very creative, so you start to run creativity programs. You know, you convert an office into primary-colored children's toys and, yeah, don't take me there. The reality is creativity and innovation are both emergent properties of starvation, pressure and perspective shift.

(04:22) And so the fact that you can identify characteristics of leaders, those characteristics aren't causal. They've arisen from multiple interactions over time. And I think there's a major issue with this. You can see it in leadership development theory and all sorts of theories. You see it in agile:

you've got to have an agile mindset, and people decide to define ideal behavior, and that is just the wrong way of going about it. So we take a very different approach. We spend about five, six, sometimes ten years playing around with an issue or a problem, and we look into physics, chemistry, biology, anthropology, philosophy, and then things start to come together, and one day it all sort of looks right.

(05:04) It's a bit like theoretical physics. One day the maths looks beautiful, then you hand it over to the experimental physicists who kind of get it right, and then when they get bored they give it to the engineers. So that's the approach we adopt. To give you the most simple example: if you give radiologists a batch of x-rays and ask them to look for anomalies, and on the final x-ray you put a picture of a gorilla which is 48 times the size of a cancer nodule,

83% of radiologists will not see it, even though their eyes physically scan it. And the 17% who do see it come to believe they were wrong when they talk with the 83% who didn't. And 100% of non-radiologists do not see it. It's called inattentional blindness, and it's part of what we are as a species. Now, the minute you realize that, any approach which says "if I give everybody the right information, the right training, the right competences, they will make the right decisions" is complete baloney. Which doesn't

mean that you shouldn't give them training and competences, but fundamentally you've got to find the 17%. Now, some of our work, for example, is to find the 17% before they talk with the 83% who didn't. And I could go into other things: once you get above about seven people, the dynamics of silos come into play, which don't at five. So there's a whole body of natural science we can draw on. And I think, to conclude this, the big difference between natural science and social science, and I've got a

background in that as well, is that social science tends to provide explanatory but not predictive capability, and it's very powerful as such. Natural science creates predictive capability because its experiments have been subject to repetition by third parties; you've got peer review. And it's interesting, if you look at psychology at the moment, you've got what's called the replication crisis, in that people have tried to repeat the experiments and they've failed, and a whole body of OD practice falls at that point. It's a

long answer, but that's the essence of the approach. >> We've used the term complexity science to describe the field you work in. If somebody's not heard that phrase before in a leadership context, how would you explain what it is and why it matters? >> Okay, so firstly, don't confuse it with systems thinking.

(07:07) Systems thinkers have tried to address complexity, but their theoretical models really all derive from Ashby's theory of information. Complexity science, the clue is in the second word, has a very different background. It comes from physics, from chemistry, from biology, and then got applied to economics and elsewhere.

There are some overlaps, but actually the big fight at the Macy conferences between Bateson and Ashby is sort of one of the bifurcation points; we can go into that in more detail if you want. So complexity science deals with systems which are inherently uncertain, where there is no linear relationship between cause and effect. Yeah.

(07:40) So the same thing will only happen again the same way twice by accident. And one of the ways I get executives to understand that: there's a famous phrase which you see all over the congressional report on 9/11, royal commissions of inquiry, all of these things, and it says, why didn't we join up the dots? Something goes badly wrong, and with the benefit of hindsight everybody says we should have joined up the dots, because they can see the causal links. Okay, so this is a fun exercise. You give people four dots. You point out that with four dots there are

six linkages that can form between the dots, the square and the diagonals, which means if you take dots and linkages in any combination as a pattern, there are 64 possible patterns. So then you ask: with more dots, how many possible patterns are there? And you wait; you give them like a minute to do it. All right.

>> Ideally you get them to write it on a piece of paper and hold it up, because most executives are lying bastards when it comes down to it; they think they got the answer wrong, right? And I've had guesses as low as 150. Most people start in the thousands. It's actually just under 3.4 trillion.

Go up another dot and it's just under 4.8 quadrillion. Now, there's an old story about that. A Chinese sage asked for a reward from the Indian emperor for teaching him chess: one grain of rice on the first square of the chessboard, doubled on each square thereafter. There wasn't enough rice in India to satisfy the demand.
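Both counts are easy to verify. If every one of the n(n-1)/2 possible links between n dots can independently be present or absent, the number of distinct patterns is 2 to that power, which explodes as you add dots; the chessboard story is the same doubling in one dimension. A quick sketch (this is one common way of counting patterns; the exact trillion-scale figures quoted in the conversation depend on how many dots you start from):

```python
def patterns(n_dots: int) -> int:
    """Number of dot-and-link patterns: each of the n*(n-1)/2
    possible links is either present or absent."""
    links = n_dots * (n_dots - 1) // 2
    return 2 ** links

print(patterns(4))  # 6 links -> 64 patterns
print(patterns(10))  # 45 links -> tens of trillions of patterns

# The chessboard reward: one grain on the first square,
# doubled on every square thereafter, summed over 64 squares.
grains = sum(2 ** square for square in range(64))
print(grains)
```

Four dots give 64 patterns; each extra dot multiplies the count by another power of two, which is why "joining up the dots" in hindsight tells you almost nothing about foresight.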

(09:00) So the question is: how many dots are there in a human system? How many possible patterns are there? Everything is deeply entangled with everything else, and critically, hindsight doesn't lead to foresight. And the concept of entanglement is key, because in a complex adaptive system, and this also has major implications for OD, connections matter more than things. Anybody with teenage children knows it.

You know, when your children hit puberty, it really matters who their friends are, because they move out of your influence and into another influence set, and those interactions will shape them far more than anything else. So complexity deals with multiply-connected systems, systems which don't have linear relationships between cause and effect, but where we can measure patterns, we can measure dispositional states, we can influence their direction and we can observe regularities.

(09:49) Two other examples to illustrate complexity. Getting people to understand complexity physically is really useful, so the dots is one exercise. Another is to get 20 or 30 people to stand up in a room and ask them to identify their best friend and their worst enemy.

Before you do that, you say, "Don't look at anybody. Don't say anything." And you can choose people at random, but it's worth videoing because they always do choose their best friend and their worst enemy, so you can study it later. Well, that would be unethical, but you could. And then you say, "Organize yourselves so your best friend protects you from your worst enemy," and the group dissipates over the room.

Then you switch the rule and you say, "Protect your friend from your enemy," and the group comes together instantly in the middle. It's a really powerful demonstration because nobody predicts it. So, when an antelope spots a predator, it identifies another antelope by the pattern on its bum. That's why they're so vivid. And it positions itself between the predator and that other antelope.

So the herd stays together even though it doesn't have a goal or a purpose in the sense of a destination. Now, that is one of the ways a complex system works. And some of the stuff we're doing at the moment is: how do you create alignment without goals? >> Because under conditions of extreme uncertainty, goals can be dangerous.
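The two rules can be simulated directly. The sketch below is an illustrative agent-based toy, not a published model; the update rules, step size and iteration count are my own assumptions. Under "put your friend between you and your enemy" the group disperses across the space, while under "put yourself between your friend and your enemy" it clusters, with no goal set for anybody:

```python
import random

def simulate(rule: str, n: int = 30, steps: int = 300, seed: int = 0) -> float:
    """Run the friend/enemy exercise and return the mean distance
    of the agents from the group's centroid after `steps` moves."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n)]
    friend = [rng.choice([j for j in range(n) if j != i]) for i in range(n)]
    enemy = [rng.choice([j for j in range(n) if j != i and j != friend[i]])
             for i in range(n)]
    for _ in range(steps):
        new = []
        for i, (x, y) in enumerate(pos):
            fx, fy = pos[friend[i]]
            ex, ey = pos[enemy[i]]
            if rule == "hide":
                # Put your friend between you and your enemy:
                # head for the point with the friend at the midpoint.
                tx, ty = 2 * fx - ex, 2 * fy - ey
            else:
                # "protect": put yourself between friend and enemy.
                tx, ty = (fx + ex) / 2, (fy + ey) / 2
            new.append((x + 0.1 * (tx - x), y + 0.1 * (ty - y)))
        pos = new
    cx = sum(p[0] for p in pos) / n
    cy = sum(p[1] for p in pos) / n
    return sum(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in pos) / n

print(simulate("hide"), simulate("protect"))
```

Swapping one word in the rule flips the whole system from dispersal to clustering, which is the point of the exercise: the macro behavior lives in the interactions, not in any individual's intent.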

(11:04) And people keep talking about north stars as if they were destinations, when a north star is meant to be a navigation aid. But that's another matter. The other example, which we've just launched something on: if you look at bees swarming, there's a temperature trigger in the hive which means the worker bees hatch out a new queen, and then a group of the worker bees go with that queen and hang off the nearest branch.

Then individual worker bees fly out to try and find a new hive location. They fly back to the swarm and do a figure-of-eight dance; it's called the waggle dance. The angle of the figure-of-eight's plane to the sun indicates the destination, and the intensity of the dance says how good a place it is. And there are other bees which disrupt the dance of some bees, and that's important statistically.

It creates variation, and after two or three days the entire swarm suddenly goes to one destination by agreement. And it's always the best of all the destinations which have been investigated; I can let you have the papers on this. Now, that's optimization without directing intelligence, entirely defined by interactions, not by attitudes, motivations, beliefs or anything else.

(12:07) So some of the stuff we're now doing, for example, is to present a problem to the whole workforce, get them all to interpret it, then go through a series of iterations over a very short period of time until we know the consensus and we know the outliers. And we can do that in a day, as opposed to three months of staff consultation and communication exercises.

(12:27) And that's the real power that complexity science gives you. It's the ability to do significantly more with fewer resources, because you're working with the natural contours of a system rather than trying to impose a mechanical or engineering metaphor onto an organic system. >> And that's a real shift for organizations, isn't it, that are used to that kind of top-down "we set goals, we cascade them"?

How do they start to make the shift? >> Yeah. Ironically, it wasn't the case in scientific management. I had the great privilege of teaching leadership with Peter Drucker for a few years, which was a great pleasure, partly because of the first time I met him. I was a keynote speaker at a conference and he spoke after me, and I made the mistake of criticizing Taylor and I got beaten up. Yeah.

Badly, right? It wasn't quite the "I knew Frederick Taylor" speech, if you remember that one from an American election, but it was pretty close. Either way, he decided I was rescuable. I was a puddle of humiliation on a stage in a hotel down in California, in San Diego. He took me out for dinner and then we talked together, which was a huge privilege. And basically, if you go back to scientific management, this is Taylor et al, everybody forgets what they were replacing. So we look at time and motion now and we say, oh my god, that's

terrible, but before that the accident rates in factories were appalling; it was near slave labor. It was a huge improvement, and if you read Taylor, there was a very strong ethical driver behind what he was doing. But what he never did was abandon apprentice models of management.

(13:52) So the assumption was people would grow up in a firm, though there might be some new blood from time to time. And we know, for example, that to understand a social system takes four or five years of social interaction at scale. So they understood that, and their management model was a military one. Now again, I do a lot of work with the military.

Fascinating stuff going on in Ukraine at the moment; I'm back there in January. But fundamentally, military models are highly adaptive. They're not hierarchical: a weapons sergeant outpoints a brigadier in some respects. So they're highly adaptive, they're distributed, and they're role-based, not personality-based, which is really important.

(14:27) What happened in the 80s and 90s, with the popular introduction of systems thinking through system dynamics and then things like business process re-engineering, is we moved into a world where we think the entire world is ordered and can be planned and structured. People's authority is just taken away; it's all a spreadsheet, this is what you can do, no judgment is involved. And really, that's the stuff which is coming to an end now. Engineering models of management are not flexible, whereas military ones are, and I

think that's the big flip we're moving on to. We're now starting to see it, and these things happen very quickly; when they happen, they happen in one or two years. COVID started to trigger it; that's when we wrote the guide with the European Union, the first ever government publication based on complexity theory, by the way. And then from the Trump election onwards, nobody believes the world is predictable anymore.

(15:14) So we're now in a position where people are looking for tools and methods which assume you haven't got certainty and you can't control outcomes. But this is the really fascinating thing about complexity: you can understand and manage the present. And if you understand the present, you've actually got more control over the future than if you try and set goals, because you know what can happen. It's called the science of can and can't, to quote Deutsch.

You know what can change and what can't change, and you also know that whatever has the lowest energy gradient is probably what's going to happen. Yeah. Modern theories of evolution look at energy minimization rather than survival of the fittest. >> And you've got some fascinating experience at IBM, which is often described as the mother of all bureaucracies.

(15:56) >> And I think that's a bureaucracy that makes government feel dynamic and user-centric. >> You've shared some insights about how organizations work, and how the world of the C-level leader is often not how others understand it to be. There's often a lot of frustration directed towards C-level leaders, but also a lack of understanding of the complexity in organizations. How much has that informed your work? >> It's a mixture. So I mean, IBM for me, to

quote Dickens, was the best of times and the worst of times. And again, to quote Richmal Crompton's William stories and Violet Elizabeth Bott: when it's good, it's very, very good, and when it's bad, it's awful. When I joined, it was quite interesting. Senior VPs almost had a competition as to who could employ the most disruptive maverick.

It was like a status symbol. So that gave me infinite opportunities for employability, right? And I was hidden in their rounding error. You need to understand, for some people in IBM, their rounding error every quarter is a quarter of a million dollars; if it's less than that, it gets rounded down to zero, right? So the lack of transparency allowed them to experiment.

(17:01) And by the way, that's a key issue. Too much transparency, you destroy innovation; no transparency, you get corruption. There's a golden mean, to quote Aristotle, between the two. So IBM gave me a job in which I could do whatever I wanted as long as I upset the right people. And I achieved my targets endlessly on that. >> Was that in your job description? >> It was. And it was funny.

(17:21) I mean, my boss always worked out targets retrospectively. He was a very civilized guy. You'd sit down at the end of the year and agree what your targets would have been at the start of the year, based on what you both agreed you'd achieved. And HR for some reason didn't like that. So he decided to teach them a lesson.

(17:38) So I got X,000 for every vice president of IBM who demanded in writing that I was fired, and $1,000 for every director. HR went ballistic. But he said, "Look, we acquired Data Sciences to create a services business. We were the foundation for what became IBM Global Services. The strategy says we will disrupt traditional IBM senior people, and we have to protect people from the disruption."

So he said: it's a strategic goal, it's measurable, I've got authority, so I've done it. And they never ever made him set a target again. He taught them a lesson on that. He was a great guy, was Philip. All right. But the great thing about IBM is you could go into a client and you could do something risky, because you were IBM.

(18:15) A lot of the stuff I developed I couldn't have developed as a standalone company, but I could develop within the context of IBM. That was its great strength. Its great weakness was the bureaucracy. And the key thing in IBM, and this taught me a lot as well, is informal networks matter more than formal systems.

So the first thing any two IBM people will do when they meet is try and work out which social networks the other is in, because those are trusted. And those networks make things happen for you; the formal system doesn't. So that was one aspect of IBM. But then the C-level issue: I mean, I was C-level before I joined IBM. I was in strategy.

(18:48) I mean, you sit at your table and three or five people come to you with well-prepared PowerPoint slide sets they've worked on for the past three months, in areas where they have deep expertise and where you've got no bloody idea what the basis of their science is. They present proposals and you're meant to choose between them.

(19:04) It's bad enough if you have to do it individually, but if you have to do it in a board meeting, it's even worse, because whatever you do, if it doesn't work, you're going to get blamed for it. People don't understand that pressure. They also don't understand that at C-level you've got demands on you from stakeholders, from other people, that you can't communicate.

(19:21) I mean, the characteristic of a good CEO is the ability not to suffer pressure; you've always got to appear confident. You can't appear vulnerable, because you're in that role. It's not superhero time; you've got to hold the ship together. And one of the things we developed in Cynefin, for example, to help those executives is: okay, you've got five proposals, test which is coherent and which isn't.

Coherence testing is a lower bar than "it's right". So I can agree, for example, that your idea is coherent even though I think you're wrong. That reduces conflict in decision-making. And for every coherent idea you say: okay, three months, here's $10,000, go away and see what happens if you try your idea out. That's called safe-to-fail probes, which is a key complexity technique. And that's what we're doing with Swarm Compass:

we're getting the whole of the workforce involved in determining what's viable before people commit their reputation to it. >> And I think we know in organizations, because we do hear the phrase "it's safe to fail", that often it's truly not. So how do you create the conditions? Because a big part of this is creating the conditions, isn't it, where people feel able to take those kinds of calculated risks and see what happens?

(20:28) >> You basically target the manager. You say: if 40% of your probes, and we call them probes, not experiments, that gets the right attitude, if 40% of your probes don't fail, you failed. That's actually quite fascinating when you do it. I mean, somebody in IBM when I did it in Denmark said, "But they'll just create some stupid projects to achieve the target."

(20:47) And I said, "Well, that's what I'm hoping they'll do." And what happened is the stupid projects generally succeeded. So they found people they thought were idiots and gave them some money for a change, all right? And then suddenly discovered that those people were seeing the world differently from other people. But the key thing is you do them together and you do them collectively.

(21:04) So you do the coherence test, you agree things are coherent, you agree that you don't all agree on what's the right thing, and you say: right, we're now going to run five or seven or eight probes, and that will change the space, and then we'll know what the right thing is to do. So it's more of a research method. It's not a competition as to who's right or wrong; it's a research method to understand what's viable. >> And this is different to a pilot, because

I think you've talked about the fact that often it's like the Hawthorne effect, where it doesn't matter what's in the pilot, it will create some form of result. >> And that's the problem with it. So again, you'll never see anybody in the Cynefin Company talk about behavior or how you should be.

(21:42) What we do is we create processes which are likely to generate it, which actually means we can scale and we're not making moral judgments. I mean, I had this argument with Amy Edmondson the other day. Well, it wasn't an argument: I disagreed with her, got patronized, and when I replied, got ignored. All right, so that's what happens when you fight with Harvard professors.

(22:01) I think the whole psychological safety movement is becoming terribly oppressive. It's becoming a sort of reverse form of oppression within the system, and it's like adult development theory, which Nora Bateson has rightly called out as eugenic: it privileges people who've reached the higher levels of enlightenment.

(22:19) In fact, the language, if you look at it, even Kegan, which is the more respectable end, is the language of North Atlantic Enlightenment thinking. This is our ideal; have you achieved it? By implication, I have. And all of that is, to my mind, deeply manipulative. If you actually get people working together in small groups, they sort this stuff out for themselves. So, to give another example:

(22:42) one of our methods, and complexity methods generally hit multiple goals with one intervention, which is also powerful. So you've got several problems, like how do you take on new employees and how do you do innovation, right? Simple thing: we take somebody who's just joined the company and we put them in a partnership with somebody who's about to retire from the company.

(23:01) This is called a transgenerational pair. Now, we've done it in communities, by the way, with teenagers and retired people. And it draws on something called young driver syndrome, in that young, recently qualified drivers see things that experienced drivers don't, and vice versa. We've also done it with newly qualified doctors and experienced consultants.

(23:21) They see the world differently, but they've got enough in common. So I've got young, bright and naive with old, wise and cynical. And by the way, the cynics are the people in the organization who care. If somebody calls you a cynic, as far as I'm concerned it's a compliment, because you're not just being compliant. And then we put them in a trio with somebody who's identified as fast-track management, i.e. somebody who's on track to be a senior manager, who's got a reputation to build.

(23:41) So I throw 15 of those trios at a problem for a month, right? Some of them will see gorillas, which they won't do if I do a tiger team where I have everybody together with authority. But also, what I'm building is networks between young people in the organization and old people, and I'm gathering the stories from the old people before they leave, which is the most valuable form of knowledge out there.

(24:06) So I'm doing multiple things in the same way. We do the same in IT. We take a young, bright coder with an experienced, cynical systems architect and a user trained to talk to IT people. It's a lot easier to train users to talk to IT people than to train IT people to understand users. And instead of sending out a systems analyst to interview people, who will go out with a whole series of assumptions. And users don't know what to ask for in IT these days either, because they don't know what it can do for them; it's moving so

quickly. So we put 20 of those trios to work for a month and then we synthesize what they come up with. That's much better requirements documentation. But I've also built social networks between users and IT people, which will carry forward into the project and derisk it.

(24:49) Now again, you see what we're doing: we're creating a process which generates a result. We're not saying what result we want. We're not saying to old people, you need to listen to youth; everybody tells them that. I remember being told that; I now know they were right, but I didn't know it at the time. Admonitions to be good have zero effect on any human being who's competent enough to survive in an organization anyway. They just feed it back to you.

(25:13) >> It's interesting, because it also does something to the organization as well, doesn't it? One of the things I've enjoyed in getting ready for this podcast is looking at some of the terminology, and one of the things you talk about is epistemic justice. >> Yeah.

(25:26) Well, you'll describe it better than I could, but it's about some viewpoints and personas being more represented than others, and I think little mechanisms like this would go some way towards addressing that. >> Yeah, but that's a much bigger issue. So Beth, who's one of our consultants, has a wonderful way of saying it.

(25:42) She said the illustration of epistemic injustice is that old men are called philosophers, whereas old wives tell tales. So the way you describe something legitimizes people. And, you know, one of the many advantages of being Welsh is we've grown up next to the English, so we recognize the phenomenon. Interestingly, just to tell you on that, my grandmother was subject to something called the Welsh Not.

(26:02) So in the middle of the 19th century, the English decided that Welsh was an uncivilizing influence on the Welsh and had to be eliminated. And so if my grandmother spoke Welsh in school, she had a wooden badge hung around her neck with WN written on it, which stood for Welsh Not. She then had to catch one of her friends speaking Welsh

(26:18) and hand over the Not, and whoever wore the Not at the end of the day got thrashed by the teacher. Interestingly, the same was happening to indigenous people in Australia and Canada at exactly the same period. Tyson Yunkaporta and I talked about this the other day. And then they created a nursery rhyme: Taffy was a Welshman,

(26:33) Taffy was a thief, to reflect that. So Taffy is a real insult. I mean, we can use it in Wales, but not you English. But that's not all. The right to have your voice heard is key. And the critical thing here, and I think this is where AI is going down a really bad path at the moment, is that the algorithm is interpreting what you wrote down.

(26:51) Now that has a problem, because you can write down less than 10% of what you know anyway. The focus on text and tokens is problematic. Secondly, what that text means is for you to interpret, not for the AI, not for an academic, not for an expert. So a huge aspect of our work is allowing people to interpret their own experiences into what's called high-abstraction metadata, which is the primary unit of analysis.

(27:15) And as you can see, a lot of our methods put young people with older people in small groups so their voice is heard. We don't say you should listen to their voice; we create a process by which that happens. So that's a really key concept in what we do, and it's one of the reasons why we're now starting a big research program, launching next week or the week after, to look at the balance between AI and human reasoning.

(27:37) So we're going to audit all the decisions that get made in your company and identify the balance between human and machine reasoning, effectively between what's called abductive logic and inductive logic. AI is inductive, humans are abductive. And then we'll identify which inductive capabilities human beings have to practice for five years before they develop the abductive capability.

(27:58) There are things AI can do better than us, but if we don't do them, we never get the higher capacity. And then we're going to create an audit tool. So that's something we're going to run over the next six months, because at the moment the real danger with AI, and it's been coming over the last 30 years, starts with systems thinking and its excessive focus on information, information in signal form,

(28:17) which then became text form: we're reducing human intelligence to the ability to process text. So in the knowledge management community, they don't talk about knowledge, they just talk about text. I mean, they're calling it knowledge, but if it's not written down, they don't believe it exists.

(28:33) And that's deeply, deeply problematic. So at the moment the danger with AI is that we're meeting it halfway. And, fascinatingly, a big issue for small consultants here, by the way, is that the big consultancies are getting really hit, because their whole raison d'être for the last three decades, well, not all of them, has been to repurpose existing text for new clients at huge margins.

(28:54) Well, AI can do that better. So I have a simple heuristic at the moment: if anybody says AI has made us more productive, that probably indicates they were doing a bad job in the first place. >> I think you also said that it should be mandatory for all software engineers to have ethics training. >> Yeah, I wrote an article on that.

(29:08) Well, they're making decisions without any understanding of the implications. And the really scary thing, I had this in Sweet Georgia Brown's in Washington with Peter Thiel once, my one encounter with him: I remember walking out of the meeting and saying, you're not immoral, you're amoral, and that's far more scary.

(29:24) >> Hi, we're just pausing this interview for a moment. Have you ever finished an episode of the OrgDev podcast and wished you had a cheat sheet that summarizes all of the key points? Us too, so we made one. It's called From Pod to Practice, and each week in our newsletter we'll share a two-page summary of the latest OrgDev episode.

(29:43) It includes key takeaways, a reflection prompt, and one small action you can try, all in a digital format with space at the end to add your own notes and reflections. It's designed to help you take the learning from the podcast into your day-to-day work. To get your copy, just sign up to our Next Step to Better newsletter via the links in the show notes, or visit our website at www.distinction.live

(30:03) to get the latest From Pod to Practice in your inbox, and let us know what you think. We'd love to get your feedback. >> And you touched on decision-making there as well. One of the things I've found fascinating is you talk about the fact that decision-making doesn't bring out the best in an individual, and that it's often distributed... >> And therefore the way in which decisions are designed actually brings out the worst. There are much more sophisticated ways of doing it.

(30:28) >> Well, all good decision makers actually have people with them they trust. I mean, I've yet to see an executive who didn't carry two or three people around with them between employments. So any intelligent person has already worked that out. But we do know some basic scientific facts on this; they come from biology.

(30:44) The primary decision unit is the sexual pair, which is more about nurture than decision-making. Then there's the small hunting party or extended family, which is normally about five in terms of active decision makers, maybe slightly bigger in numbers, but never more than seven. And there are things called demes, which are collective groups of about 20.

(31:02) And the demes tend to hold together in summer, when there's plenty, and compete with other demes. Then in winter they come together in what are called macro-demes, which are groups of 500 or so, because now they've got to cooperate to survive, not compete. There's a lesson in that for economics, by the way. And that's where Graeber and others argue monument building came from.

(31:21) You have to have a task that you agree to do together, otherwise you'll fight. Now, we've actually built that into a lot of peace and reconciliation work. So you make decisions differently in different groups. And the interesting thing is, once you go above seven decision makers, people fall back into their silos. So I was working recently on an NHS triage issue.

(31:39) So we have doctors, we have nurses, we have ambulance drivers, we have hospital administrators. They all want to improve things, but nobody will depart from their silo, because that's where their respect comes from. If I take one person from each silo and put them in five or six parallel teams, they innovate.

(31:55) So some of the really radical stuff, which we announced yesterday, is to actually allow role-based combinations. So I've got six roles, yeah, not people but roles, and I accept a seventh role which is completely anonymous. I don't know who it is, and that creates what's called a panopticon effect.

(32:14) And if you've got an unseen observer, you're honest. And those groups of seven can spend money or make decisions without seeking approval. Now, what that allows me to do is basically let a thousand flowers bloom, to go back to the 1960s. I can have a very small amount of money allocated to lots of people doing experiments, and then the real money can follow the things which work, rather than the people who say they can make things work.
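The role-based combination described above can be sketched roughly as follows. This is a hypothetical illustration only: how the real scheme selects people and keeps the anonymous seventh role secret is not described in the episode, so the random draw and the pool structure here are assumptions.

```python
import random

def form_cross_silo_team(roles, pools, seed=0):
    """Draw one named person per role, plus one anonymous observer.

    Hypothetical sketch of the idea only: the selection mechanism is an
    assumption, not the announced method.
    """
    rng = random.Random(seed)
    team = {role: rng.choice(pools[role]) for role in roles}
    # The observer is drawn from everyone not already on the team and is
    # never revealed to the named members -- the "panopticon effect".
    remaining = [p for role in roles for p in pools[role] if p not in team.values()]
    observer = rng.choice(remaining)
    return team, observer

team, observer = form_cross_silo_team(
    roles=["doctor", "nurse", "ambulance"],
    pools={
        "doctor": ["D1", "D2"],
        "nurse": ["N1", "N2"],
        "ambulance": ["A1", "A2"],
    },
)
```

The design choice the episode emphasises is that the named members know an unseen observer exists but not who it is, which is what changes behaviour.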

(32:38) Now, that's one of the most radical things we've created, and I'm really excited by it. We know that in the NHS we could take 50% of the bureaucratic costs out of a hospital, and we could have faster decisions made in the field with higher retention of medical staff. It's going to take two years to get people to listen to us on that. >> Yeah, I was going to ask about some of the more radical approaches.

(33:00) How do you get people to engage with that? How do you get, almost, permission to do the work? >> So there are two ways. If you're working in industry, you go and find the early adopters. And the trouble is they tend to be oil companies, pharma companies, and military and intelligence.

(33:15) So yeah, if you want to do novel things, get used to working with people who do things other people consider evil. But I've taught just war theory at West Point, and I quite like those guys, because they know they're going to have to kill people, so they worry about it, whereas people who don't have to do it just rely on them.

(33:31) But that's a story for another day. So there are always people there who will do something novel for the first time. This is Moore's Crossing the Chasm; this is where you build your early reference sites. And to be quite honest, you don't want to be over-formal. You want to be working with the client.

(33:47) So you collaborate jointly to develop stuff. When the market switches, which it has on complexity, then you start to create product, because people are now buying what it will do for them, not how it does it. Now, medical research is different. It may be about to change in the States; we're talking in California at the moment, because the sudden withdrawal of funding is creating a crisis and they've got to look at new ways of doing things.

(34:08) But generally, for medical research, we're going to be holding an invitation-only seminar next year for people in the health sector who get the basic science. And what we'll be doing is constructing controlled, financed experiments to test these ideas out, from which we can publish papers, because until we do that you will not get mass adoption in the NHS, even though it could make a big, big difference straight away.

(34:31) So you have to understand your markets and the way you work. >> One of the things that really stood out for me, as we're talking about shaping systems, is that you say you can't predict change, but you can create a vector of direction, and that introduces the world of constraints and constructors. >> And not all constraints are the same as well.

(34:51) >> So for people who are relatively new to the field, could you give a bit of a definition of how they actually work within an organization? >> If I work in a complex system, what I'm always doing is trying to manage for emergence. As things interact with other things, properties will emerge which can't be predicted from the parts.

(35:09) That's the key concept of emergence. So there are four things I can manage in a complex system. The first are called actants. This comes from Latour's actor-network theory, though I've modified it a bit. An actant is anything with agency in the system. Now, process philosophy argues the same thing, but I'm not wild about that, to be honest.

(35:28) So actants can be actors, i.e. people or roles. They can be constraints, and constraints can connect things or they can contain things. Then you get constructors, which is a really important concept. A constructor changes things but doesn't itself change in the act of changing them. So, for example, a software object is a constructor. A ritual is a constructor.

(35:52) Contagion also works with constructors. So constructors give you stability within a complex system. So when we're doing what's called estuarine mapping, or affordance mapping, we identify all the actants in play in an organization and its market. We can do that in a workshop, which is the best way to get started, but then we use software to do it at scale.

(36:12) And then we map that material onto a grid between the energy cost of change and the time to change. And there's a key principle we use on this: if people can't agree what something is, or they can't agree where it's placed, they break it down until they agree. There's no discussion, there's no argument, there's no dialogue, there's no facilitation.

(36:30) If you don't agree, break it down until you agree. This gets to what in complexity is called the optimal level of granularity. And then on that grid, we identify the stuff at the top right, where the energy cost of change and the time to change are so high that realistically it's not going to change, but I may need to monitor it.

(36:46) And monitors become key, because monitors give me weak signals of failure of a constraint, or early signals of emergence. And the stuff in the bottom left is highly volatile, so it can, to use the expression, turn on a dime, which means it's quite dangerous. That takes half a day to do. Then in the second half-day, because we normally run this overnight to give people time to absorb it,

(37:05) we come up with actions to change the energy cost of changing things, but we don't necessarily try to change them. So what we're doing is changing the landscape so it's more favourably disposed to what we want to achieve, before we try to intervene. That's called estuarine mapping, and it uses actants, and I've introduced the concept of monitoring.
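The grid logic described across these passages can be sketched as follows. This is a minimal illustration under stated assumptions: numeric scores and fixed thresholds are inventions for the example, whereas the actual method places items by collective judgement in a workshop, decomposing anything the group can't agree on.

```python
def classify_actants(actants, hi=0.7, lo=0.3):
    """Place actants on an energy-cost-of-change vs time-to-change grid.

    Minimal sketch: the numeric scores and cut-offs are assumptions, not
    part of the published estuarine mapping method.
    """
    zones = {"monitor": [], "volatile": [], "workable": []}
    for name, energy, time_to_change in actants:
        if energy >= hi and time_to_change >= hi:
            zones["monitor"].append(name)    # top right: won't change, watch it
        elif energy <= lo and time_to_change <= lo:
            zones["volatile"].append(name)   # bottom left: can turn on a dime
        else:
            zones["workable"].append(name)   # act here: lower its cost of change
    return zones

zones = classify_actants([
    ("national regulation", 0.9, 0.9),   # hypothetical example actants
    ("team ritual", 0.5, 0.4),
    ("rumour mill", 0.1, 0.2),
])
```

The useful distinction the grid makes is between what you monitor (top right), what you watch warily (bottom left), and where you spend effort changing the energy cost of change itself.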

(37:23) The other key thing in complexity is the interactions. So one of the ways you change things is to change the interactions, and I've given you some examples of that. And of course, some actants can be made so solid that they won't change, and some interactions can be ritualized so they won't change, and that's called scaffolding.

(37:38) That's the AIMS framework: actants, interactions, monitors, scaffolding. Now, the key thing about this, and it really upsets some people, is that in a complex adaptive system, none of the actants has any knowledge of the whole. So if anybody says we need to think holistically, they do not understand complexity theory, and they're in a very bad place, because you can't and you shouldn't.

(37:59) Remember the bees? None of the bees is thinking holistically, because the minute somebody tries to do that, they impose their view of the world on the system, and therefore they miss things which they would otherwise need to pay attention to. The other thing the estuarine map gives you is an alternative to scenario planning, because it says what can change easily, and that's what's most likely to happen. And because we build it bottom-up, without discourse, without dialogue, without argument, we've got an objective assessment of the situation.

(38:24) >> Now, a lot of the leaders we work with are in a kind of state of overwhelm. What advice would you give them? What's one small practice or one small thing they could start? >> Complexity theory is for lazy managers. >> Okay. >> And lazy consultants. Because you stop trying to control everything, and you just say, I'll change the actants, I'll change the interactions, I'll see what happens, and if I like it, I'll give it more energy.

(38:46) I mean, complexity theory is a gift for senior executives, because it gives them time to monitor the whole. >> Well, yeah. Because I think you talk about the Latin roots of complicated and complex, don't you? >> Yeah. Complicated is folded; complex is entangled. Something which is folded can be unfolded and folded again.

(39:02) It doesn't change. Something which is entangled, you can't. Look at a fishing net on the side of a harbour: the fisherman can untangle it, but you can't. Well, come walking with me. I've got one route this week which I know is going to involve bramble bushes and ferns and woodland, and I am not looking forward to it, because everything is entangled.

(39:19) I've got a machete somewhere in the car I may take with me. >> It's a big question, but when you look back at your career so far, what are some of the biggest lessons that you've learned? >> Do what seems right at the time. I mean, I was lucky as well. I had a very secure home life, and we know that's associated with risk-taking. But it was also intense. My father was born the son of a small farmer, quite abused physically and everything

(39:45) else as a kid, because farmers' sons were. And one day his father got a bad vet bill, so he went down to the school and said, which of my kids is bright enough to be a vet? So my father was hoicked out of being apprenticed to a carpenter and sent to Glasgow veterinary school. He then ended up as a veterinary officer to the Blues and Royals regiment in Kashmir during the Second World War.

(40:03) So: a working-class northern farmer's son, suddenly at the height of the British aristocracy. He learned a lot from that. And my mother was born above a whorehouse in Cardiff docks and fought her way out through education. She studied German, first-class honours. But she decided if she was going to study German, she'd do it in Germany.

(40:21) So she went to Hanover University in 1947, which meant she had to wear a passport around her neck so she wouldn't get raped by British soldiers. So I grew up in a household where education was everything and argument was everything. I mean, if we liked you, we argued with you. If we were being polite, you needed to worry about it.

(40:39) And that was huge. And, you know, I debated; probably the most seminal thing in my life was debating. I still remember, at the age of 11, walking to the front of the classroom on a Friday and being given a card that said: you support capital punishment. And my mother was then leading the North Wales Labour Party campaign against capital punishment.

(40:57) So this was the teacher being wicked. I had to speak for seven minutes, without preparation, for something I profoundly disagreed with. We did that every week from the age of 11 to 18, and those of us who were any good at it got formally taught rhetoric. That made us generalists. It made us confident. You read everything, because you never knew what you were going to be hit with, and you became hypercritical, because arguing for things you don't believe in means you've got to be critical.

(41:21) You see my point about process: nobody told us how to behave, but in those days the grammar school turned out generalists. Now, part of the problem we've got in society at the moment is there are hardly any generalists left. Everybody's a deep specialist, and that T-shaped generalist stuff is nonsense. If you're deep in one field, you just can't really get the other fields.

(41:39) Generalists are shallow in everything. So I think generalist education, basically taking the view that whatever I wanted to do would work out, and generally it did. And by the way, more people should do that, particularly if you work for American companies. I worked out pretty fast that American companies are very authoritarian: if your boss tells you to do something, you do it. I didn't do that; I just told them no. So I have the world record: I didn't fill out a time sheet for seven years. I just refused, yeah, refused to fill out

(42:05) time sheets, not wasting my time. Every year this little Band 6 would come up and say, I've been told to get you to fill out your time sheet. So I'd let him have my diaries, right? Defiance is a wonderful tool when you use it. >> You're talking there about the importance of disagreement, people coming together in organizations, all with different perspectives... >> Anodyne nonsense about everybody's views being equally valuable.

(42:26) Sorry, they're not. >> How do you create conditions in organizations where people feel able, and have permission or give themselves permission, to actually have the discussions that are required? >> You do it by micro-interactions. You put people in small groups that allow them to make decisions and sort it out between themselves.

(42:42) Trust will arise from working together. You basically do Swarm Compass: you poll the entire workforce, backwards and forwards, continuously and anonymously, and see what patterns are sustainable. This concept that you have to empower people to speak out in order to learn, well, it's a nice idea, but it's never going to happen.

(43:00) Nobody in their right mind is going to do it. It's like, you know, you've seen those facilitators who say, own up to your biggest failure to show you trust the group. Everybody's got a well-rehearsed failure which shows how bright they are anyway. All of this ideal-behaviour stuff just encourages people who play the game, rather than people who are genuine.

(43:18) >> And you just touched on something there. I was watching a keynote you were giving where you gave an example of how, for things like whistleblowing, or speaking out against things that have happened... >> It's often seen as too difficult, career-wise. >> They won't. I mean, all the evidence is that whistleblowers get punished, and people know that.

(43:41) So you've got that problem. And the other problem we've got is nobody wants to cry wolf, right? We've seen that with engineers. They'll look at something, we had this in Boeing, and they'll say, something smells wrong here, but it's not serious enough for them to report it, because they don't want to trigger an investigation and be proved wrong.

(43:55) And it came home to me about two years ago. I was speaking at an agile conference in Eastern Europe; I was the opening keynote. A woman spoke after me, and as she came off the stage, the third keynote, who's pretty notorious, to be honest, we all know him, slapped her on the bottom and said, "Well done, lass. I'll see you in the bar later tonight."

(44:11) So I thought, we've finally got him. Because there's a general rule amongst those of us who know him that you never allow a woman on her own in a room with him. He's notorious. And I remember going to the woman, and she said, "No way am I reporting that. I'll be subject to secondary abuse. When you confront him,

(44:26) then all his mates will gang up on me. It's not worth my life." Now, I found the same when I did a lot of investigation into this: in the big consultancy firms, people will not report racism or sexism until it's so serious they've got no alternative. Actually, if you've seen The Morning Show, it's a really good illustration of that.

(44:42) Somebody gets away with things, and then they start to feel they're entitled to things. Nobody feeds back negatively, so then they become an abuser. And we know that nobody will report fraud until they're absolutely certain of it. Nobody will report safety issues until they know it's a real problem. So what we do, and this is done with SenseMaker, is let people report something as a micro-issue.

(45:00) The high-abstraction metadata we developed and patented comes in then, because it's non-judgmental; it's purely descriptive. So they identify the thing, they use descriptive metadata, and we destroy any identity, and we destroy the content apart from some keywords. We then look for a pattern in multiple micro-reports and create an auditable report which says: you've got an emerging problem here, go and deal with it.

(45:23) But that doesn't involve the company in investigating cases. This is weak-signal detection. It says, you've got an emergent problem here, go and investigate. And by the way, we do that for business opportunities as well: there may be a new opportunity here, but you're not spotting it, because nobody's certain yet.
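The weak-signal idea described above can be sketched in a toy form. This is illustrative only, not SenseMaker: it assumes each micro-report has already been reduced to anonymous descriptive keywords, and it flags a pattern once enough independent reports share one; the threshold and data shape are inventions for the example.

```python
from collections import Counter

def detect_weak_signals(reports, threshold=3):
    """Flag keywords that recur across independent anonymized micro-reports.

    Toy sketch only: SenseMaker's high-abstraction metadata and pattern
    detection are far richer. Here a report is just a set of keywords,
    with identity and free text already stripped out.
    """
    counts = Counter()
    for keywords in reports:
        counts.update(set(keywords))  # count each keyword once per report
    return {kw for kw, n in counts.items() if n >= threshold}

signals = detect_weak_signals([
    {"budget", "late reports"},   # hypothetical anonymized micro-reports
    {"budget", "overtime"},
    {"budget"},
    {"parking"},
])
```

The design point is the one made in the episode: no single report is a case to investigate, but a recurring keyword across many anonymous reports is an auditable signal that something is emerging.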

(45:38) And again, that's something which came out of a whole body of work. >> One question I wanted to ask as well: you're intensely well read. How do you invest in your own learning and development? How do you make time for it? What do you do? >> Oh, lots of argument with people on social media. People don't realize just how much I'm enjoying myself.

(45:56) I guess I read a lot, and I chase references down. I've got one big advantage: I'm dyslexic. So the negative is I can't learn foreign languages; I can't pronounce a word from the text. I have to hear it 15 or 20 times and use a mental trick to learn it. But it means I read a book two pages at a time.

(46:12) Literally, I just go through it like that and pick up the overall pattern, and some books get put on one side because I need to read them in detail. For those I use four different coloured pens. I wasn't diagnosed with dyslexia when I was young, so I developed coping mechanisms, but reading a line at a time takes a lot of effort. And I'm eclectic.

(46:31) At the moment I've just packed to go up to North Wales, and I've got a book on the history of intelligence which goes back to pre-Roman periods; that's history. I've got five science fiction novels, and I've got a couple of really heavyweight things on anthropology and biology. So read eclectically, and don't worry if you don't always understand it.

(46:51) You know, you'll pick it up, and then you'll apply stuff. Oh, and talk with lots of people from lots of different backgrounds, and don't be afraid to argue with them. The good people like an argument; the people who can't cope with an argument are probably a waste of time anyway. >> And is there a particular book or podcast that you would recommend to other people? Is there one standout resource for you? >> No, I'd refuse to do that on broad principle. Go and read a lot, see

(47:14) a lot, but read outside your discipline. >> If somebody's just taking their first steps into this area, what advice would you give them? >> Find an intractable problem. Find a problem people can't solve with conventional approaches, and come and talk with people like us, and we can probably help you.

(47:30) Don't go after the low-hanging fruit, because conventional techniques can deal with that. >> Yeah. So go for something really sticky and tricky. >> Something sticky which is frustrating people. So, for example, what we're doing with Swarm Compass is: how the hell do you consult the whole of your workforce within a day and get results back, so you don't have to wait months? Well, we can do that now.

(47:46) >> Thank you very much for taking the time to talk to us. We've really enjoyed it. There's lots for people to take away and digest, and I think you've helped people think about things through a different lens and challenged some of the received wisdom that's all too prevalent out there.

(48:00) So, thank you very much for your time. It's been a delight.