A transcript of Episode 245 of UX Podcast. James Royal-Lawson and Per Axbom are joined by David Dylan Thomas to discuss designing for cognitive bias.
This transcript has been machine generated and checked by Bevan Nicol.
Transcript
James Royal-Lawson
Thank you to everyone who is helping us with our transcripts. You’re doing a great job helping us make sure they’re published together with the podcast. If you’d also like to help out, then just email us at Hey@UXpodcast.com. HEY or HEJ, English or Swedish – you can choose.
Computer Voice
UX podcast episode 245.
Per Axbom
You’re listening to UX podcast coming to you from Stockholm, Sweden,
James Royal-Lawson
Helping the UX community explore ideas and share knowledge since 2011. We are your hosts, Per Axbom and James Royal-Lawson.
Per Axbom
With listeners in 194 countries from China to Taiwan. And today, we’ve got David Dylan Thomas. He’s given presentations on the intersection of design, bias and social justice at numerous conferences.
James Royal-Lawson
His work combines more than 10 years of content strategy experience in entertainment, healthcare, publishing, finance and retail with a deep understanding of bias, cultivated by researching and producing over 100 episodes of his podcast, The Cognitive Bias Podcast.
Per Axbom
And now, David has released Design for Cognitive Bias through A Book Apart.
James Royal-Lawson
Now, I don’t think we’ve actually had a full-on episode about cognitive bias before.
Per Axbom
No, it’s about time because we reference biases all the time, I think.
James Royal-Lawson
Yeah. And we’ve had episodes about dark patterns. We’ve had episodes about persuasion. But I think we’ve just mentioned biases over the years.
Per Axbom
Yeah, it’ll be interesting to get a deep dive now.
[Music]
James Royal-Lawson
Your book is split up into four chapters: What is Bias?, User Bias, Stakeholder Bias and Your Own Bias. But I think, just like you do in the book, a good place to start is What is Bias?
David Thomas
So, bias is basically a series of shortcuts your mind is taking just to get through the day. You have to make something like a trillion decisions a day. Even right now, I’m making decisions about how fast to talk, whether or not to look at the thing to make sure it’s still recording, what to do with my hands. And if I had to think carefully about every single one of those decisions, I’d never get anything done. So it’s actually a good thing that your mind is mostly on autopilot. But sometimes the shortcuts lead to errors, and we call those errors biases.
Per Axbom
So, I know I’m skipping almost to the end of the book, but you have this reflection on how we put on different glasses during the day. The different glasses mean you’re thinking in a different way. But we also keep forgetting that we have glasses on all the time: whenever we come into a room, when we’re about to record a podcast, when I’m about to ask a question. My glasses are on all the time. I mean, is it even possible to avoid these mind shortcuts, as you call them?
David Thomas
Not even a little bit. So the whole glasses thing – there’s an effect called the framing effect. And it’s, in my opinion, one of the most dangerous biases in the world. And the reason is this. The basic effect, just to explain it: let’s say you’re at a supermarket and you see a sign that says beef 95% lean, and next to it a sign that says beef 5% fat. People might be drawn to the 95% lean, but it’s the same thing. I’ve just framed it in a way where one seems more appealing.
And so once you understand that bias, you might think to yourself: I walk into a situation, decide what frame – what glasses – I’m going to put on to view the situation, and then that will bias me one way or another. But the truth is far more dangerous than that. The truth is, you were already wearing glasses when you walked in the room. You just didn’t know it. And just imagine that your whole life you’ve been wearing these glasses and you didn’t even realise it, and you take them off one day and realise the world is completely different than you thought it was.
I think a lot of people are going through that right now with systemic injustice and race and all those discussions. So it’s a very jarring thing. But having those glasses on totally frames how you live your life. The problem is, as I said before about needing those shortcuts to live, that frame is kind of a shortcut; it gives you a very easy way to understand the world. And depending on the frame, it could actually be a very useful thing. But by necessity, it is limiting. It is cutting some things out, it is taking some shortcuts, it is not actually analysing the situation as it is.
And so, you can’t really get rid of the frame, so to speak, but what you can do is invite people who are wearing different glasses into the room when you’re making a decision. People who have a different lived experience than you. And this is why, when you start to hear that cliché that representation matters and that diversity matters, it’s not just a cliché – it really does change the information you’re using to make the decision, and the perspective.
So if you’re inviting people to the table who look like you and act like you, your decisions are going to be the same as the decisions you’ve always been making. If, on the other hand, you invite people to the table who might be impacted by the outcome of that situation, who have a different lived experience than you, who have less power than you, then I think you increase the odds of these new perspectives producing less harmful outcomes.
James Royal-Lawson
Generally, though, I think the whole thing of perception is fascinating. I mean, the whole thing about how much information we’re taking in and that we’re effectively using the biases to survive. Well, you quite quickly get to the thing about: does the world even exist? How does it look to other people? How does it sound and feel? Because, I mean, ultimately, it’s all just frequencies and vibrations, isn’t it? Our perception of the world is just what our mind has decided to project – to make the film analogy – it’s just what’s projected inside our heads.
And the only thing in common with everyone is maybe that we’ve got shared inputs, or overlapping inputs, to a certain degree. But I have absolutely no idea if words sound to you like they sound to me. I have no idea if you look to yourself like you look to me. I just know that what I see is what’s been projected inside my head as a result of all the bells, whistles, frequencies and vibrations that are going on in the world outside.
David Thomas
Yeah, and you very quickly – I forget if it’s solipsism or sophistry, but one of those is basically the brain-in-a-jar philosophy, where it’s like, how do you know you’re not simply a brain in a jar experiencing all these things, because electrical impulses are being sent to that brain, right? It’s why The Matrix is so compelling as a story premise. Because I can’t tell you it’s not true. You know, I can’t prove it.
But what I do with that sort of unsolvable riddle is let it give me a great deal of humility. Because one of the things you get from studying cognitive bias is either a great deal of depression or a great deal of humility around “I don’t really know what’s going on. I’m taking my best guess – that’s what my brain is doing all the time.” And, if that’s the case, I need something to base my decisions on. Right?
Something that isn’t simply the inputs of the outside world. Like, that should be a part of it, but ultimately, I have to know that there’s going to be some faultiness there. So I start to say, “Okay, what else can I bring in to make that decision more valid?” And I think it’s a mix of other people’s illusions, as well as just a fundamental set of values.
I think this is why things like religion grew up, or things like ethics and values grew up: because we started to realise – whether or not we realised scientifically how faulty our information gathering and processing is – just experientially, we realised, “Oh, I thought this was going to happen, but this happened instead.” And after that happens enough times, people start to notice that they’re not great at predicting the future. Or they’re not great at guaranteeing positive outcomes.
And so you do start to develop things like ethics, like morals, like religions that sort of say, “Okay, I need something other than the material world to base my decisions on.” And it gets tricky, because you don’t want it to turn into “Oh, I’m going to ignore the physical world.” And that’s where you get into people deciding not to wear masks. You don’t want to go that far. But you do need more than just your own experience to rely on. Because if you just rely on your own experience, “Oh, maybe you grew up with a lot of racist inputs.”
Per Axbom
So that’s sort of an answer to my first question, really: “How do I avoid being biased?” I have to bring in other people and compare notes and see, “Do they see the same thing I see?”
David Thomas
Very much so. And the thing is, if you look at a lot of the processes that have been created to fight bias over the years, they are versions of doing exactly that.
So you talk about something like Red Team / Blue Team, where I’m gonna have a blue team – let’s say they’re working on a product; they do the research, they start to create some wireframes. But before they really commit to anything, the red team comes in for one day. And the red team’s job is to go to war with the blue team, and to look for all those hidden assumptions and biases, or even potential causes of harm, that the blue team missed because they fell so in love with their initial idea. Fundamentally, the idea there is that if I just rely on the blue team, they may create a good product, but it is inherently going to have all the biases the blue team has.
If I bring just one other team in, I increase the odds that we’re going to create something that isn’t as harmful. And if that red team is made up of people who don’t look like the blue team, I increase the odds even more. I feel like that is very often the strategy we’re taking. We’re trying not to eliminate bias, because you can’t really do that, but to mitigate its impact.
James Royal-Lawson
Yeah, that example comes from chapter four of your book, doesn’t it? Red Team Blue Team. And I particularly like that, because it’s basically peer review, isn’t it? A kind of a form of peer review.
David Thomas
Absolutely. None of these ideas are new per se. Peer review has been with us ever since we’ve had science. I basically leverage the scientific method when I describe the mindset you need when you design, which is, “Okay, I think I’m right. So, let me ask this question: if I’m wrong, what else might be true?” The true scientific method doesn’t just do an experiment to confirm a hypothesis and say, “Okay, we’re done.” The true scientific method says, “Okay, if my experiment has confirmed this hypothesis, my next move is to try to disprove it. It’s to say, ‘If I’m wrong, what else might be true? Okay, let’s run some experiments on that.’” And in design, we almost never do that, right? There’s no part of the process where we’re supposed to stop and say, “Okay, it looks like this design works. Let’s try to break it now and find a better design by completely ruining the last week’s work.”
James Royal-Lawson
I actually tweeted, I think it was last week, a call for “Why don’t we start an open hypothesis movement?” Because I completely agree. I think we totally lack that peer review way of working. I mean, we’re getting to the point now where there are lots of hypotheses being built and we’re data driven, and even Red Team Blue Team is a great idea. But I think we lack that openness about what we’re doing. So much gets hidden – algorithms get hidden under the surface and design ideas get hidden under the surface – and they get validated, but we don’t check them. We don’t have the checks and balances.
David Thomas
Yeah, and to a certain extent, there are economic motivations not to be open, right? To be proprietary. And I’m not anti-capitalist per se – but I can’t say that I’m not, I don’t know. But there is a version of capitalism, I’ll put it that way, where you are motivated to not share. That’s one problem.
You are motivated to lie, frankly, because if someone’s paying a lot for something, and you know it doesn’t have any real value, it’s not in your best interest to prove yourself wrong. It’s not in your best interest to say, “Oh, this person wants to invest $10 million in my company. What motivation can I possibly have to make sure my product is good?” To test it and say, “What motivation do I possibly have to ask myself, ‘If I’m wrong, what else might be true?’ when someone’s about to give me $10 million?” It’s very easy, when you have nothing but capitalism driving the design process, to make some very bad choices. You kind of want these other elements.
So going back to science: there was definitely capitalism funding science and making it work, but there are a lot of other factors as well. My wife is a paediatric neuropsychologist, so she does research. And there’s something called the IRB, which is a review board that, if you want to do some research, has to sign off and say this is valid research – you are taking into account all these different factors, you’re making sure it’s not going to be biased – and it even looks at things like equality, like: are you looking at vulnerable populations who might be impacted by the results of this research?
So, I’m not saying this is specifically how to do it. But an example of a less biased approach might be to introduce some kind of design review that says, “Okay, this has been given the stamp by this design review to say the people who made this product did take inclusive steps to arrive at their solution. They did check themselves and challenge themselves and ask themselves, you know, could they be wrong, before they released this product. Whereas this other product over here, we don’t know how it was made.” Even that gets you closer, I think, to inclusive, less harmful design than just “Oh, everybody do what they want.”
James Royal-Lawson
Yeah, I think that does sound like a great idea. But would it allow us to actually avoid exploiting biases in a market economy?
David Thomas
No, I mean, I think it would start to give you – if we’re going literally the IRB approach – it would give you the choice to say, “Okay, this is the same way I can decide whether or not I’m gonna, you know, go into a LEED-certified building. Or to not give you a tax break if you want to build a skyscraper in my city and you’re not gonna make it LEED certified. That’s a choice I have. I can’t say it has to be LEED certified, but I can incentivise it.” So there’s that element to it, which frankly, at that scale, is when it really makes a difference, because individual consumer choice isn’t really driving that, versus “Hey, I want to build in New York City, but I can’t because my design isn’t inclusive enough.” Okay, that has teeth.
James Royal-Lawson
Yeah, if as consumers we start to expect that you have that stamp of design approval on it, then yes, it becomes an incentive for the organisation to make sure they do it, because the product won’t sell if it doesn’t have that little stamp on it. So yeah, it fits in with the market.
Per Axbom
For me, this is really how you end your book. You’re saying that now you have the tools, now you have the choice: if you really want to mitigate harm, you can do a better job of it with the advice you’re giving in your book. But if you really don’t want to mitigate harm, okay, fine – but that’s still your choice.
David Thomas
Yeah, I don’t need to write a book about how to not mitigate harm, because we’re doing that already. And honestly, I think it’s a very positive message. I’m talking about some very disturbing things in the book. But to me, it is ultimately a very positive message because, you know, if someone said to me, “Okay, Dave, we need to redo design. And in order to do that, you need to completely invent ethics, you need to completely invent peer review. You need to completely invent all these tools to mitigate bias,” I’d be like, “Okay, we need to completely redevelop design.”
But, by the way, here’s how journalism has already done it. Here’s how the medical industry has already done it. Here’s how ethics has been doing it for 2,000 years, right? And by the way, here’s how the Design Justice Network and the Prosocial Design Network and all these other groups are already doing it. Then it goes from being an innovation challenge in terms of “I need to invent the wheel” to “Oh, I need to convince people to use the wheel.”
Per Axbom
Yeah. And something else I appreciate that you borrow from the medical world is this concept of the duty of care. Realising that you actually have this duty because you have the power to influence other people’s decisions. You actually also have to take responsibility for that power.
David Thomas
Yeah, and this is riffing on the theme that Mike Monteiro talks about in his amazing book Ruined by Design, where he’s really doubling down on the déformation professionnelle part I focus on, around “How you define your job is critically important.” He’s really taken his whole book to make the argument that you cannot simply think of design as making cool shit. You absolutely positively need to think about the political implications of design, the social implications of design. Your job is bigger than you think.
So, absolutely, I think that’s critical and I feel compelled to tell you that the duty of care line, while it is ultimately traced back to the medical industry, my familiarity with it comes from Doctor Who. In the Peter Capaldi years, there’s a whole riff or theme he does around duty of care, with his companions and stuff like that. And it really struck me and that’s the phrase – it was Peter Capaldi’s voice I was hearing in my head when I was writing that section, not Hippocrates.
James Royal-Lawson
Just to fill in for the listeners here, what Per did just then was he changed his background on this video conversation to the TARDIS, from Doctor Who.
David Thomas
I need to get that. I don’t usually go in for video backgrounds, but I could totally get behind that one.
James Royal-Lawson
Thinking back about some of the aspects of the book, one thing that actually struck me was when you talk about user bias, you ask us to conjure up an image of a developer. And I did a little dramatic pause there so people can actually think of a developer. In the book, I think you wrote, “skinny white dude”?
David Thomas
“Skinny white dude” is the phrase. Yeah.
James Royal-Lawson
And I’m sat here and I’m thinking, “Oh, crap, in every single Hollywood film that’s ever been made, it’s a white, long-haired, probably slightly overweight, scruffily dressed guy sat in a leaning-back office chair in a dark room.” Isn’t it? And that stereotype – you think about how deeply entrenched just that one stereotype is.
And then, how many of these stereotypes do we have all the time?
David Thomas
Yeah. And it’s a pattern. I mean, the motivation for even writing the book came from Iris Bohnet, who gave this amazing talk called Gender Equality by Design. And the insight that she brought to it, that I had never really worked with before, is this idea that a lot of racial bias and gender bias isn’t necessarily explicitly “I wake up in the morning hating women or hating black people.” It’s “I explicitly love and support black people.
I voted for Obama, I voted for Hillary. But when I see a resume where the name at the top of the resume is female, I have this snap judgement, that I’m not even aware of making necessarily, where I start to devalue and be much more critical of that resume than if that same resume, with the exact same qualifications, had a male name at the top.
Because I’ve seen this pattern in my head that, oh, you’re trying to be a web developer, well, clearly you must be male. And if you’re not, oh, you’re trying to be something that doesn’t fit my pattern, so I need to be very sceptical here.” From an evolutionary psychology standpoint, it’s “Because the pattern doesn’t fit, I think I might see a tiger in the tall weeds or something.
So I get to be much more critical. I need to be on alert because it’s bad when something doesn’t fit the pattern.” And again, it is not necessarily that I wake up in the morning believing less of women, but my behaviour ultimately is going to be harmful to women. So, once again going back to duty of care, once we recognise that, we have to start asking questions like, “Do we need to leave the name on the resume? Is that actually helping anyone, and is it potentially hurting the process?” Maybe we leave it out now, or maybe we don’t even ask for it when you’re doing your job application. So I feel like patterns are very much what you’re fighting against when you’re trying to design against bias.
Per Axbom
You have this excellent Douglas Adams quote in the book that, obviously, is attuned to this: “The hardest assumption to challenge is the one you don’t even know you are making.” And that, I think, is what scares me the most reading your book: there are so many biases I’m not even aware of. So how do I fight them? How do I come to terms with them? But we have talked about how part of that is bringing in new perspectives, bringing in other opinions, bringing in more people. It’s the awareness that you’re creating that’s the important thing.
David Thomas
There are two things you’re dealing with. There’s awareness and there’s also power. So one of the most interesting movements right now in design is participatory design and this idea that, even before I begin my research, I am going to make a power map of everyone involved in this thing that I’m designing. And I’m going to have one axis be level of interest or impact. So on one end, low interest. On the other end, high interest. And then I’m going to have a vertical axis around power: At the top, high power. At the bottom, low power.
And the people who fall into the “I am highly impacted by this design, but I don’t have very much power” category are the people I need to focus on the most and give power by, not just interviewing them and saying, “See you later”, but interviewing them, synthesising my research, coming back and saying, “Hey, did I get it right?” Coming back again and saying, “Hey, is this design right?” and designing with them. And then, even before we launch anything, saying, “Hey, before we launch this, is this, right?” Letting them decide whether this thing actually gets launched rather than some CEO.
And that may seem unrealistic – again, going back to capitalism – but in terms of a just design process that accounts for this… because it isn’t just the bias in and of itself. The bias would be fine if it didn’t hurt anybody. The problem is it does hurt people, and almost always hurts people who traditionally have less power. So it isn’t even just a matter of introducing other perspectives, which I endorse, but when you’re prioritising, “Hey, who needs to weigh in on this?”, you don’t just roll a die. You say, “Okay, who has the least power, but is gonna be most impacted by this?” Even if we can’t get anybody else to weigh in on this, those are the people who need to weigh in on this.
So it’s about recognising bias, but then also recognising the role power plays in that bias. And that’s where I start to go toward the end. But then there’s a whole other book to be written about participatory design, which – call out to anybody who’s doing that – please write a book about it. Get in touch with me, I’ll put you in touch with folks who might be able to help you write a book about it. I think that’s the next book that needs to be written here.
James Royal-Lawson
I’m thinking about the way that the market forces work and the way that business works, and the way that our biases work and the way in which we unavoidably utilise biases in our design work. Because not everything’s bad.
There are good aspects to using biases in design. One example you gave in the book was about the perception of ease of use – cognitive ease turning into actual ease – where, if something’s presented in a simple way, it’s perceived as simple. Now, that feels like a pretty good design principle. And, if we’ve got that kind of aspect to biases, then we can’t control them, we can’t get away from them, so I wonder: should we maybe be more forgiving when we realise we did shit that may have exploited biases in a bad way?
David Thomas
It’s an interesting question, right? Because having studied bias, I certainly have – I don’t know if forgiving is the right word, but understanding – when I see people doing stupid stuff. When I see, for example, people voting against their best interest – people who are living in poverty voting for tax breaks for the rich. I get that now in a way that I didn’t before. And I don’t suddenly think it’s a good thing, but I do kind of understand why that’s happening. And the same thing with design: I get why – I forget if it was Snapchat or Facebook – people put beautification filters on photo apps that make you more white. I’m not necessarily forgiving of that, but I kind of get how that happens. I kind of see the steps that lead there, and they’re very human steps, I’ll say that. I think these things are very human. I think I get less forgiving when we’re past that. You know, you’ve hired sociologists to look at your stuff, and you’re still doing negative stuff. That’s when I’m like, “Okay, you knew better. At that point, you knew better.”
James Royal-Lawson
Listening to what you’re saying and thinking about what I said about forgiving, I think what I was trying to articulate there was “forgive quickly enough.” Basically, if we’re open about the mistakes we’ve made through our work with biases, if we do that quickly enough, then perhaps we should be more forgiving, to allow us to keep on going forward and keep on getting better. Whereas, like you say, if you’ve employed an entire team of experts who are working with persuasion and exploiting these biases, you probably are beyond forgiving.
David Thomas
I think you’re using some different ethical calculus at that point, right? Because there’s certainly ethics around the things that you do that you did or didn’t know. But once you know, that’s where we get into duty of care. And again, this is a very big part of why I wrote the book – to just introduce that that was a thing.
Because I don’t think we teach this in design schools. I don’t think people who are self taught designers necessarily will stumble across this on their own. To sort of write it down somewhere, that this is the power of design and it’s not just to get you to buy stuff, but it is working on your subconscious. I can’t put it any more cleanly than that.
And when you’re sticking your hand in the back of someone’s head and messing around there, okay, if it was brain surgery, I’d want you to get a licence, right? I’d want you to have some kind of ethical training before I let you go doing that, to a certain extent, to a scientifically studied extent.
That’s kind of what you’re doing when you’re doing design. Like the thing you were alluding to before. If something is clear, if something is easier to read, it seems more true. If something rhymes it seems more true. It’s not too many ethical chess moves from that to “Oh, you’d better be really careful what you make rhyme.” Right?
Per Axbom
I teach a course in ethics and design here in Sweden. And one of the most common questions I get, and I think it is on listeners’ minds right now as well, of course, is, “I’m coming new into the industry. Sure, I know about these principles about designing ethically.
But how do I get that buy-in from my superiors, from my UX leads, from the managers? How do I get the time? How do I advocate for that without it sounding like we’ll make less money by not doing the things that others would like me to do?”
David Thomas
The section on stakeholder biases in the book really tries to answer that question. I’ve been to lots of conferences and watched lots of people give talks, and during the Q&A section there is always, I can guarantee you, at least one person who’s going to say, “Hey, what you just talked about was awesome. How do I get my boss to do it?” And I think that’s a legit concern – going back to power, right?
So, just as our users have biases and we have biases, our stakeholders have biases. Our bosses have tendencies. Our clients have tendencies. And if you understand them, not to put too fine a point on it, you can sort of manipulate these biases to get what you want, or at least to get them to fully consider what you have in mind.
I was actually talking to Kristina Halvorson about this the other day. One thing that she brings up, that she knows works, and that I think is very important in general, is that she makes sure those people feel heard. It isn’t necessarily about constructing a really powerful presentation, or doing your research and presenting them with the research. They need to feel heard. Right? And that’s not just a boss thing, that’s a people thing. If people don’t feel heard, they will never change their minds. I can’t guarantee they will change their minds once they’ve been heard. But if they haven’t been heard, it’s almost impossible. Because that’s what people consider first: do they feel respected? Do they feel like they have dignity in this situation? Shutting down dignity shuts down everything else.
So I think that’s one thing: however you are presenting your argument, make sure they feel heard. And I think there’s a really clever way to do this – there’s a part of the book where I talk about stakeholder inception, and what I mean by that is, I’ll give you an example: I had a client who was convinced that their content needed to be behind a paywall. And we did our homework and we looked to see, ‘Okay, is this really moving the needle? Are people really willing to pay for this content? Can they get it elsewhere?’ Blah, blah, blah.
Long story short, there was no logical reason to keep it behind a paywall. And we sort of made that argument a few times until it became clear it was a non-starter. So finally, we said, “Okay, let’s write down all the reasons you’re creating that content. Let’s write down all the reasons users are looking for that content. And let’s just do a brainstorming exercise where we say, ‘Okay, how might we use content to align these two goals?’ Get out your Sharpies and your stickies and just go to work.”
By the end of that exercise, the client said, “Oh, what if we moved this content from behind the paywall?” And we were like, “That is a great idea that you had just now. I am very glad that you came up with that idea.” And that sort of combines the ‘feeling heard’ with the ‘new idea’, because now it’s their idea, which, not for nothing, they’re going to feel more committed to. Even if I had convinced them to use my idea – well, it’s my idea, so if the going gets rough, they can always abandon it.
Per Axbom
Exactly. I love that, also because you allow them the time to actually reflect on it enough that they can reach that conclusion. It’s about actually listening to them, to allow them to reflect.
David Thomas
Yeah. And the truth is what I like about that approach is that it also leaves room for me being wrong, right? So if I think, “Oh, the obvious solution is to pull it out from behind a paywall,” and I’m arriving at that because I’ve been given a certain set of data and I’m coming into it with a certain set of biases, okay, I might still be arriving at the wrong conclusion.
However, if they come into that process, and now they’re being given the same data, and the same sort of tools to come to a decision, they might come to a better decision, right? They might be like, “What if we did this?” And I didn’t even think of that. You know, “I am glad you came up with that. You really did come up with that.” So any process that leaves room for people’s true selves to arrive, I think, is a better one.
Per Axbom
That’s beautiful. I want to end there because that’s such a beautiful note to end on. Allow people to actually be themselves and you will arrive at important things. I mean, we had so much content in this short episode. Thank you so much, David. It was fantastic having you.
David Thomas
Thank you for having me. I really enjoyed it.
James Royal-Lawson
Thanks, David.
[Music]
James Royal-Lawson
Thinking about the whole idea of getting your boss to adopt things, and power struggles, and his idea of us being like brain surgeons, sticking our hands in the back of someone’s head to control them through biases.
And what we’re saying is that we need to stick our hands in the heads of our users, and of our stakeholders and bosses as well, and possibly even our colleagues. It’s very weird. But it made me reflect on how David had structured his book. He had broken it up into those aspects – there are the users, the stakeholders and there’s ourselves as designers.
But what this implies, and even possibly says is necessary, is that we are the hub in this. We are the brain surgeons putting our hands in the users’ heads. Or being responsible for how we put our hands in the users’ heads, and also responsible for how we put our hands in our bosses’ heads. The example of meetings and making bosses or stakeholders feel like they came up with the ideas. We are the puppet masters.
Per Axbom
Right. But a puppet master, that implies so much power, which is scary.
James Royal-Lawson
Exactly. But that’s what we’re talking about. We’re saying that we have unavoidable power because the biases themselves are unavoidable. And we’re also saying no-one else in all of this is possibly as well positioned to influence how those biases are acknowledged, exploited, or mitigated.
Per Axbom
I think we’ve always been the hub in a certain kind of way. Because we’ve always been the mediators, we’ve always been the ones who try to understand the business, we try to understand users, and then we talk to the developers.
So we’re communicating between all these different types of people to create something that we’re not necessarily building ourselves. We also have to communicate to someone else how to build it. So we are the hub, which puts us in a unique position, of course, then to be puppet masters, as you call it.
But it also means – the way I interpret David there – that we actually have to step away from designing and do way more listening. Because that is within our control, but it is also our responsibility: bringing in more people.
James Royal-Lawson
And I wonder how much of us saying that we are the hub of all this, or we are the puppet masters is our own bias, about the influence of design and the power of UX?
Per Axbom
I completely agree. It’s almost why we started the podcast, because we saw that bias and we thought, well, that can’t be true.
James Royal-Lawson
Yeah, no, exactly. It’s a very, very difficult one to deal with and mitigate. But I think there’s a lot of value in it and I think it’s convincing.
Per Axbom
I also appreciate what David was saying about when you have that workshop with your stakeholders and your bosses: you’re actually allowing yourself to be wrong. You’re opening up for the possibility that you actually are wrong as a designer, because other people’s versions and perspectives of the truth, of course, also matter.
James Royal-Lawson
Yes. One example he gives at the very end of the book as a tool is what he calls the Black Mirror rule. And this was another checks-and-balances suggestion.
Per Axbom
I’ve been seeing people do workshops around Black Mirror technology, actually thinking of what could possibly go wrong.
James Royal-Lawson
Exactly. You take your design solution and you get people to come up with a Black Mirror episode using the possible outcomes of your product. And it’s a wonderful thing. Again, maybe we’re quite bad at predicting the future, but at least this is a way of uncovering and unearthing current presumptions, current fears about what you’re doing, embodying it in a made-up Black Mirror episode.
Per Axbom
I mean, David, he goes even further. He says, “My assertion is that everyone working on a new technology should, by law, have to write a Black Mirror episode about it.”
James Royal-Lawson
He did push it very hard. This ties in with what I mentioned – and he mentioned as well – about peer review. I do think there’s a lot to be said about this, and I half-jokingly pushed the idea of the open hypothesis movement: that we should be more open about what we’re getting at and why we think it might be good to change something, and to allow us the chance to come in and tell our colleagues, tell people in our industry, “Actually, maybe if you do that you risk doing this,” or “That’s maybe not something you should dabble with or, if you do, this is maybe the consequence.”
So we can be more open about the consequences of what we’re doing. Yes, I think it clashes with proprietary thinking and how you maybe want to keep ideas to yourself because you think there’s money to be made from them. But I think a lot of the stuff we work with in design isn’t maybe unique, and maybe isn’t the thing to make money from. So we can be more open about it and maybe – not save more lives – but care more.
Per Axbom
I was about to say I’m 100% behind that idea, but I fail to see how it could be accomplished. At the same time, as you keep talking, I’m thinking, “Well, all the code that people are using in their products, they’re copying and pasting from code online. They’re using open source libraries. Even designers are looking at other websites, copying and pasting ideas. Everything is out in the open.” So…
James Royal-Lawson
The medical industry – when they develop new drugs – ultimately they have to be peer reviewed and tested and analysed and checked and so on. When it boils down to it, it’s no secret at the end. It’s just a matter of maybe who was first. Or being clear about original ownership, if you want to go along those lines.
Per Axbom
Definitely. So thank you for spending your time with us. Links and notes and a full transcript for this episode can be found on uxpodcast.com, if you can’t find them in your podcast-playing tool of choice.
James Royal-Lawson
That wasn’t easy for you to say, was it? No. Recommended listening – we’ve got that for you as well. And I think I’ll recommend now – I’m going to mention that I’ve been working on a research database for the UX Podcast episodes, so you can quickly search and pull out things, and it’s insane but it’s really good fun.
So I’ve pulled out from the database Episode 85, which is Wheels of Persuasion, with Bart Schutz. And this was actually recorded six years ago. And we talk about psychology and a lot of biases and how to use them to increase conversion and so on. And I think it’d be fascinating to listen back to this interview from 2014 with our 2020 ears.
Per Axbom
And remember our intros back then were much longer than they are today.
James Royal-Lawson
Yeah, they can be.
Per Axbom
And remember, you can contribute to funding the show by visiting uxpodcast.com/support.
James Royal-Lawson
Or you can even send us an email and volunteer to help us. That’s worth loads to us.
Per Axbom
Remember to keep moving.
James Royal-Lawson
See you on the other side.
[Music]
Per Axbom
James, did you read my blog post about confirmation bias?
James Royal-Lawson
Yeah, but it only proved what I already knew.
This is a transcript of a conversation between James Royal-Lawson, Per Axbom and David Dylan Thomas recorded in September 2020 and published as episode 245 of UX Podcast.