Ethical experimentation

A transcript of Episode 280 of UX Podcast. James Royal-Lawson and Per Axbom are joined by Craig Sullivan to discuss the explosion of mass digital experimentation and its ethical implications, both for people and for how we carry out our work.

This transcript has been machine generated and checked by Lexa Gallery.

Transcript

James Royal-Lawson
Thank you, Marcus, Hannah, Chris and Krystal, and everyone else who has contributed to UX Podcast. The help we get from volunteers and the contributions we receive, no matter how small, all help keep UX Podcast going and cover some of the costs of producing the show. Make sure you visit uxpodcast.com/support and contribute or volunteer to help.

Computer voice
UX podcast episode 280.

[Music]

Per Axbom
You’re listening to UX Podcast coming to you from Stockholm, Sweden,

James Royal-Lawson
Helping the UX community explore ideas and share knowledge since 2011.

Per Axbom
We are your hosts Per Axbom

James Royal-Lawson
and James Royal-Lawson.

Per Axbom
With listeners in 201 countries and territories in the world, from Ireland to Nigeria. Did that go up to 201?

James Royal-Lawson
It did go up.

Per Axbom
Oh, wow.

James Royal-Lawson
I don’t know which one. Yeah, I need to research that.

Per Axbom
Nice. So today, Craig Sullivan. He’s been experimenting on and optimising websites for decades. He’s an entertaining and informative speaker who regularly shares his knowledge and thinking through his writing and his engagement on social media.

James Royal-Lawson
Yep, and Craig is always interesting and always entertaining to talk to, and has joined us on UX Podcast more times than anyone else. His first appearance was 10 years ago, back in the giddy days of 2012, when we all had more hair on our heads.

[Music]

James Royal-Lawson
Craig, I mean, you have actually been on the podcast, I think, seven times now. There’s no other human that’s been on the podcast as much, apart from, of course, me and Per and a few guest hosts. But as a guest, you’re the one who’s been on the show more than anyone. And the first time goes back almost 10 years, to 2012. Back then we talked a lot about mobile and optimising for mobile; in our first two interviews we talked a lot about mobile. And over the years, I think we’ve talked to you in 2015, 2017… things have changed. So how do you see yourself as having changed during that time?

Craig Sullivan
I think some things have changed, and some things haven’t changed. We’re still doing the same stupid things that we were doing when designing products 20 years ago, right. And companies have a short attention span. I’ve seen companies go through cycles where they’ll learn a lot of stuff over three or four years, then they’ll change some key team members, and all of that knowledge will vanish from the company’s DNA. In fact, we see them go back to doing all the stupid things that they were doing before. And this cycle perpetuates itself. It’s a kind of short-term thinking, and we’re still getting people building products but not checking that they actually work with people.

And that problem was there 20 years ago, and the subtleties and nuances of it have changed, but they’re still there. For me, I thought that gap would have been closed by now, right? The gap between design as it is and design that’s better for users. I thought it would be way better now than it’s actually turned out to be, so I’m really quite disappointed it’s not improved. But on the other hand, in that period of time we’ve had all these other changes happening, like the mass explosion of experimentation, which is what we’re talking about today. And that means we’re now not just making design changes in products, we’re actually making multiple design changes and testing them on millions of people. And this can have huge consequences.

Per Axbom
Yeah. It’s like anything: once you start diving deep into something, it just grows and grows and grows. And it’s so hard to know where to start, really, because I get into these issues with: where do we even get our devices from? We get the cobalt from the Democratic Republic of Congo, and we’ve got children mining it for our batteries. And that’s the only way that we can have our phones and laptops…

Craig Sullivan
And the software running on top is a narcotic, right? A lot of digital products have now been turned into narcotics: gambling products, news feeds, things like these. You know, I hoped it would free the human race, but in some ways we’ve actually enslaved ourselves. It’s terrible. It’s not the promise of the internet that I read about in sci-fi novels and started thinking about back in the 90s. This isn’t what we meant.

Per Axbom
I mean, that was a big wake-up call for a lot of us, when just five, six years ago we started thinking: well, of all the things I wanted to happen, none of it really has happened. We haven’t democratised the world, but quite the opposite, we’ve given more power to the people who already had a lot of power.

Craig Sullivan
We haven’t reduced inequality, we haven’t improved fairness, we haven’t improved inclusion. We have created mass polarisation, and we have run into situations where we’re undermining democracy itself. These are quite fundamental and important things. And one time I realised, when I did a calculation about the impact of a bug on a site, how these problems scale up. The BBC had a bug on their website, so every time you went back a page, it took you back to the top of the previous page, and you’d have to scroll down like eight, nine times, right? So I did some rough testing with a few people to work out how much extra time that was, and then multiplied it across the billions of pageviews that they have every month, and across the year. And we came up with something like 400-odd thousand days of time lost to the UK population. That productivity is just, poof, vanished. It’s gone, right?
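
As a rough sketch of the arithmetic Craig is describing, with illustrative placeholder figures rather than the BBC’s real numbers, a calculation along these lines reproduces the order of magnitude:

```python
# Back-of-the-envelope cost of a "scroll back down" bug.
# All inputs are illustrative placeholders, not the BBC's real figures.
extra_seconds_per_view = 1.0          # extra scrolling per affected pageview
pageviews_per_month = 3_000_000_000   # "billions of pageviews every month"
months_per_year = 12

lost_seconds = extra_seconds_per_view * pageviews_per_month * months_per_year
lost_days = lost_seconds / 86_400     # seconds in a day
print(f"{lost_days:,.0f} days lost per year")  # ~416,667 days
```

Even a one-second annoyance, multiplied across billions of pageviews, lands in the hundreds of thousands of days.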

And all that time has been spent scrolling, because someone didn’t check if it worked. And people have these conversations: well, what’s happened with the productivity gap? I’ll tell you what’s happened with the productivity gap: stupid things like that! It’s missing metrics as well. I recently wrote several articles about perverse incentives in experimentation. If you make revenue your main driver, then people will think: oh, okay, we’ll put up the shipping fees, that will increase the revenue; or we’ll change the mixture of products that we sell, that will increase revenue; or we’ll put loads of upsells and cross-sells on the page, we’ll add extra pop-ups. The metric itself that you set actually conditions the behaviour that then occurs. And if your only metric is “does it make more money?”, then of course it’s invisible to you whether that’s harmful to the user experience.

And I ran a lot of experiments where, fortunately, there were companies that had access to the UX metrics that would tell me whether a test was doing something harmful or negative to the user experience. I often found that there were tests that would win on, say, a conversion-rate or a revenue metric, but actually failed horribly: they caused harm, or introduced friction, or they disproportionately affected one group’s experience over another. Because I bothered to look at those metrics, I was then aware of the trade-off I was making; but if my sole North Star metric had been revenue, then all of those harmful tests would have looked like wins. People will often run A/B tests that improve metrics in the short term but have a deleterious effect in the long term, either on customer happiness or retention or the outcomes.
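
A minimal sketch of the guardrail logic Craig is describing, where revenue alone cannot green-light a test; the metric names, lifts and decision rule here are hypothetical, not any particular platform’s API:

```python
# Sketch: ship decision gated on guardrail metrics, not revenue alone.
# Metric names, lifts and the decision rule are all hypothetical.
from dataclasses import dataclass

@dataclass
class MetricResult:
    name: str
    lift: float        # relative change vs control, e.g. +0.04 = +4%
    significant: bool  # passed whatever statistical test you trust

def ship_decision(primary: MetricResult, guardrails: list[MetricResult]) -> bool:
    """Ship only if the primary metric wins AND no guardrail is harmed."""
    if not (primary.significant and primary.lift > 0):
        return False
    # Any significant negative move on a guardrail vetoes the launch.
    return all(not (g.significant and g.lift < 0) for g in guardrails)

print(ship_decision(
    primary=MetricResult("revenue_per_visitor", +0.04, True),
    guardrails=[
        MetricResult("task_completion_rate", -0.06, True),   # UX harm
        MetricResult("support_contacts", +0.02, False),
    ],
))  # False: revenue "won", but task completion was significantly harmed
```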

LinkedIn talks about this in a very interesting post, and this is a sign of good news in the experimentation space, that people are thinking about this. They realised that they could design an A/B test that, intentionally or unintentionally, actually biased the career chances of cohorts on their platform. So what if we ran an A/B test, and the outcome of that A/B test was to completely shaft the career chances of, say, young black males? It wasn’t our intention to do that, but it’s actually happened; that would be really terrible, and it could have an almost permanent effect on the life outcomes of those individuals. So they started thinking: we need to be really responsible about this. They talked about it in terms of not only measuring the impact metrics for known cohorts, but also actively seeking, in a data-science sense, to find cohorts that they didn’t know existed. Rumsfeldian cohorts, right: we didn’t actually know that this group of people even existed, but boy did we bias things for them.

And there’s an article about it; I’ll share it in the further-reading resources that I’ll give to you guys. And that, to me, when I read it, I thought: thank goodness someone’s really thinking about this. It was a good article, with good critical thinking behind it, and it shows that there is a way to find balance, the yin and yang between having growth but not at the expense of humans as a result of that growth. Should the growth be powered off the back of customers? Or should the growth be powered by making the product easy to use, great for consumers and helpful in their lives? Those incentives are often misaligned, because people pick the wrong North Star metric, rather than metrics that actually measure what they’re doing in experimentation.
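
The LinkedIn work Craig mentions goes much further, actively hunting for cohorts nobody defined up front, but a toy per-cohort read of an experiment, with hypothetical pandas data, shows the basic idea of checking segments instead of only the aggregate:

```python
# Sketch: per-cohort effect check on an A/B test. Data and column
# names are made up; a real analysis would also need sample sizes,
# significance tests, and ways to discover cohorts not defined up front.
import pandas as pd

df = pd.DataFrame({
    "variant": ["A", "B", "A", "B", "A", "B"],
    "cohort":  ["x", "x", "y", "y", "y", "x"],
    "outcome": [0.52, 0.55, 0.48, 0.31, 0.50, 0.56],
})

# Mean outcome per (cohort, variant); the B-A gap is a crude effect estimate.
effects = (
    df.pivot_table(index="cohort", columns="variant", values="outcome")
      .assign(effect=lambda t: t["B"] - t["A"])
)
# Flag cohorts hurt by the variant, which a single aggregate metric can hide.
print(effects[effects["effect"] < -0.05])
```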

Per Axbom
That’s such a good example, because I think a lot of people don’t even realise that they are impacting people. And the way that you expressed it, that they actually found a new cohort, someone they didn’t even know about, that’s the thing: as long as you don’t really look and don’t really care about figuring out that problem, you think you’re doing okay, because you’re looking at the numbers and seeing: we’re doing okay, I don’t see any harm happening. But you’re not even looking for the harm. So how do you get people to start looking for the harm? That’s one of the biggest challenges.

Craig Sullivan
I think education is so important, because certainly in the UX research field this is an integral part of the way that you learn to do research and do it ethically. But the problem gnawing away at the heart of this is that this mass democratisation of access to A/B testing has meant that hundreds of millions of experiments have been run on people via products. We’re not testing drugs, like in a controlled drug trial, but do these experiments not carry some of the same ethical risks that experiments in the medical scenario possess? And I would say yes, they do in many ways. So if we agree that there are potential ethical problems in experimentation on medical patients, we should also agree that the same is possible in digital experimentation.

The big issue here is that many of the people running the experiments have not had any ethical, regulatory, compliance or legal training, you know, how to avoid PR disasters, that kind of stuff, examples of good and bad practice. We have high standards for this in medicine for a very good reason: the potential for death or serious injury or harm. But the bar to running experiments on millions of people as guinea pigs is way lower in the digital realm. You don’t need any qualifications. There’s no ethics board or review board, there’s no peer review or consideration of ethics required. You just switch the A/B testing on, and off you go.

James Royal-Lawson
Craig, I mean, this is important, because what you’re into now is the pseudoscience of A/B testing, or testing, that makes out that it’s real science and points to the real science, like you say, the medical trials and whatever. But at the same time, it’s not doing a lot of those things. We’ve got things like disclosure: when someone enrols in a medical trial, they are very aware of the fact they’re going into a trial, they are told what they’re going to be taking. They don’t know if they’re actually going to get a placebo or not, but they know they’re in the trial. There’s openness there. And also, you don’t generally enrol into a cocktail of tests at the same time. We’ve got a scientific situation where we’ve limited the world around what’s happening, so we can actually understand the change that we’re trying to test.

Craig Sullivan
It’s a very difficult area, that one, because it really depends on the intent, right? And it can be very different; I’ve got some examples I wanted to talk about. But the one thing people ask me is: is A/B testing ethical? Is the ethical problem with the A/B test itself? And no, it’s not, because somewhere, at some point, there’s a human decision behind it, or a human bias, or a human mistake, or a lack of human governance, or a lack of human peer review. It’s not the A/B test and the software itself that’s the problem, it’s the way that the tool is being wielded. And sometimes it’s used in a bad way, but experimentation for me is about making faster and more confident decisions. What if we didn’t test at all, and just put new features live on a website?

Well, that is potentially more harmful, ethically, because you don’t even know if the change had a harmful effect or not, like what we were talking about earlier on, Per. So I would argue that testing features to see if they have a harmful impact on the company, the customers, the suppliers, the business partners, the environment, and more, is really important to include in that. The ethical problems are not caused by the ability to experiment, but by the human decisions that provide the framework for that experimentation. So if you just went and implemented changes, you’d theoretically be running into the same ethical problems; it’s just that you’d be completely unaware of both the revenue or conversion-rate impact and the user-experience impact.

So I don’t think A/B testing software, or the capability to do it, is the problem. It would be like if we gave lightsabers to everyone, real lightsabers, and said: look, everyone, have a lightsaber. You know what’s going to happen, right? Loads of people are going to end up in hospital missing arms and legs, because they’ll have lightsaber fights and stuff. And we’ve kind of done that with A/B testing. We’ve given people these really amazing tools that can do great good and be used in an ethical way to help quantitatively and qualitatively improve software for human beings. But they can also be used for bad. Just in the same way that designers can design good patterns or dark patterns, we can have good experiments, great experiments, or definitely dark experiments.

Per Axbom
So the essence, then, is education, because I see today that people can attend online courses for two or three months and then call themselves a UX designer or a CRO expert, and that’s the way it goes. And then they have no training in figuring out: what are the other things that are happening as a result of the work I’m doing? What is the impact?

Craig Sullivan
Yeah, I think ethics education should be mandatory for all product team members who are involved in experimentation. It should be there when you join a company, and it should be available when you move team, for that product team. It should be mandatory, it should be part of the HR process, right? So that it’s not like: oh, I’ll skip that and do it later.

James Royal-Lawson
You’re highlighting, I guess, the situation a lot of people find themselves in, where there’s, I suppose you’d call it, an ethical gap: you’ve got an organisation that is lured by the honey in the pot. Even if you have insisted that certain people, or all people, have ethical training and so on, and you yourself have picked up on this, you’ve still got the honeypot. And the organisation, say marketing, goes: oh, this is fantastic, we’ll run with it. I guess a lot of us are going to find ourselves in those organisations where parts of the organisation are maybe not aligned and are attracted by the honeypot. What do you do about that? It’s transformational change, isn’t it? What do we do?

Craig Sullivan
So, this is where you need governance. There’s an excellent entry on Wikipedia that explains perverse incentives, and one of the good examples on there is when the British government had a problem with a population explosion of cobras in Delhi. What they did was offer a bounty for dead cobras, and lots of people brought them cobras. And you’d think the curve would look like this, right? You’d have loads of cobras at first, and then, as more people collected more cobras, it would drop off, because the population would have reduced. But the thing was, they discovered after a while that the number of cobras being brought in to them was massively increasing, and they were like: what’s going on here? So they investigated it. And they found out that lots of enterprising people had set up cobra farms around the perimeter of Delhi, where they were breeding and rearing cobras in order to then kill them and take them to the British government.

So at that point the British government said: right, we’re not going to pay any bounty for these anymore. So for all the people who were breeding the cobras, the cobras were no longer valuable. And what did they do? They let them go. So they ended up with more cobras than they had in the first place, right? And these perverse-incentive problems are happening in experimentation teams all the time. If you focus, say, on conversion rate, then people will do things to try and get you to check out in one session: we’ll give you money-off discounts, and so on. And the problem people often run into is that they don’t have a balanced set of metrics; they only have one metric. They’re not including metrics that measure the user pain or happiness that could be caused by the result. That could be product-level UX metrics, but it could also be overarching stuff like NPS, right?

Per Axbom
That’s actually a good example, though, because that’s first-, second- and third-order thinking, as I usually call it, but over such a long timeframe. The cobra example is like what we’ve talked about with elevators before: the person designing the system is not the person installing it, and there can be so many years passing in between, with different consultants involved. So keeping sight of what is going to happen in the end, I mean, there’s no incentive for the people working in the first stage to actually think about the things happening in the third stage, and that is a huge problem. So who is the person who keeps track of the whole journey?

Craig Sullivan
Well, that’s where governance of experimentation comes in, then. If you look at the kind of Centre of Excellence, or experimentation hub, model, you have a team that has both a product function, in that they’re building tooling for people to run trustworthy, reliable experiments, and a governance and data-collection function, which is the need to measure what is happening with experiments, in order to allow someone to work out whether the system is being gamed. And any metric that you create, people will find a way to game it. So it’s not that you set up a metric and then you’re done. It’s a continual balancing act of yin and yang, and yin and yang aren’t fixed, right? They flow and change. And the same will happen within your teams: you’ll design what you think is a great metric system that you hope will avoid these perverse incentives.

But how do you then check that the perverse incentives haven’t actually occurred, despite your best intentions when designing it? And it’s that bit that’s missing: the ethical management of this stuff actually embedded into the governance. Are we actually checking to make sure that something weird and perverse isn’t happening as a result of the experimentation? And the answer in most companies is: yes, there are perverse incentives everywhere. This is a problem, and it’s happening. Because, as Lucas Vermeer once explained to me, it’s very difficult to tell the difference between a high-quality test and a poor-quality test if you’re only looking at a metric like revenue. You could say: well, we made 50 million, and it shows. But because you’re not actually looking at the qualitative aspects of that test, you’re looking at it in a very narrow way, and you completely ignore all the rest of that stuff. It has to be somebody’s job to govern experimentation, at a very senior level within the company, at board level if possible. And yet most experimentation teams are siloed, in a way, and they’re not independently governed. They’re marking their own homework, basically.

James Royal-Lawson
I mean, what you’re saying there is that organisations have to have, like, an experimentation oversight function?

Craig Sullivan
Like an IRB. Though I’m thinking more of the Centre of Excellence or governance function, whose job is not to manage those teams; their job is to collect the data and metrics that will allow those teams to be governed. So let’s say, for example, you run a bunch of teams, and the data that you collect shows you that three of the teams are running really poor-quality experiments, tactical ones that can harm long-term revenue or customer satisfaction. Because you know that that is happening, you can go and say: those teams need support. If you just set up some rules and expect the teams to follow them, it doesn’t work like that; they will find their own way around them, or design their own process, or create their own perverse incentives.

So you have to have a function whose job is to collect the data that will allow management to actually manage experimentation outcomes, at a meta level within the organisation and all the way down to the level of an individual experiment. You’ve got the governance of the experiment itself, the governance of the teams that are running the experiments, and then the governance of experimentation at a programme level within the company, and all three of those need to be present. Even if you give people the training, if you don’t have the backup governance there, then the training will come to naught.

Per Axbom
And I think you touched upon something earlier, when you said we need to keep an eye on the long-term revenue. I think that’s kind of the key to making the change: helping people realise they have to invest in the governance, they have to invest in a new way of working where you actually consider the preferable, the possible and the potential futures, and sit down and have a discussion around that. Because it can help you differentiate yourself from others within the same industry. It can actually help you be better.

James Royal-Lawson
Craig, something that I think I’ve mentioned to you before is that I really thought there was some value in creating an open hypothesis movement, where organisations share their design hypotheses, and even the subsequent results, with others, to try and develop a peer-review-like culture. And I think you pointed out to me that, well, for some businesses some of this stuff is going to be a business secret, so they’re not going to want to share it. But at the same time, if we’re getting into more public-sector experimentation, beyond the revenue-focused aspects, maybe there’s still room for it. And like you say about the ethical guidelines and things, but…

Craig Sullivan
The commercial-secrets defence is a good one, but it can also be misused, right? To prevent transparency.

Per Axbom
Yeah.

James Royal-Lawson
So all the algorithms that we see that are kind of business-protected…

Craig Sullivan
Yeah, Facebook’s news feed; there are plenty of examples of these. How do we know it’s even happening? Because they may not even know it’s happening; we might only be seeing the outcome. Who’s watching them to make sure that they’re not doing these things? And this, again, is an internal company governance problem. Engagement on Facebook, here’s another perverse incentive: you are incentivised to basically share content that makes people really angry at each other, that makes them argue, and that creates polarisation. So it becomes a self-fulfilling thing, right?

If you mistake anger and polarisation for engagement, because of the metric that you’re using, then you may get an unintended result. But they knew this was happening, or they should have known what was happening once they found out, and then they should have done something about it. I have a rule of three for ethics, which is: if you start doing something and you know it’s bad, stop straight away. If you just found out that something you’re doing is bad, and you know it’s bad, then stop straight away. And if you don’t know whether you’re doing anything bad, you need to check if you are, and see point number two.

Per Axbom
Craig, it’s always fantastic to talk to you, and as you say, I mean, creating awareness, I think that’s what we’re doing now.

James Royal-Lawson
Thank you very much for joining us again today. It’s been a few too many years, actually, since last time.

Craig Sullivan
We’ll have to do it again.

James Royal-Lawson
Yeah, we will. And I’m pretty certain we will. I mean, this is the seventh time; there will be an eighth.

James Royal-Lawson
Do you know what? I think that might be the first interview with Craig, where he hasn’t sworn.

Per Axbom
That’s a very good point.

James Royal-Lawson
He must be mellowing in his old age.

Per Axbom
I’m always expecting that from him. Really interesting.

James Royal-Lawson
I only thought about that because you actually swore in our little discussion before we started recording this. Well, another thing that I’ve noticed is how Craig nowadays uses “experimentation” a lot when he’s talking about his work of optimisation and testing. He’s moved to a slightly broader term than maybe was used a number of years ago, when the hype was all around A/B testing, and there was a lot of hype around A/B testing. It’s really good to hear how Craig uses experimentation now as an umbrella term for a lot of the stuff we work with, but I guess…

Per Axbom
Exactly, yeah, it opens up the conversation.

James Royal-Lawson
Yeah. But all design is experimentation, isn’t it? I suppose it’s how considered your design is, and maybe how observed it is in practice, in how it’s used, that regulates and informs the results of your experiments, all these design experiments.

Per Axbom
Yeah, of course, everything is experiments. And I think using that term opens up the conversation around: how do you perform ethical experimentation? Because that is what Craig was saying, that you have to have ethical review boards, you need to ensure that you keep subjects safe within the experiment. And as you were saying, it’s really hard these days, because if everything is experimentation, you’re being experimented on constantly…

James Royal-Lawson
Yeah

Per Axbom
…whether the companies themselves are aware of it or not, because they are putting stuff out there, learning, listening in some way at least, and changing stuff really quickly.

James Royal-Lawson
So not only is a lot of experimentation unethical, there’s also a great deal of pseudoscience going on, as we mentioned in the interview: experimentation that claims to be, or gives the appearance of being, scientific and robust, but is actually fundamentally flawed.

Per Axbom
Right, and we talked a lot about this, about being able to actually interpret the data that you have in front of you. A/B testing seems so simple as a term, but understanding the specifics of what the data tells you after an A/B test is really, really hard. That takes years of experience…

James Royal-Lawson
and replication, which I know is one of your favourite topics.

Per Axbom
Yes, exactly: the same experiment needs to produce the same result the second time you perform it.

James Royal-Lawson
Which is incredibly difficult in our world of digital, because in normal experiments, in the lab, you would hold everything else still, you would maintain all the other variables and choose the ones that you’re playing with. Whereas when we’re working on digital products, there’s so much stuff happening, life is happening, and you can’t control everything: what the weather is like where the person visiting your site happens to be, how much the kids are screaming that day, or how bad your cold is. There’s just so much going on, so much changing. All this stuff alters the experiment, so it’s not quite the same, maybe, as the last time you ran it.
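
As a minimal illustration of how easily an A/B readout can mislead, here is a two-proportion z-test on made-up conversion counts: the variant “looks” 17% better, yet the result is still consistent with noise, which is also part of why a single run may fail to replicate:

```python
# Sketch: reading an A/B result with a two-proportion z-test.
# The counts are invented; the point is that "B looks higher" is not enough.
from math import sqrt
from statistics import NormalDist

conv_a, n_a = 200, 10_000   # control: conversions, visitors
conv_b, n_b = 235, 10_000   # variant

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided
print(f"lift={(p_b - p_a) / p_a:+.1%}, z={z:.2f}, p={p_value:.3f}")
# lift=+17.5%, z=1.70, p=0.090 -- not significant at the usual 0.05 level
```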

Per Axbom
Is there a pandemic going on? Does that nullify the last two years?

James Royal-Lawson
Yeah. Yes.

Per Axbom
That, for me, is hugely interesting, because the experimentation focus is now borrowing from the medical industry, and you realise that, well, the scientific experiments that have been going on for the last 200 years haven’t always been very ethical. How long exactly does it take for something to be considered so unethical that we need to draw up laws and guidelines and a Hippocratic oath around it, promising that we will do this in a safe way for everyone involved? It’s the awareness of something being wrong and something being bad, because “bad” came up several times during this interview. But how do we define what is bad? And how do we make people aware of that?

James Royal-Lawson
And something we did talk about with Craig, but which didn’t make the cut of the final edit, is the whole issue around small organisations. Much of what Craig talks about, and what we discussed in the interview, was around programmes, teams, big-organisation stuff that you get in organisations of thousands of people, definitely hundreds of people. Whereas the majority of businesses are small businesses. With experimentation being so accessible for even the smallest of organisations, how do we make it ethical and successful and robust? Craig, in the chat we had, did suggest that an ethics charter or checklist, and even compulsory education, would maybe be part of the answer there. But I don’t see how that flies with these really small companies. If you’re that person who is using a web service to set up a web shop, which you can do in a couple of hours, how do we get you to sign up to ethical charters for experimentation and make sure you’re educated before you start? It’s just not going to happen, is it?

Per Axbom
I don’t think you make people care through education alone. I think you’re right. It has to be normalised that experimentation always entails an ethical process, where you do the work, you’ve signed off on the work, and you have the documentation of the work you’ve done. But for it to work, it has to cost. To incentivise people to do the right thing, it has to cost in some way to not do the ethical processes, or it has to be really rewarding to do them. And that cost and reward, sure, it can be monetary, but it can also be social, relationship-based, community-based, et cetera. So it has to be that more people are aware of the harm, so that people actually expect and place demands on the people building these things.

James Royal-Lawson
Yes. So we have to reach a state where it’s socially unacceptable to have unethical design practices and to run unethical digital experimentation.

Per Axbom
Right, and our frustration, of course, is born out of us seeing this, perhaps earlier than a lot of other people, and we want it to happen now, but you have to look at history and see how long these things take.

James Royal-Lawson
And we need to keep on calling stuff out.

Per Axbom
Yes, exactly.

James Royal-Lawson
Because that’s ultimately the seed that grows into social unacceptability, isn’t it?

Per Axbom
Yeah

James Royal-Lawson
If you keep pointing out what’s behind the curtain and what damage it’s doing, then hopefully, eventually, it will spread and be understood.

Per Axbom
And we need to be better at listening to the people who have been warning about these things for a long, long time, much longer than we have: the people who are most often underserved and set aside, who are actually the ones at most risk, the most hurt and harmed. They have been warning, writing, trying to be heard, but too few listen. And I think we need to be better at listening to the underprivileged groups in society who are already warning us about the risks these technologies put them at.

James Royal-Lawson
And I think it’s really good to point out again what Craig said: that the potential harm from not experimenting and not doing any testing at all could be even greater.

Per Axbom
Yes, exactly.

James Royal-Lawson
So we’re in an interesting, I suppose not trap, but dilemma, I guess: unethical testing is bad, and not testing is bad. So walking that line, finding that balance which is ethical, socially acceptable, creating value and creating good for all parties involved is… hmm, yeah.

Per Axbom
Exactly. But there’s also this self-awareness that has to happen with the people building stuff: you have to realise yourself that even though you think you’re a good person, even though you have good intent, you are making stuff that will hurt some segment of people. And that realisation can be tough to acknowledge; and even if you can acknowledge it yourself, it’s also tough to tell other people about it, to be open about it and actually be transparent and say: we got it wrong, and this is how we’ll try to do better in the future.

James Royal-Lawson
That’s good Per, that loops back nicely into all design is experimentation.

Per Axbom
Yeah.

James Royal-Lawson
And I think that’s what we need to be reminding ourselves of. Everything we do is an experiment.

Per Axbom
Yes. So, for listening to next: well, you’ve already teased the previous interviews with Craig, and the cursing going on there.

James Royal-Lawson
Listen back to them all and see if you can find every single swear word he uses during 10 years of appearing on the podcast. He joined us way back in episode 11, then episode 26, episode 56, episode 113, episode 116, episode 157, and now episode 280.

Per Axbom
Remember to keep moving

James Royal-Lawson
See you on the other side

[Music]

James Royal-Lawson
What is black and white and can cut through steel beams?

Per Axbom
I don’t know James, what is black and white and can cut through steel beams?

James Royal-Lawson
A penguin with a lightsaber. If that wasn’t bad enough, I’m going to do two I’m going to do two. How hot is a lightsaber?

Per Axbom
I don’t know James. How hot is a lightsaber?

James Royal-Lawson
Lukewarm.

Per Axbom
Oh, Wow.

 

This is a transcript of a conversation between James Royal-Lawson, Per Axbom and Craig Sullivan, recorded in December 2021 and published as episode 280 of UX Podcast.