Building trust in AI

A transcript of S02E11 (321) of UX Podcast. James Royal-Lawson and Per Axbom are joined by Carol Smith to discuss creating trustworthy AI systems and how that requires not only trust but transparency, communication, guardrails and ethics.

This transcript has been machine generated and checked by Bianca Kolendo.

Transcript

Computer voice
Season Two, Episode 11.

[Music]

James Royal-Lawson
I’m James,

Per Axbom
and I’m Per.

James Royal-Lawson
And this is UX podcast. Balancing business, technology, people, and society since 2011. And with listeners new and old all over the world from the Netherlands to Malaysia.

Per Axbom
Carol Smith is a Senior Research Scientist in Human Machine Interaction in the Software Engineering Institute’s AI division at Carnegie Mellon University. She leads the Trust Lab team, conducting research to make trustworthy, human-centered and responsible AI systems.

James Royal-Lawson
In her role, Carol contributes to research and development focused on improving user experiences, and interactions with AI systems, robotics and other complex and emerging technologies.

Per Axbom
And today’s interview with Carol was recorded just after she held a workshop on creating trustworthy AI at UXLx which is a conference that is held every May in Lisbon, Portugal.

James Royal-Lawson
Tickets are on sale now and sell out every year. Get yours at UX-Lx.com.

Per Axbom
Carol, you run a workshop called Trustworthy Systems. And it’s all about finding a way, as I understand it, of making systems tell you when you aren’t supposed to trust them, so that we can make better decisions about when to trust the system and when to act ourselves, and ensure that we’re doing the thing that creates the best possible outcome for ourselves. Is that right?

Carol Smith
Yes, yeah. I’m really just trying to help people in the field of User Experience to ask the right questions and to be critical of the work that they’re doing. So that they’re making the best system that they can for the people who either are using the system or who will be affected by the system.

Per Axbom
And I think in your workshop, there are some examples you use from self driving cars. I think that’s a great metaphor for this, because then you can talk about the car doing things and you’re wondering, should I act, or should the car be acting?

Carol Smith
Yeah, and particularly because people are a little bit familiar with this now, and have at least an understanding that the system is supposed to stay within the lanes and follow the indicators and lights and whatever it is encountering. But also realising that there clearly are some limitations to the system. So it’s a nice way to talk about complex systems, because there are a lot of sensors, there’s a very physical interaction between the driver and the vehicle if they need to take over, and the vehicle itself can be seen to be operating. Whereas with a computing system, an AI system, you can’t always see as clearly what is happening.

Per Axbom
Exactly.

James Royal-Lawson
So there are two aspects to that, aren’t there? You’ve got building up trust and, I suppose, establishing that you can trust the thing. But then there’s the communication of how trustworthy it is. Think about your car example there. My car can do the thing where it stays in lane, and it can steer a little bit for me; it’s not self driving. But it does beep at various points. And I’ve understood that that is some kind of indication of a lack of capability, or a warning. But I don’t really know what it’s trying to say to me. So I don’t know how it’s communicating, and that then feeds into my trust of it, I guess.

Carol Smith
Yeah. And that’s a big challenge with those vehicles, as well as with other types of complex technologies: how does it convey that things aren’t within its typical capabilities? That the situation has changed, that the context has changed? And then how is it changing or turn-taking with you as the driver or the operator, or even at a computer terminal or with a smart speaker, with any of these systems? How to communicate that is really complex, it’s really hard. They’re hard problems to solve. And audio signals in a car work fine if you don’t have the stereo on, but if you’re listening to loud music, are you even going to hear that little chirp or whatever it is? And if there’s no other indicator, how would you know that something has changed, or that it’s not doing what it’s supposed to be doing?

James Royal-Lawson
Especially if it’s an indicator that needs to be responded to on the first occasion, right? So the time to learn isn’t there for you. You can’t go, “Oh right, it was that. Next time, I’ll know.”

Carol Smith
Yeah.

James Royal-Lawson
There’s some situations where- Nope. Now.

Carol Smith
Yeah, it’s an urgent situation. And the systems don’t do a good job, at least the ones I’ve seen, of conveying that very well. The urgency and the importance of something versus the windshield wiper fluid being low. They’re all kind of at a similar level.

Per Axbom
Or the oil lamp comes on.

Carol Smith
Yeah.

Per Axbom
Which really means stop the car.

Carol Smith
Right. And even that indication: say I’ve never driven this car before, it’s a rental, and that light comes on? How important is it for this car versus the car I normally drive? Those kinds of contexts are never there in a vehicle to begin with. Especially a new driver who doesn’t know, they can easily destroy their vehicle, not realising what the oil indicator means and figuring they’d fill up when they got back, yeah.

Per Axbom
Even though I don’t have a self driving car, nobody does, I have cruise control. And sometimes when it’s raining really hard, or when it’s snowing, I get extremely worried that it’s going to get an indication that there’s an obstacle in the way and start braking. So I don’t know how to feel comfortable that it’s working. Will it tell me there’s a problem here? It has never told me, so I don’t dare use it. How much can these systems really tell us?

Carol Smith
Yeah. And that’s, you know, on one end there are people who understand that there probably are significant limitations, but it’s not clear where they are. And then on the other end there are people who are over-trusting and use it all the time, because they don’t feel safe driving in snow. So they’re using the semi-autonomous vehicles in the snow, because they think it’s better than they are, when it’s not. And in between is all the grey. And even vehicles that are purportedly self driving are really just a compilation of some of these services. We were talking about ‘lane-keeping’ and the ability to do that, the automatic braking during cruise control; those are really all that the vehicles out there can do. It’s just a combination of those things. So this isn’t really high tech in the sense that the system really is driving; it’s using these various types of technologies to drive, but it’s not doing it in a way that should be trusted.

Per Axbom
Exactly. That scares me so much.

Carol Smith
Yeah.

James Royal-Lawson
I suppose, moving that example on then to other areas. Surfacing the level of trust and explaining why it should be trusted in a vehicle is one thing, but think about some of the other AI-driven systems. What opportunities do those systems have to build trust?

Carol Smith
Yeah, it’s very dependent on the situation, the context, the people who are using the system. But typically, if the system is providing the right kind of evidence of how it’s making decisions, how it’s making recommendations, if it’s a situation where the people understand the capabilities of that system in that particular context, and also know where the edges are, that it can’t handle this type of situation, or it will perform poorly in this type of situation, then they can begin to build what is called calibrated trust. That’s where they aren’t over-trusting the system, they aren’t under-trusting the system; they know its capabilities, they have evidence that makes them feel that it is a trustworthy system. And so they’re able to use it in a way that is productive and helpful in whatever tasks they’re trying to do.

Per Axbom
I’m immediately thinking of healthcare situations, because there are so many levels of trust there. It’s not just the trust in the machine, it’s also trust in the people you’re working with. And then you also have to trust their level of competence in interpreting what the machine is saying. So that is a great example of the trust situation, something we can relate to, because it appears in so many different situations; it’s not just human to machine, it’s human to human, of course, as well. So does that mean that once a system has done the wrong thing a lot of times, you won’t trust it anymore? In healthcare especially I’ve seen this in so many systems, that staff stop trusting the system.

Carol Smith
Right?

Per Axbom
What can you do?

Carol Smith
Yeah, yeah. And once trust is diminished, or broken, it’s extremely difficult to build it back. Particularly in a higher-risk situation, where the stakes are higher, it becomes very difficult for people to regain their trust and for the system to provide enough information and evidence, whatever it is, to be able to overcome that. So it’s really difficult, and that’s why it’s important to build it right the first time and do it in a way that the system will provide information. And also that, again, there are safeguards in place. So perhaps it is able to be turned off if it’s not performing as expected, or it’s able to revert to a previous version, that sort of thing. Having those kinds of safeguards in place is really important as well.

James Royal-Lawson
So now you’re talking about building edges and safeguards in at the start. Whereas what I’ve experienced, what we’ve experienced with some of these tools, especially the large language model tools that we’re seeing at the moment, is that they feel like they’re relying on feedback. That kind of, you know, “rate my response” thing at the end of it.

Carol Smith
Yeah.

James Royal-Lawson
Well, that surely means that you’re opening yourself up to a certain amount of damage; there’s collateral damage in that approach?

Carol Smith
Yeah, for sure. Yeah. And the beta testing that’s happening with regular humans on a day-to-day basis is becoming a big problem. These systems are being released without proper testing, without any safeguards in many cases, or with very limited safeguards. There was an individual who released a video just in the past couple of weeks where they were able to do a workaround and tell ChatGPT, I believe that was the one they were using, that it was actually this alter ego. And so then the system was responding as ChatGPT, with its relatively narrow constraints, and then as this alter ego, with just wild answers. And that was not hard. That took two minutes, to get that system to do something that was not intended and that supposedly was prevented.

Per Axbom
Yeah. Seems like ChatGPT is the perfect example of how not to build it, because it’s not telling you what to expect. It’s not teaching you, it’s not putting any constraints on your expectations. People are assuming a lot, and there’s a ton of over-trust in that system.

Carol Smith
Very much. And there are various flavours, but for the most part, most of those systems are not indicating when information has been completely fabricated and when it’s actually from a good resource. Some of them will provide references, but not necessarily paragraph by paragraph, or even sentence by sentence. And so there is false information being portrayed at the same level of quality as information that is factual, which is leading to a lot of issues, because it’s very well formed. They’re good sentences, they’re grammatically correct, they read well, and so people are easily misled into thinking that the system is accurate and informative.

James Royal-Lawson
So are we heading to a situation where we always need a minimum of two systems? The actual AI, I’m going to say AI for the sake of simplicity, that’s generating these responses or these decisions, with sources or without, and then a control system or safety system on top, helping us assess them.

Carol Smith
Yeah, some people are trying to do that. I think for some use cases, that might make sense. But I do think that humans have to be part of that. We can’t just have one system judging the other system, because they both could be completely wrong, and just, “Yeah, sure, you know, everything’s good”. So we really do need subject matter experts doing that, if it’s in a particular area, or people who are just doing overall maintenance and monitoring of the systems, as well as doing regular audits of the system. So there are a lot of layers of oversight that are needed, but also potentially some use of other systems doing more programmatic monitoring in that way.
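A minimal sketch, in Python, of the layered oversight Carol describes, purely as an illustration: the function names, the 0–1 checker score and the review threshold are all assumptions for the example, not anything from the interview. The point it shows is that a second, programmatic check narrows the funnel, while final approval and the audit trail stay with humans.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Review:
    prompt: str
    answer: str
    checker_score: float
    needs_human: bool

# Hypothetical stand-ins: in a real system these would call the generating
# model and a separate checking model or rule set.
def generate_answer(prompt: str) -> str:
    return f"Draft answer to: {prompt}"

def automated_check(prompt: str, answer: str) -> float:
    # Returns a 0..1 plausibility score from a second, independent system.
    return 0.62

HUMAN_REVIEW_THRESHOLD = 0.8  # assumed policy value for the sketch

def oversee(prompt: str, audit_log: List[Review]) -> Review:
    answer = generate_answer(prompt)
    score = automated_check(prompt, answer)
    review = Review(
        prompt=prompt,
        answer=answer,
        checker_score=score,
        # Programmatic monitoring routes doubtful output to a person;
        # it never gives final approval on its own.
        needs_human=score < HUMAN_REVIEW_THRESHOLD,
    )
    audit_log.append(review)  # kept for the regular audits mentioned above
    return review

audit_log: List[Review] = []
result = oversee("Summarise this discharge note", audit_log)
print(result.needs_human, result.checker_score)
```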

Per Axbom
So, I don’t think we’ll get OpenAI to do the right thing anytime soon. But as a designer listening to this podcast, realising that it’s so easy to get things wrong, how should I be working?

Carol Smith
Yeah. So I do think part of it is really asking those tough questions. Like thinking about: what is the goal? It shouldn’t be that tough, but, what is the goal? Who are the users? What are the most obvious inherent risks in the system? And there are a number of different activities you can do to help those come out, if you’re not sure right away. And then really working to make sure that you’re preventing the harms that you do identify, and planning mitigation for things that you either can’t prevent or that may come up.

So, what happens when the system does begin to fabricate information? How are we going to manage that? How is the system shut off? How is the system reverted to a previous version? Just planning all that out, and also how to communicate that to the people who are affected by the system being offline or unavailable, or whatever it is. And just accepting the fact that it’s a complex system. It is not going to be working the way you want it to all the time, because of the way that these systems work; they are dynamic.

And there will be a situation with the data, it’s called drift, when it’s just not doing what it was doing. It’s found a new pattern and it’s identified information that you didn’t want it to. It’s not identifying the information you did want it to now, so you need to do something to fix it. And just accepting that as part of these systems. They’re not the old CDs that we used to get in the mail, where it was stable, or not, but it was what it was. These systems change. They’re not going to be the same from day to day. And that’s why that oversight is so important.
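As an illustration of the “revert to a previous version or switch it off” safeguard and the drift monitoring discussed here, a toy sketch follows. The accuracy baseline, the allowed drop, the window size and the simple version list are invented for the example; a real monitoring plan would define these during the harms analysis.

```python
from collections import deque
from statistics import mean

BASELINE_ACCURACY = 0.90   # assumed quality level at release
MAX_ALLOWED_DROP = 0.10    # assumed tolerance before acting
WINDOW_SIZE = 50           # rolling window of spot checks

class DriftGuard:
    """Reverts to an earlier model version when quality drifts too far."""

    def __init__(self, versions):
        self.versions = list(versions)   # newest first, e.g. ["v3", "v2", "v1"]
        self.active = self.versions[0]
        self.recent = deque(maxlen=WINDOW_SIZE)

    def record(self, was_correct: bool) -> None:
        # Each human-labelled spot check feeds the rolling window.
        self.recent.append(1.0 if was_correct else 0.0)
        if len(self.recent) == WINDOW_SIZE and self._has_drifted():
            self._revert()

    def _has_drifted(self) -> bool:
        return mean(self.recent) < BASELINE_ACCURACY - MAX_ALLOWED_DROP

    def _revert(self) -> None:
        if len(self.versions) > 1:
            self.versions.pop(0)
            self.active = self.versions[0]
        else:
            self.active = None   # nothing safe left: switch the system off
        self.recent.clear()

guard = DriftGuard(["v3", "v2", "v1"])
for outcome in [True] * 30 + [False] * 20:
    guard.record(outcome)
print("active version:", guard.active)   # falls back to "v2" in this run
```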

Per Axbom
And the realisation that you can do something, if you really want to. I mean, it’s not impossible.

Carol Smith
Right. And, you know, it’s not inevitable that harm will occur, if you do the right work up front. If you put the right safeguards in place, and have the humans monitoring, etc, nothing is inevitable. We design everything around us, we make these choices, and we can make the choices to make sure that we keep people safe, and that the systems aren’t able to do unintended things.

James Royal-Lawson
And just then we were using language like “the right thing”, “oversight”, “safety” and so on. Who decides what’s safe? Who decides what’s good oversight? And what’s right?

Carol Smith
Yeah.

James Royal-Lawson
You know, we need to know what’s wrong,

Carol Smith
Right.

James Royal-Lawson
and what that means and core values.

Carol Smith
Yeah, and these are really difficult questions, and decisions that do need to be made at a relatively local level. Because what is right, culturally and otherwise, for one group of people is not going to be the same for another. And even within contexts, that can be very different. So in one situation, when the system is unsure about a response, you may want to know that, and know how confident the system is, etc. Whereas in another system, if it’s not at a certain level of confidence, you may not want to see anything other than things that it’s very confident in. Even those kinds of choices, much less the ethical and other choices, are really going to vary in different ways. Even the question of privacy is very different in Europe versus, for the most part, the US; there are very different ideas of what that is and what should be protected. And so the systems need the appropriate regulations and oversight and that sort of thing for those audiences. It gets complex, though, when the systems are being used internationally to begin with.
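To make that particular design choice concrete, here is a small sketch of the two presentation policies Carol contrasts: always show the confidence, or only show answers above a cut-off. The policy names, the threshold and the example answer are made up for the illustration.

```python
from typing import Optional

# Two invented presentation policies for the sketch.
SHOW_WITH_CONFIDENCE = "show_with_confidence"
HIGH_CONFIDENCE_ONLY = "high_confidence_only"

CONFIDENCE_CUTOFF = 0.9  # assumed; would be set per context and audience

def present(answer: str, confidence: float, policy: str) -> Optional[str]:
    if policy == SHOW_WITH_CONFIDENCE:
        # Surface the answer together with how sure the system is.
        return f"{answer} (model confidence: {confidence:.0%})"
    if policy == HIGH_CONFIDENCE_ONLY:
        # Suppress anything the system is not very confident about.
        return answer if confidence >= CONFIDENCE_CUTOFF else None
    raise ValueError(f"unknown policy: {policy}")

print(present("The flight leaves at 14:05.", 0.72, SHOW_WITH_CONFIDENCE))
print(present("The flight leaves at 14:05.", 0.72, HIGH_CONFIDENCE_ONLY))
```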

Per Axbom
Right.

James Royal-Lawson
If we’ve got global systems,

Carol Smith
Yes.

James Royal-Lawson
Can they, can they deal with that kind of, you know, locality? I mean, saying, “Okay, I’m one system.”

Carol Smith
Right.

James Royal-Lawson
“I know, that’s Europe and I know that’s US.”

Carol Smith
Yeah.

James Royal-Lawson
“And I know that’s, that’s China.”

Carol Smith
Yeah, people are trying to do that. But that’s so difficult. Building things to do everything is always problematic.

James Royal-Lawson
Finding the, I mean, defining the boundaries, there sounds,

Carol Smith
Yeah.

James Royal-Lawson
Really, like a big-

Per Axbom
You’re going to want to grow globally and internationally. But, from a healthcare perspective, I’ve seen so many enterprise systems that try to accommodate so many different countries’ healthcare systems. It just does not work.

Carol Smith
Exactly. Yeah, it usually isn’t going to work. And the other thing is, I don’t necessarily want another country’s ethics, morals, etcetera.

Per Axbom
Yeah.

Carol Smith
Put on to me, and vice versa. I don’t think anybody wants the US idea of things right now. So no one is going to have the right answer that we all adopt. These systems need to be created for the purposes that they’re made for and for the people that they’re made for.

Per Axbom
And related to that, and I think one thing that you articulated really well is that it’s not possible to build a system that harms no one.

Carol Smith
Right.

Per Axbom
And if you can acknowledge that and feel it, really truly, and be honest and transparent about it, then you can tell people, “This is what it does, and these are the decisions we made”. And then you can assume responsibility for them.

Carol Smith
Right. Yeah. Because we want to build systems that we’re willing to be responsible for. And part of that really is acknowledging the fact that the choices that we make are going to affect people. And thinking about that, and making sure that we’re making those decisions based on the information that we have. Nobody’s perfect. I’m certainly not, so these systems can’t be, because we’re the ones making them. So we just need to acknowledge those issues and move forward with it. What Dwayne Degler calls humbled design: really being thoughtful about all of the potential implications.

James Royal-Lawson
So effectively, we shouldn’t really be embarking on anything until we’ve at least found some area of harm.

Carol Smith
Yeah, yeah.

James Royal-Lawson
Because it always exists. Yeah.

Carol Smith
Yeah. Much like in usability testing: if you conduct a usability study and there’s nothing wrong, something’s wrong. Similarly, if you’re doing a harms analysis or another activity to identify issues, you’ll find something. You should find something. In a successful activity like that, you are going to uncover things; they may be unlikely, they may be very edge-case, but they may be very important and urgent things to make sure that you address. Yeah.

James Royal-Lawson
It doesn’t mean to say, I mean, it’s not always guaranteed that people are going to die. But there are harms.

Carol Smith
Yeah.

James Royal-Lawson
That’s got scale as well.

Per Axbom
Oh, yeah, there was a super interesting conversation that we had at the very end of your workshop. Related to, is it possible to inform people about harm in such a way that they are harmed? Because they feel more worried?

Carol Smith
Right? Yeah. Creating anxiety, creating distrust inadvertently because of oversharing?

Per Axbom
Yeah.

Carol Smith
Yeah. And there’s definitely a balance. There have been a lot of interesting studies recently looking at transparency and explanations for these systems. And I don’t agree with the findings of some of the studies, but it’s been interesting to see that in some cases people have trusted the system less because it showed them more information. What it was showing them, I have questions about anyway. But if you design these systems for the audience in the appropriate way, it should build trust. For them, the system should be showing it’s trustworthy. Simple explanations and transparency should not diminish that.

Per Axbom
Exactly.

James Royal-Lawson
So I think about reputation, as well, of course.

Carol Smith
Yeah.

James Royal-Lawson
Which is a loose cannon. The reputation isn’t connected directly to the solution we put in place. It’s an organic thing that moves between humans.

Carol Smith
Yeah. And similarly with, like in healthcare, if one individual isn’t using the system, and they tell somebody else, “Oh, you know, that system? I don’t trust anything it does”. All of a sudden, that system has lost another potential user.

James Royal-Lawson
Collapsed reputation. And then it spreads.

Carol Smith
Yeah.

Per Axbom
It’s a great metaphor. I feel calmed by the fact that we are having these conversations more and more.

Carol Smith
Yeah, it’s a good thing. It’s a good thing. People lately have been asking me questions about ChatGPT and these other large language models, people who have never asked me anything in my career about what I do. And that’s really exciting. Because they’re more aware both of the risks and the benefits, but also they’re starting to question, because they’re seeing things in the news, on TV, etc.

James Royal-Lawson
Testing themselves.

Carol Smith
Yeah, exactly. Yeah. And realising that, “Hmm, this isn’t as good as they advertise”, which is important. I think there’s great potential for these systems. I wouldn’t work in AI if I didn’t think it was really potentially going to help humanity. But we have to do it carefully and consciously make choices to make the systems helpful.

Per Axbom
Yeah. Thank you so much, Carol, for being on the show.

Carol Smith
My pleasure. Thank you.

[Music]

Per Axbom
And record. So the thing is, I was listening back to this episode as I was driving in my car, and I’d forgotten about how much we talked about cars at the beginning. And it made me think about all the different lights and switches and messages on my car. And it felt like I was questioning why I was trusting it at all. Why was I trusting it? Because everything felt really dangerous.

James Royal-Lawson
I think for me, I wasn’t driving when I was listening to the episode, but I did think, when you mentioned the oil lamp example, does it mean the same thing for all cars? I was just reflecting on that. I mean, what that oil lamp in a modern car is probably trying to say is, “You should probably book a time at a garage within the next so many hundred kilometres”. But you can keep using your car.

Per Axbom
Right.

Per Axbom
But in my car 15, 20 years ago, it meant, “Stop the car”, because your engine can break down.

James Royal-Lawson
Yeah. And modern cars now, they’ve got computers inside them. And presumably there’s a guardrail further down the line from that lamp that actually does put the car into shutdown.

Per Axbom
It was also

James Royal-Lawson
It would just stop your car. No, we don’t know. And, yeah, going back to the old behaviour of that lamp, it was a critical lamp and you should stop and do something about it. Whereas now, it’s probably “you should think about booking some time in a garage” so they can look at it and log into the computer inside your car and work out what it’s actually really saying, because they don’t surface that information in places that you can access.

Per Axbom
But this is super interesting, because it means that the things I’m used to because I’m older, change meaning over time, because the actual software applications or physical objects change.

James Royal-Lawson
Yeah. And I think we see this in so many different contexts where you have error messages. Which, you know, are warnings; that lamp is a form of error message, isn’t it? And it doesn’t really give you trustworthy advice on what this means for you, what you should do, and what happens if you don’t do that.

Per Axbom
Well, it really means you should read the manual. Now.

James Royal-Lawson
But you don’t read the manual. We know that, just like you don’t change the defaults. There are certain things we know as designers happen. So, another example: it’s not completely unusual that you’re buying something or paying for something, and you get to that point where it says, “Do not reload this page”, you know, “payment processing” or whatever. And you sit on that page, and you sit on that page, and nothing happens. It’s just still there. And it says, “Do not reload this page.” It usually doesn’t actually tell you what happens if you do reload it. But, you know, all of us have probably sat on one of those pages forever. As you’ve waited and waited, what happens is, you notice that you’ve received the email-

Per Axbom
That’s what I was gonna say! I go to the email and see if it’s come in. Yeah.

James Royal-Lawson
Yep. And then you know that clearly the message is now untrustworthy. You don’t need to listen to it anymore. “Do not reload this page” is irrelevant now, because you’ve got the confirmation that it went through.

James Royal-Lawson
Broken.

Per Axbom
Very broken. So what you were saying right there is: now this message isn’t trustworthy anymore. And as I was listening back, I was also thinking about, well, how do we spend time communicating “Don’t trust this”? Because that is essentially what we’re saying with a lot of the new types of generative AI tools. They’re general-purpose tools, in that they can do some things well, but some other things really badly. And people aren’t always aware of what the constraints are. What’s the framework for usage here? So how do we communicate “Don’t trust this. You’re going down the wrong path now. You’re using it for stuff that it really shouldn’t be used for.”?

James Royal-Lawson
Oh, really good point on the whole thing with safeguards or guardrails. I think it was recently, last week, there was an example with the Arc Browser.

Per Axbom
Yeah, a new web browser.

James Royal-Lawson
You know about?

Per Axbom
Yeah.

Per Axbom
Yeah, I know about that browser, because you’ve shared it with me and said, it’s really interesting, because it has a number of AI based features. And it’s trying to, kind of like, re-imagine how we use search.

Per Axbom
I can give you an example of a really neat AI feature on it. And that, when I download PDF files from the internet, sometimes they have really weird names, like just numbers and random letters, but the browser renames that for me to something coherent based on the content of the PDF file. So that’s helpful for me to find the PDF as I’m going back to it.

James Royal-Lawson
Yeah. What’s really interesting and relevant to our conversation with Carol is that it’s come to light that Arc Browser released their new AI search-supported feature.

Per Axbom
Just the other day.

James Royal-Lawson
Yeah, but without any guardrails. They made the mistake that Carol was talking about: they pushed it out there without anything protecting you, you know, helping you keep on track. So it was possible, and is still possible, I’m not sure they’ve fixed it completely yet, that you could do searches for things like how to make a bomb, and it would give you examples of how to do this. Rather than maybe steering you somewhere else.

Per Axbom
Exactly. So the feature that they built is that it will go out and search for you and find tabs for you. So if you’re looking for hotels in a certain city, you can say “Find hotels that have rooms available” and it will just list those tabs for you. So you don’t have to go to your own search engine and Ctrl-click and open up several tabs. It just does it for you. And it also summarises and says, “This hotel has these rooms available, it’s located here and there”. So it actually adds content and context to the page. And if you do that search for “How do you build a bomb”, it will tell you in a very friendly voice, “This is how you go about it. This is where you can go to buy stuff to build your bomb.” It has no guardrails whatsoever.

James Royal-Lawson
No, the example in the article I read about it even said you could ask, “How do I dispose of a body?” I mean, there were these kinds of searches, and

Per Axbom
It gave you locations.

James Royal-Lawson
It gives you real practical examples of what you could do. And the article I read gave examples of how Google, which has been doing search and putting guardrails in place for decades, if you did certain types of searches, they put up a message that helps steer you in a better direction. Maybe a helpline number. And so on. Or they down-prioritise certain resources and lift up other resources that, again, push you in a better direction. This gets us into the values aspect, the cultural aspect, what is good and bad, and so on. But aside from that, having a guardrail-free system like Arc put out there at first, where there was no limit on what you could ask the AI to search for, and it would summarise and give it back to you, absolutely boundless. That was a bit irresponsible.
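For contrast with a guardrail-free release, here is the kind of pre-search steering being described, sketched with invented keyword categories and messages; a production guardrail would rely on trained classifiers and locally appropriate resources rather than a keyword list.

```python
# Invented categories and wording, for illustration only.
STEERING_RULES = {
    "self_harm": {
        "keywords": ["hurt myself", "end my life"],
        "message": "You are not alone. Please consider contacting a local helpline.",
    },
    "weapons": {
        "keywords": ["make a bomb", "build a bomb"],
        "message": "This search can't be completed. Here is general safety information instead.",
    },
}

def guardrail(query: str):
    """Return (allowed, steering_message) before the query reaches the AI."""
    lowered = query.lower()
    for rule in STEERING_RULES.values():
        if any(keyword in lowered for keyword in rule["keywords"]):
            return False, rule["message"]
    return True, None

for q in ["hotels with rooms available in Lisbon", "how do I make a bomb"]:
    allowed, message = guardrail(q)
    print(q, "->", "search runs" if allowed else message)
```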

Per Axbom
And that connects back to what Carol was saying about what happens if you turn out to be not trusted, if you do something wrong. Arc is a relatively new company that has been lauded for the many good features that they have. Now they’ve done this. And it will be interesting to see, well, how much trust has been lost? And how will they now communicate, based on what’s been revealed about how they released this feature?

James Royal-Lawson
I think it’s an excellent reminder of how a lot of this AI stuff is just putting us back to square one. Like we were 20-odd years ago, in the early days of interactive websites. Where you could put “star” into a search box and you’d get entire databases thrown back at you, or you could put snippets of code into search boxes or whatever and you would get-

Per Axbom
Common fields

James Royal-Lawson
Backend systems, yeah, you’d get backend systems and so on. There were some simple hacks you could do. And we’re there again.

Per Axbom
Yeah.

James Royal-Lawson
But with AI based systems.

Per Axbom
OK, so now people can just hack the AIs. Yeah.

James Royal-Lawson
Yeah, you can trick them into spitting things out, just like we could trick search boxes in the early days to spit things out that weren’t meant to be spat out.

Per Axbom
Where do we go with this? I mean that’s, that’s not-

James Royal-Lawson
We can fix it through, well, we can fix a lot of this through what Carol has talked to us about, what we’re talking about. You know, being considerate and thinking and asking yourself these questions –

Per Axbom
And assuming responsibility.

James Royal-Lawson
– and preparing for the worst. Yeah, preparing for the worst as such. Presume that your system will go wrong, just like Carol said. And this is a great step, the correct step forward, to help put us back on track and get us off square one again. Square zero.

Per Axbom
Yeah, that is really a timeless interview with Carol, so I’m happy that I listened back to it now, and I’ll probably listen to it again in the future as well.

James Royal-Lawson
Recommended listening.

Per Axbom
Right. Now you usually pick something out for us, don’t you James?

James Royal-Lawson
I have picked something out for us and it seemed really quite applicable to suggest that you listen back to Episode 270, from season one, which was “Design for safety with Eva PenzeyMoog”.

Per Axbom
Yeah, that’s a really good one. I love her work on, well, abuse online: how to deal with it, and how to make sure you design to create less abuse online. It’s a fascinating episode.

James Royal-Lawson
Yeah. Safety and safety in systems. So very relevant, connected to what we’re talking about in this episode.

James Royal-Lawson
If you want me and Per at your next conference, event, or in-house training, we’re offering workshops, talks, and courses to inspire and help you grow as individuals, teams, and organisations. Get in touch by emailing hej@uxpodcast.com.

Per Axbom
Remember to keep moving.

James Royal-Lawson
See you on the other side.

[Music]

Per Axbom
So James, how many ears does Captain Kirk have?

James Royal-Lawson
I don’t know, Per. How many ears does Captain Kirk have?

Per Axbom
Three. The left ear, the right ear, and the final frontier.


This is a transcript of a conversation between James Royal-Lawson, Per Axbom, and Carol Smith recorded in May 2023 and published as episode 321 (S02E11) of UX Podcast.