Make it more so: Sci-fi and AI

A transcript of S02E04 (314) of UX Podcast. James Royal-Lawson and Per Axbom are joined by Chris Noessel and Nathan Shedroff to discuss sci-fi interfaces and AI, and to reflect on the 11 years since Make It So.

This transcript has been machine generated and checked by Dave Trendall.

Transcript



Computer voice
Season Two, Episode Four.

[Music]

James Royal-Lawson
I’m James.

Per Axbom
and I’m Per.

James Royal-Lawson
And this is UX Podcast, balancing business, technology, people and society since May 2011. And we've got listeners new and old all over the world, from Thailand to Curaçao.

Per Axbom
And wow, do we have an interview for you today. It's been a long time coming. But we did it, and it was by chance. We are talking to Nathan Shedroff and Chris Noessel. And we haven't done that in 11 years.

James Royal-Lawson
Well, with the pair of them together, we haven't. We first talked to them in Episode 25, which is 11 years ago as you said, on the back of their book that was released at the time called 'Make It So', about science fiction interfaces. Both Chris and Nathan have written numerous books over the years. Actually, to be honest, I think there's too many to list.

Per Axbom
There's too many to list. And we will obviously link to their respective online profiles in the show notes. And today's interview was recorded at UX LX, which is a conference that is held every May in Lisbon, Portugal. Tickets usually go on sale each October and sell out every edition.

James Royal-Lawson
In our conversation with them at the conference, we wanted to reflect back on the interview and the book from 11 years ago, project that into now, and see what we can pick up and what's changed.

Per Axbom
I think so yeah. But Chris and Nathan being who they are, we were all over the place. But in a very, very interesting place.

James Royal-Lawson
So in my preparation for this gathering, reunion we could call it, I looked back at the transcript from our chat in 2012, some 11 years ago now. And it's quite a typical dialogue, I think, at least between, well, probably all four of us. Nathan, you made a small technical error when talking about a date. And Chris couldn't resist correcting the technical error and making sure we had the factually correct number. In this case, we're talking about years.

Nathan Shedroff
Not unlike either of us.

James Royal-Lawson
And then Per makes a comforting comment, trying to say, okay, I do this too. And then of course, I dive in with a jokey comment and a jokey response, and make comments about our ages, and how time flies by, and decades, and so on, and go on to say we'll book an interview with you again in 10 years, so we can have the new meetup and talk about this again.

Per Axbom
Yeah, we’ll actually have to play that clip.

Nathan Shedroff
And in fact, I, you know, I came up with the title of the book back then too, but never did anything on it until about 96, I believe, when I started talking to Chris about it. And he said, yeah, right on, this sounds like fun. And even back then, we had a feeling that there'd be something interesting in that kind of investigation, but we had no idea exactly what would, you know, come out of it, or that we would find so much material.

Christopher Noessel
Quick, quick data, not 96, 2006. Right, right?

Nathan Shedroff
Oh, yeah, I’m sorry.

Per Axbom
I tend to make the same mistake.

James Royal-Lawson
Yeah, no, decades fly by now. It’s a sign of our ages.

Christopher Noessel
I was just gonna say like, it’s one kind of nerd thing to have been doing this six years.

James Royal-Lawson
Yeah, we book the interview with you in 10 years time so we can have the re-meet up and talk about it.

James Royal-Lawson
So Nathan got his decades mixed up, said 96 instead of 2006.

Nathan Shedroff
Okay, yeah.

Christopher Noessel
So petty I’m sorry.

Nathan Shedroff
What were we talking about?

James Royal-Lawson
You were actually going back to the origins of Make It So and the very first kind of research you did into the topic.

Per Axbom
How long you’d actually been working on the book, prior to writing it.

James Royal-Lawson
And then probably the next stage and then when you actually started working on the book in 2006.

Christopher Noessel
1886, around there.

Nathan Shedroff
I think I came up with the idea for it on the train on Caltrain. That would have been 89.

James Royal-Lawson
You did mention the 80s, then we moved on. Then I think you went to the 90s. But Chris then said, well, actually, that feels,

Nathan Shedroff
It’s all a blur.

James Royal-Lawson
It’s so long ago.

Nathan Shedroff
What is that saying about the 60s? If you remember the 60s, you weren't there.

James Royal-Lawson
So yeah, so if you remember our first chat in 2012, we weren't there. But going on from that, because I kept on looking a bit at the transcript of the episode, and in the time and space we are in now: AI. And I say AI without getting into a more detailed definition of that, even though someone's looking at me across the table and probably would like to. We don't mention AI in that original conversation about sci-fi interfaces.

James Royal-Lawson
I thought, okay, maybe we just don't use the letters A and I, maybe there's some variants. I tried to look for agents, or agentive, or artificial. I couldn't find any mention of these things. Which kind of surprised me. But at the same time, it maybe made me reflect more about the time we are in now.

Per Axbom
Because back then it was really about interfaces. Like, yeah, that was the thing.

Nathan Shedroff
Well, we definitely had agents and anthropomorphized interfaces and conversational interfaces in the book. Maybe it didn't come up in the conversation.

James Royal-Lawson
That was not a criticism, that we should have mentioned it. It's more a reflection on: that was a category that was there, but, you know, we didn't have that burning need to talk about AI. So if we did that conversation now, that chapter would have got a much, much, much bigger slot in there.

Nathan Shedroff
Which is essentially the talk you just did.

Christopher Noessel
Yeah, it’s true.

Nathan Shedroff
I have a question for you, Chris. And that is, maybe because there's not a Hollywood film that necessarily... So when I give my talk about the history of conversational interfaces, it starts with the burning bush, which I get isn't necessarily an AI. But the next one is the Golem.

Christopher Noessel
Yep.

Nathan Shedroff
And so would you consider the Golem AI? And would it, I mean, it obviously predates Metropolis, but would it fit in that investigation, not being a movie? Or would it have to permute to, like, Frankenstein?

Christopher Noessel
I'm still processing, I'm very slow after the conference is over. I try to really put hard boundaries around the survey. So I think in the book I wrote in 2017, I referenced the Golem as an example of an agentive interface, which would lead me to say, oh, yes. But in the context of science fiction, it's arguable. It's arguable, but it's a great citation in the history of conversational interfaces. Especially because the on and off switch is linguistic. I don't know if you guys know that.

James Royal-Lawson
Okay.

Per Axbom
No.

James Royal-Lawson
Explain a little bit.

Christopher Noessel
I don't read Hebrew. So do you want to?

Nathan Shedroff
I'm not familiar. Well, I mean, I don't know if there is an off switch. But the birth moment is when you inscribe the, I think it's probably a specific word in Hebrew,

Christopher Noessel
It’s the Aleph, isn’t it?

Nathan Shedroff
But is it just the Aleph, or is there, see, now I'm getting confused with one of Ted Chiang's short stories, which is just brilliant. Have you read this? You know, Ted Chiang, who wrote the short story that became Arrival.

Per Axbom
Okay. Oh, really.

Nathan Shedroff
Yeah. Most amazing writer.

Per Axbom
Oh, wow.

Nathan Shedroff
in the 20th century as far as I’m concerned, because every single short story is a completely unique, different world that you’ve never thought of before. But one of his stories is, I don’t know how to characterise it other than to say it’s like, Hebrew punk. It’s sort of steampunk with golems. So the most important people in manufacturing are Hebrew mystic writers who have to write out the instructions for the Golem on a sheet of paper in Hebrew.

Nathan Shedroff
So you're writing right to left. And I think there's a specific number of characters that you have to use, so it becomes a limiting factor in your coding. And then you wrap it up and you put it in the back of the neck of the clay Golem. And it comes to life and does that function. And so now I'm getting confused: does the classic Golem, the original Golem, have one letter or a couple of letters?

Christopher Noessel
I'm going to sound more knowledgeable than I am, I'm looking at Wikipedia at the moment. The Golem originally had the Aleph to turn the Golem on. And then, no, I'm sorry, you would remove the Aleph from the word emet. Am I saying it right? Which means truth. And when you remove the Aleph, it becomes met, death. And that turns the Golem off. I never knew that. Which I found just a wonderful linguistic pun in the middle of that wonderful myth.
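The pun can be sketched in a couple of lines of Python. The Hebrew strings here are our own addition for illustration, following the version of the myth described above, not something from the conversation itself:

```python
# The Golem's "linguistic off switch": removing the letter aleph (א)
# from emet (אמת, "truth") leaves met (מת, "death").
emet = "אמת"                     # aleph, mem, tav: "truth"
met = emet.replace("א", "", 1)   # strip the aleph, turning the Golem off
print(emet, "->", met)           # truth -> death
```

Note that the strings are stored in logical (first-letter-first) order, so the aleph is simply the first character even though Hebrew renders right to left.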

Nathan Shedroff
Truth to death.

Per Axbom
Wow.

James Royal-Lawson
And there we go, confirmed: there was an off switch.

Nathan Shedroff
Yeah, exactly. Well, and so my question to your question was going to be: what's the topic that we should be mentioning today that we will end up talking about in 10 years?

James Royal-Lawson
Oh, that’s a fascinating one for us to maybe come back to as an end-all question that we can think about during this little journey of ours. Because yeah, we can come back in another 10 years.

Per Axbom
But I feel like there's an AI in the room, isn't there? Because everybody's talking about AI these days. But at the same time, as you were alluding to, nobody is actually talking about something that they can define, which is a bit of a struggle for us, because everybody keeps bringing it up. How do you guys feel about this word just being thrown around, and meaning absolutely anything, it seems?

Christopher Noessel
Yeah, so I did a workshop yesterday where I was teaching sort of fundamentals of design for AI. And I was touching on, the three core components for me are: fancy algorithms that interact with models based on vast data. I think that covers the majority of what has been called AI and, as far as I can prognosticate, what may be AI in the future. So for my money, those are the constituent parts, if you want to work with that. But you're right, back in the 1970s we'd have considered spell check a really powerful AI. And now it's just a function. It changes over time.

James Royal-Lawson
Oh, that's interesting. Because I was just gonna say, do you think then we're stuck with a definition of AI now, and we're going to have to invent, like, super intelligence? Do we have to have new phrases to move on to when we've worn out these things that we have as AI? But what you're seeing now is maybe not

Nathan Shedroff
Sadly, yes,

James Royal-Lawson
OK.

Christopher Noessel
We have words that change their meaning based on context; pronouns, the word horizon, those change. Your horizon is different than my horizon. Your listeners have a horizon all their own as they hear these words. And that’s kind of like a pronoun word. It depends on the context. So I think that the social context, and to some extent, the technological context, will decide whether we keep that word or throw it away.

We've already been through a couple of AI winters over the history of artificial intelligence. And nobody would touch AI for a very long time, even the term. That's why we invented machine learning, because: that's not AI, that won't cost me the funding that I need for my project. But now it's back in vogue again. And we'll see how far it goes.

But I was remembering one of the things that we discovered when we were writing the original book: we went back to the very beginning of cinema as part of our survey that informed the book. And we rewatched Le Voyage dans la Lune, and that's when you realise that there are zero interfaces in that film. Like, when they want to open the door to the rocket, they just push it. Right?

Nathan Shedroff
Certainly. I mean, you could, sort of say that is a kind of interface, but what you’re saying is, there’s no surface. Yeah.

Christopher Noessel
It’s minimal.

Nathan Shedroff
Artificial.

Nathan Shedroff
Yeah exactly, mediated in a way.

James Royal-Lawson
There’s not assistance there. You’re just pushing something out.

Christopher Noessel
Yeah, there was a fellow who used the term the Gutenberg parenthesis. I forget his name, please forgive me, it's something I'll have to look up. But it was his thesis paper at the Royal College of Art. And when I ran across it, I think, no, the title of it was 'Into the Hot'. It was a weird thesis. But his use of the phrase, the Gutenberg parenthesis, really points out that what feels like a permanent part of the world has a beginning, and it will probably have an end.

And I think Jeff Jarvis, no, that's not the one I'm remembering, the guy I'm thinking of was referencing Jeff Jarvis. Thank you so much. So it may be that there is an AI parenthesis, or there may even be an interface parenthesis. And in fact, I kind of believe we're getting close to it. Lots of designers are asking the question: what does it mean to be an interface designer in a world where I can speak to a computer and have it do the thing?

Nathan Shedroff
I have to say that I'm very frustrated with the co-opting of the term AI, which has now just been, you know, laid onto machine learning, such that people who are doing real AI, like really looking at artificial intelligence, now have to modify the term. I call it, you know, AGI, or GAI, or whatever. To play off what Chris is saying, I mean, this is the natural evolution of language, and it's not like I'm gonna hold that dike back.

But I think it's problematic, especially when you look at the media. And you just, an hour ago, put this beautiful survey up and identified where the media's stories are that they shouldn't be, and where they aren't that they should be finding stories or reporting about. And to me, that's the difference between reporters or filmmakers understanding that this AI isn't 'I'.

It's ML. But they're confusing that with stories about 'I', AI, right. And the difference there. And the reason why, you know, if you read right now, and this is almost June of 2023, if you read an article that references the Terminator, or killer robots, or anything like that, you automatically know that that reporter or writer has no idea what AI is, has no idea what is important about what we call AI and what's happening, and, like, they don't know where the story is.

Christopher Noessel
Yeah, I think, oh, can I share one quick one? I have a colleague that I worked with, who was at Google and moved to Meta, or was it the opposite? No, yeah, he was at Google, now he's at Meta. I don't know if he still is, because of all the recent lay-offs. Anyway, he and his friends like to joke about 'time to Terminator', which is a metric that they keep at conferences for every speaker: how quickly, over the course of a talk about AI, will the speaker mention Terminator. And I don't think I heard it once here. But I jokingly made it literally the first word of my talk at a conference where I knew he would be.

Nathan Shedroff
We had a picture of it.

Christopher Noessel
Did I? In today’s thing?

James Royal-Lawson
Yeah, there was one of your slides about films

Christopher Noessel
Oh all the examples, yeah

James Royal-Lawson
The Terminator is in there. But thinking back, I think we talked about this in the previous, in the first chat about Make It So, if not in the book itself: the sci-fi of now can only reflect the society or the culture of now.

James Royal-Lawson
So thinking about the terminology thing, are we going to see now a phase of sci-fi films which have to incorporate this world where we have used AI to mean all these machine learning things? Because basically now they've all passed the Turing test, so it's kind of job done for AI, and now they are AIs. We have to use things like super intelligence and so on, to go beyond it. Because we're bridging different technologies, we need several terms in parallel at the moment, and that then presumably, maybe, feeds into Hollywood.

Christopher Noessel
Yes, Hollywood always has to extend the current paradigm. And every time there is a great leap in literacy, it makes a lot of old titles look super stupid, when they were just, you know, guessing. What's the dreams one? 80s film, guy spills soda on his keyboard, Electric Dreams.

James Royal-Lawson
Yeah.

Christopher Noessel
Yeah, it's Electric Dreams, where he spills a soda on a keyboard and that wakes his computer up. Like, at the time it might have even been tongue-in-cheek, but now it just looks so dumb. Anyway, yes.

Nathan Shedroff
Like the teenagers ripping up magazine photos and inserting them to make

Christopher Noessel
Into a scanner in Real Genius. So dumb.

James Royal-Lawson
It worked, right?

Christopher Noessel
It worked, you saw all the evidence, and yeah, exactly. Like, why they buried that technology, I don't know. But yeah, totally, it's a vast conspiracy. So, yes, there's always a treadmill of increasing literacy, and Hollywood writers extend that. And it's not ever going to end. There was an intern at my job at IBM, as part of the design for AI guild that I run, who did a really interesting project about how we talk about AI: being as specific as possible, balanced with being as accessible as possible. That's not an easy road to walk. And it is one that sci-fi runs and abuses quite often. But as long as the movie sells, the studio's happy.

Nathan Shedroff
I think that the difference I'm talking about is really whether it's characterised as sentient or not. Right. So I think, as far back as sci-fi goes, we have writers and movies and actors and whatever portraying sentience, artificial sentience. And everything that is what we now call AI today, which is, you know, incredibly impressive machine functioning, was just never seen as sentience. It was just sort of technology, right?

In Eagle Eye, like, you know, or CSI, like: enhance, enhance, enhance, right. That's what we're seeing every day now, apps that do these kinds of things, to some extent, but we don't think of that as sentience. And so I think that the conversation about AI that's out there in the world, between, you know, Wired magazine and, you know, some local paper, versus, you know, an MSNBC or something, is still centred on the sentience of technology.

Is that happening? Is it not happening? What happens if they take over? Elon Musk says he's worried, you know. That's the weirdness of the conversation, because there really isn't a lot of conversation, outside of the tech design world and a little bit of the sort of innovation investing world, around the capabilities of what we now call AI.

Christopher Noessel
But it's funny, because the sentience question is always, it's very weird to me, partially because it's always a yes or no question. And that's ridiculous. I read, and I talked about in the workshop yesterday, a book that I read last year by Ed Yong called An Immense World. And in it he goes through and talks about animal perception, and the resulting hints about animal consciousness.

And it really points out, in the intro of the book, he describes, like, a gymnasium room with an elephant, a snake, a human, a spider and one other animal. And he says the human thinks that they can see it all, and so they see that as the room. But each animal lives in a different world in that same space. And it really raises the question of the alienness of familiar sentiences.

And the question for me personally about AI is not, is it sentient? Because we're ploughing towards that as best as we can, for better or for worse. But what kind of sentience is it? And I only see a very few films that are asking that key question. And one of the best ones, and I now cannot remember if this was out when we were writing the book, but Under the Skin. You know this film?

James Royal-Lawson
I just started singing “Under the sea”.

Christopher Noessel
Oh no no.

James Royal-Lawson
Oh my word.

Christopher Noessel
Under the Skin is this brilliant film, it was filmed in Scotland, but Scarlett Johansson is the main actress in it. And I'm not gonna give spoilers, but she goes, this happens in the very early part of the film, she goes around this city in Scotland, luring men, like, with sexual commands, and then they get back to this building.

And these men get these sort of dopey expressions on their faces as they're following her. And they don't acknowledge what we as the audience can see: this oil-black, seamless environment that they've entered into, as they're sort of following her, like a fish, or like a siren.

James Royal-Lawson
a Little Mermaid.

Christopher Noessel
Like, slowly, with every step, they get lower and lower in this black, inky space until they fall into it, and then they are digested. And it's a beautiful depiction of an otherworldly sentience, in a way that fits film really beautifully. And so

Nathan Shedroff
Annihilation was one of those to me.

Christopher Noessel
Oh, yeah. Weird core, yeah. Yeah. Like, it's just going to be stranger than we can possibly think. I'm looking forward to seeing more films, especially with the recent rise of ChatGPT, as more writers begin to explore that space. What is the alienness of AI? And what does that mean if we're using it to access the world?

Nathan Shedroff
But going back, I mean, you're essentially suggesting that sentience may very well be a continuum, a spectrum. And humans are maybe, maybe not at the pinnacle, but we're over on this end, and insects or whatever are over on that end. And we already know that AI, technology, whatever, is capable of functioning at the level of intelligence of insects, and even small animals.

Nathan Shedroff
And I think, when we were doing presentations about Make It So, I was telling people: stop personifying your interfaces and your systems and your interactive televisions as people. Why don't you try pets? Because the level of intelligence, where the technology could at least simulate sentience or intelligence, was so much closer to a pet. And why wouldn't you want, you know, your TV to be a pet rather than another person?

Christopher Noessel
It certainly sets expectations much better, right?

Nathan Shedroff
The fail over to you know, disappointment isn’t as big.

James Royal-Lawson
Legal consequences, possibly are a bit iffy.

Nathan Shedroff
Considering, like from November on, now, we’re in different territory.

Per Axbom
But what I'm hearing from what you're saying, Chris, is that, I mean, we won't be able to define the type of sentience that perhaps is actually developing, because it's something that we are not capable of understanding.

James Royal-Lawson
I don't think we're even yet in agreement in the animal world of where it begins.

Per Axbom
Exactly.

James Royal-Lawson
So if we're not, if we're not finished with your scale there, Nathan, with insects or microorganisms and so on at the other end, if we don't really know where the greyness is, right? It's not all white at the end here.

Nathan Shedroff
It used to be like, oh, humans are tool users, that's what sets us apart. And then we find out, oh no: monkeys, and dogs, fishes, crows. Forget the tool, maybe it's opposable thumbs. But there's monkeys and whatnot. We keep moving the goalposts because we have to

Christopher Noessel
To make ourselves special.

Nathan Shedroff
And so one thing I'm looking forward to is that these technologies will really help us question what that whole idea of sentience is, right? Like, we already had the Google engineer declare: it's alive. Right? And it's not, but it's getting close enough that that confusion is getting, you know, easier and easier to make.

James Royal-Lawson
This might be, I don't know if this is political, but maybe. Just thinking, though: we're saying sentient, but then also you've got life. When does life begin? When does sentience begin? Are they the same moment, or does one come later? Can we end up with a situation where we have machines who have maybe achieved one level that is higher than a living thing that eventually crosses that threshold?

Christopher Noessel
There has been a lot of talk about that recently. And we're going far afield of interfaces, and that's fine, I love it. But I have an outline for a book about designing for animals. Like, literally, for them as users.

Nathan Shedroff
There’s a long history of that.

Christopher Noessel
Yeah. And part of that has been doing research into this question of sentience. And the thing that really stuck with me that I learned last year, the current thinking, and I will have to open up my iPad and look at my sketch notes for the source, but the concept is that it was mobility that developed sentience. Because a stationary animal develops a set of senses about what's coming towards me, and if it's a threat, avoid it.

But the moment an animal becomes mobile, it has to subtract its own motion, or every time it moves, it panics. Crap, that wall is moving towards me. Oh, shit, that person is attacking. Right? You have to subtract that, or the animal will freak out all the time. It is that subtraction that begins to outline a sense of self. And a sense of self is the core of the theory of mind. Yeah, really fascinating. I know we’ve gone far far.

Nathan Shedroff
I have an interview somewhere in my archives that never got published, for the Demystifying Multimedia book, with a designer at Apple who was designing, he did interfaces for Koko.

Christopher Noessel
Is that the gorilla?

Nathan Shedroff
Yeah. And the fascinating ones were dolphins, or a dolphin, where he said that the big difference, because you have to get into animal vision, right, like, if you're doing that kind of interface. The big difference with dolphins is that the interface constantly has to be moving or they can't see it.

Christopher Noessel
I taught for one year, when Brenda Laurel was the head of the CCA programme. I taught both the undergraduates and the grads in interaction design, and one of my early assignments was to design a banking interface for a shark.

Nathan Shedroff
Nice

Christopher Noessel
For the same reason. Like, they have to have constant airflow or they drown. Oh, water flow, because of the oxygen, or they drown. I mean, it poses this whole weird worldview of how on earth do you do that? Anyway, yeah, it's gonna raise all sorts of questions.

Per Axbom
Okay, this is truly fascinating. But I'm going to bring it back to, like, I'm trying to think: what's going through our listeners' minds right now?

Christopher Noessel
And this isn't about interfaces or sci-fi.

Nathan Shedroff
We’re developing a really good book list.

Per Axbom
Exactly. Yeah. That's going into the show notes! Absolutely, definitely. At the same time, some of these things seem so complex, and a lot of designers are getting their information from the media, which we just criticised. They're not aware of what's going on, really. And perhaps the reporting is under par.

But what should I as a designer be expected to understand about all this? What should I be doing to make myself, now, I'm perhaps not afraid of being replaced by an app. I want to use it because I'm interested and curious, and perhaps I can make cooler things and better, better-performing things for my clients, for my users. Where do I start? What am I supposed to be doing right now? Because everything is moving so fast.

Christopher Noessel
I'm only gonna start with a few anecdotes about how fast it's moving. Like, massive industries, blue-chip industries, are retooling around ChatGPT. And that was released in November, that's only, what, six months ago. AutoGPT and BabyAGI were released, like, maybe two months ago. Midjourney came into its own in fall of last year. And, like, even in the workshop, I had two slides where I was like, no, this came out Monday. But wait, this came out Tuesday, and I haven't had time to, you know, adequately process it. And even though my job at IBM is to stay on top of this, it is the most overwhelming part of my job. So, a couple of anecdotes to say: we're all struggling, don't worry. It's very fast and very furious. Oh, that's a bad reference. My apologies. Because

Nathan Shedroff
The next one is going to be written by AI. Number 11.

James Royal-Lawson
Based on all the others.

Nathan Shedroff
So I would say that you're absolutely right about how fast this is moving. And it's moving faster than anyone can keep up with, let alone sort of conceive of the boundaries. But the first answer I have for my designers is: experiment. Like, use these things. The worst thing I think a designer can do is what a lot of designers did at the dawn of desktop publishing, which is: no, I will not, like, that's not typography, I can't control typography, I can't control colour. I will stick with print.

And okay, there's still print around, right. But, like, they missed an enablement of design that, you know, has been orders of magnitude. So I think the worst thing that a designer could do is say: it's more than I can deal with, I don't want to deal with it, I want to keep to my process and tools. So I try to have my students just explore, play. I give them assignments so that they can see also, like, where it's not really good.

You know, when I talk about AI, it's, you know, ML, AI, whatever, I call them regurgitrons, right, because they regurgitate. They beautifully, elegantly regurgitate what they've been given. And that's sort of all they can do, at least at the moment. They're not terrible. They're generative in this other weird way, but they're not original. And they have a built-in regression to the mean. So they're really good at mediocre. If you're a sitcom or a rom-com writer, it's over. Because most of us are mediocre anyway, right?

Like, it's not going to be, if you're a really good one, you're going to rise above. So my students, you know, they can get 60 to 80% of the way there on their assignment. They still have to do the other 20 to 40. Not that they do.

But, like, you have to explore that, you have to play with that. The other piece of it is that prompt design is a new design field. Like, how do you design a prompt to get the feedback that you want, or the output, out of a ChatGPT, or especially, like, a Stable Diffusion or Midjourney? That's a skill that they need to learn, which is an entirely new skill.

James Royal-Lawson
And when we talked to you, Chris, about agentive technology, I remember us talking about just that skill of, as a designer, being able to see: good option, bad option. Judging what is produced by these generative systems. And that's an incredibly important skill, and one at the moment that can't be done by the systems themselves. Because it's a judgement, it's a subjective decision.

Christopher Noessel
I don’t want to nerd out too hard.

James Royal-Lawson
I shouldn’t have said can’t.

Christopher Noessel
Yeah yeah

James Royal-Lawson
As soon as I said it, I had the voice inside that said: James, you just said can't. He's gonna come with examples.

Christopher Noessel
The way adversarial networks work is actually with two AIs: one that spits out ten options, and the other one which says which is closest to this prompt, which of these images is closest to this prompt? So in fact, it is judging, but all it’s judging is fit. Not quality. Not even novelty. When I have been playing with ChatGPT, I always stump it by saying things like, what’s an atypical example of this object? Like, that implies a field awareness and the ability to recognise, within that, what are the exceptions. Humans are still pretty good at that. And I always, like, crash ChatGPT when I ask it for those sorts of things.

Nathan Shedroff
Is that when it starts hallucinating best?

Christopher Noessel
No. So the new version of ChatGPT, and again, the speed of this thing, this was released last week, I think. But now it can, ChatGPT-4, and this will seem ancient by the time somebody listens to this later, now can browse and plan. And there are also all sorts of other plugins. So you can do something like say, hey, go to the UX Podcast website, and find all the interviews with Nathan Shedroff and Chris Noessel, and then summarise them in a timeline. We tried it right before the end of today’s talks and it didn’t work. But that’s because we were only two steps into the prompt design. I’m fairly sure we can get it to work eventually.

Per Axbom
We tried it for 10 seconds.

Nathan Shedroff
And so to your point, it would be interesting, like, essentially, I expect nothing to happen if you go to ChatGPT and say, what are the topics that Chris and Nathan should have been talking about in their interviews? Right, like, I don’t know, turnips?

Christopher Noessel
I don’t think it could handle the task that I set up for myself in the talk today, which is, what is Hollywood not telling us about AI? Those are hard questions to answer. I did think of one other thing, which would be advice for listeners. Because AI is a technological thing that we are trying to adapt our work and our lives and our world to, I believe one of the challenges for designers is to, and this is hard, stay on top of the news and get good at separating the techno jargon from what matters to humans. Right?

It might be of interest to know the difference between one-shot learning and the different types of neural networks there are. But as a designer, you don’t really need to know. I’m saying that now, and I can even hear my own caveat, but for the most part, right, filtering all the stuff you’re reading down to: how will I use this? What does this mean for the things I need to equip our users to do? That’s a great filter to develop. Because there’s so much technobabble. I think I just brushed the mic. I’m gonna say that again.

There’s so much technobabble that you have to be the filter. I try and do my best. The poster that I referenced in the workshop yesterday was me literally doing that, looking at the three algorithms that underlie AI, and really saying, here’s what matters. Here are the inputs. Here are the outputs. Here’s what it does, here are examples. And here are the interactions that are germane to that function. I’m probably not the only one that’s doing it. So finding those people who can do that, if you can’t do it yourself, is a good thing. But better is to develop that skill yourself.

Nathan Shedroff
Well, the title of your talk, I mean, it doubles down on the sense of, you know, the point you’re making is it’s always about the people. Like, if you’re on the design side, the interface side, whatever, it’s always about people and how AI is going to serve people. You know, on the engineering side, maybe there are cases where you just care about the technology and the technology interfacing with other technology.

But, you know, in these discussions, often, some people can get sidetracked and forget that, no, we’re designing this for people. In the same sense that in my workshop yesterday, or two days ago, whatever, someone always invariably comes up to me afterwards and says, well, this all sounds really good for B2C. But I’m in a B2B market, and I’m enterprise, and I don’t understand how emotions and value and meaning fit, because they only care about features and price.

And my answer is, you’re not looking hard enough. Because as long as there’s another person, all this stuff still works. But business people for so long, especially in the tech industry, have been able to squish their brains in this weird way so that they can forget that there are people involved, because it’s just numbers, and the quantitative optimisers love that. But the message you’re giving us about AI, Chris, is that no, remember the people, look at the people, look at what they need, look at how it interfaces with them. In some senses, who cares how the technology side works?

Christopher Noessel
I think there’s a second answer. That first answer is 100% true for us as designers, but we also have to acknowledge that AI stands to upend civilisation and entire labour markets. And so as citizens, we don’t need to pay attention deeply to the technology, but to the broad-scale consequences of that technology.

But the things that we need to do there aren’t like, oh, I’m gonna go in and design something different. I may have to go vote, or convince other people and educate them about, well, okay, what does ChatGPT mean for the labour market? And then what do we do with the jobs that are about to be obviated? And sorry, I misspoke, with the people whose jobs are about to be obviated. Yeah, I guess I just talked myself right back around to what we were talking about, which is still about the people.

Nathan Shedroff
Well, except that’s going to be a huge challenge on a philosophical, and political, and even religious level, for a lot of people, especially in the United States, trying to eke out, I mean, we have this formula, it’s not even capitalism, it’s corporatism, right? It’s like, eke out every last penny out of the system. It’s so extractive that we already have job problems, right. And there’s the moralisation of, you know, we won’t give you food stamps unless you jump through these hoops and prove that you’re looking for work.

And, you know, there’s all sorts of stuff that all the research says probably doesn’t work, right. And we’re at this kind of detente in the United States, between these philosophies, these approaches to solving problems.

And here comes AI, and neither side is ready for the consequences of that. Because it’s going to be impossible to hold to this idea that welfare is morally repugnant and people are gaming the system, and that that’s not unjust in itself. Like, there will be so many people out of jobs, and white-collar jobs, not blue-collar jobs, that it’ll be impossible, I think, to hold fast to these traditional moral views of work and compensation.

Christopher Noessel
There’s something really funny. So the pushback, the kickback against ChatGPT and Midjourney, has been so loud, even from close friends of mine on social media. But I got into a conversation fairly recently about: where were you at the dawn of desktop publishing? My first job out of undergraduate was as a typesetter, a phototypesetter. And that job went away.

Nobody was infuriated. I think it’s a bit of hubris on humans’ part to think, oh, art is special, writing is special. Turns out, not so much. They are right that quality art and quality writing, and even novel writing and novel art, are hard and difficult, and maybe still not the purview of AI, and taste is not the purview of AI. But we’ve hit a point where we’re touching on deep emotional things that we have to wrangle with. Because it’s no different than the desktop revolution as far as my old job was concerned, but it is different in the scale and the speed at which it’s gonna happen.

James Royal-Lawson
So there are gonna be new jobs. Most people could lose their current jobs, like Chris’s example there. But what I think is really interesting that you say, Chris, is that your job is to keep on top of the changes, what’s going on in the AI world, and you can’t keep up just now.

Christopher Noessel
I did say both things, didn’t I?

James Royal-Lawson
We’re in an inevitable period now where policy does not have a cat in hell’s chance. There is no way policy can keep up. If Chris can’t keep up, and people involved in this branch can’t keep up, then policy definitely can’t. It wasn’t going to keep up anyway. Now it really can’t. And policy is what we’d need to revolutionise education. Because part of the answer, I guess, to this thing of where all the jobs are gonna go, once people are without their jobs, is that the generations coming up need to be trained so that the new jobs that are coming, they can fit into them. But it’s so fast now, we’re not going to have the policy in place, and we’re not gonna understand what that is in –

Nathan Shedroff
Except I think if you go back to people as your grounding philosophy, maybe policy will never be able to keep up. But it could certainly keep up better than it is now. Right? Our leaders –

Per Axbom
Even enforcement of current policy.

Nathan Shedroff
Redefining, you know, we had the wrong sort of specifications, but our intent was this, and we can change the specifications. We know what skills humans need, you know, children need to be taught or adults need to be retrained with. It’s very clear: creative thinking, strategic thinking, systems thinking, design thinking, communication, collaboration skills, and critical thinking.

I think those are the ones I’ve put together, right? ChatGPT, these technologies, are not good at those. So yes, we need to redesign education. And then we need to redesign the assessment of education, because every college, every high school, every school instructor right now is dealing with, like, what do you do when they have ChatGPT write their essay?

Well, maybe the book report was never the assessment that someone learned something, which we all assumed and pretended it was. Now, it’s clearly not. So, change your pedagogy. Like, it’s time, get off your ass. Sorry, you know, you’re still in the workforce when this happened, but you’ve got to change. And I don’t think a lot of people are ready to do that, especially in academia.

Christopher Noessel
I also think the modern Luddite movement needs to be brought to the fore, and people need to get on board. So, Luddism, as it was corporate-washed, was about an unreasonable fear of machinery and progress. That’s a very useful denouncement for a corporation to make. But I was recently reading a book called Breaking Things at Work by Gavin Mueller.

And it was all about the sort of factual history of the Luddites. In fact, they were a labour movement who were concerned with losing power in negotiations with mill owners, because once you put a mechanised loom in a factory, suddenly the hand-loom people no longer have negotiations for their jobs. And they’re like, this is going to ruin our quality of life, we’d have to travel to a factory, your factory –

Nathan Shedroff
It was all about fair wages.

Christopher Noessel
We’re going to be replaceable, it’s about quality of life. And so that is now back in relevance. We can either structure industries, and from a design perspective, individual relationships of users to AI, as either an, oh, you are the babysitter of this thing and therefore you are much easier to replace, or we can design them as augmentations. And business qua business does not have the incentives to make that happen.

It has to come from pressure, both from other systems such as government, but they’re slow, or from us, I’m pointing to everyone at this table, and everyone listening, to put pressure and vote with your dollars, vote with your vote, in order to steer the powers that be away from creating AI systems that just replace people, or turn them into, not managers, that implies things that I don’t want, babysitters, turn them into the babysitters of the day. That’s a terrible world and I don’t wanna live in it.

But while we’re on this thread, I did want to add one other thing to our long list of book recommendations. I just started it on the plane. But Matthew Wizinski’s Design After Capitalism touches on a lot of these issues, and even has a section on Luddism that I’m eager to get to. Highly recommended already, and I’m only on chapter three.

James Royal-Lawson
Do you know, we’re not gonna have time to go back to the question at the beginning, about what kind of interfaces sci-fi will be talking about in 10 years.

Per Axbom
Because we just don’t know.

James Royal-Lawson
So it’s fine. We’ll just book the interview in 10 years and get on with it. It’s a cliffhanger.

Christopher Noessel
It is a good cliffhanger.

James Royal-Lawson
Thank you a huge amount.

Per Axbom
Thank you, guys. Wonderful.

Nathan Shedroff
I feel like we just got started.

Christopher Noessel
Yeah, just got started.

James Royal-Lawson
Recommended listening? Well, I think a good choice this time, out of the many interviews with Chris we’ve done over the years, is to go back to Episode 121, Agentive Technology. I mentioned it a little bit in the interview. But this is from seven years ago now.

Per Axbom
121, I was gonna say, that’s a long time ago.

James Royal-Lawson
Seven years ago. But we’re talking about digital assistants and agents and using technology to do things for us. So it’s very much aligned with what we’ve been talking about and very relevant still.

Per Axbom
It’s a fun one. You say agentive? I say agentive. I think it’s one of those, you say potato, I say potato.

James Royal-Lawson
I think we do actually talk about that in the show. And if you want me and Per as part of your next conference, event or in-house training, we are offering workshops, talks and courses to inspire and help you grow as individuals, teams and organisations. To find out more, just get in touch by emailing Hej@uxpodcast.com.

Per Axbom
Remember to keep moving.

James Royal-Lawson
See you on the other side.

[Music]

James Royal-Lawson
What do you call a bee that can’t make up its mind?

Per Axbom
I don’t know James, what do you call a bee that can’t make up its mind?

James Royal-Lawson
Maybe.


This is a transcript of a conversation between James Royal-Lawson, Per Axbom, Chris Noessel, and Nathan Shedroff recorded in May 2023 and published as episode 314 of UX Podcast.