Could robots lead to a better tomorrow?

It's been a hundred years since the word 'robot' was first used*, but are we another century away from seeing them in our daily lives?

Image of Pepper, a humanoid robot by SoftBank Robotics

In the first episode of our new podcast series, we try to find out.

We talk to the founder of a company that rents and sells robots, an academic who's working on the ethics of robot/human interactions, and a PhD student who's studying fairness in machine learning. 

Listen to the episode now to hear more about robots and artificial intelligence, and how they still might create a better tomorrow. 

"It's a completely new and unexplored area... and we don't know what's going to happen.”

Anouk Van Maris

Music

Adam Kushner: “… So our robots can now actually control an entire building.”

Anouk Van Maris: “… It’s a completely new and unexplored area, and we don’t know what’s going to happen.”

Michelle Lee: “… So what is the best algorithm? What is the best methodology to make these decisions? What are some of the trade offs we’re making when we’re choosing one algorithm over an alternative?”

Sam Woods: Hello, and welcome to the first episode of A Better Tomorrow. I’m Sam, and I’m going to kick off with a bit of context.

I work for Aviva, but this series is not about savings, or retirement, or insurance at all really.

This series is about the future.

Audio from Aviva’s purpose film: “… Because at Aviva, we understand that what we do today creates what’s possible tomorrow. With you today, for a better tomorrow.”

Sam Woods: That’s Aviva’s purpose. And I pitched this series because I wanted to explore what a “better tomorrow” could look like.

Not so much in the small scale, but more the potentially world-changing, world-shaping things that are – or at least could be – on the horizon.

And I wanted to hear from the people who’re involved in those things – people who are working to make them a reality, or to mitigate the harm they might cause, or to educate and change perceptions… or even to sound the alarm bells of potential danger.

Now, I’m not one to shy away from a cliché, so for the first episode, the first thing that came to my mind was robots.

If I think about the future, I tend to think about robots. I don’t quite know what that says about me. I’m a simple man, it seems.

Now I’m talking about capital ‘R’ Robots. Robots like the ones that we were promised in the old World’s Fair films from the middle of the last century. You know the ones.

Archive audio:

“To help us get a glimpse into the future of this unfinished world of ours, there has been created a thought provoking exhibit of the developments ahead of us…”

“Yes Roll-oh the robot, the chromium plated butler is just a daydream after all. But not so Roll-oh’s little brother and sister robots! The millions of small mechanical servants that never ask for afternoons off… one little robot for example always remembers to serve drinks when it sees anyone walking around with a thirst…”

“Imagine if you can, an electronic brain operating at millionths of a second speeds”

“The greater and better world of tomorrow!”

Sam Woods: Now obviously that 'world of tomorrow' never actually came to pass, or at least it hasn’t yet, and we don’t all have a “Roll-oh the robot” to take care of all the chores at the push of a button.

It seems things went in a slightly different direction. Less hardware, and more software. I think most of us have an AI assistant in our pockets, on our wrists, or on our shelves… I’m hesitating to say their names because I don’t want them chipping in on this recording. You know the ones I mean.

Siri activation noise

Sam Woods: But still, I was curious. Could actual physical robots still change the world? Could they catch up to those 80-year-old expectations? Could we still have a ‘better tomorrow’ coming our way, aided by robots? Or is the idea of physical robot assistants as dated as those videos, with our AI assistants being all we get, or all we need?

So, towards the end of 2019, we went and had some conversations with people who could help provide some answers to those questions.

Sam Pendon: “Can you introduce yourself to me, please? … Sounds good, seems alright…”

Ben Moss: “1, 2, 3, 4, 5… Yeah, it’s on there. Can I just get your name and official title?”

Sam Woods: Again, this was back at the end of 2019. And 2019 turned into 2020… and the world actually, properly changed, almost overnight, in a way we didn’t imagine… and it wasn’t caused by robots.

News broadcaster: “Chinese authorities have launched an investigation into a mysterious viral pneumonia which has infected dozens of people in the central city of Wuhan…”

Sam Woods:  … and so we put this series aside.

Now though… now 2020 has given way to 2021 and it’s probably a bit easier to think about the future – that potential better tomorrow – without a laser focus on the global pandemic.

I feel like now’s the time to come back to robots and the recordings we’d made.

When I think about robots right now, robots actually existing in the world today, rather than in old archive films where it’s obviously just a person in a costume, I think of factory floors.

Big, metal arms, bolted to the floor. Swinging around, piecing together a car, or an aeroplane engine, or some other piece of heavy equipment.

Big, dumb machines that don’t look anything like a person, designed and programmed for single repetitive tasks.

Now don’t get me wrong, those industrial robots have changed the world in a bunch of ways. The International Federation of Robotics says that there were about one point six million of them working in factories around the world last year. That’s a lot of robots.

We have robot vacuum cleaners, I suppose, and we’re getting closer every day to properly having driverless cars on the roads… I guess they’re a kind of robot too, in a way, but again, they’re single purpose and not really what I would think of when I think about futuristic ‘robots’.

So are they the only ones around right now? How are developments in AI being applied? Are there any smarter, more general purpose robots?

Are we getting closer to Roll-oh?

Adam Kushner: “… We sell humanoid robots. We develop software for humanoid robots… We are also in the process of developing digital robots, or avatars, and holographic robots that you can interact with and chat with in real-time.”

Sam Woods: That’s Adam Kushner. Adam founded Robots of London.

Robots of London rent and sell robots that are way more like what I was hoping for.

In fact, they might be more like those old visions of the future than I expected. 

Adam Kushner: “The business really started when I went on a trip to Japan and saw a robot… and I was completely fascinated, and it kind of pretty much changed my life.

“I just believed that there were so many real-life functionalities that robots could do, and I felt that… robots would become essentially a major part of our lives in the future.

“I would say that a majority of the robots that we sell now are used as receptionists.”

Sam Woods: So rather than heavy machinery and production lines, the robots that Robots of London deal with are being used for things that I would have only thought about people doing. Things that you would think need a human touch.

Like customer service and hospitality.

Adam Kushner: “…So they are used as full-on receptionists in various offices. We also supply robots to a lot of museums, airports.

“I think it's changing a little bit as software becomes more advanced, so I think over the last couple of years we've sold a lot of receptionists and developed software, and I think that's now increased really because of the functionality that we can actually assign to the robot.

“So our robots can now actually control an entire building. You know, from being a receptionist, but the robot can actually control the heating. The lighting… doors. Really any smart device, and that’s added a lot of good functionality which previously was not so easy to do…”

Sam Woods: It’s interesting, given the last year and how it has changed the world – or at least how I look at the world – that I can see far, far more use for robots in those kinds of roles than I would have imagined, back when we first spoke to Adam before the pandemic.

Especially when it comes to big transport hubs.

Adam Kushner: “Surprisingly actually there’s very little maintenance involved. I mean Pepper, in particular, we’ve had running full time for up to 18 hours a day at several locations. We've even got one at Eurostar, in the departure lounge, and that is working full time, has been for the last sixteen or seventeen months without any downtime.”

Sam Woods: The kind of viruses that a robot could reasonably come down with aren’t ones they could pass on to travellers, you know?

That being said, these robots are still much more human than those factory robots I mentioned.

Looking at Pepper, Adam’s favourite model of robot and the model that was being used at St Pancras, it’s a glossy white figure with big anime eyes, and rolls around on concealed wheels. It’s actually kind of cute. I’ll put a link in the show notes so you can see what I’m talking about.

Pepper is about 120 centimetres tall, or about four feet. It’s aware of its environment and it can navigate around it. Pepper can recognise basic human emotions through facial recognition and change its behaviour to fit. The Pepper that was at St Pancras could actually pose for selfies with travellers as well as direct them and answer questions, and has a big touch screen on its chest to display information for people.

To be honest, I think Pepper kind of beats that 1940s fictional robot Roll-oh in a few ways.

Given their close interaction with people, we were curious about how reliable they were – the chances of bugs causing unwanted behaviour or problems.

Adam Kushner: “I'd like to think our developers do a good job when they're developing the software, and we test it thoroughly before it goes on site. We've never had a debugging issue to deal with since we started. It is possible, but I'd like to think we're pretty thorough. So when it goes on site, it really is ready to go.”

Sam Woods: It doesn’t feel like we’re at the stage yet where we need Isaac Asimov’s three laws coded into our robots…

Isaac Asimov: The first law is as follows: a robot may not harm a human being or, through inaction, allow a human being to come to harm.

“Number two: A robot must obey orders given it by qualified personnel, unless those orders violate rule number one. In other words, a robot cannot be ordered to kill a human being.

“Rule number three: a robot must protect its own existence – after all, it’s an expensive piece of equipment – unless that violates rule one or two. A robot must cheerfully go into self-destruction if it is in order to follow an order, or to save a human life…”

Sam Woods: As a side note, this is the second piece of archive footage I’ve used where they pronounce ‘robot’ as ‘robit’, and I am really starting to wonder if I’m saying it properly…

But anyway, that’s when we turned towards the future. Sci-fi ‘robit’ dystopias aside, we were curious to ask: can we expect robots to get more embedded in our lives in the future, in places that we don’t even think about right now?

Adam Kushner: “I don't think it's even the future. It's now. I mean, I was at an exhibition in Japan and I saw a robot playing against a human in chess and it's quite amazing to watch because you are using effectively an industrial robot to play a game that is way superior than a human. 

“I just think that robots are going to play such a huge part in our lives. Whether it’s in the home, which I think is coming – it’s still very low key at the moment – I think in the office and business environment. I think almost in all parts of our lives I think we’ll see robots suddenly becoming a major part.

“More and more people will see, when they travel… I think you’ll have robots guiding people to the correct place, telling them what their gate numbers are, all using voice recognition.

“I think you’ll see a lot more… and they are taking part in museums, we’ve supplied museums with robots that can be interactive and actually tell people about the exhibits, and again I think that will become more commonplace.

“Retail I think we’re going to see major changes, so not only in the backend warehouse but I think actually in stores themselves, you’ll end up having robots that will tell customers where items are, where they can find them, is there stock availability, if you have an issue…

“I think very much… look at hotels, I think you'll see people with personalised concierge robots in their rooms, and I just think care homes as well. You know, I think they will become a huge part of our lives in the very near future.”

Sam Woods: Care homes stuck with me. To my mind, there aren’t many jobs more “human” than caring for other people.

Now, I don’t think Adam meant care homes staffed only by robots – he was talking more about the idea of co-bots: robots that are there to support and work alongside humans rather than replace them. Still, the idea of care homes staffed by robots, run by AI, does kind of stick with me.

It might be a lack of imagination or understanding on my part, but it feels a bit… unsettling maybe?

I think about that quote, that “a society can be judged by how it treats its most vulnerable members”, and if we delegated that treatment to machines…

Anouk Van Maris: “Hi, my name is Anouk Van Maris and I'm a PhD student here at the Robotics lab.

“I am investigating the effect that robots and their behaviours can have on people when they are interacting with them and whether these effects have ethical consequences or not.”

Sam Woods: Thankfully, Anouk Van Maris is studying something directly related to this at the Bristol Robotics Lab. The idea that people like Anouk are dedicating themselves to researching the ethical implications of robots, especially in care, makes me feel a lot better about it.

Anouk Van Maris: “The title of the project is Socrates, which stands for Social Cognitive Robot Agents in the European Society. It's a mouthful!

“We focus on interaction quality between robots and older adults. 

“The aim of the project is to improve the quality of life for older adults, since older adults are a target group that can benefit from robots a lot since the number of people that require care is increasing, but the number of people that can actually provide care is not.

“So there is a gap, and robots cannot necessarily close that gap, but they can be helpful and supportive there.”

Sam Woods: So again, we’re not really talking about wholesale replacing human carers, we’re talking about supportive robotic technology. Even still, there’s a lot to consider from an ethical stance.

Anouk Van Maris: “So I am looking at how older adults respond to robots, whether there are things that we should be concerned about.

“For example, do they become attached to these robots? If yes, does that mean they will over-trust the robot, and will this result in situations where they become hurt because of that?”

Sam Woods: Maybe Asimov was on to something with that first law...

Anouk Van Maris: “Also, if a robot would show emotions during interactions because that makes for a more pleasant conversation, what is the effect of that? Will they actually believe that the robot experiences an emotion? And will they behave differently depending on that?”

Sam Woods: This might appear silly at first, but us humans… we bond with anything. I’ve apologised to a Roomba more than once. Not even joking. It’s basically part of the family.

We’re not just talking about humanoid robots though, like Pepper and the others we met through Robots of London.

When we visited the robotics lab in Bristol, we saw some amazing bits of technology of all different shapes and sizes – so what should we have in mind when we think about these caring robots?

Anouk Van Maris: “So when I think about a social robot, I think about a machine that can support you to improve your quality of life, and it will do that through interaction, both physically and verbally.

“What does it look like? That can really depend. So we have humanoid robots that look like a human, have a head, two arms, two legs…

“These are useful robots, but it's also possible to have social robots that look like pets – there are different versions… the PARO robot looks like a baby seal. It's proven to be very helpful for patients with dementia, to lower their stress levels for example. There are robot dogs, and they can provide the benefits of having a pet, but not the disadvantages of being allergic and needing to care for it. So it doesn't matter if you forget to give it food, for example.”

Sam Woods: I’ll be totally honest, I’m only in my 30s, but I would love to have one of these robot dogs... or a robot baby seal.

But how far away are we from these kind of robots being out in the world – am I going to have to wait until I’m old enough to need care for these robots to be out there?

Anouk Van Maris: “If you think about a social robot that can both physically and verbally support you, assist you, be with you 24/7…That's relatively far [away]. 

“For example, for the robot to be able to move within an apartment, and for every apartment to be different. The technology behind mapping is definitely improving, but combining everything together is really difficult. Usually the robots that are physically more capable, for example in supporting a person, are not that great at verbal interaction, or they look very robust and sturdy and don't have the aesthetics that are appreciated in a robot. So, in my opinion, that would be quite a while.”

Sam Woods: So really, we’re still a way away from having our own social robots. And that’s probably for the best, because the work on the ethics is still underway, and there’s a lot left to do.

Anouk Van Maris: “The thing is, it’s a completely unexplored area and we don’t know what’s going to happen, so we have to try and establish that. So there will be many benefits from these robots, and I do believe that this is the way to go to improve everyone’s quality of life.

“But at the same time, we have to make sure that happens in a responsible way. So my particular focus and interest is on what effect these robots have on people, what their behaviours have on people.

“For example, I mentioned trust, that people might trust the robot too much. If it shows emotions and interacts in a very human-like way, then people might expect that the robot has the same abilities that a human has. Right now, we’re definitely not at that level, so there might be misplaced trust which can result in an older person expecting that a robot will catch them when they stumble, which is not the case, which might even result in the robot falling on top of them if they grab it anyway.

“There’s also the ethics of explainability and transparency. If something would go wrong – and with each technology it is bound to happen at some point – how do we make sure that we can recover why it went wrong so it doesn’t happen another time, and the process of thinking…

“So a very popular topic right now is deep learning – you put something in, the machine learns something, and you get the wanted output – but it’s a black box as to how the machine learns this. If we use this in robotics, which is already happening, the danger is that the robot learns to make decisions in a certain way that has an unwanted outcome, and if that is the case then we don’t know where it went wrong… so therefore explainability and transparency about the decision-making process in these robots is very important.”
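For the more technically minded, here’s a rough idea of what recording a robot’s decision-making for later audit could look like in code. It’s purely illustrative – the class names, fields and the example “care robot” scenario are invented for this sketch, not taken from any real robot platform – but it shows the basic principle: log every decision together with the inputs and options behind it, so someone can go back and reconstruct why it happened.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical sketch only: an append-only "decision log" a robot could keep,
# so that if something goes wrong we can reconstruct what it perceived,
# what it decided, and which software made the call.
# None of these names come from a real robot platform.

@dataclass
class DecisionRecord:
    timestamp: float          # when the decision was made
    sensor_summary: dict      # what the robot perceived at the time
    options_considered: list  # the actions it could have taken
    action_chosen: str        # the action it actually took
    confidence: float         # how confident the model was
    model_version: str        # which model/software version produced the decision

class DecisionLogger:
    """Writes each decision as one JSON line in an append-only file."""

    def __init__(self, path: str = "decision_log.jsonl"):
        self.path = path

    def record(self, record: DecisionRecord) -> None:
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

# Invented example: a care robot deciding whether to fetch help.
logger = DecisionLogger()
logger.record(DecisionRecord(
    timestamp=time.time(),
    sensor_summary={"person_detected": True, "person_on_floor": True},
    options_considered=["do_nothing", "ask_if_ok", "alert_staff"],
    action_chosen="alert_staff",
    confidence=0.87,
    model_version="demo-0.1",
))
```

A real system would need far more than this – secure, tamper-evident storage, agreed formats, privacy safeguards – but even a toy version makes the decision process something you can go back and inspect, which is the point Anouk is making about explainability.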

Sam Woods: The thing that I’m taking away from the conversations with Adam and Anouk – other than the fact that robots really are a thing that could appear more and more in society, whether that’s right now with Pepper and its equivalents in museums and airports and shops, or further down the line with supportive, social robots that live with you and care for you – is that I’m actually thinking more about AI than robotics.

Robotics feels like the physical engineering problem – the hardware that lets the software interact with a physical space in a physical way – and that’s not to dismiss it as a small thing to solve. It isn’t; it’s a massive, massive challenge. But most of the big questions and possibilities I’m actually curious about lie in how we program these machines so that they can make decisions, and act on them, beyond what’s directly programmed into them – unlike a factory robot.

One of the issues with that ability for robots to make their own decisions though is something that Anouk mentioned, and that’s the idea of a ‘black box’.

The concern there is that we have decisions being made by AI, by algorithms, that aren’t transparent or understandable to us. Now, that’s not a huge issue if the algorithm is just governing something like… which YouTube videos are being put up on your feed, or what shows up on the first page of Google results for a search (although, don’t get me wrong, both of those things can end up problematic…). But it could be a huge issue when it comes down to how a care home robot behaves, or how a driverless car makes split-second decisions in an accident, or whether loan applications are accepted, or who gets hired for a job.

Michelle Lee: “So algorithms are currently being used to make predictions about a lot of things… so it will decide which loans are most likely to default. It will try to tell us who we should hire, who we should recruit… and these are all judgments and at the moment that can be made by humans or algorithms.

“The difference though between the algorithm and the human is that the algorithm can learn patterns in a much larger data set, and a much greater volume of information, than humans can process, and because of that, some of the judgments can seem difficult to understand and a bit… non-transparent. And that is, I think, the main challenge that people have when looking at algorithmic judgments.”

Sam Woods: This is Michelle Lee. Michelle is a PhD candidate at Cambridge University, and her research is ‘Context and fairness in machine learning’. This is an Aviva sponsored PhD project that we support through our partnership with Cambridge University.

We spoke to Michelle a while ago for another Aviva podcast series – Quantum – based around data science and that partnership. I’d recommend going to listen to that series if data science is something you’re interested in – again, I’ll put a link in the show notes - but Michelle’s research totally speaks to that black box concern that Anouk mentioned and – honestly, this isn’t a corporate line I’ve been asked to put in here…

I’m really glad that Aviva is sponsoring this work and thinking carefully about fairness and how we can ensure there’s no unfair bias in the algorithms that we might use.

Michelle Lee: “I was originally born in South Korea. I did my undergraduate degree at Stanford. I studied political science and symbolic systems, so my concentration was in decision-making and rationality – how do humans make decisions differently than machines do? So it was a combination of computer science, statistics, neuroscience, behavioural biology, etc…

“Then I worked in strategy consulting for a year, and then I've been working in risk analytics consulting, where I focused on building AI products for financial services companies, and then more recently I switched to advising on new risks introduced by AI to financial services companies.

“And then I decided to pursue a one-year Masters at Oxford in Social Data Science to see whether or not this route is right for me. Like a trial period before I commit to three additional years of PhD… and in my client work I've found that there is such a knowledge gap in how to ensure that algorithms are safe to scale, make those ethical decisions, and embody the values of the company.

“I figured the only way I could really tackle it is in academia, with a lot of like-minded researchers.”

Sam Woods: Michelle talks about how some researchers are currently looking at this. There’s an idea that we could mathematically formalise fairness: put together a set of rules that an algorithm or AI follows that makes it inherently fair, in a way that’s understandable and auditable, to ensure that there’s no hidden bias creeping in and, if there is, that it’s fixable.
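To make “mathematically formalising fairness” a little more concrete, here is one deliberately simple example of the kind of rule researchers work with: a “demographic parity” gap, the difference in positive-outcome rates between groups. The function and data below are invented for illustration, and this is only one of many competing definitions of fairness – which is exactly the point Michelle makes next.

```python
# Illustrative sketch of one simple, auditable fairness measure:
# the "demographic parity" gap, i.e. the difference in positive-outcome
# rates between groups. The data below is invented for the example.

def demographic_parity_gap(decisions, groups):
    """decisions: list of 0/1 outcomes (e.g. 1 = loan approved);
    groups: list of group labels, same length as decisions."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(rates)  # positive-outcome rate per group: A = 0.6, B = 0.4
print(gap)    # 0.2 here; 0.0 would mean both groups get positive outcomes at the same rate
```

With these made-up numbers, group A gets a positive outcome 60% of the time and group B 40%, so the gap is 0.2 – a number you can track, audit and argue about.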

Michelle Lee: “The challenge that I have is that fairness isn't really a binary concept.

“So we shouldn’t be looking at fixing fairness or fixing the bias, we should be looking at what are some of the trade-offs that we're making when we're choosing one algorithm over an alternative.

“A machine learning algorithm can be both fairer and more accurate than a human decision making process… so what is the best algorithm? What is the best methodology to make these decisions for that decision maker?

“So my research is introducing the trade-off analysis of the competing objectives to make it much more clear to the decision maker what's at stake.”

Sam Woods: So if I’m understanding Michelle correctly – and for the record that’s by no means a given… she is so, so much smarter than I am – we’re not saying that we need to babysit those decisions, to check everything before the algorithm can act on anything. That probably wouldn’t be ideal. It’s more to do with making sure that we know what some of the key considerations are, what trade-offs might be made in the algorithms that we choose, and whether those are right – whether they reflect the right values.
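As a rough illustration of what surfacing those trade-offs might look like – and this is a toy sketch with invented numbers, not Michelle’s actual methodology – imagine laying candidate models side by side on the competing objectives and leaving the judgement call to a person:

```python
# Toy illustration only: put candidate models side by side on competing
# objectives so a human can see the trade-off. All numbers are invented.

candidates = {
    "model_a": {"accuracy": 0.91, "fairness_gap": 0.12},
    "model_b": {"accuracy": 0.88, "fairness_gap": 0.03},
    "model_c": {"accuracy": 0.85, "fairness_gap": 0.01},
}

print(f"{'model':<10}{'accuracy':>10}{'fairness gap':>15}")
for name, scores in candidates.items():
    print(f"{name:<10}{scores['accuracy']:>10.2f}{scores['fairness_gap']:>15.2f}")

# The code doesn't pick a winner. It just makes the trade-off visible:
# model_a is the most accurate but has the largest gap between groups;
# model_c is the "fairest" by this one measure but the least accurate.
# Which one reflects the right values is the human's call.
```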

The question that comes to my mind though is… Can we ever get to a point where AI is able to make perfect decisions?

Michelle Lee: “I don't think there is really such thing as a perfect decision because even if you ask… there has been a lot of research where they've asked humans what is the fair decision in this scenario, that scenario, and in each of those cases there was considerable disagreement even among people on what it means to make a fair decision, so that is exactly the direction that my research is aiming to go in.

“Where instead of trying to formalise in one mathematical formula everything that it means to be ‘fair’, try to reveal some of the key considerations and trade-offs in each algorithm so that a human can then make that judgement call of what is a fair decision in each case.”

Sam Woods: So, to really sum up the journey I’ve been on when looking at the future of robots and AI, whether they can lead to a better tomorrow… I think it really comes down to people.

People like Adam Kushner and the others at Robots of London, who are working to get robots out into the world right now in new and interesting ways, paving the way for new, maybe unexpected robots of the future.

People like Anouk Van Maris, who are looking at the ethical considerations for that use of robots, studying the effects that they can have on people when they’re interacting with us – to make sure that the technology doesn’t overtake us, that they’re a positive, that they don’t cause more harm than good.

People like Michelle Lee, who are dedicating themselves to the fundamentals of how AI and algorithms can make fair decisions, so that if and when decisions are being made by machines, they can hold on to our values – they can be fair and unbiased in ways that, unfortunately, we as a species can struggle with ourselves – and we know how and why those machines came to make those choices.

And maybe people like you and me, who are thinking about the potential implications, and maybe thinking about how they can get more involved themselves.

I told you, right at the beginning, that I’m not one to shy away from a cliché.

So, I think I’ll close on another one.

The future is up to us.

To find out more about any of our guests from this episode, take a look at the show notes on Soundcloud or on aviva.com.

If you want to hear more from Aviva and the things that we’ve been working on or talking about, you can follow us on social media, visit our website, or subscribe to our other podcast series – the Aviva Podcast – available on Spotify, Soundcloud, or a heap of other places that you find your podcasts.

If you’d like to see potential opportunities at Aviva, whether that’s in data science, technology, apprenticeships or graduate opportunities - or any other roles that we have available in the UK, you can take a look at careers.aviva.co.uk – that was a lot of dots...

In the meantime, take care of yourself and take care of each other.

Thanks for listening.

More information:

A Better Tomorrow is the podcast in which we look to the future – to the potentially world-changing technologies, opportunities and events that are, maybe, just over the horizon – and talk to the people who are working to make that future better for everyone.

You can subscribe to A Better Tomorrow on... 

Soundcloud,

on Spotify,

on Apple Podcasts,

... or wherever else you find your podcasts.

Adam Kushner & Robots of London: https://www.robotsoflondon.com

Pepper the humanoid robot: https://www.softbankrobotics.com/emea/en/pepper

Bristol Robotics Lab: https://www.bristolroboticslab.com

Anouk Van Maris: We caught up with Anouk while editing this episode, and she’s now graduated so she’s no longer a PhD student, and she is now working as a postdoctoral research assistant for the RoboTIPS project working on implementing an ‘ethical black box’. You can see more about the project here: https://www.robotips.co.uk/ethical-black-box

Aviva Quantum podcast: https://soundcloud.com/aviva_plc/sets/aviva-quantum

Michelle Lee: https://michellesengahlee.com/publications/

*The first use of the word robot was in Rossumovi Univerzální Roboti (Rossum's Universal Robots), a play by Karel Čapek which premiered in 1921: https://en.wikipedia.org/wiki/R.U.R.
