How to investigate when a robot causes an accident – and why it matters


Robots are increasingly shaping our everyday lives. They can be incredibly useful (bionic limbs, robotic lawn mowers, or robots delivering meals to people in quarantine) or just plain entertaining (robot dogs, dancing toys, and acrobatic drones). Imagination is perhaps the only limit to what robots will be able to do in the future.

But what happens when robots don’t do what we want them to do – or in a way that causes harm? For example, what happens if a bionic arm is involved in a car accident?

Robot accidents are becoming a problem for two reasons. First, the increase in the number of robots will naturally lead to an increase in the number of accidents in which they are involved. Second, we’re getting better at building more complex robots. When a robot is more complex, it’s harder to understand why something went wrong.

Most robots run on various forms of artificial intelligence (AI). AIs are capable of making human-like decisions (though objectively those decisions may be good or bad ones). These decisions can involve any number of things, from identifying an object to interpreting language.

AIs are trained to make these decisions for the robot based on information from huge data sets. The AIs are then tested for accuracy (how well they do what we want them to do) before they are given the task.
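
To make that train-then-test cycle concrete, here is a minimal sketch in Python. It is purely illustrative and not from the article: the scikit-learn library, the decision-tree model, the stand-in iris data set, and the accuracy threshold are all assumptions chosen to show the general idea.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in data set; a real robot's AI would learn from far larger, task-specific data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)        # "training" the decision-maker
accuracy = accuracy_score(y_test, model.predict(X_test))      # "testing for accuracy"
print(f"held-out accuracy: {accuracy:.2f}")                   # checked before the AI is given its task
```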

AIs can be designed in different ways. Consider the robot vacuum cleaner as an example. It could be designed to redirect itself in a random direction whenever it bumps into a surface. Alternatively, it could be designed to map its surroundings to find obstacles, cover the entire floor area, and return to its charging station. While the first vacuum takes in input only from its sensors, the second feeds those sensor inputs into an internal mapping system. In both cases, the AI takes in information and makes a decision based on it.
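
A minimal sketch of those two design choices, in Python, might look like the following. None of this is from the article: the function names, the grid-based occupancy map, and the toy usage are assumptions made only to contrast the "bounce randomly" approach with the "build a map and plan" approach.

```python
import random

def random_bounce_heading(current_heading_deg, bumped):
    """Strategy 1: keep the current heading until a bump, then pick a random new one."""
    if bumped:
        return random.uniform(0.0, 360.0)   # redirect in an arbitrary direction
    return current_heading_deg

def mapping_next_target(occupancy, charger_cell):
    """Strategy 2: track what has been sensed in an internal map, then plan from it.

    `occupancy` maps grid cells to "obstacle", "cleaned" or "unseen".
    """
    unseen = [cell for cell, state in occupancy.items() if state == "unseen"]
    return unseen[0] if unseen else charger_cell   # everything covered? head to the charger

# Toy usage: the mapping vacuum picks the next unseen cell; the bouncing one just turns.
occupancy = {(0, 0): "cleaned", (0, 1): "obstacle", (1, 0): "unseen", (1, 1): "unseen"}
print(random_bounce_heading(90.0, bumped=True))
print(mapping_next_target(occupancy, charger_cell=(0, 0)))
```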

The more complex things a robot can do, the more types of information it has to interpret. It may also have to evaluate multiple sources of the same type of data: in the case of acoustic data, for example, a live voice, a radio, and the wind.

As robots become more complex and able to respond to a variety of information, it becomes even more important to determine which information the robot acted on, particularly when harm is caused.

Accidents happen

As with any product, things can and do go wrong with robots. Sometimes the problem is internal, such as the robot failing to recognize a voice command. Sometimes it is external, such as the robot’s sensor being damaged. And sometimes it can be both, such as a robot that was not designed to work on carpets and “stumbles”. When investigating robot accidents, all possible causes must be considered.

While it is inconvenient if the robot is damaged when something goes wrong, we are far more concerned when a robot harms a person, or fails to mitigate harm to a person. For example, if a bionic arm fails to grasp a hot drink and knocks it onto its owner; or if a care robot fails to register a distress call when a frail user has fallen.

Why is investigating robot accidents different from investigating accidents involving humans? Notably, robots have no motives. We want to know why a robot made the decision it did, based on the particular set of inputs it had.

In the bionic arm example, was it a miscommunication between the user and the hand? Did the robot confuse several signals? Lock up unexpectedly? In the example of the person falling, could the robot not “hear” the call for help over a noisy fan? Or did it have trouble interpreting the user’s speech?

The black box

Robot accident investigation has a key advantage over human accident investigation: there is the potential for a built-in witness. Commercial airliners carry a similar witness: the black box, built to withstand plane crashes and record information about why the crash happened. This information is incredibly valuable, not only for understanding incidents, but also for preventing them from happening again.

As part of RoboTIPS, a project focused on responsible innovation for social robots (robots that interact with humans), we created what we call the ethical black box: an internal record of the robot’s inputs and corresponding actions. The ethical black box is designed for each type of robot it inhabits and is constructed to record all information the robot responds to. This can be speech, visual data, or even brainwave activity.
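
As a rough illustration of the idea of pairing inputs with actions, a logger might look something like the Python sketch below. This is not the RoboTIPS implementation: the class name, the JSON-lines file format, and the care-robot usage are assumptions made only to show what "an internal record of inputs and corresponding actions" could mean in code.

```python
import json
import time

class EthicalBlackBox:
    """Illustrative sketch only: an append-only log pairing a robot's inputs
    with the decision and action that followed them."""

    def __init__(self, path="ebb_log.jsonl"):
        self.path = path

    def record(self, inputs, decision, action):
        # One timestamped entry per decision cycle: what the robot sensed,
        # what its AI decided, and what command was actually executed.
        entry = {"timestamp": time.time(), "inputs": inputs,
                 "decision": decision, "action": action}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Hypothetical care-robot cycle: a muffled audio input and the robot's response,
# preserved so investigators can later see exactly what the robot acted on.
ebb = EthicalBlackBox()
ebb.record(inputs={"audio_transcript": "help", "confidence": 0.41},
           decision="possible_distress_call",
           action="ask_user_to_repeat")
```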

We test the ethical black box on a variety of robots in both laboratory and simulated accident conditions. The aim is for the ethical black box to become the standard in robots of all makes and applications.

While the data recorded by the ethical black box would still need to be interpreted in the event of an accident, having that data in the first place is crucial to allowing us to investigate.

The investigation process offers a chance to ensure that the same mistakes do not happen twice. The ethical black box is a way not only to build better robots, but also to innovate responsibly in an exciting and dynamic field.

Keri Grieman, Research Associate, Department of Computer Science, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.
