May 26, 2021

Robots and AI That Will Support Us in the Future

Nowadays, the word AI can be heard on a daily basis. Although the term has become commonplace, many of us would be hard pressed to say what AI actually is or to claim a firm understanding of it. In this article, we will explain AI through an interview with a key figure in the field.

For this second installment, IT journalist Hiromi Yuzuki interviewed Felix von Drigalski, who studies combining robots with AI at OMRON SINIC X Corporation.*¹ Ms. Yuzuki asked Dr. von Drigalski about the immediate challenges facing robots and AI at a time when the world of science fiction seems at once already upon us and still far away.

*¹ A strategic base that creates OMRON's vision of "near-future design."
https://www.omron.com/sinicx/

 

AI Learns to "Sense Like a Human" and Robots Evolve Further

Hiromi Yuzuki (Yuzuki): This interview's theme is "Robots and AI." When I hear the word "robot," I imagine something out of science fiction, which has limbs and can communicate with us, but how are robots actually defined?

Felix von Drigalski (Felix): The word "robot" can mean different things. In movies and TV shows, robots talk to us and are almost human. But today, the most common robots are industrial arms used in factories: robotic arms with six joints that can precisely reproduce programmed movements.

From a technical perspective, anything that moves automatically can be called a robot. We could say that our washing machines and smartphones are robots. In my perception, robots are machines that can move autonomously and interact with their environment.

Dr. Felix von Drigalski of OMRON SINIC X

Yuzuki: How much can robots do right now, with current technology?

Felix: That's also a difficult question. When it comes to physical tasks, such as picking something up and moving it, the industrial robots I mentioned earlier already have that ability. Robots are very good at accurately repeating the same movements hundreds or even tens of thousands of times. In fact, they are better at it than humans - they are stronger, faster and much more precise. But current robots only work well when we tell them exactly what to do.

One thing that is still very difficult for robots is adjusting to unforeseen situations and taking appropriate action. They have no "common sense" and blindly follow our instructions to the letter. Unless you program a robot very carefully, it will make mistakes that no human ever would. For example, if an object slips out of a person's hand, they stop and pick it up, but a robot might not even notice that it dropped anything. No human would need to be told what to do when they drop something, but robots still need these details spelled out for them.
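As a toy illustration of how much of this "common sense" has to be spelled out explicitly, here is a minimal Python sketch of a pick-and-place step that checks whether the part is still held and retries if it was dropped. The `SimulatedGripper` class and its methods are hypothetical stand-ins for a real gripper driver, not OMRON's actual software.

```python
import random

class SimulatedGripper:
    """Stand-in for a real gripper driver: grasps succeed, but the part
    is occasionally 'dropped' in transit to mimic an unforeseen event."""

    def __init__(self, drop_probability=0.3):
        self.drop_probability = drop_probability
        self.holding = False

    def grasp(self):
        self.holding = True

    def move_to(self, pose):
        # During the move, the part may slip out of the fingers.
        if self.holding and random.random() < self.drop_probability:
            self.holding = False

    def release(self):
        self.holding = False

    def is_holding(self):
        # A real system might infer this from finger width or force sensing.
        return self.holding


def place_with_recovery(gripper, pick_pose, place_pose, max_attempts=3):
    """Pick at pick_pose and place at place_pose, retrying if the part
    is lost on the way -- the kind of check a human would never need."""
    for attempt in range(1, max_attempts + 1):
        gripper.move_to(pick_pose)
        gripper.grasp()
        gripper.move_to(place_pose)
        if gripper.is_holding():          # did the part survive the transfer?
            gripper.release()
            print(f"placed successfully on attempt {attempt}")
            return True
        print(f"attempt {attempt}: part was dropped, retrying")
    return False


if __name__ == "__main__":
    random.seed(0)
    place_with_recovery(SimulatedGripper(), pick_pose="bin", place_pose="tray")
```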

Another challenge for robots is behaviors that are difficult to describe and program, such as tasks for which we rely on our senses and that are hard to put into words. Imagine teaching someone to ride a bike - how far can you get by explaining it to them? At some point, they just have to try it out themselves and feel the balance.

If you tell a human "put this rubber belt on these two wheels", they can do it easily, but if you try to program a robot to do it, it turns out to be surprisingly complicated. Where should it grasp, where should it move, how much should it pull, how does it feel when the belt slips into the groove? Humans have learned how to control their hands and just "know", but how do you explain these details? Daily life is full of such movements that are easy to demonstrate but difficult to describe.


Yuzuki: What will combining AI with robots enable us to do about the challenges you just described?

Felix: It will help us solve both of those problems. First, it will allow robots to use their senses better, so that we can teach them tasks that are hard to program, much like we would teach other people. Second, it will help robots gain a more "common-sense" understanding of the world around them and adapt to changes more flexibly.

Even for actions that are difficult to explain in words, I believe robots will be able to accomplish a wide variety of things if they have senses similar to ours, understand their environment, and use AI to learn.

Yuzuki: What sort of technological evolution is required to realize such a combination of robots and AI?

Felix: First, we will need to equip robots with sensing technology that lets them perceive the world the way humans do, so that they can understand their surroundings and their situation from sensory information, much as humans do with their five senses.

Conventional robots mainly sense their surroundings through images from their cameras, but there is hardly a direct link between the information from the robot's "eye" (camera) and its body parts (motors and joints). It has been as if robots were moving under instructions based on someone else's visual information, like trying to hit a piñata blindfolded while relying on what others are shouting. Therefore, we need technology that combines several sensing modalities, such as tactile and visual sensors, and makes sense of them together.
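As a rough illustration of what combining modalities can look like in code, the sketch below stacks a camera-based object pose, wrist force-torque readings, and joint angles into one observation vector that a controller or learned policy could consume. The function name, the normalization constants, and the sample readings are all made up for illustration; this is not OMRON's actual pipeline.

```python
import numpy as np

def fuse_observations(object_pose_cam, wrist_wrench, joint_angles):
    """Concatenate heterogeneous sensor signals into one observation vector.

    object_pose_cam: (x, y, z, roll, pitch, yaw) of the object seen by the camera
    wrist_wrench:    (fx, fy, fz, tx, ty, tz) from a wrist force-torque sensor
    joint_angles:    current joint positions of the arm (proprioception)
    """
    # Normalizing each modality keeps one sensor from dominating a learned policy.
    pose = np.asarray(object_pose_cam) / np.array([1.0, 1.0, 1.0, np.pi, np.pi, np.pi])
    wrench = np.asarray(wrist_wrench) / 50.0   # assume forces/torques up to ~50 N / Nm
    joints = np.asarray(joint_angles) / np.pi
    return np.concatenate([pose, wrench, joints])


# Hypothetical readings: an object 30 cm in front of the camera,
# a light contact force at the wrist, and a 6-joint arm near its home pose.
obs = fuse_observations(
    object_pose_cam=[0.3, 0.0, 0.1, 0.0, 0.0, 1.57],
    wrist_wrench=[0.0, 0.0, -2.5, 0.0, 0.1, 0.0],
    joint_angles=[0.0, -1.57, 1.57, 0.0, 1.2, 0.0],
)
print(obs.shape)   # (18,) -- one vector a controller or policy can act on
```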

Second, we need technology that enables robots to learn actual human movements and reproduce them regardless of the robot's embodiment. For example, we record a video of a person performing the movements and feed the data to an AI so that it can learn. However, since humans and robots have entirely different physical structures (muscles versus motors, number of joints, skin softness, and so on), robots cannot reproduce human movements exactly. One difficulty is to determine the important parts of the demonstration, learn the right movements, and reproduce them with a given robot's embodiment.
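To make the embodiment gap concrete: one simple approach is to keep only the task-relevant part of a demonstration, such as the hand's path relative to the object, and let the robot execute that path with its own kinematics instead of copying the human's joints. The sketch below uses made-up 3-D waypoints and is only a schematic of that idea, not the actual method used at OMRON SINIC X.

```python
import numpy as np

def retarget_demonstration(human_wrist_path, object_pose_human, object_pose_robot):
    """Transfer a demonstrated motion between different embodiments by keeping
    only what matters for the task: the hand's path *relative to the object*.

    Poses are simplified to 3-D positions here; a real pipeline would use full
    6-DoF poses and the robot's own inverse kinematics to execute the result.
    """
    human_wrist_path = np.asarray(human_wrist_path, dtype=float)
    relative_path = human_wrist_path - np.asarray(object_pose_human, dtype=float)
    # Replay the same relative motion around the object as seen by the robot.
    return relative_path + np.asarray(object_pose_robot, dtype=float)


# Made-up demonstration: the human hand approaches a part from 20 cm above.
human_path = [[0.50, 0.00, 0.40],
              [0.50, 0.00, 0.30],
              [0.50, 0.00, 0.22]]
robot_targets = retarget_demonstration(
    human_path,
    object_pose_human=[0.50, 0.00, 0.20],   # part location in the video
    object_pose_robot=[0.10, -0.30, 0.05],  # same part on the robot's table
)
print(robot_targets)  # waypoints the robot would track with its own kinematics
```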

At OMRON SINIC X, we are conducting research to realize these technologies in order to implement them in society.

Yuzuki: Isn't it time-consuming to manually teach robots to move like a human?

Felix: Indeed. As you said, it takes an enormous amount of time and effort to teach robots the fine movements in a sequence of actions, and it is hard to anticipate all the adjustments and reactions that a human would make based on what they sense. Therefore, we are taking a hybrid approach: we teach the robot simple strategies, and then let it learn and improve by training in simulation. For this, we use and develop machine learning techniques.
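The sketch below illustrates this teach-then-refine idea on a deliberately tiny 1-D toy problem, not a real robot or the actual OMRON setup: a hand-coded strategy supplies a baseline action, and a learned correction on top of it is improved by trial and error in a crude simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def scripted_strategy(state):
    """The 'taught' part: a simple proportional move toward the goal."""
    goal = 1.0
    return 0.5 * (goal - state)

def simulate(residual_gain, steps=20):
    """Toy simulation of how close the combined controller gets to the goal."""
    state = 0.0
    for _ in range(steps):
        action = scripted_strategy(state) + residual_gain * (1.0 - state)
        state += 0.4 * action                 # crude stand-in for the plant dynamics
    return -abs(1.0 - state)                  # reward: negative final error

# The 'learning' part: improve the residual term by trial and error in simulation.
best_gain, best_reward = 0.0, simulate(0.0)
for _ in range(200):
    candidate = best_gain + rng.normal(scale=0.1)
    reward = simulate(candidate)
    if reward > best_reward:
        best_gain, best_reward = candidate, reward

print(f"learned residual gain: {best_gain:.3f}, final error: {-best_reward:.4f}")
```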

As an experiment to this end, I am conducting research to make robotic arms learn diabolo juggling (the diabolo is also known as a Chinese yo-yo). It is difficult to explain the movements involved in playing diabolo, such as how to hold the sticks, how to make the diabolo spin, or how to move it. So, for someone who is trying to learn, it is best to try it themselves after seeing an example.

A diabolo being thrown between the robot and the human player

To this end, we built a physics simulation model, recorded a human demonstration, and first let the AI learn from what it sees. Then we use vision and force sensors so that the robot can sense the diabolo and sticks like a human.

We are investigating how robots can learn this challenging task - by reproducing the human player's behavior, considering new strategies, applying them, and evaluating the results of increasingly complex movements, using techniques known as "reinforcement learning" and "transfer learning". What is interesting is that the goals become gradually more complex. Can we ensure that the robot does not forget skills it learned previously? What kinds of tricks can it learn?
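To show the shape of such a curriculum, here is a toy sketch in which a policy is trained on progressively harder goals (illustrative target spin speeds) and re-evaluated on all earlier goals after each stage to check for forgetting. The training and evaluation functions are stand-ins for real reinforcement learning in simulation, not the actual diabolo system.

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical curriculum of diabolo goals, ordered from easy to hard
# (target spin speed in revolutions per second) -- illustrative numbers only.
curriculum = [2.0, 4.0, 6.0, 8.0]

def evaluate(skill_level, goal):
    """Toy stand-in for running the policy in simulation: success rate
    drops as the goal gets harder relative to the current skill."""
    return float(np.clip(1.0 - 0.15 * max(0.0, goal - skill_level), 0.0, 1.0))

def train_on(skill_level, goal, episodes=50):
    """Toy stand-in for reinforcement learning on one goal: skill slowly
    moves toward the level this goal demands."""
    for _ in range(episodes):
        skill_level += 0.02 * (goal - skill_level) + rng.normal(scale=0.01)
    return skill_level

skill = 0.0
for stage, goal in enumerate(curriculum, start=1):
    skill = train_on(skill, goal)
    # Re-evaluate *all* earlier goals too, to check that the robot has not
    # forgotten skills it learned previously.
    scores = {g: round(evaluate(skill, g), 2) for g in curriculum[:stage]}
    print(f"after stage {stage} (goal {goal} rev/s): success rates {scores}")
```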

 

Almighty Robots Are Coming to Our Homes?

Yuzuki: In the case of industrial robots, what kinds of tasks will machine learning let them take on?

Felix: At the moment, I think car assembly lines are the most highly automated in terms of robot usage. But even there, humans are often needed when handling soft or pliable things. For instance, pulling leather or cloth over a car seat requires delicate movements, so humans assemble them manually. In the future, I believe we will first see robots and AI in situations like these, where a skill is difficult to describe, but it can be isolated and learned.

Yuzuki: Robot vacuum cleaners are the AI robots most familiar to us, but are there any others?

Felix: Robot vacuum cleaners are one kind of robot, although not many of them have onboard AI yet. Today, many household tasks that require considerable time and effort are already done by machines, such as washing machines and dishwashers. For robots to permeate the home, we will need AI that can assess its surroundings and their condition, which is the third technology, following the two I described earlier.

For example, laundry-folding robots have been studied worldwide but have yet to be realized. Just like the leather or cloth covers for car seats, textiles are extremely difficult for robots to handle. While the movements involved in folding a shirt or sweater seem simple, it is very difficult to know where to grasp the garment and how to move it to get it into the correct shape. Humans cannot yet teach robots how to fold different kinds of clothes simply by showing them how, and describing the process in a program is practically impossible, so an AI module that learns these steps would be a promising application.

Some AI modules will also need to understand ambiguous and competing objectives - in other words, goal setting. For example, the goal state "folded" differs between a sweater and a pair of trousers. If a user asks a robot to put a glass on a shelf, they may not just mean placing it somewhere on the shelf; they may also expect it to be aligned with the other items there. Dealing with all this information in a structured way is still a difficult research problem.
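One way such an ambiguous instruction could be made concrete is to score candidate placements with a cost that mixes competing objectives, for instance "be near where the user pointed" and "line up with the items already on the shelf". The sketch below does exactly that with made-up positions and weights; it is a hypothetical illustration, not a system described in the interview.

```python
import numpy as np

def shelf_placement_cost(candidate_x, target_x, neighbor_xs,
                         w_target=1.0, w_align=0.5):
    """Score a candidate position for 'put the glass on the shelf'.

    The instruction is ambiguous, so the cost mixes two objectives:
      * be near the spot the user pointed at (target_x), and
      * line up with the items already on the shelf (neighbor_xs).
    The weights are made-up values; tuning them is part of the problem.
    """
    target_term = (candidate_x - target_x) ** 2
    spacing = np.diff(sorted(neighbor_xs + [candidate_x]))
    align_term = np.var(spacing)        # even spacing = well aligned
    return w_target * target_term + w_align * align_term

# Hypothetical shelf: existing items at 0.10 m, 0.30 m and 0.50 m,
# and the user vaguely gestured toward 0.65 m.
candidates = np.linspace(0.55, 0.90, 8)
costs = [shelf_placement_cost(x, 0.65, [0.10, 0.30, 0.50]) for x in candidates]
best = candidates[int(np.argmin(costs))]
print(f"chosen position: {best:.2f} m")  # trades off 'where asked' vs. 'aligned'
```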

When these technologies are developed and combined, we may see robots with advanced autonomy that can perform a variety of useful tasks, such as cleaning, laundry, and tidying up. And as soon as they can do useful things on their own, I am sure that robots will be in our homes.

Robotic automation is still difficult because we have not yet unraveled all the pieces of these tasks, and it takes too much time to teach robots household tasks without supervision. However, if we get robots to learn how humans sense and act by observing humans (as we are doing in our research) and to understand their environment through AI, we will surely be able to introduce robots into our homes. And maybe the robots we see in science fiction and anime will turn out to be a reality after all.

[Summary]
Robots can play an active role in our daily lives by overcoming their shortcomings through AI, and the combination of the two appears genuinely synergistic. Though many hurdles remain before robots handle many household tasks, such a world does not feel like a distant future.


【Profiles】
Felix von Drigalski
Senior Researcher
Robotics Group
OMRON SINIC X Corporation
Dr. von Drigalski graduated from both KIT (Germany) and INSA Lyon (France) in 2013 (Dipl.-Ing. and Ingénieur diplômé in mechanical engineering). In 2018, he completed his doctorate at the Nara Institute of Science and Technology Robotics Laboratory (Informatics). After working for AIST, Siemens, and ThyssenKrupp, he was appointed senior researcher at OMRON SINIC X Corporation in October 2018. He primarily engages in research on robotics, including robotic manipulation, planning, and automated assembly.

IT journalist
Hiromi Yuzuki
Ms. Yuzuki pens Apple-related articles, including tips for using iPads for work, and produces video reports on overseas tech information. She has appeared on "The World Unknown to Matsuko" as an iPhone case expert. Her YouTube channel is called Gadgetouch.