
The Future of Robotics is Bi-Directional Learning

Robots only get smarter when connected

Mark Anderson

When we think about robots, most of us picture something out of science fiction or the friendly Pepper robot seen at tech events in recent years. But the current reality is not quite that exciting. Many of the industrial robots on the market today focus on automating very simple, repetitive tasks such as packaging, taking inventory and palletizing. These robots are typically pre-programmed and trained to do one specific task, even when a cooperative line of robots in a factory assembles an entire product. A friend of mine who used to train welding robots described it as a form of visual programming in which the robot is taught to mimic a single task. Using a stylus, the robot is taught to follow a set of XYZ coordinates: where the welding lines are, how many areas should be welded, what temperature to maintain, and so on.

At the end of the day, these robots aren’t really autonomous, as they are constrained to that single task. Any variable not accounted for can cause issues with a robot and throw errors into the whole assembly line – they simply aren’t adaptable the way humans are. And if the factory needs to be changed to do something else, robot operators and trainers have to be brought in to retrain and reconfigure all the robots on that production line.

Image sources: Softbank Robotics; TechNavio

The good news is that we are getting better at applying sensor-based machine vision and artificial intelligence (AI) to machine software. That will equip machines with the ability to interpret the surrounding world and adapt to circumstances they were not originally trained for. But real robot intelligence will depend on bi-directional learning, where what any individual robot or device senses and learns can be applied to core algorithms in the cloud and then shared back with all the robots in that category. These contextually aware systems of robots will be interconnected with internet of things (IoT) sensors, other robots and digital ecosystems for AI data processing and model building. And processing all that data will depend on a distributed, hybrid multicloud IT infrastructure with low-latency, secure connectivity for private data exchange.

Download the IoT Digital Infrastructures Whitepaper

This whitepaper highlights the importance of companies transforming their existing digital infrastructure to take advantage of current and future technological opportunities when implementing new digital business models that support the expansion of the IoT industry.


Making robots smarter through IoT and AI

While the field of robotics may have a different focus than the IoT and AI domains, it is the convergence of these technologies and their supporting IT architectures that is infusing greater intelligence into the end devices – in this case, robots. ABI Research coined the term the “Internet of Robotic Things (IoRT)” to describe this type of system, “where intelligent devices can monitor events, fuse sensor data from a variety of sources, use local and distributed ‘intelligence’ to determine a best course of action, and then act to control or manipulate objects in the physical world.”[i] In contrast to a standalone robot programmed to do a single task, IoRT systems are connected at the edge and in the cloud and can exchange data and insights to adapt to changing situations in real time.
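To make that pattern a bit more concrete, here is a minimal sketch in Python of the monitor-fuse-decide-act loop described above. The class and method names (SensorReading, decide, robot.execute and so on) are purely illustrative assumptions, not a real robotics API:

# Illustrative sketch of the IoRT pattern: monitor events, fuse sensor data
# from several sources, use local intelligence to pick an action, then act.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    source: str                  # e.g. "lidar", "camera", "factory_iot_gateway"
    obstacle_distance_m: float   # distance to the nearest obstacle seen by this sensor

def fuse(readings):
    # Fuse sensor data from a variety of sources into one estimate (a simple mean here).
    return mean(r.obstacle_distance_m for r in readings)

def decide(obstacle_distance_m):
    # Local "reflex" intelligence: choose an action without a round trip to the cloud.
    if obstacle_distance_m < 0.5:
        return "stop"
    if obstacle_distance_m < 2.0:
        return "slow_down"
    return "continue"

def control_loop(robot, sensors):
    # One iteration of the monitor -> fuse -> decide -> act cycle.
    readings = [s.read() for s in sensors]   # monitor events
    distance = fuse(readings)                # fuse sensor data
    action = decide(distance)                # local intelligence determines a course of action
    robot.execute(action)                    # act on the physical world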

The diagram below illustrates the core elements in a connected robot system. The IoT is like the five human senses perceiving the world, AI in the cloud is the brain processing that information and making sense of it, and the robot is the physical body taking action based on those learnings. In some cases, we react without thinking, such as not touching a hot stove because our body has already learned to associate pain with it. That would be akin to the IoRT, where the physical robots have already acquired the intelligence needed to adapt very quickly to certain situations. At other times, when we encounter new situations, we need to think and learn before taking action, such as learning to stop at a red light when driving a car. That is where cloud robotics comes in, which is really about improving the core AI models in the cloud with insights gathered by the connected robots at the edge – the IoRT.

Source: Derived from “The Internet of Robotic Things: A review of concept, added value and applications” [ii]

Bi-directional learning example – delivery robots

One example of how this might work is a fleet of autonomous delivery robots. Each delivery robot has its own sensors and AI computer for operating and communicating with other robots in the fleet and with the IoRT platform at the edge. Over time, it may become clear that a particular intersection is a problem for several of the robots in the fleet. Maybe the traffic light changes too fast and they suddenly stop and throw an error, or unplanned road work disrupts the usual pathway. All of the data around that error – video feeds from the sensors, insights from the edge nodes and so on – is sent to the core AI algorithms in the cloud to train them to recognize these new edge cases and account for these situations. The updated AI algorithm is then shipped back to the entire robot fleet as a software update so the robots can better navigate that intersection going forward.
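A minimal sketch of what that round trip could look like in code is below. The endpoint URLs, payload fields and update mechanism are all assumptions for illustration; a real fleet would use its vendor’s own APIs and a proper model registry.

# Hypothetical edge-to-cloud error reporting and cloud-to-edge model update.
import json
import urllib.request

CLOUD_TRAINING_ENDPOINT = "https://example.com/fleet/edge-cases"     # assumed URL
MODEL_REGISTRY_ENDPOINT = "https://example.com/fleet/models/latest"  # assumed URL

def report_edge_case(robot_id, intersection_id, sensor_snapshot):
    # Edge -> cloud: send the context around an error so the core model can be retrained.
    payload = json.dumps({
        "robot_id": robot_id,
        "intersection_id": intersection_id,
        "sensor_snapshot": sensor_snapshot,  # e.g. references to video clips, lidar summaries
    }).encode("utf-8")
    request = urllib.request.Request(
        CLOUD_TRAINING_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request)  # fire-and-forget for brevity; real code would handle failures

def fetch_model_update(current_version):
    # Cloud -> edge: pull the retrained model so the whole fleet learns from one robot's edge case.
    with urllib.request.urlopen(f"{MODEL_REGISTRY_ENDPOINT}?since={current_version}") as response:
        return response.read()

In practice the update would be signed, staged and rolled out gradually, but the shape of the exchange – error context up, improved model down – is the bi-directional learning loop described above.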

Source: Global Interconnection Index (GXI) Volume 3

Low latency is essential for machine intelligence

For this type of bi-directional learning to work well, low latency is key. Machine-to-machine (M2M) communications and interactions at the edge require less than 20 milliseconds of latency. And while improving the core AI models in the cloud can tolerate higher latencies, getting the software update back to the robot fleet to fix issues and improve precision needs to be reliable and timely. That requires a distributed IT infrastructure that narrows the distance between the robots and a point of high-bandwidth connectivity. Vendor-neutral interconnection solutions such as those on Platform Equinix® can bridge the gap between the compute power needed for core AI models in the cloud and the reach needed to privately connect to digital ecosystems for low-latency, secure exchange of data and improved AI models.
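As a rough illustration of why distance matters, the back-of-the-envelope calculation below assumes light travels through optical fiber at roughly 200,000 km/s (about two-thirds of its speed in a vacuum) and adds a nominal, assumed 5 ms of processing overhead:

# Back-of-the-envelope latency budget: propagation delay alone sets a floor
# on round-trip time before queuing, routing or AI inference are added.
FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed in km per millisecond

def round_trip_ms(distance_km, processing_ms=5.0):
    # Two-way propagation delay plus an assumed fixed processing overhead.
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS + processing_ms

for distance_km in (50, 500, 2000):
    print(f"{distance_km:>5} km -> ~{round_trip_ms(distance_km):.1f} ms round trip")

# 50 km stays comfortably inside a 20 ms M2M budget (~5.5 ms), 500 km is
# borderline (~10 ms before real-world overheads), and 2,000 km (~25 ms)
# already exceeds it – which is why compute needs to sit close to the robots.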

Download the white paper to learn more about IoT digital infrastructures.

 

[i] ABI Research, The Internet of Robotic Things, 3Q 2014.

[ii] Pieter Simoens, Mauro Dragone and Alessandro Saffiotti, The Internet of Robotic Things: A review of concept, added value and applications, International Journal of Advanced Robotic Systems, DOI 10.1177/1729881418759424, Jan-Feb 2018.

Mark Anderson Senior Director of Global Solutions Enablement - EMEA