Smart Navigation
How do you upgrade a simple robotic arm into an autonomous detective?
Our Smart Navigation showcase provides the answer!
Aim of the Showcase
Our Smart Navigation showcase demonstrates how even complex use cases, such as recognizing and tracking arbitrary objects, can be realized on a small, simple robot arm – even if all you initially have available is a serial interface!
It also shows how the robot's live machine data can be used intelligently, enriched and statistically analyzed.
The Setup
A simple robotic arm from UFACTORY (uArm), upgraded with an Intel RealSense stereo depth camera.
An NVIDIA Jetson TX2 – the heart of the setup.
This is where the neural network is executed and the robot control is coordinated.
An IoT sensor node extends the robot's range of sensor values: it can now also measure, statistically evaluate and react to additional physical variables such as temperature, air pressure and vibration.
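As a taste of what the sensor node's readout loop could look like, here is a minimal Python sketch. It assumes, purely hypothetically, a BME280 environment sensor driven by the adafruit_bme280 library; the showcase does not name the actual hardware, and vibration sensing would require an additional accelerometer.

import time
import board
import adafruit_bme280

# Hypothetical hardware: a BME280 on the I2C bus of the sensor node
i2c = board.I2C()
sensor = adafruit_bme280.Adafruit_BME280_I2C(i2c)

for _ in range(10):
    # Two of the physical variables mentioned above
    print(f"temperature: {sensor.temperature:.1f} °C")
    print(f"pressure:    {sensor.pressure:.1f} hPa")
    time.sleep(1)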
Technology & Software
The Intel RealSense stereo camera delivers two different video streams.
The first is an RGB video with a resolution of 640×480 pixels.
The second stream is based on an infrared depth image: each pixel carries a defined distance value, represented by a color code. A red pixel indicates a small distance to the camera, a blue pixel a greater one.
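A minimal sketch of how both streams can be read, assuming the official pyrealsense2 Python wrapper (the resolutions and frame rate follow the text above; the exact code running on the Jetson may differ):

import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # RGB stream
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)   # depth stream
pipeline.start(config)

colorizer = rs.colorizer()  # renders each depth value as a color code

try:
    for _ in range(30):  # grab one second of frames at 30 fps
        frames = pipeline.wait_for_frames()
        color_frame = frames.get_color_frame()
        depth_frame = frames.get_depth_frame()
        if not color_frame or not depth_frame:
            continue
        color_image = np.asanyarray(color_frame.get_data())
        depth_colormap = np.asanyarray(colorizer.colorize(depth_frame).get_data())
        # Distance (in meters) of the pixel at the image center
        print(f"center distance: {depth_frame.get_distance(320, 240):.2f} m")
finally:
    pipeline.stop()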
The basis of the Smart Navigation showcase is object recognition with tinyYOLO combined with real-time coordination of the robot arm. To show that there are (almost) no limits to object recognition, we decided to train the network on our dmc-group logo.
With a training data set of almost 300 images and several hours of computing time on our high-performance computers in Paderborn, the neural network was sufficiently trained to reliably recognize our logo. Although the training itself required a lot of resources, the resulting network can run even on small embedded devices – no high-end server is required.
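To illustrate how such a network can be queried frame by frame, here is a hedged sketch using OpenCV's DNN module to run a Darknet tiny-YOLO model. The file names logo.cfg and logo.weights are placeholders for the trained network; the showcase's actual inference pipeline on the Jetson is not spelled out and may well use a different runtime.

import cv2

net = cv2.dnn.readNetFromDarknet("logo.cfg", "logo.weights")  # placeholder files
layer_names = net.getUnconnectedOutLayersNames()

def detect_logo(frame, conf_threshold=0.5):
    """Return (x, y, w, h) bounding boxes of detected logos in a BGR frame."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(layer_names):
        for det in output:  # det = [cx, cy, bw, bh, objectness, class scores...]
            if det[5:].max() * det[4] > conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes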
The operation of the robot arm itself is based on frame-by-frame recognition of the object (the camera delivers a stable 30 fps). From the difference between the object's positions in consecutive frames, a movement vector (red arrow) is calculated and translated into commands for the machine controller. The robot arm can thus follow the trained object in real time (within its degrees of freedom).
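In simplified Python, the core of this tracking loop could look as follows. The gain factor and the send_to_arm helper are illustrative assumptions; the real translation into the uArm's serial protocol is not shown here.

def send_to_arm(vx, vy):
    # Stand-in for the actual serial command to the uArm controller
    print(f"move arm by ({vx:.2f}, {vy:.2f})")

def movement_vector(prev_center, curr_center, gain=0.1):
    """Scale the object's pixel displacement between two frames into an arm offset."""
    dx = curr_center[0] - prev_center[0]
    dy = curr_center[1] - prev_center[1]
    return (gain * dx, gain * dy)

# Example: detected object centers from three consecutive frames (at 30 fps)
detected_centers = [(300, 240), (310, 242), (325, 241)]

prev = None
for center in detected_centers:
    if prev is not None:
        send_to_arm(*movement_vector(prev, center))
    prev = center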
The Node-RED-based dashboard displays the robot arm's live data. Machine values such as the number of detected objects, the azimuth angle of the arm or the distance to the object are recorded and can be viewed here in real time.
As the dashboard's display is freely configurable, you can hide values that are not required or add further ones (e.g. the measured values of the IoT sensor node).
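One common way to feed such live values into Node-RED – assumed here for illustration, not confirmed by the showcase – is to publish them via MQTT and consume them with an mqtt-in node. The broker address, topic and sample values below are placeholders:

import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()             # classic paho-mqtt 1.x constructor
client.connect("localhost", 1883)  # assumed local MQTT broker
client.loop_start()

for _ in range(5):
    payload = {
        "objects_detected": 1,     # example values only
        "azimuth_deg": 42.5,
        "distance_m": 0.31,
    }
    client.publish("smartnav/telemetry", json.dumps(payload))
    time.sleep(1)

client.loop_stop()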
Analysis & Data
The collected data can now be displayed, statistically processed and evaluated.
As a first example, the position of the detected object can be plotted over time, as shown here. This makes it possible to visualize the object's trajectory in three-dimensional space.
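A minimal sketch of such a trajectory plot with matplotlib; the sample positions are invented for illustration and would come from the logged machine data in the showcase:

import matplotlib.pyplot as plt

# (x, y, z) positions of the tracked object over time (example data)
xs = [0.10, 0.12, 0.15, 0.19, 0.24]
ys = [0.00, 0.02, 0.03, 0.03, 0.02]
zs = [0.30, 0.31, 0.33, 0.36, 0.40]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(xs, ys, zs, marker="o")
ax.set_xlabel("x [m]")
ax.set_ylabel("y [m]")
ax.set_zlabel("z [m]")
plt.show()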
Once sufficient data has been collected, it can also be evaluated over longer periods of time.
This is useful for uncovering correlations between different data streams during fault analysis.
This information can be used to predict error states (keyword: predictive maintenance). Here, the machine data would be monitored by a trained neural network, allowing specific error patterns of a machine to be recognized and early warnings of an imminent impairment of operation to be issued.
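As a simplified illustration of this analysis step – the showcase does not document it in this detail – the following sketch correlates two logged data streams with pandas and flags samples that drift away from a rolling baseline. Column names, values and the threshold are assumptions:

import pandas as pd

# Example log: one row per sample (invented values)
df = pd.DataFrame({
    "vibration":   [0.01, 0.02, 0.02, 0.15, 0.18],
    "azimuth_deg": [40.0, 40.5, 41.0, 48.0, 52.0],
})

print(df.corr())  # pairwise Pearson correlation between the streams

# Very simple early-warning rule: flag samples far from the rolling mean
baseline = df["vibration"].rolling(3, min_periods=1).mean()
print(df[(df["vibration"] - baseline).abs() > 0.05])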
Further Application Areas
The examples on the right show the recognition of people and other objects, such as chairs or vehicles.
To recognize a new, unknown object, the neural network must be trained again – as in our case with the logo. This first requires a sufficiently large, high-quality training data set of images.
Once training has been completed successfully, however, almost any object can be reliably recognized – the range extends from facial recognition to identifying (partial) products during the production process for quality assurance.