During play, the camera recognizes the hand gesture as "paper", and the algorithm responds accordingly.

Robot plays "Rock, Paper, Scissors" - Part 1/3

Gesture recognition with an intelligent camera

Selfie of Sebastian Trella next to robot

From the idea to implementation

In his search for a suitable camera, he came across IDS NXT - a complete system for intelligent image processing. It fulfilled all requirements and, thanks to artificial intelligence, offered much more than pure gesture recognition. This is because the images are evaluated and the results communicated directly on the camera - without an additional PC. The IDS NXT Experience Kit also came with all the components needed to get started with the application right away - no prior AI knowledge required.

Trella took the idea further and began to develop a robot that would eventually play "Rock, Paper, Scissors" - with a round that follows the classic rules: the (human) player is asked to perform one of the familiar gestures (rock, paper, scissors) in front of the camera. At this point, the virtual opponent has already chosen its own gesture at random. The move is evaluated in real time and the winner is displayed.

Gesture recognition with IDS NXT: Scissors

The first step: Gesture recognition by means of image processing

Robot for playing "Rock, Paper, Scissors"

But until then, some intermediate steps were necessary. Trella began implementing gesture recognition using image processing - new territory for the robotics fan. However, with the help of IDS Lighthouse - a cloud-based AI vision studio - this was easier to realize than expected. There, ideas evolve into complete applications: neural networks are trained with application images that contain the necessary domain knowledge - in this case the individual gestures from different perspectives - and are then packaged into a suitable application workflow.

"The training process was super easy and I just used IDS Lighthouse's step-by-step wizard after taking several hundred pictures of my hands using rock, scissor, or paper gestures from different angles against different backgrounds. The first trained AI was able to reliably recognize the gestures directly," explains Sebastian Trella. This works for both left- and right-handed users with a recognition rate of about 95%. Probabilities are returned for the labels "Rock", "Paper", "Scissor" or "Nothing". A satisfactory result. But what happens now with the data obtained?

Further processing

The recognized gestures are processed further by a specially created vision app. For this, the captured image of the respective gesture - after evaluation by the AI - is forwarded to the app. The app "knows" the rules of the game, can decide which gesture beats which, and then determines the winner. In the first stage of development, the app will also simulate the opponent. All of this is currently in the making and will be implemented in the next step on the way to the "Rock, Paper, Scissors"-playing robot.
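The rule logic itself is compact. The following Python sketch shows one way such an app could decide a round, including the randomly simulated opponent mentioned above; it is an illustrative outline, not Trella's actual vision app code.

```python
import random

# Sketch of the game rules (assumption: not the real vision app).
# Each gesture beats exactly one other gesture.
BEATS = {"Rock": "Scissor", "Scissor": "Paper", "Paper": "Rock"}

def simulated_opponent() -> str:
    """First development stage: the opponent simply picks a random gesture."""
    return random.choice(list(BEATS))

def decide(player: str, opponent: str) -> str:
    """Apply the rules and return the outcome of one round."""
    if player == opponent:
        return "Draw"
    return "Player wins" if BEATS[player] == opponent else "Opponent wins"

# One round: the recognized player gesture against the simulated opponent.
opponent_move = simulated_opponent()
print(decide("Paper", opponent_move))
```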

From play to everyday use

Initially, the project is more of a gimmick. But what could come of it? A slot machine? Or maybe even an AI-based sign language translator?

To be continued...