
Pick, move, place
Robot automates pick-and-place processes with AI-supported image processing
Pick-and-place applications are a key field for robotics. In industry, they are often used to speed up assembly processes and reduce manual work - an exciting topic for computer science master's students at the Institute for Data-Optimized Manufacturing at Kempten University of Applied Sciences. They developed a robot that optimizes these processes using artificial intelligence and computer vision. Based on an assembly drawing, the system picks up individual components and places them in a predefined position - much like solving a jigsaw puzzle. An employee can then glue the parts in place manually.
Two IDS industrial cameras provide the necessary image information
With the help of two uEye XC cameras and AI-supported image processing, the system analyzes its environment and calculates precise pick-up and placement coordinates. One camera was mounted above the work surface, the other above the pick-up point. An AI pipeline processes the images from the two cameras in several steps to determine the exact position and orientation of the objects. Using computer vision algorithms and neural networks, the system recognizes relevant features, calculates the optimum gripping points and generates precise coordinates for picking up and placing the objects. It also uniquely identifies each part by segmenting its surface and comparing the contours with a database, and uses these results to approach parts that have already been placed. The automation solution thus reduces dependence on expert knowledge, shortens process times and counteracts the shortage of skilled workers.
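The contour-matching step described above can be illustrated with a minimal sketch. The article does not disclose the actual algorithms, so the descriptor, the part database and all function names below are assumptions for illustration: each segmented contour is reduced to a rotation- and scale-invariant signature (sorted, normalized centroid distances) and compared against reference contours; the centroid doubles as a simple grip-point estimate.

```python
import math

def centroid(points):
    """Mean of the contour points (also a simple grip-point estimate)."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def signature(points, bins=8):
    """Rotation-invariant shape descriptor: sorted centroid distances,
    normalized by their maximum so the descriptor is also scale-invariant."""
    cx, cy = centroid(points)
    dists = sorted(math.hypot(x - cx, y - cy) for x, y in points)
    m = dists[-1] or 1.0
    step = len(dists) / bins
    return [dists[int(i * step)] / m for i in range(bins)]

def identify(contour, database):
    """Return the database part whose signature is closest (L1 distance)."""
    sig = signature(contour)
    best, best_dist = None, float("inf")
    for name, ref in database.items():
        d = sum(abs(a - b) for a, b in zip(sig, signature(ref)))
        if d < best_dist:
            best, best_dist = name, d
    return best

# Hypothetical reference contours (corners plus edge midpoints).
DATABASE = {
    "square": [(0, 0), (2, 0), (4, 0), (4, 2), (4, 4), (2, 4), (0, 4), (0, 2)],
    "bar":    [(0, 0), (4, 0), (8, 0), (8, 1), (8, 2), (4, 2), (0, 2), (0, 1)],
}
```

A translated copy of a part still matches its database entry, because the signature depends only on shape, not on where the part lies in the image.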
Camera requirements
Interface, sensor, size and price were the decisive criteria when choosing the camera model. The uEye XC combines the ease of use of a webcam with the performance of an industrial camera and needs only a single cable connection for operation. Equipped with a 13 MP onsemi sensor (AR1335), the autofocus camera delivers high-resolution images and videos. An interchangeable macro attachment lens shortens the minimum object distance, making the camera suitable for close-range applications. Integration was also very simple, as Raphael Seliger, research assistant at Kempten University of Applied Sciences, explains: "We connect the cameras to our Python backend via the IDS peak interface."
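The two-camera backend architecture can be sketched as follows. This is not the IDS peak API - the vendor SDK calls are replaced by a stub camera class so the sketch stays self-contained - and the class and method names are assumptions; it only shows the structure of a backend that polls both cameras and hands each pair of frames to the AI pipeline.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Frame:
    """One grabbed image; pixel data is a placeholder here."""
    camera_id: str
    data: list = field(default_factory=list)

class Camera:
    """Stand-in for one uEye XC; a real system would open the device
    through a vendor SDK such as IDS peak instead of this stub."""
    def __init__(self, camera_id: str):
        self.camera_id = camera_id

    def grab(self) -> Frame:
        return Frame(self.camera_id)

class Backend:
    """Polls all cameras once per step and forwards the frames to the pipeline."""
    def __init__(self, cameras: List[Camera], pipeline: Callable[[List[Frame]], None]):
        self.cameras = cameras
        self.pipeline = pipeline

    def step(self) -> List[Frame]:
        frames = [cam.grab() for cam in self.cameras]
        self.pipeline(frames)
        return frames
```

In this setup one camera would observe the work surface and the other the pick-up point, matching the arrangement described above.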
Outlook
In the future, the system is to be further developed using reinforcement learning - a machine learning method based on trial and error. "We would like to expand the AI functions to make the pick-and-place processes more intelligent. We may need an additional camera directly on the robot arm," explains Seliger. An automatic accuracy check of the placed parts is also planned. In the long term, the robot should be able to carry out all the necessary steps independently from the assembly drawing alone.
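The trial-and-error principle behind reinforcement learning can be shown in a few lines. This toy example is not the planned system - the task, parameters and function name are assumptions for illustration: an agent repeatedly tries actions, observes noisy rewards, and nudges its value estimates toward what it observed, so the best action gradually stands out.

```python
import random

def train_bandit(mean_rewards, episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Tabular trial-and-error learning on a one-step task: mostly exploit
    the current best estimate, sometimes explore a random action."""
    rng = random.Random(seed)
    q = [0.0] * len(mean_rewards)  # value estimate per action
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(len(q))                       # explore
        else:
            a = max(range(len(q)), key=q.__getitem__)       # exploit
        r = mean_rewards[a] + rng.gauss(0, 0.1)             # noisy reward
        q[a] += alpha * (r - q[a])                          # update estimate
    return q
```

After training, the action with the highest true mean reward also has the highest estimated value.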
Information about uEye XC cameras
Images (© Kempten University of Applied Sciences):