With IDS NXT, IDS has designed just such an AI vision ecosystem of hardware and software components, one that covers not only machine learning but also the complete application workflow in an intuitive way. Solutions can thus be implemented while saving both time and cost.

AI vision in the cloud

With the AI Vision Studio IDS lighthouse, you can take your first steps with AI, test whether its methods are suitable for your own applications, and create vision apps for IDS NXT cameras that solve complex tasks. Neither training nor the setup of a development environment is necessary for this. This makes it easy to get started, right through to the implementation and commissioning of an individual AI vision system. To achieve this, all programming is hidden behind easy-to-understand interfaces and tools that cover every step of AI vision development.

More assistance, quick labelling

Right at the start of a project, an application wizard with an interview-style mode helps to identify the specific task, select the required AI methods and prepare a suitable vision app project. Users who want a more individual approach can use the block-based editor to build their own process sequences from ready-made function blocks via drag & drop, without having to deal with platform-specific programming or the syntax of a particular programming language. This allows greater flexibility in describing the application and at the same time keeps the processes easy to understand.
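
The underlying idea can be illustrated with a small, purely hypothetical Python sketch: a processing sequence is assembled from ready-made function blocks and executed in order. The block names and interfaces below are illustrative assumptions and do not represent the actual IDS NXT vision app API.

```python
# Hypothetical sketch of a block-based processing sequence; names and
# interfaces are illustrative only, not the IDS NXT vision app API.
from typing import Any, Callable, Dict, List

Block = Callable[[Dict[str, Any]], Dict[str, Any]]

class Pipeline:
    """Chains ready-made function blocks into one processing sequence."""
    def __init__(self) -> None:
        self.blocks: List[Block] = []

    def add(self, block: Block) -> "Pipeline":
        self.blocks.append(block)
        return self

    def run(self, context: Dict[str, Any]) -> Dict[str, Any]:
        for block in self.blocks:
            context = block(context)
        return context

# Placeholder blocks standing in for camera, inference and I/O blocks.
def acquire_image(ctx):
    ctx["image"] = [[0.1, 0.2], [0.3, 0.4]]                 # stand-in for a captured frame
    return ctx

def classify(ctx):
    ctx["result"] = {"label": "GOOD", "confidence": 0.97}   # stand-in for a CNN result
    return ctx

def set_digital_output(ctx):
    ctx["reject"] = ctx["result"]["label"] != "GOOD"
    return ctx

pipeline = Pipeline().add(acquire_image).add(classify).add(set_digital_output)
print(pipeline.run({}))
```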

Figure 1 With the block-based editor, completely individual applications with AI processing can be mapped in vision apps without having to know the syntax of a specific text-based programming language.

Data manager included

In the future, the AI Vision Studio will provide further support when preparing training data. An automatic labelling system will allow imported images and specific content marked with ROIs to be organized more quickly into data sets with suitable labels. This makes it easier to expand data sets with new image content and to continuously improve networks through re-training.
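
As an illustration of what such organized training data can look like, the following sketch stores labelled images with their ROIs in a simple JSON structure. The schema is an assumption made for demonstration purposes and is not the IDS lighthouse data format.

```python
# Hypothetical schema for labelled training data with ROIs; not the
# IDS lighthouse format, just an illustration of the principle.
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class ROI:
    x: int
    y: int
    width: int
    height: int
    label: str                 # class assigned to this image region

@dataclass
class LabelledImage:
    file: str
    rois: List[ROI] = field(default_factory=list)

dataset = [
    LabelledImage("images/part_0001.png",
                  [ROI(x=120, y=80, width=64, height=64, label="scratch")]),
    LabelledImage("images/part_0002.png"),     # GOOD part, no defect regions
]

# Persist the data set so it can be extended later and used for re-training.
with open("dataset.json", "w") as f:
    json.dump([asdict(img) for img in dataset], f, indent=2)
```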

Less data, more confidence

Providing sufficient, well-balanced data for all target classes is often time-consuming. Because error cases in particular can occur in countless forms, there is often an imbalance between GOOD and BAD parts. It is therefore important to offer methods that require less training data to prepare. In addition to classification and object detection, users will in future benefit from anomaly detection, which identifies known as well as unknown error cases whose appearance exceeds the normal variation of a GOOD part, and which requires comparatively little training data. In other words, anything that would be noticed by a human being who has spent a long time learning what "typical" objects look like can also be identified by an AI system with anomaly detection. Anomaly detection is thus another useful tool for supporting quality control: it reduces manual visual inspection while detecting and avoiding errors in the production process at an early stage.
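
The principle can be outlined with a minimal sketch: a model of "typical" GOOD parts is learned from GOOD samples only, and anything that exceeds their normal variation is flagged. The toy statistics below stand in for the neural network models used in practice and are purely illustrative.

```python
# Minimal sketch of the anomaly-detection principle: learn what "typical" GOOD
# parts look like and flag anything that exceeds their normal variation.
# This toy statistic stands in for the neural-network models used in practice.
import numpy as np

rng = np.random.default_rng(0)

# Training data: feature vectors extracted from GOOD parts only.
good_features = rng.normal(loc=1.0, scale=0.05, size=(200, 16))

mean = good_features.mean(axis=0)
std = good_features.std(axis=0) + 1e-8

def anomaly_score(features: np.ndarray) -> float:
    """Largest deviation from the GOOD distribution, in standard deviations."""
    return float(np.max(np.abs(features - mean) / std))

THRESHOLD = 4.0                            # tuned on validation data in a real setup

test_good = rng.normal(1.0, 0.05, size=16)
test_defect = test_good.copy()
test_defect[3] += 0.5                      # an unknown, untrained deviation

print(anomaly_score(test_good) > THRESHOLD)    # False -> accepted
print(anomaly_score(test_defect) > THRESHOLD)  # True  -> flagged as anomaly
```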

Figure 2 Anomaly detection identifies both known and unknown (untrained) deviations from the trained "typical" object appearance.

Explainable AI

To improve understanding, the AI Vision Studio provides, among other things, a heat map visualization of the AI's attention. For this purpose, special network models are used during training that generate a kind of heat map when test data sets are evaluated. It highlights the image areas that receive the most attention from the neural network and thus influence the results and performance. Incorrect or unrepresentative training images can also sensitize the AI to unwanted features; even an accidentally trained product label can falsify the results. The cause of such "wrong" training is called data bias.
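
One common way to obtain such a heat map, shown here purely as an illustrative sketch, is occlusion sensitivity: patches of the image are masked one after another and the drop in the model's score is recorded per position, so that large drops mark the areas the network relies on. The predict function below is a placeholder, not an IDS NXT model, and the technique is an assumption used only to demonstrate the idea.

```python
# Illustrative occlusion-sensitivity heat map; predict() is a placeholder
# for a trained model, not part of the IDS NXT software.
import numpy as np

def predict(image: np.ndarray) -> float:
    """Placeholder score; this toy 'model' only attends to the bright centre."""
    return float(image[24:40, 24:40].mean())

def occlusion_heatmap(image: np.ndarray, patch: int = 8) -> np.ndarray:
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0      # mask one patch
            heat[i // patch, j // patch] = base - predict(occluded)
    return heat                                            # high = important

image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0          # the feature the toy model focuses on
print(np.round(occlusion_heatmap(image), 2))
```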

These attention maps help to reduce concerns about AI-based decisions and to increase acceptance in the industrial environment.

Figure 3 "Attention Maps" visualize the focus of a neural network on specific image content, also including a data bias triggered by a product label in the training images.
Figure 3 "Attention Maps" visualize the focus of a neural network on specific image content, also including a data bias triggered by a product label in the training images.

Summary

IDS is continuously developing the AI system with a special focus on ease of use and time efficiency. This will allow AI to be adopted more quickly across the board, including by SMEs. On the hardware side, the IDS NXT camera family is also being extended with a more powerful hardware platform that executes neural networks much faster, making AI vision possible even in applications with high cycle rates. What helps most in spreading AI vision, however, are companies that have already implemented successful AI vision projects and can share their experience.