Why Industries Are Not Making the Most of AI


AI vision, that is, image processing with artificial intelligence, is a much-discussed topic. However, the potential of this sophisticated new technology has yet to find application in some areas, such as industrial settings, so long-term empirical experience is limited.

Image credit: IDS Imaging Development Systems GmbH

Although a number of embedded vision systems on the market enable the use of AI in industrial environments, many plant operators are still hesitant to invest in these platforms and update their applications.

However, AI has already demonstrated innovative possibilities where rule-based image processing has reached its limits. The question therefore remains: what is hindering the widespread adoption of this new technology?

One of the main obstacles is user-friendliness: anyone should be able to create their own AI-based image processing applications, even without specialist knowledge of artificial intelligence or application programming.

While artificial intelligence can speed up a variety of work processes and reduce sources of error, edge computing helps eliminate expensive industrial computers and the complex infrastructure required to transmit image data at high speed.

New and different

However, AI or machine learning (ML) works quite differently from traditional rule-based image processing. This impacts how image processing tasks are approached and handled.

The quality of the results is no longer controlled by program code written manually by an image processing specialist, as was the case before, but by neural networks trained with sufficient image data.

In other words, the characteristics of the object to be inspected are no longer predefined by instructions but must be taught to the AI through the training process. The more diverse the training data, the more likely the AI/ML algorithms are to identify the genuinely relevant features later during operation.
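The contrast between the two approaches can be sketched in a few lines of Python. This is not IDS code; the feature (a part length), the threshold and the nearest-centroid classifier are all invented for illustration. The point is simply that the rule-based check encodes the inspection criterion by hand, while the learned check derives it from labeled examples:

```python
import numpy as np

# Rule-based inspection: the relevant feature and its tolerance band
# are predefined by the programmer.
def rule_based_ok(part_length):
    return 9.5 <= part_length <= 10.5

# Learned inspection (toy nearest-centroid classifier): the decision
# boundary is derived from labeled training samples, not hand-written rules.
def train_centroids(samples, labels):
    return {c: np.mean([s for s, l in zip(samples, labels) if l == c])
            for c in set(labels)}

def classify(centroids, sample):
    # Assign the sample to the class whose centroid is closest.
    return min(centroids, key=lambda c: abs(centroids[c] - sample))

samples = [9.8, 10.1, 10.3, 8.9, 11.2, 12.0]
labels  = ["ok", "ok", "ok", "reject", "reject", "reject"]
centroids = train_centroids(samples, labels)
```

As the article notes, the quality of the learned decision depends entirely on how representative `samples` and `labels` are, whereas the rule-based check depends on how well the programmer chose the threshold.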

What appears simple, however, only leads to the desired goal with adequate knowledge and experience.

Errors will occur with this application if the user does not have a keen eye for relevant image data. This implies that the capabilities needed to work with machine learning approaches differ from those needed for rule-based image processing.

Additionally, not everyone has the time or resources to dive deep into the subject and develop a new set of fundamental skills to deal with machine learning methods.

The biggest problem with new technologies is transparency. If they deliver outstanding results with little effort but cannot easily be verified, for example by examining the code, it becomes hard to trust such a system.

Complex and misunderstood

Logical thinking suggests an interest in learning how AI vision works. However, without clear and well-understood explanations, evaluating its results is difficult.

Gaining confidence in new technologies usually rests on skills and experience that must be acquired over time before the full potential of the technology can be understood: what it can accomplish, how it works, how to apply it and how to manage it appropriately.

Making matters even more difficult, AI vision competes with an established system for which the necessary environment has been built up over recent years: knowledge, training, documentation, software, hardware and development tools.

The lack of a clear understanding of how the algorithms work, together with results that are hard to interpret, is the flip side of the coin and hinders their adoption.

(Not) A black box

Neural networks are sometimes misinterpreted as a black box whose judgments are unintelligible.

Although DL models are undoubtedly complex, they are not black boxes. In fact, it would be more accurate to call them glass boxes, because we can literally look inside and see what each component does.


The inference decisions of neural networks are not based on a set of conventional rules, and the dynamic interactions of their artificial neurons may at first be difficult for humans to follow. Nevertheless, they are the deterministic result of a mathematical system and are therefore reproducible and, in principle, interpretable.

What is currently lacking are the tools needed to facilitate proper assessment. This area of AI offers many opportunities for advancement, and it shows how the different AI systems on the market can help users reach their goals.

Software makes AI explainable

IDS Imaging Development Systems GmbH focuses on development in this area. One result is the IDS NXT inference camera system. Statistical evaluation using a so-called confusion matrix makes it possible to determine and interpret the overall quality of a trained neural network.

Following the training procedure, the network can be validated using a predetermined set of images with known results. The expected results and the actual results obtained by inference are then compared in a table.

This shows, for each of the trained object classes, how frequently test objects were recognized correctly or incorrectly. An overall quality score for the trained network can then be derived from the resulting hit rates.
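The mechanics behind such a table are plain counting and division. The following minimal Python sketch (not the IDS NXT software, just an illustration with invented validation results for three object classes) builds the confusion matrix, the per-class hit rates and the overall quality score described above:

```python
import numpy as np

def confusion_matrix(expected, predicted, num_classes):
    """Count how often each expected class was predicted as each class."""
    m = np.zeros((num_classes, num_classes), dtype=int)
    for e, p in zip(expected, predicted):
        m[e, p] += 1
    return m

# Invented validation results for three object classes (0, 1, 2):
# the known labels versus what the trained network returned at inference.
expected  = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
predicted = [0, 0, 1, 1, 1, 2, 2, 2, 0, 2]

m = confusion_matrix(expected, predicted, 3)

# Diagonal entries are correct recognitions; each row sums to the
# number of validation images of that class.
hit_rates = m.diagonal() / m.sum(axis=1)   # per-class hit rate
overall   = m.diagonal().sum() / m.sum()   # overall quality score
```

Off-diagonal entries reveal exactly which classes get confused with which, which is the information used to decide where retraining with more images is needed.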

Additionally, the matrix clearly illustrates where recognition accuracy may still be inadequate for productive use. However, it does not provide a detailed explanation of this result.


Figure 1. This confusion matrix of a CNN screw classification shows where identification quality can be improved by retraining with more images. Image credit: IDS Imaging Development Systems GmbH

In this situation, the attention map is useful: it displays a kind of heat map indicating which regions or content of the image the neural network focuses on and uses to guide its decisions.

This form of visualization is integrated into the training process in the AI Vision Studio for IDS NXT, in connection with the decision paths established during training; this allows the network to generate a heat map for each image while the analysis is still in progress.

As a result, the otherwise hard-to-explain behavior of the AI becomes easier to grasp, promoting greater acceptance of neural networks in industrial settings.

It can also be used to identify and eliminate biases in data, which would cause a neural network to make incorrect decisions during inference (see figure “attention maps”).
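The article does not describe how IDS NXT computes its attention maps internally; one common, framework-independent way to produce a comparable heat map is occlusion analysis, sketched below in Python. The `toy_score` function is a stand-in for a CNN's confidence output, invented for illustration: the map records how much the score drops when each image patch is greyed out, so the regions the model relies on light up:

```python
import numpy as np

def toy_score(image):
    # Stand-in for a CNN confidence score: this toy "model" reacts
    # only to bright pixels in the top-left 8x8 quadrant of the image.
    return image[:8, :8].mean()

def occlusion_heat_map(image, score_fn, patch=4):
    """Occlusion-based attention map: zero out one patch at a time
    and record how much the model's confidence drops."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i+patch, j:j+patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

image = np.ones((16, 16))
heat = occlusion_heat_map(image, toy_score)
# Patches the "model" depends on show the largest score drop.
```

Applied to the banana example, such a map would show the score collapsing only when the Chiquita label is occluded, exposing the data bias directly.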

Bad input leads to bad output. The ability of an AI system to recognize patterns and estimate outcomes depends on obtaining relevant data from which it can learn “correct behavior”.

If an AI is designed in a lab using data that is not typical of future applications, or worse, if patterns in the data imply bias, the system will pick up and apply those preconceptions.


Figure 2. This heat map shows a classic data bias: it visualizes high attention on the banana's Chiquita label and is therefore a good example of a data bias. Due to one-sided or under-representative training images, the CNN evidently learned that the Chiquita label by itself suggests a banana. Image credit: IDS Imaging Development Systems GmbH

These software tools allow users to track the behavior and results of the AI vision system, relate them directly to weaknesses in the training dataset, and adjust the dataset in a targeted manner if necessary. This makes AI more understandable and explainable for everyone, since it is ultimately just statistics and mathematics.

Understanding the mathematics behind the algorithm is not always easy, but tools such as the confusion matrix and heat maps make decisions, and the reasons for them, visible and therefore understandable.

This is just the beginning

AI vision has the ability to enhance a variety of visual processes when used effectively. However, hardware and software alone will not be enough to drive industry to adopt AI. Manufacturers are frequently asked to share their knowledge in the form of user-friendly software and built-in procedures that guide individuals.

AI still has some catching up to do against best practices in rule-based image processing, which are the result of years of effort to build a loyal customer base through complete documentation, knowledge transfer and a range of software tools. The good news is that such AI systems and support tools are already being developed.

Standards and certifications are also being established to promote acceptance and make AI simple enough to understand for broad industrial use. IDS contributes to this.

With IDS NXT, any user group can quickly and efficiently deploy an embedded AI system as an industrial tool, even without deep knowledge of image processing, machine learning or application programming.

This information was obtained, reviewed and adapted from materials provided by IDS Imaging Development Systems GmbH.

For more information on this source, please visit IDS Imaging Development Systems GmbH.
