Automated visual quality inspection (AVQI) is essential for the production of high-quality goods.
However, industrial companies often find that setting up such systems is time-consuming, error-prone and costly, and that the process inevitably leads to unnecessary waste of resources and high energy consumption. The technical reason is a strong dependency on humans: machines cannot perceive “quality” in the same way as humans, and a large amount of training data is needed before they can make decisions correctly. Human involvement also introduces undesirable factors such as differing evaluations between individuals or quality degradation due to fatigue.
The proposed project addresses these problems with a Human Aesthetic Perception Module, which replaces human judgment with quantifiable technology, and with the implementation of a promising domain adaptation approach that reuses existing data and covers all important factors of transfer learning: changes in the source domains, changes in the distance measure, and the choice of hyperparameters. All of this is supported by Active Learning, which is designed to reduce human interaction. Applicability is demonstrated in two proof-of-concept use cases, one for high-speed printing of packaging materials and one for plastic molding of 3D parts. Best-practice guidelines for applications in other fields will also be developed.
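To illustrate how Active Learning can reduce human interaction, the following minimal sketch shows uncertainty sampling on synthetic data: a simple nearest-centroid classifier repeatedly asks a (simulated) human inspector to label only the samples it is least certain about. The data, the classifier, and all names here are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic inspection features: two clusters standing in for "good"/"defect" parts.
X_good = rng.normal(-2.0, 1.0, size=(200, 2))
X_bad = rng.normal(2.0, 1.0, size=(200, 2))
X_pool = np.vstack([X_good, X_bad])
y_pool = np.array([0] * 200 + [1] * 200)  # hidden ground truth (the "oracle")

# Start with a small labelled seed set; everything else is unlabelled.
labelled = list(range(0, 5)) + list(range(200, 205))
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]


def centroids(idx):
    """Class centroids estimated from the currently labelled samples."""
    return np.array(
        [X_pool[[i for i in idx if y_pool[i] == c]].mean(axis=0) for c in (0, 1)]
    )


for _ in range(5):  # five query rounds
    c = centroids(labelled)
    # Distance of each unlabelled sample to both centroids.
    d = np.linalg.norm(X_pool[unlabelled, None, :] - c[None, :, :], axis=2)
    # Uncertainty sampling: smallest margin between the two distances.
    margin = np.abs(d[:, 0] - d[:, 1])
    queried = [unlabelled[i] for i in np.argsort(margin)[:5]]
    labelled += queried  # the "human" oracle labels them (y_pool lookup)
    unlabelled = [i for i in unlabelled if i not in queried]

# Final fit and evaluation on the whole pool.
c = centroids(labelled)
pred = np.argmin(
    np.linalg.norm(X_pool[:, None, :] - c[None, :, :], axis=2), axis=1
)
accuracy = (pred == y_pool).mean()
```

With only 35 human-provided labels out of 400 samples, the classifier separates the two clusters well; the same query-the-most-uncertain pattern is what lets Active Learning cut annotation effort in inspection settings.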
The unique selling point of the project is a robust transfer-learning concept for automated visual quality inspection, which enables a significant reduction in human effort, an increase in process stability and a reduction in wasted resources during product changeovers in flexible production.
Project coordinator:
PROFACTOR GmbH
Consortium partners:
Software Competence Center Hagenberg, adapa Holding GesmbH, Greiner Packaging International GmbH
Program / Announcement:
Produktion der Zukunft – 41. Ausschreibung (2021)
Project start:
01.05.2022
Project end:
30.04.2025