Vision Anomaly Detection (VAD)
What is Visual Anomaly Detection?
Visual Anomaly Detection (VAD) is an AI content that detects abnormal images by learning only from normal images. When the training data is organized to contain only normal images (or images of a single type), VAD notifies the user whenever an incoming image differs from the images it was trained on. Because the user does not have to go through the images one by one to label them, VAD reduces the cost of creating ground-truth labels.
When to use Visual Anomaly Detection?
VAD targets image datasets that consist mostly of normal (or single-type) images and only a small number of abnormal (or other-type) images, and classifies the images accordingly. It can be applied in any field where no ground-truth labels exist and only normal or single-type images are available. The main areas of use are:
- Manufacturing: In product appearance or vision inspection processes, most products are normal and abnormal ones are rare, so abnormal products can be classified and reviewed automatically without ground-truth labels.
- Inventory management: Unlabeled image data can be used to verify that different products are not mixed together.
- Agriculture: Crop conditions can be monitored through smartphone images to detect diseases, pests, and growth abnormalities early, improving the efficiency of crop management.
Below are three of the many real-world VAD applications.
Bolt inspection (TBD)
An image-based anomaly detection solution for bolts and similar components among various electric vehicle parts.
Critical Defect Detection (TBD)
A solution that performs image-based anomaly detection, during the mass production of electric vehicle parts, for defects in parts that are not assembled by workers according to the work instructions or that are produced in automated facilities.
Box appearance inspection (TBD)
A solution that detects defects in the appearance of packaging boxes before goods issue at the logistics storage location.
Key Features
Superior Performance
VAD provides PatchCore and FastFlow, state-of-the-art image anomaly detection models with excellent performance. Because PatchCore relies on a pre-trained feature extractor, it requires no model training, which reduces training cost and gives stable performance. FastFlow is an image anomaly detection model based on normalizing flows, a type of generative model that has recently gained prominence in various fields. Both are lightweight models that allow rapid inference.
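To make the PatchCore idea concrete, below is a minimal, hypothetical sketch (NumPy/SciPy only, not the actual VAD implementation). It assumes patch features have already been extracted by a pre-trained backbone: the normal patches form a memory bank, and a test image is scored by how far each of its patches is from the nearest normal patch.

```python
# Minimal, hypothetical sketch of PatchCore-style scoring (not the actual VAD code).
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Memory bank: patch features collected from normal training images by a pre-trained
# backbone (random stand-ins here).
memory_bank = rng.normal(size=(2000, 128))    # (num_normal_patches, feature_dim)

# Patch features of one test image, e.g. a 14x14 grid of patches.
test_patches = rng.normal(size=(196, 128))

# Score each test patch by its distance to the nearest normal patch in the memory bank.
patch_scores = cdist(test_patches, memory_bank).min(axis=1)   # (196,)

image_score = patch_scores.max()               # image-level anomaly score
anomaly_map = patch_scores.reshape(14, 14)     # coarse map for localizing the anomaly
print(f"image anomaly score: {image_score:.3f}")
```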
Reduced initial label creation costs
To use an image-based classification model, a human must review and label every image to create ground truth. VAD, however, can determine whether an image is normal (or of a specific type) using only normal or single-type images, which reduces the cost of creating ground-truth labels. If a human later reviews only the images classified as anomalies and assigns them anomaly labels, the data can quickly be switched to image-based classification AI, enabling a wider range of image classification models.
Convenient usability and easy-to-understand reasoning
VAD is largely automated, so you can start using it as soon as you have collected normal image data. For images classified as anomalies, it also provides an image highlighting the region where the anomaly was detected, so you can quickly verify that the AI has been trained properly by inspecting this evidence region.
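As a rough illustration of such an evidence image, the sketch below overlays a synthetic anomaly map on an input image. The image, the map, and the output file name are made up for the example, whereas VAD derives the map from the trained model.

```python
# Illustration only: overlay a (synthetic) anomaly map on the inspected image.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))        # stand-in for the inspected image
anomaly_map = np.zeros((256, 256))
anomaly_map[100:140, 60:120] = 1.0       # pretend the model localized a defect here

plt.imshow(image)
plt.imshow(anomaly_map, cmap="jet", alpha=0.4)   # highlight the evidence region
plt.axis("off")
plt.savefig("anomaly_overlay.png", bbox_inches="tight")
```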
Quick Start
Installation
- Install ALO. Read More: Use AI Contents
- Install the content using the git address below.
- git url: https://github.com/mellerikat-aicontents/Vision-Anomaly-Detection-ALOv3
- Installation code: git clone https://github.com/mellerikat-aicontents/Vision-Anomaly-Detection-ALOv3 solution
Required parameter settings
- Edit the data paths below in 'solution/experimental_plan.yaml' to point to your data paths.
train:
    dataset_uri: [train_dataset/]       # Change to user data path
inference:
    dataset_uri: inference_dataset/     # Change to user data path
- Enter values for train_validate_column, validation_split_rate, and threshold_method that match your training data in the 'args' of 'function: Readiness'. (A sketch of how the two threshold_method options work follows the block below.)
- step: Readiness
  args:
    train_validate_column:        # Column that splits the data into train and validation. It must contain only the values 'train' and 'valid'. Leave it empty if you do not have one.
    validation_split_rate: 0.1    # If train_validate_column is not provided, a validation set is split from the training data at this rate.
    threshold_method: F1          # Method used to decide OK or NG on the validation set. Selectable values: F1, Percentile (may be chosen automatically depending on the amount of validation data)
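For intuition, here is a hedged sketch of how the two threshold_method options could pick a decision threshold from validation anomaly scores. The scores and labels below are synthetic, the percentile is assumed here to be taken over the normal validation scores, and the exact logic inside VAD may differ; the percentile value itself is configured in the Train step below.

```python
# Illustration only: two ways to pick an OK/NG threshold from validation anomaly scores.
import numpy as np

rng = np.random.default_rng(0)
# Validation anomaly scores: mostly normal (label 0), a few abnormal (label 1).
scores = np.concatenate([rng.normal(0.3, 0.1, 90), rng.normal(0.8, 0.1, 10)])
labels = np.concatenate([np.zeros(90), np.ones(10)])

def f1_threshold(scores, labels):
    """Pick the threshold that maximizes F1 on the validation set (needs some NG labels)."""
    best_t, best_f1 = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

def percentile_threshold(scores, labels, percentile=0.8):
    """Use a percentile of the normal validation scores (works without NG labels)."""
    return np.quantile(scores[labels == 0], percentile)

print("F1 threshold:        ", f1_threshold(scores, labels))
print("Percentile threshold:", percentile_threshold(scores, labels, percentile=0.8))
```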
- Enter values for model_name, img_size, and percentile that match your training data in the 'args' of 'function: Train'.
- step: Train
  args:
    model_name: fastflow    # Model to use in VAD. Selectable values: fastflow, patchcore
    img_size: 256           # Image size to which inputs are resized during training.
    percentile: 0.8         # If threshold_method is Percentile, the NG threshold is set at this percentile of the anomaly scores on the validation dataset.
- Enter an inference_threshold that matches your data in the 'args' of 'function: Inference'.
- step: Inference
  args:
    inference_threshold: 0.5    # Anomaly score threshold at or above which an image is judged abnormal.
- Once the settings above are complete, run ALO to create your model. If you want to tune advanced parameters for a model better fitted to your data, refer to the following page. Read More: VAD Parameters
Execution
- In the terminal, go to the path where ALO is installed and run the 'alo run' command. Read More: Use AI Contents
Topics
- VAD Function Description
- VAD Input Data and Outputs
- VAD Parameters
VAD Version: 3.0.0, ALO Version: 3.0.0