Version: Next

# VAD Parameter

Updated 2025.05.12

## experimental_plan.yaml Explanation

To apply AI Contents to your data, you describe your data and the Contents features you want to use in the experimental_plan.yaml file. When you install AI Contents in the solution folder, a default experimental_plan.yaml for each content is created under the solution folder. By entering the data information in this YAML file and modifying or adding the user arguments provided for each function, you can run ALO to create a data analysis model with the desired settings.

### experimental_plan.yaml structure

experimental_plan.yaml contains the various settings required to run ALO. Once you modify the 'dataset_uri' and 'function' parts of the train/inference sections, you can start using AI Contents right away.

#### Enter data path ('dataset_uri')

- The parameters under 'train' and 'inference' specify the path of the files to load and the path where outputs are saved.
```yaml
train:
  dataset_uri: sample_data/train/ # Data folder or folder list (no file paths)
  # dataset_uri: s3://mellerikat-test/tmp/alo/ # Example 1) all folders and files under the S3 key (prefix)
  artifact_uri: train_artifact/
  pipeline: [input, readiness, train2] # list of functions to be executed
inference:
  dataset_uri: sample_data/inference/
  # model_uri: model_artifacts/n100_depth5.pkl # load an already trained model
  artifact_uri: inference_artifact/ # Optional) files under pipeline['artifact']['workspace'] are compressed and uploaded as inference.tar.gz under this path
  pipeline: [input, readiness, inference2]
```
| Parameter name | Default | Description and options |
|:---:|:---:|:---|
| dataset_uri (train) | ./sample_data/train/ | Enter the folder path where the training data is located. (Do not enter a csv file name.) |
| dataset_uri (inference) | ./sample_data/inference/ | Enter the folder path where the inference data is located. (Do not enter a csv file name.) |
| artifact_uri | train(inference)_artifact/ | Enter the folder path where the training and inference outputs are stored. |

*All files under the entered path and its subfolders are loaded and merged. *All columns in the merged files must have the same names.

#### User Parameter ('function')

- The fields under 'function' are function names. 'function: input' below means the input step.
- 'argument' holds the user arguments of that function ('function: input'). User arguments are data-analysis settings provided for each function. For an explanation, see the User arguments explanation below.

```yaml
function: # Required) user-defined function
  input:
    def: vad.input_function # {python filename}.{function name}
    argument:
      file_type: csv
      encoding: utf-8
      ...
```
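The `def:` value follows a `{python filename}.{function name}` convention. A minimal sketch of how such a dotted string could be resolved in Python (illustrative only; ALO's actual loader may differ):

```python
import importlib

def resolve_callable(dotted):
    """Split a '{python filename}.{function name}' string on the last
    dot and return the named function, e.g. 'vad.input_function'
    resolves to input_function defined in vad.py."""
    module_name, func_name = dotted.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, func_name)
```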

## User arguments explained

### What are User arguments?

User arguments are parameters that control the behavior of each function, and they are written under the 'argument' of each function in the experimental_plan.yaml. Each function that makes up the pipeline of AI Contents provides user arguments so that users can apply various features to their data. Referring to the guide below, users can change and add user arguments to model their data as needed. User arguments are divided into "required arguments", which are pre-written in the experimental_plan.yaml, and "custom arguments", which users add by consulting the guide.

#### Required arguments

- Required arguments are the default arguments shown directly in the experimental_plan.yaml. Most required arguments have a built-in default value, which is used if the user does not set one.
- Data-related arguments in the experimental_plan.yaml must be set by the user. (e.g. model_name, percentile)

#### Custom arguments

- Custom arguments are not written in the experimental_plan.yaml by default, but they are provided by the functions and can be used by adding them under the 'argument' of each function.

VAD's pipeline is composed of the **Input - Readiness - Modeling (train/inference)** functions, and the user arguments differ for each function. Try the required user arguments in your experimental_plan.yaml first, then add custom user arguments to create a VAD model that fits your data.

```yaml
train:
  ...
  pipeline: [input, readiness, train2] # list of functions to be executed

inference:
  ...
  pipeline: [input, readiness, inference2] # list of functions to be executed
```

## Summary of User arguments

Below is a summary of VAD's user arguments. You can click on 'Argument name' to go to the detailed description of the argument.

#### Default

- The 'Default' field is the default value of the user argument.
- If there is no default value, it is written as '-'.
- If the default value is determined by logic, it is marked as 'Note explanation'. Click the argument name to see the detailed description.

#### ui_args

- The 'ui_args' column in the table below indicates whether an argument supports the 'ui_args' feature, which allows you to change the argument value in the UI of AI Conductor.
- O: If you enter the argument name under 'ui_args' in the experimental_plan.yaml, you can change the argument value in the AI Conductor UI.
- X: Does not support the 'ui_args' feature.
- For a more detailed explanation of 'ui_args', please refer to the following guide: Write UI Parameter
- You can set it by adding the snippet below at the bottom of 'setting/solution_info.yaml'.
```yaml
### ui_args declaration ###
ui_args:
  function:
    train:
      default: fastflow
      description: "Select the model you want to use as VAD. Selectable values: fastflow, patchcore"
      name: model_name
      selectable:
        - fastflow
        - patchcore
      type: single_selection
```

#### User settings required?
- In the table below, 'User settings required' marks the user arguments that the user must check and change for AI Contents to work.
- O: Arguments for entering data-related information; you must check them before modeling.
- X: If the user does not change the value, modeling proceeds with the default value.

| Asset Name | Argument type | Argument | Default | Description | User setup required? | ui_args |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Input | Custom | [file_type](#file_type) | csv | Enter the file extension of input data. |O|X|
| Input | Custom | [encoding](#encoding) | utf-8 | Enter the encoding type of input data.|X|X|
| Readiness | Custom | [ok_class](#ok_class) | - | Enter the class in y_column that means OK. |X|X|
| Readiness | Custom | [train_validate_column](#train_validate_column) | - | Enter a column that separates train and validation. That column must consist of two values: train and valid. |X|X|
| Readiness | Custom | [validation_split_rate](#validation_split_rate) | 0.1 | If train_validate_column does not exist, a validation set is generated from the input train data at this rate. |X|X|
| Readiness | Custom | [threshold_method](#threshold_method) | F1 | Select the method for determining OK and NG during validation. Selectable values: F1, Percentile (sometimes automatically selected based on the number of validation data) |X|X|
| Readiness | Custom | [num_minor_threshold](#num_minor_threshold) | 5 | If the number of OK or NG images does not exceed this value, threshold_method is automatically set to Percentile. |X|X|
| Readiness | Custom | [ratio_minor_threshold](#ratio_minor_threshold) | 0.1 | If the ratio of OK or NG images does not exceed this value, threshold_method is automatically set to Percentile. |X|X|
| Train | Required | [model_name](#model_name) | fastflow | Select the model you want to use as VAD. Selectable values: fastflow, patchcore |O|O|
|Train | Custom | [experiment_seed](#experiment_seed) | 42 | Determine the seed of the experiment in the pytorch environment. |X|X|
| Train | Custom | [img_size](#img_size) | [256,256] | Set the image size to which images are converted for training. If a single number is entered, the image is resized so that the shorter side becomes that size and the aspect ratio is maintained. |X|X|
| Train | Custom | [batch_size](#batch_size) | 4 | Set the batch size during training and validation. |X|X|
| Train | Custom | [max_epochs](#max_epochs) | 15 | Set the max epoch of learning. |X|X|
| Train | Custom | [accelerator](#accelerator) | cpu | Choose whether to run it based on CPU or GPU. In GPU-capable environments, we recommend choosing a GPU. |X|X|
| Train | Custom | [monitor_metric](#monitor_metric) | image_AUROC | Select the criterion for saving the best model. If threshold_method is Percentile, loss is automatically selected. Selectable values: loss, image_AUROC, image_F1Score |X|X|
| Train | Custom | [save_validation_heatmap](#save_validation_heatmap) | True | Choose whether to save prediction heatmaps for the validation dataset. (Saved only for ok-ng, ng-ok, and ng-ng cases) |X|X|
| Train | Custom | [percentile](#percentile) | 0.8 | If the threshold_method is Percentile, select the percentile of the anomaly score in the Validation dataset as the criteria to determine as NG. |X|X|
| Train | Custom | [augmentation_list](#augmentation_list) | [] | A list of transformations to apply during random augmentation. You can use rotation, brightness, contrast, saturation, hue, blur, and more. |X|X|
| Train | Custom | [augmentation_choice](#augmentation_choice) | 3 | The number of transformations applied in one random augmentation. Selectable values: integers 0 or greater |X|X|
| Train | Custom | [rotation_angle](#rotation_angle) | 10 | The maximum rotation angle during random augmentation. If set to 10, rotations between -10 and 10 degrees are applied. Selectable values: integers 0 to 180 |X|X|
| Train | Custom | [brightness_degree](#brightness_degree) | 0.3 | The degree of brightness adjustment during random augmentation. Sets the maximum/minimum brightness change. Selectable values: real numbers 0~1 |X|X|
| Train | Custom | [contrast_degree](#contrast_degree) | 0.3 | The degree of contrast adjustment during random augmentation. Selectable values: real numbers 0~1 |X|X|
| Train | Custom | [saturation_degree](#saturation_degree) | 0.3 | The degree of saturation adjustment during random augmentation. Selectable values: real numbers 0~1 |X|X|
| Train | Custom | [hue_degree](#hue_degree) | 0.3 | The degree of hue adjustment during random augmentation. Selectable values: real numbers 0~1 |X|X|
| Train | Custom | [blur_kernel_size](#blur_kernel_size) | 5 | The maximum kernel size of blur during random augmentation. Selectable values: integers 0~255 |X|X|
| Train | Custom | [model_parameters](#model_parameters) | {"fastflow_backborn": 'resnet18', "fastflow_flow_steps": 8, "patchcore_backborn": 'wide_resnet50_2', "patchcore_coreset_sampling_ratio": 0.1, "patchcore_layers": ["layer2", "layer3"]} | Parameters related to model training. If not set, the model is trained with the default parameters. For details, see the parameter description below. |X|X|
| Inference | Custom | [inference_threshold](#inference_threshold) | 0.5 | This is the threshold of the anomaly score to be judged as abnormal. |X|X|
| Inference | Custom | [save_anomaly_maps](#save_anomaly_maps) | False | Whether or not XAI image results are saved. |X|X|  

***

## User arguments in detail    

### Input asset
#### file_type
Enter the file extension for input data.

- Argument type: Custom
- Input type
  - string
- Enterable values
  - **csv (default)**
- Usage
  - file_type: csv
- ui_args: X

***

#### encoding
Enter the encoding type of input data.

- Argument type: Custom
- Input type
  - string
- Enterable values
  - **utf-8 (default)**
- Usage
  - encoding: utf-8
- ui_args: X

***

### Readiness asset

#### ok_class
In y_column, enter the class that means OK. If not entered, the most frequent class in the training dataset is used as ok_class.

- Argument type: Custom
- Input type
  - string
- Enterable values
  - **'' (default)**
- Usage
  - ok_class: good
- ui_args: O

***

#### train_validate_column
Enter a column that separates train and validation. That column must consist of two values: train and valid.

- Argument type: Custom
- Input type
  - string
- Enterable values
  - **'' (default)**
- Usage
  - train_validate_column: phase
- ui_args: X

***

#### validation_split_rate
If train_validate_column does not exist, a validation set is generated from the input train data at this rate.

- Argument type: Custom
- Input type
  - float
- Enterable values
  - **0.1 (default)**
- Usage
  - validation_split_rate: 0.1
- ui_args: X
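The split behavior described above can be sketched as follows (a hypothetical helper; the actual implementation and seed handling may differ):

```python
import random

def split_train_validation(samples, rate=0.1, seed=42):
    """Hold out `rate` of the samples for validation when no
    train_validate_column is provided (illustrative sketch)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_valid = max(1, int(len(shuffled) * rate))  # at least one validation sample
    return shuffled[n_valid:], shuffled[:n_valid]
```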

***

#### threshold_method
Select the method for determining OK and NG during validation. Selectable values: F1, Percentile (sometimes automatically selected based on the number of validation data)

- Argument type: Custom
- Input type
  - string
- Enterable values
  - **F1 (default)**
  - Percentile
- Usage
  - threshold_method: Percentile
- ui_args: X
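As an illustration of the F1 option, the toy sketch below scans candidate thresholds over validation anomaly scores and keeps the one with the highest F1 for the NG class (label 1). This is a reimplementation for explanation, not VAD's actual code:

```python
def best_f1_threshold(scores, labels):
    """Pick the anomaly-score threshold maximizing F1 for NG (label 1).
    Candidates are the observed score values themselves."""
    best_t, best_f1 = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```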

***

#### num_minor_threshold
If the number of OK or NG images does not exceed this value, threshold_method is automatically set to Percentile.

- Argument type: Custom
- Input type
  - integer
- Enterable values
  - **5 (default)**
- Usage
  - num_minor_threshold: 5
- ui_args: X

***

#### ratio_minor_threshold
If the ratio of OK or NG images does not exceed this value, threshold_method is automatically set to Percentile.

- Argument type: Custom
- Input type
  - float
- Enterable values
  - **0.1 (default)**
- Usage
  - ratio_minor_threshold: 0.1
- ui_args: X
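The automatic fallback governed by num_minor_threshold and ratio_minor_threshold can be sketched like this. Note this is one reading of the rule (comparing the minority class count and its ratio against the two thresholds); the actual implementation may differ:

```python
def select_threshold_method(n_ok, n_ng, requested="F1",
                            num_minor_threshold=5, ratio_minor_threshold=0.1):
    """Fall back from F1 to Percentile when the validation set is too
    small or too imbalanced (sketch of the described rule)."""
    minor = min(n_ok, n_ng)          # size of the minority class
    total = n_ok + n_ng
    if minor <= num_minor_threshold or (total and minor / total <= ratio_minor_threshold):
        return "Percentile"
    return requested
```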

***


### Train asset
#### model_name
Select the model you want to use as VAD. Selectable values: fastflow, patchcore

- Argument type: Required
- Input type
  - string
- Enterable values
  - **fastflow (default)**
  - patchcore
- Usage
  - model_name: patchcore
- ui_args: O

***

#### img_size  
Set the image size to which images are converted for training. If a single number is entered, the image is resized so that the shorter side becomes that size and the aspect ratio is maintained.

- Argument type: Custom
- Input type
  - integer
- Enterable values
  - **[256,256] (default)**
  - 256
- Usage
  - img_size: [256,256]
- ui_args: X
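The resizing rule can be sketched as follows (a hypothetical helper; assumes the shorter side is scaled to the given size and the other side is rounded):

```python
def resize_dims(width, height, img_size):
    """If img_size is a pair, resize to exactly that size; if a single
    number, scale so the shorter side becomes img_size while keeping
    the aspect ratio (sketch of the described behavior)."""
    if isinstance(img_size, (list, tuple)):
        return tuple(img_size)
    short = min(width, height)
    scale = img_size / short
    return round(width * scale), round(height * scale)
```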

***

#### batch_size
Set the batch size during training and validation.

- Argument type: Custom
- Input type
  - integer
- Enterable values
  - **4 (default)**
- Usage
  - batch_size: 32
- ui_args: X

***

#### experiment_seed
Set the random seed for the experiment in the PyTorch environment.

- Argument type: Custom
- Input type
  - integer
- Enterable values
  - **42 (default)**
- Usage
  - experiment_seed: 42
- ui_args: X

***

#### max_epochs
Set the maximum number of training epochs.

- Argument type: Custom
- Input type
  - integer
- Enterable values
  - **15 (default)**
- Usage
  - max_epochs: 15
- ui_args: X

***

#### accelerator
Choose whether to run on CPU or GPU. In GPU-capable environments, we recommend choosing gpu.

- Argument type: Custom
- Input type
  - string
- Enterable values
  - **cpu (default)**
  - gpu
- Usage
  - accelerator: gpu
- ui_args: X

***

#### monitor_metric
Select the criterion for saving the best model. If threshold_method is Percentile, loss is automatically selected. Selectable values: loss, image_AUROC, image_F1Score

- Argument type: Custom
- Input type
  - string
- Enterable values
  - **image_AUROC (default)**
  - loss
  - image_F1Score
- Usage
  - monitor_metric: loss
- ui_args: X

***

#### save_validation_heatmap
Choose whether to save prediction heatmaps for the validation dataset. (Saved only for ok-ng, ng-ok, and ng-ng cases)

- Argument type: Custom
- Input type
  - bool
- Enterable values
  - **True (default)**
  - False
- Usage
  - save_validation_heatmap: True
- ui_args: X

***

#### percentile
If threshold_method is Percentile, this percentile of the anomaly scores in the validation dataset is used as the NG threshold.

- Argument type: Custom
- Input type
  - float
- Enterable values
  - **0.8 (default)**
- Usage
  - percentile: 0.8
- ui_args: X
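A sketch of how a percentile threshold could be taken from validation anomaly scores (a toy version using linear interpolation between sorted scores; VAD's exact quantile method may differ):

```python
def percentile_threshold(validation_scores, percentile=0.8):
    """Return the given quantile of the validation anomaly scores,
    to be used as the NG threshold (illustrative sketch)."""
    ordered = sorted(validation_scores)
    pos = percentile * (len(ordered) - 1)  # fractional index of the quantile
    lo = int(pos)
    hi = min(lo + 1, len(ordered) - 1)
    return ordered[lo] + (ordered[hi] - ordered[lo]) * (pos - lo)
```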

***

#### augmentation_list
A list of transformations to apply during random augmentation. You can use rotation, brightness, contrast, saturation, hue, blur, and more.

- Argument type: Custom
- Input type
  - list
- Enterable values
  - **[] (default)**
  - rotation, brightness, contrast, saturation, hue, blur
- Usage
  - augmentation_list: [rotation, brightness, contrast, saturation, hue, blur]
- ui_args: X

***

#### augmentation_choice
The number of transformations applied in one random augmentation. Selectable values: integers 0 or greater

- Argument type: Custom
- Input type
  - int
- Enterable values
  - **3 (default)**
- Usage
  - augmentation_choice: 2
- ui_args: X
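Together with augmentation_list, the sampling described here can be sketched as below. Whether repeated transforms are allowed is an assumption; this sketch avoids repeats and caps the count at the list length:

```python
import random

def sample_augmentations(augmentation_list, augmentation_choice, seed=None):
    """Draw `augmentation_choice` transforms from `augmentation_list`
    for one image (illustrative sketch, not VAD's actual code)."""
    rng = random.Random(seed)
    k = min(augmentation_choice, len(augmentation_list))
    return rng.sample(augmentation_list, k)
```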

***

#### rotation_angle
The maximum rotation angle during random augmentation. If set to 10, rotations between -10 and 10 degrees are applied. Selectable values: integers 0 to 180

- Argument type: Custom
- Input type
  - float
- Enterable values
  - **10 (default)**
- Usage
  - rotation_angle: 30
- ui_args: X

***

#### brightness_degree
The degree of brightness adjustment during random augmentation. Sets the maximum/minimum brightness change. Selectable values: real numbers 0~1

- Argument type: Custom
- Input type
  - float
- Enterable values
  - **0.3 (default)**
- Usage
  - brightness_degree: 0.5
- ui_args: X

***

#### contrast_degree
The degree of contrast adjustment during random augmentation. Selectable values: real numbers 0~1

- Argument type: Custom
- Input type
  - float
- Enterable values
  - **0.3 (default)**
- Usage
  - contrast_degree: 0.5
- ui_args: X

***

#### saturation_degree
The degree of saturation adjustment during random augmentation. Selectable values: real numbers 0~1

- Argument type: Custom
- Input type
  - float
- Enterable values
  - **0.3 (default)**
- Usage
  - saturation_degree: 0.5
- ui_args: X

***

#### hue_degree
The degree of hue adjustment during random augmentation. Selectable values: real numbers 0~1

- Argument type: Custom
- Input type
  - float
- Enterable values
  - **0.3 (default)**
- Usage
  - hue_degree: 0.5
- ui_args: X

***

#### blur_kernel_size
The maximum kernel size of blur during random augmentation. Selectable values: integers 0~255

- Argument type: Custom
- Input type
  - int
- Enterable values
  - **15 (default)**
- Usage
  - blur_kernel_size: 25
- ui_args: X

***
                   
#### model_parameters
Parameters related to model training. If not set, the model is trained with the default parameters. For details, see the parameter description below.

- Argument type: Custom
- Input type
  - dictionary
- Enterable values
  - **{"fastflow_backborn": 'resnet18', "fastflow_flow_steps": 8, "patchcore_backborn": 'wide_resnet50_2', "patchcore_coreset_sampling_ratio": 0.1, "patchcore_layers": ["layer2", "layer3"]} (default)**
  - fastflow_backborn: "resnet18", "wide_resnet50_2", "cait_m48_448", "deit_base_distilled_patch16_384"
  - fastflow_flow_steps: int
  - patchcore_backborn: any model in the TIMM package, provided its layer names are entered in patchcore_layers
  - patchcore_coreset_sampling_ratio: float between 0 and 1
  - patchcore_layers: enter the layer names of the selected backbone as a list
- Usage
  - model_parameters: {"fastflow_backborn": 'resnet18', "fastflow_flow_steps": 8, "patchcore_backborn": 'wide_resnet50_2', "patchcore_coreset_sampling_ratio": 0.1, "patchcore_layers": ["layer2", "layer3"]}
- ui_args: X

***

### Inference asset
#### inference_threshold
The anomaly-score threshold above which a sample is judged abnormal (NG).

- Argument type: Required
- Input type
  - float
- Enterable values
  - **0.5 (default)**
- Usage
  - inference_threshold: 0.5
- ui_args: X

***

#### save_anomaly_maps
Choose whether to save XAI image results.

- Argument type: Custom
- Input type
  - bool
- Enterable values
  - **False (default)**
  - True
- Usage
  - save_anomaly_maps: True
- ui_args: X

***  

**VAD Version: 3.0.0**