Version: Next

Registering an AI Solution

Updated 2024.05.05

To register an AI Solution, you select the parameters that users will be able to modify in the EdgeConductor UI from the experimental_plan.yaml file (see Write UI Parameter), and then register the solution with AI Conductor using the register-ai-solution.ipynb Jupyter notebook.


AI Solution Registration Process

In the register-ai-solution.ipynb notebook, perform the following tasks:

  1. (Preliminary task) In the Jupyter notebook, select the ipykernel created for the solution as the notebook kernel (see Install ALO).
    • python -m ipykernel install --user --name {virtual-env-name} --display-name {jupyter-displayed-name}
  2. Write UI Parameters
  3. Write AI Solution Information
  4. Run AI Solution Registration
    • Build the Train Pipeline, the Inference Pipeline, and their dependent Python packages into Docker images
    • Upload the Docker images and sample data to the cloud (AWS ECR and S3)
    • Register the AI Solution in AI Conductor

Writing UI Parameters

UI Parameters are the hyperparameters of the AI Solution that users can change in the EdgeConductor UI. To define UI Parameters, refer to the Write UI Parameter page.

Writing Mellerikat Infrastructure Information

Writing the infrastructure information requires engineering configuration. For details, refer to the Configure Infrastructure page.

Writing AI Solution Information

Write the configuration values related to running the AI Solution as described below.

  • solution_name: The name of the solution.
    • Warning: Uppercase letters, spaces, special characters, and Korean characters are not supported, and up to 50 characters are allowed.  
    • Note: AI Solution names can only contain lowercase English letters, dashes (-), and numbers.
  • inference_only: Indicates whether the AI Solution supports only inference.
    • e.g., supports both training and inference --> False
    • e.g., supports only inference --> True
      • Note: AI Solutions that perform only training are not supported.
  • solution_update: Decide whether to update an existing AI Solution if you know the name of the existing Solution.
    • True: Update the existing solution. Enter the same name as the existing Solution (an error occurs if the existing Solution name does not exist).
    • False: Create a new solution. Enter a different name from the existing name (an error occurs if a Solution with the same name already exists).
  • overview: Write a macro-level description of the solution.
    • Warning: It is a string; up to 500 characters are allowed, as measured by Python's len() function.
  • detail: Write a detailed description of your solution.
    • Warning: It is written as a list of dicts; the title and content fields under detail in the 'solution_info' sample below must each be under 5,000 bytes.
  • contents_type: Settings related to re-training and re-labeling.
    • support_labeling: If True, enables the Label Data feature for the Inference Result of the solution in Edge Conductor.
      • Warning: Currently, Mellerikat only supports the Label Data feature for image data, so the Inference Pipeline of the AI Solution should generate image files in the path provided by the self.asset.get_output_path() API.
    • labeling_column_name: The column name for Manual Labeling in Edge Conductor.
    • inference_result_datatype: Choose one of 'table' or 'image' to indicate how the Inference Result will be displayed in Edge Conductor.
      • Note: The Inference Pipeline of the AI Solution should generate result files corresponding to the inference_result_datatype in the path provided by the self.asset.get_output_path() API.
    • train_datatype: Choose one of 'table' or 'image' to determine the data format used for re-train.
      • Note: There may be cases where the inference result type shown in the EdgeConductor UI is image, but the actual train data type is table.
  • train_gpu: Choose True or False. If True, create a GPU train Docker container.
  • inference_gpu: Choose True or False. If True, create a GPU inference Docker container.
  • inference_arm: Choose True or False. If True, create an inference Docker container for ARM architecture (if False, create it for AMD).
    • Note: ARM builds are supported only when 'REMOTE_BUILD': True is set in the infra configuration file.  
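
As an illustration of how the inference_only, GPU, and ARM flags interact, the sketch below lists which container images a given configuration would imply. The plan_images helper is hypothetical (not part of ALO) and only mirrors the rules described in the bullets above:

```python
def plan_images(inference_only, train_gpu, inference_gpu, inference_arm):
    """Illustrative only: list the (pipeline, device, arch) images implied by the flags."""
    images = []
    # A train image is built unless the solution is inference-only.
    if not inference_only:
        images.append(('train', 'gpu' if train_gpu else 'cpu', 'amd64'))
    # The inference image targets ARM only when inference_arm is True.
    arch = 'arm64' if inference_arm else 'amd64'
    images.append(('inference', 'gpu' if inference_gpu else 'cpu', arch))
    return images

# Supports both pipelines, CPU only, AMD build:
print(plan_images(False, False, False, False))
# [('train', 'cpu', 'amd64'), ('inference', 'cpu', 'amd64')]
```

Note that per the flags' semantics, an ARM build here would also require 'REMOTE_BUILD': True in the infra configuration file.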

Warning: GPU-related features are TBD.

#----------------------------------------#
#        Write AI Solution Spec          #
#----------------------------------------#
solution_info = {
    'solution_name': 'my-solution',
    'inference_only': False,    # True, False
    'solution_update': False,   # True, False
    'overview': "Write an overall description of the AI Solution.",
    'detail': [
        {'title': 'title001', 'content': 'content001'},
        {'title': 'title002', 'content': 'content002'},
    ],
    'contents_type': {
        'support_labeling': False,              # True, False
        'labeling_column_name': 'y_column',     # Data column name to be labeled in Edge Conductor
        'inference_result_datatype': 'table',   # 'table', 'image'
        'train_datatype': 'table',              # 'table', 'image'
    },
    'train_gpu': False,         # True, False
    'inference_gpu': False,     # True, False
    'inference_arm': False,     # True, False
}
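
The naming and length constraints described above can be sanity-checked before registration. A minimal sketch, assuming a hypothetical check_solution_info helper (not part of ALO):

```python
import re

def check_solution_info(info):
    """Illustrative pre-flight checks mirroring the documented constraints."""
    errors = []
    # solution_name: lowercase letters, digits, and dashes only; at most 50 characters
    name = info['solution_name']
    if not re.fullmatch(r'[a-z0-9-]{1,50}', name):
        errors.append(f"invalid solution_name: {name!r}")
    # overview: at most 500 characters, as measured by Python's len()
    if len(info['overview']) > 500:
        errors.append("overview exceeds 500 characters")
    # detail: each title and content must be under 5,000 bytes
    for item in info['detail']:
        for key in ('title', 'content'):
            if len(item[key].encode('utf-8')) >= 5000:
                errors.append(f"detail {key} exceeds 5000 bytes")
    return errors

print(check_solution_info({
    'solution_name': 'My Solution',   # uppercase and space: invalid
    'overview': 'short overview',
    'detail': [{'title': 't', 'content': 'c'}],
}))
# ["invalid solution_name: 'My Solution'"]
```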

Logging into AI Conductor

After writing ./setting/infra_config.yaml under the directory that contains ALO's main.py, enter your login ID and password in the Jupyter notebook and run the cell below to log in.

import sys

# Re-import the register modules to pick up any changes made during development
try:
    del sys.modules['src.solution_register'], sys.modules['src.constants']
except KeyError:
    pass

from src.solution_register import SolutionRegister

## set register instance
infra = "./setting/infra_config.yaml"
register = SolutionRegister(infra_setup=infra, solution_info=solution_info)

## login to AI Conductor (username and password must be set beforehand)
register.login(username, password)
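
Rather than hardcoding credentials in the notebook, one option is to read them from environment variables before calling register.login(). A minimal sketch, assuming the variable names AIC_USERNAME and AIC_PASSWORD (illustrative, not required by ALO):

```python
import os

# Assumption: the AIC_USERNAME / AIC_PASSWORD variable names are illustrative,
# not part of ALO. In a notebook you could also prompt interactively with
# input() and getpass.getpass() instead.
username = os.environ.get("AIC_USERNAME", "")
password = os.environ.get("AIC_PASSWORD", "")
```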

Registering the AI Solution

Build the data, code, and dependent Python packages used to run the AI Solution into a Docker image and upload it to AWS ECR (Elastic Container Registry) and S3. Run the following cell in the Solution Registration Jupyter notebook to proceed with the registration. All steps, including building and pushing the Docker image and uploading sample data to S3, are performed automatically by ALO.

register.run()


Confirming AI Solution Registration in AI Conductor

Access the AI Conductor URL written in the infra configuration file, log in, and check if the AI Solution is registered and the automatically created Solution Instance is present. This Solution Instance is set with the default Computing Resource Spec, and you can adjust the resources to suit the environment in which the AI Solution operates.  
