
Quick Run

Updated 2024.06.19

This page explains how to use Edge Conductor to deploy an AI model to an edge device and perform inference. It assumes that Edge App is already installed on the target edge device.

Table of Contents

  1. Register Edge
  2. Create a New Stream
  3. Train and Deploy AI Model
  4. Inference
  5. Build Dataset from Inference Results


Step 1: Register Edge

For detailed instructions on registering an edge, refer to Manage Edges.

"Edges" → "+New Edge"

Verify the serial number, IP address, and MAC address of the Edge App you want to register, then enter the Name, Location, Description, and Tags to complete the registration. Once registered, the Edge Status shows "Connected" and the Inference Status shows "No Stream", indicating that the edge is ready to receive models and solutions for inference.
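
If you want to check the IP and MAC address directly on the edge device, the minimal sketch below reads them with the Python standard library. It assumes a host with a configured default route; the serial number comes from your Edge App installation and is not covered here.

```python
import socket
import uuid

# Primary IP address: connect a UDP socket toward a public address so the OS
# selects the outbound interface. No packets are actually sent.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.connect(("8.8.8.8", 80))
    ip_address = s.getsockname()[0]

# MAC address of the primary interface, formatted as aa:bb:cc:dd:ee:ff.
node = uuid.getnode()
mac_address = ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -1, -8))

print(f"IP address:  {ip_address}")
print(f"MAC address: {mac_address}")
```

Compare these values with what the registration form expects before completing it.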



Step 2: Create a New Stream

For more information on stream creation, refer to Manage Streams.

Step 2-1: Select AI Solution Instance

"Streams" → "+New Stream"

When creating a stream, select from available AI Solution instances provided by AI Conductor. An instance name includes the AI Solution name and version. You can create multiple streams from a single instance. After selecting the AI Solution and version, proceed to the next step.

Step 2-2: Configure Parameters and Create Stream

Set the desired parameters for the selected instance and create the stream. This step lets you tailor the selected AI Solution version with stream-specific settings.
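
The available parameters are defined by the selected AI Solution, so the snippet below is only a hypothetical illustration of the kind of key–value settings involved; every name and value in it is made up, and the real settings are entered in the Edge Conductor UI.

```python
# Hypothetical stream parameters -- every name and value here is invented for
# illustration; the real settings come from the selected AI Solution instance
# and are configured in the Edge Conductor UI, not in code.
stream_params = {
    "inference_batch_size": 8,     # hypothetical: inputs processed per inference run
    "confidence_threshold": 0.75,  # hypothetical: minimum score to keep a prediction
    "save_raw_output": True,       # hypothetical: keep raw model output with the results
}

for name, value in stream_params.items():
    print(f"{name} = {value}")
```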



Step 3: Train and Deploy AI Model

Step 3-1: Prepare Training Dataset

Training the AI model requires a training dataset. You can either use a sample dataset included with the AI Solution or select a dataset previously created in Edge Conductor. When creating a dataset, make sure the AI Solution matches the one selected when creating the stream. For details, see Dataset Guide.
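
The expected dataset format depends on the AI Solution (see Dataset Guide). If your training files are collected in a local folder, a minimal sketch like the following, using only the Python standard library, is one way to bundle them into a single archive before creating the dataset; both paths are placeholders, and whether an archive is needed at all depends on the solution.

```python
import shutil
from pathlib import Path

# Placeholder paths -- point these at wherever your training files actually live.
source_dir = Path("./train_data")            # folder containing the training files
archive_base = Path("./upload/train_data")   # written as ./upload/train_data.zip

archive_base.parent.mkdir(parents=True, exist_ok=True)

# Bundle the folder into a single zip archive to upload when creating the dataset.
archive_path = shutil.make_archive(str(archive_base), "zip", root_dir=source_dir)
print(f"Created {archive_path}")
```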

Step 3-2: Select Dataset and Request Training

Once the dataset is ready, select a stream to request model training. You can do this by clicking the "Train" button (network icon) in the stream’s Status column, or by selecting the stream and choosing "Actions > Train Model" from the top right.

In the popup, select the dataset and click "Train" to begin training. To use the sample dataset bundled with the AI Solution, choose "Start with Sample Dataset" and click "Train". Once training begins, the stream’s Status will update to "Training".

For more details, refer to Request Model Training.

Step 3-3: Select Edge and Deploy Model

Once training is complete, the stream’s Status will update to "Ready to deploy". To deploy the trained model, click the "Rocket" button in the Status column or select "Actions > Deployment" from the top right.

In the popup, select the edge device to deploy to and click "Deploy". Once deployment is complete, the status will change to "Deployed" and the system will be ready for inference.



Step 4: Inference

Step 4-1: Input Inference Data

Inference on the Edge App begins as soon as data is placed in its input path. The input path is defined during installation and corresponds to the "Input" field on the Edge Detail page in Edge Conductor. When data is placed in this path, inference starts automatically and the Inference Status on the Edges page changes to "Inferencing".
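
As a minimal sketch, the script below copies a new data file into the Edge App's input path to trigger inference; both paths are placeholders, and the real input path is the value shown in the "Input" field on the Edge Detail page.

```python
import shutil
from pathlib import Path

# Placeholder paths -- replace with the Edge App's actual input path (the
# "Input" field on the Edge Detail page) and your own data file.
input_path = Path("/edge_app/input")
new_data = Path("./samples/sample_001.csv")

# Copying a file into the input path is what triggers inference on the Edge App.
shutil.copy2(new_data, input_path / new_data.name)
print(f"Placed {new_data.name} into {input_path}; inference should start automatically.")
```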

Step 4-2: View Inference Results

Once inference is complete, the results are sent from the edge to Edge Conductor. The Inference Status changes from "Inferencing" to "Ready", and the results appear under the "Inf. Result" tab in the Edge’s detail view. The inference artifacts are also saved to the Edge App’s output path. If inference fails, a notification will appear. You can download artifacts from the output path and check logs as needed.
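
If you prefer to pick up artifacts directly on the edge device rather than through the UI, a simple polling loop over the output path is one option. The sketch below assumes a placeholder path and simply reports files written after it starts watching.

```python
import time
from pathlib import Path

# Placeholder -- replace with the Edge App's actual output path.
output_path = Path("/edge_app/output")

started = time.time()
seen = set()

# Check once per second for 30 seconds and report any artifact written after
# we started watching.
for _ in range(30):
    for artifact in output_path.glob("*"):
        if artifact not in seen and artifact.stat().st_mtime >= started:
            seen.add(artifact)
            print(f"New inference artifact: {artifact}")
    time.sleep(1)
```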



Step 5: Build Dataset from Inference Results

When new patterns appear in data, previously trained AI models may no longer perform well. To address this, you can reuse inference data to train a new AI model.

Step 5-1: Collect Inference Results

Each time inference is performed, results are sent to Edge Conductor. You can view them by selecting an edge on the "Edges" page and checking the "Inf. Result" tab.

Step 5-2: Create Dataset from Collected Data

Inference results collected from each edge can be used to create new datasets. When creating a dataset, select "Edge" as the data source to use inference results from individual edge devices. The AI Solution used in dataset creation must match the one used by the deployed stream.

For more, refer to Dataset Guide.

If the AI Solution supports re-labeling, you can improve dataset quality by editing labels within the dataset. For details, see Labeling Data.