Version: docs v25.02

AI Solution Operations

Creating Streams and Requesting Model Training for AI Solutions

To operate a developed AI solution, request model training through Edge Conductor and verify that the training process completes successfully.

Accessing Edge Conductor

Log in to Edge Conductor.

Preparing the Dataset

Prepare a Dataset in Edge Conductor for training the registered AI solution. Once the modeling code is registered in AI Conductor, you can create a Dataset in Edge Conductor and request training to generate a model.

  • Preparation Process: Refer to the Dataset Manual to prepare a Dataset matching the data type (e.g., tabular, image) required by your AI solution.
  • Tip: Ensure the data format and labeling align with the solution’s requirements. Sample datasets are also available for use.
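Before uploading, it can help to sanity-check that a tabular file matches the schema and labeling your solution expects. The column names and label field below are illustrative assumptions, not a fixed Edge Conductor schema; adapt them to your own AI solution's requirements.

```python
import csv
import io

# Hypothetical schema for illustration only; use the columns your
# registered AI solution actually requires.
REQUIRED_COLUMNS = {"feature_1", "feature_2", "label"}

def validate_tabular_dataset(fileobj):
    """Return (ok, message) after basic schema and labeling checks."""
    reader = csv.DictReader(fileobj)
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        return False, f"missing columns: {sorted(missing)}"
    rows = list(reader)
    if not rows:
        return False, "dataset is empty"
    unlabeled = sum(1 for r in rows if not r["label"].strip())
    if unlabeled:
        return False, f"{unlabeled} rows have no label"
    return True, f"{len(rows)} rows look valid"

sample = io.StringIO("feature_1,feature_2,label\n1.0,2.0,cat\n3.0,4.0,dog\n")
ok, msg = validate_tabular_dataset(sample)
print(ok, msg)  # True 2 rows look valid
```

A check like this catches format and labeling mismatches locally, before a training request fails in the Stream.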

Creating Streams and Training the Model

To connect the AI solution in AI Conductor with the Dataset in Edge Conductor, create a Stream. Streams enable you to request model training and deploy the trained model to the Edge.

  • Procedure: Create a Stream, select the prepared Dataset, and initiate model training.
  • Details: Check the Streams Manual for guidance on Stream setup and the training request process.
  • Monitoring: Track the training status in real-time via the Edge Conductor dashboard.
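The procedure above can be sketched as a small client: create a Stream that links a solution to a Dataset, then request training on it. The endpoint paths, payload fields, and response shape here are assumptions for illustration, not Edge Conductor's actual API; consult the Streams Manual for the real interface.

```python
class StreamClient:
    """Hypothetical sketch of the Stream workflow (not the real API)."""

    def __init__(self, base_url, post):
        # `post` is any callable(url, json_body) -> dict, e.g. a thin
        # wrapper around requests.post(...).json().
        self.base_url = base_url
        self.post = post

    def create_stream(self, solution, dataset):
        # Connect the AI solution (AI Conductor) with the Dataset (Edge Conductor).
        return self.post(f"{self.base_url}/streams",
                         {"solution": solution, "dataset": dataset})

    def request_training(self, stream_id):
        # Initiate model training on the Stream.
        return self.post(f"{self.base_url}/streams/{stream_id}/train", {})

# Example with a stub transport (no network, placeholder names):
def fake_post(url, body):
    return {"url": url, "body": body, "status": "accepted"}

client = StreamClient("https://edge-conductor.example.com/api", fake_post)
stream = client.create_stream("defect-detector", "line7-images-v2")
job = client.request_training("stream-123")
print(job["status"])  # accepted
```

Once training is requested, track its status on the Edge Conductor dashboard as described above.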


Edge App Installation and Registration

Edge App is installed in operational environments to receive deployed AI solution models, perform inference, and provide AI services. Models trained in the cloud are deployed to and run in operational settings through Edge App, which executes the Inference Pipeline on new data using the model generated by the Train Pipeline. Try Mellerikat lets you install Edge App on a Windows PC to experience service operation firsthand.

(Figure: Edge App screenshot)

Installation and Registration

Edge SDK

For environments where installing Edge App is challenging, use Edge SDK.

  • Functionality: Edge SDK emulates Edge App behavior, communicating with Edge Conductor via API to deploy models, perform inference, and upload results.
  • Installation and Usage: Refer to the Edge SDK Manual to install and run it in your development environment.
  • Note: Edge SDK runs only in an environment where the AI solution’s modules and code are already set up, so it is best used for validation in the environment where the solution was developed and registered.
    • By contrast, Edge App configures the AI solution’s runtime environment automatically.
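The behavior Edge SDK emulates can be summarized as a three-step loop: pull the deployed model, run the solution's inference code on new data, and upload the result. Every function name below is a hypothetical stand-in, wired with stubs so the sketch runs standalone; see the Edge SDK Manual for the actual API.

```python
# Minimal sketch of what Edge SDK automates, per the bullets above.
# All callables are hypothetical placeholders, not the real Edge SDK API.
def emulate_edge_app(fetch_model, run_inference, upload_result, new_data):
    model = fetch_model()                    # deploy step: pull model via API
    result = run_inference(model, new_data)  # needs the solution's code locally
    upload_result(result)                    # report back to Edge Conductor
    return result

# Stub implementations for demonstration:
out = emulate_edge_app(
    fetch_model=lambda: {"version": 3},
    run_inference=lambda m, d: {"model_version": m["version"],
                                "prediction": "ok", "input": d},
    upload_result=lambda r: None,
    new_data="sample.csv",
)
print(out["prediction"])  # ok
```

The key constraint from the note above is the second step: `run_inference` presumes the solution's modules are importable locally, which is why Edge SDK suits the development environment.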


Inference

Add new data to the Data Input path monitored by Edge App to trigger inference.

  • Verification Methods: Check inference results via Edge Viewer or the Edge Conductor dashboard.
  • Validation Points:
    • Confirm that inference works correctly in the operational environment.
    • Ensure inference results are suitable for operational data.
    • Verify that the model’s inference speed meets operational requirements (e.g., real-time processing).
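Triggering inference, as described above, amounts to placing a file in the watched Data Input path. The paths in this sketch are placeholders demonstrated against a temporary directory; substitute the Data Input path configured for your Edge App installation.

```python
import pathlib
import shutil
import tempfile

def submit_for_inference(src_file, data_input_dir):
    """Copy a new data file into the Data Input path monitored by Edge App."""
    dest_dir = pathlib.Path(data_input_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / pathlib.Path(src_file).name
    shutil.copy(src_file, dest)  # Edge App detects the file and runs inference
    return dest

# Demonstration with temporary placeholder paths:
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "new_batch.csv"
    src.write_text("feature_1,feature_2\n1.0,2.0\n")
    copied = submit_for_inference(src, pathlib.Path(tmp) / "data_input")
    print(copied.name)  # new_batch.csv
```

After the file lands in the watched path, check the result through Edge Viewer or the Edge Conductor dashboard as noted above.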

If issues arise during inference, review the Edge Conductor logs or inspect the Dataset and model settings.