Deploy Model for Inference
Model Deployment
Streams → Actions → Deploy
In the Stream menu of Edge Conductor, you can deploy the AI Model obtained from a model training request to an Edge. The deployed AI Model performs inference together with the Inference Pipeline and outputs the results. When an AI Model is deployed, the AI Solution information related to that model is deployed with it. The Edge App installed on the Edge runs the Inference Pipeline using the deployed AI Model, so the AI Model must always match the version of the Inference Pipeline. For example, if an AI Solution has Version 1 and Version 2 and the Edge is running Version 1 of the Inference Pipeline, deploying an AI Model built with Version 2 also deploys Version 2 of the Inference Pipeline to the Edge.
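This version-matching behavior can be pictured with a minimal sketch. The classes and the deploy function below are illustrative assumptions only (they are not part of Edge Conductor); the point is simply that deploying a model always carries the Inference Pipeline of the same AI Solution version.

```python
# Minimal sketch (hypothetical data model, not the Edge Conductor API):
# deploying a model always brings the Inference Pipeline of the same
# AI Solution version, so model and pipeline versions never diverge.
from dataclasses import dataclass

@dataclass
class AIModel:
    solution_version: int        # AI Solution version the model was trained with

@dataclass
class Edge:
    pipeline_version: int | None = None   # Inference Pipeline currently installed
    model: AIModel | None = None

def deploy(edge: Edge, model: AIModel) -> None:
    # The pipeline matching the model's AI Solution version is deployed
    # together with the model, replacing any older pipeline on the Edge.
    edge.model = model
    edge.pipeline_version = model.solution_version

edge = Edge(pipeline_version=1)             # Edge currently runs pipeline v1
deploy(edge, AIModel(solution_version=2))   # deploying a v2 model...
assert edge.pipeline_version == 2           # ...also brings pipeline v2
```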
To deploy an AI Model:
- Navigate to the Edge Conductor and log in.
- In the left navigation pane, choose Stream.
- Choose the stream whose model you want to deploy. The stream status should be ready to deploy.
- Press the deploy button (rocket icon) in the stream status pane, or click Actions in the upper-right corner and select deploy model.
- Choose the edges where you want to deploy the model.
- Press Deploy.
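Deployment is performed through the Edge Conductor UI as described above. For teams that script the same flow, the sketch below shows what an equivalent call might look like. The base URL, endpoint path, payload fields, and token are hypothetical assumptions for illustration, not a documented API.

```python
# Hypothetical sketch only: endpoint path, payload fields, and auth header
# are illustrative assumptions, not the documented Edge Conductor API.
import requests

BASE_URL = "https://edge-conductor.example.com/api"   # assumed base URL
TOKEN = "..."                                          # assumed access token

def deploy_model(stream_id: str, edge_ids: list[str]) -> None:
    """Mirror the UI flow: pick a stream, pick the target Edges, press Deploy."""
    resp = requests.post(
        f"{BASE_URL}/streams/{stream_id}/deploy",       # assumed endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"edge_ids": edge_ids},
        timeout=30,
    )
    resp.raise_for_status()

deploy_model("stream-123", ["edge-01", "edge-02"])
```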
Version of AI Model
Monitoring and continuously improving the AI Model to maintain its performance is one of Mellerikat's core functions, and AI Model versioning tracks this improvement process. The AI Model version follows this rule: v{AI Solution Version}.{Number of trainings in the Stream}. For example, if the AI Solution connected to the Stream is version 3 and the AI Model is the fourth one trained in that Stream, its version is v3.4. This rule shows which AI Solution version the AI Model belongs to and how many times a model has been trained for that version, indicating how the currently operating AI Model has been improved.
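The rule is simple enough to express directly. The helper below is only an illustration of the stated format, not a function provided by Mellerikat.

```python
def model_version(solution_version: int, training_count: int) -> str:
    """Build the AI Model version string: v{AI Solution Version}.{trainings in the Stream}."""
    return f"v{solution_version}.{training_count}"

# The fourth model trained under AI Solution version 3:
assert model_version(3, 4) == "v3.4"
```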
Status of Model Deployment
The Stream status shows the deployment state of the model:
- Deploying: The model deployment request has been sent to the Edge. The Edge is ready to receive and deploy the model.
- Deploy Failure: There was an issue during the deployment process on the Edge.
- Deployed: The deployment was successful, and the Edge is ready to perform inference.
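When deployment is automated, you would typically poll until the Stream leaves the Deploying state. The sketch below reuses the same hypothetical client as above; the endpoint and response field are assumptions, but the terminal statuses mirror the list above.

```python
# Hypothetical polling sketch; the status endpoint and field name are
# assumptions, while the status values mirror the ones listed above.
import time
import requests

BASE_URL = "https://edge-conductor.example.com/api"   # assumed base URL
TOKEN = "..."                                          # assumed access token

def wait_for_deployment(stream_id: str, interval_s: int = 10) -> str:
    """Poll the stream until deployment reaches a terminal status."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/streams/{stream_id}",          # assumed endpoint
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        status = resp.json()["deploy_status"]           # assumed field name
        if status in ("Deployed", "Deploy Failure"):    # terminal states
            return status
        time.sleep(interval_s)                          # still "Deploying"
```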
Re-deployment
If the newly deployed AI Model performs worse than the previous one, you can re-deploy a model used in the past. The Stream in Edge Conductor keeps a history of previously trained AI Models, allowing users to re-deploy an older model. The corresponding Inference Pipeline is deployed along with the AI Model.
To re-deploy a model that was trained in the past:
- Navigate to the Edge Conductor and log in.
- In the left navigation pane, choose Stream.
- Choose the stream whose model you want to re-deploy.
- Navigate to the Model history tab in the Stream details pane at the bottom.
- Choose the model version to deploy and press the deploy button (rocket icon).
- Choose the edges where you want to deploy the model.
- Press Deploy.
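As with the first deployment, re-deployment can be scripted against a hypothetical client. The model-history endpoint and field names below are assumptions for illustration, but the flow mirrors the UI steps above: look up the desired version in the Stream's model history, then deploy it (the matching Inference Pipeline goes with it).

```python
# Hypothetical sketch: list the model history and re-deploy an older version.
# Endpoints and field names are assumptions, not a documented API.
import requests

BASE_URL = "https://edge-conductor.example.com/api"   # assumed base URL
TOKEN = "..."                                          # assumed access token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def redeploy(stream_id: str, version: str, edge_ids: list[str]) -> None:
    # Find the requested version in the stream's model history...
    history = requests.get(
        f"{BASE_URL}/streams/{stream_id}/models",       # assumed endpoint
        headers=HEADERS, timeout=30,
    ).json()
    model = next(m for m in history if m["version"] == version)  # assumed fields
    # ...then deploy it to the selected Edges; the matching pipeline is included.
    requests.post(
        f"{BASE_URL}/streams/{stream_id}/deploy",       # assumed endpoint
        headers=HEADERS,
        json={"model_id": model["id"], "edge_ids": edge_ids},
        timeout=30,
    ).raise_for_status()

redeploy("stream-123", "v3.2", ["edge-01"])
```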