
Experiment

Updated 2025.02.20

Review the Python file that constitutes the ML pipeline and the experimental_plan.yaml once more, then execute the alo CLI command in the current directory to run the experiment.


Running the AI Pipeline

alo run                  # Execute both the train and inference pipelines
alo run --mode train     # Execute only the train pipeline
alo run --mode inference # Execute only the inference pipeline

When the alo run CLI command is executed, it operates on the files and folders in the current directory: the libraries defined in experimental_plan.yaml are installed, the datasets are loaded, the pipeline is executed, and the outputs are saved.
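As the folder layout below shows, the save location for each pipeline's outputs is controlled by artifact_uri under the train: and inference: sections of experimental_plan.yaml. A minimal, hypothetical fragment is sketched here; only the artifact_uri key and its placement under train:/inference: are confirmed by this page, and your actual experimental_plan.yaml will contain additional keys:

```yaml
# Hedged sketch — artifact_uri placement per this page; other keys omitted.
train:
  artifact_uri: train_artifact/         # where train pipeline outputs are saved
inference:
  artifact_uri: inference_artifact/     # where inference pipeline outputs are saved
```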

For the Titanic example provided by the alo example CLI command, the folder structure after a successful alo run will be as follows.

├── experimental_plan.yaml
├── inference_artifact # Path specified by artifact_uri in the inference: section of experimental_plan.yaml. Stores information on the inference pipeline.
│ ├── inference_artifacts.zip # A zip file containing output (result.csv), log (pipeline.log, process.log), and score (inference_summary.yaml).
│ ├── pipeline.log # Execution log of titanic.py.
│ └── process.log # Overall log of the alo execution.
├── inference_dataset
│ └── inference_dataset.csv
├── setting
│ ├── infra_config.yaml
│ ├── sagemaker_config.yaml
│ └── solution_info.yaml
├── titanic.py
├── train_artifact # Path specified by artifact_uri in the train: section of experimental_plan.yaml. Stores information on the train pipeline.
│ ├── model.tar.gz # The trained model, stored as a tar.gz archive.
│ ├── pipeline.log # Execution log of titanic.py.
│ ├── process.log # Overall log of the alo execution.
│ └── train_artifacts.tar.gz # Contains log (pipeline.log, process.log) and score (inference_summary.yaml).
└── train_dataset
└── train.csv
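One quick way to confirm a run produced the expected artifacts is to check for the key output files programmatically. The sketch below is a hypothetical helper, not part of the alo CLI; the paths come from the folder layout above and should be adjusted if you change artifact_uri in experimental_plan.yaml:

```python
from pathlib import Path

# Key outputs of a successful `alo run`, per the folder layout above.
# Paths are relative to the directory where `alo run` was executed.
EXPECTED = [
    "train_artifact/model.tar.gz",
    "train_artifact/train_artifacts.tar.gz",
    "inference_artifact/inference_artifacts.zip",
]

def missing_artifacts(root="."):
    """Return the expected artifact paths that do not exist under root."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).exists()]

if __name__ == "__main__":
    missing = missing_artifacts()
    if missing:
        print("Run incomplete, missing:", ", ".join(missing))
    else:
        print("All expected artifacts found.")
```

An empty return value means both pipelines wrote their main artifacts; a non-empty list names what is still missing.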

If alo run completes successfully, you can proceed to the next step: the AI Solution registration process!