AI Learning Organizer (ALO) v3
What is AI Learning Organizer (ALO)?
AI Learning Organizer (ALO) is a specialized framework for enhancing and streamlining the development of AI solutions. Developers build assets according to ALO's guide and connect them into a single ML Pipeline for training and inference experiments; the resulting solution can then be containerized through a solution registration Jupyter Notebook to leverage the ample resources of Mellerikat. Once a solution is registered with Mellerikat, multiple users can easily request AI training and deploy the solution to Edge Devices.
Key Features
ALO v3 has been improved in various areas compared to ALO v2 to maximize user convenience and efficiency. Below is an explanation of the major changes and their effects.
Improved Accessibility and Usability
In ALO v2, installation was complicated and version management was difficult because it relied on git clone. ALO v3 introduces installation via pip install, which makes setup very simple and resolves the version management and local folder creation issues. In addition, command-based execution in ALO v2 was not intuitive and inconvenienced users; ALO v3 significantly improves usability with intuitive CLI commands such as alo run and alo template.
Increased User Convenience
The code structure has also been simplified. ALO v2 required a separate, ALO-specific coding style (about 30 APIs), whereas ALO v3 minimizes code changes: ALO is simply added on top of existing modeling code. With ALO-specific syntax gone, the modeling code can be managed as a single .py file, greatly improving usability. In addition, ALO v3 can print logs at only the desired level and provides guidance for about 60 error cases to prevent misuse and help resolve issues quickly.
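As a rough illustration of this point, the sketch below shows what a single-file modeling script might look like when prepared for ALO v3. The file name, function names, and signatures are illustrative assumptions rather than the exact interface ALO expects; what matters is that the functions remain plain Python, with no ALO-specific APIs inside.

```python
# train_inference.py -- a hypothetical single-file modeling script.
# The function names and signatures are assumptions for illustration;
# the bodies contain only ordinary modeling code, no ALO APIs.
import pickle

from sklearn.ensemble import RandomForestClassifier


def train(x, y, model_path="model.pkl"):
    """Fit a model on the given features/labels and save it to disk."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(x, y)
    with open(model_path, "wb") as f:
        pickle.dump(model, f)
    return model


def inference(x, model_path="model.pkl"):
    """Load the previously saved model and return predictions."""
    with open(model_path, "rb") as f:
        model = pickle.load(f)
    return model.predict(x)
```

Because the script is ordinary Python, it can still be run and tested outside ALO, and only lightweight wiring is needed to hand the same functions over to ALO's pipeline.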
Improved Efficiency
The YAML configuration has also been simplified so that it maps directly to the modeling code. Writing YAML files in ALO v2 was complex; in ALO v3 they can be written far more intuitively, significantly improving efficiency. Moreover, because the modeling code and ALO are completely separated, users can focus entirely on writing modeling code.
Enhanced Performance
ALO v2 lacked GPU support, which limited training and inference speeds. ALO v3 supports GPUs for both training and inference, greatly increasing the speed of model training and inference. This allows high-performance models to be trained in a short amount of time and inference results to be produced faster.
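For context, the snippet below is a minimal sketch of GPU-enabled modeling code of the kind that benefits from this support. PyTorch, the model shape, and the dummy batch are assumptions chosen purely for illustration; ALO does not mandate a particular framework, and this is not ALO-specific code.

```python
# Hypothetical GPU-aware training step (PyTorch assumed for illustration only).
import torch
import torch.nn as nn

# Use the GPU when one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch; real modeling code would load actual training data.
x = torch.randn(64, 16, device=device)
y = torch.randint(0, 2, (64,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```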
User Scenario
User scenarios related to ALO are as follows:
- Data scientists install ALO via pip install in environments where they want to develop models, such as personal PCs, servers, or cloud infrastructure.
- Subsequently, they develop AI Solutions on top of ALO, either by leveraging existing AI Contents or without any AI Contents. An AI Solution is a technical unit packaged so that it can train and deploy AI models to solve a specific problem.
- Once an AI Solution has been developed, it is registered with the AI Conductor via the ALO CLI. When an AI Solution is registered with the AI Conductor, a training environment, or Instance, is automatically assigned at the minimum size.
- After registration, ALO's AI Solution Test process is used to confirm that training runs properly on the assigned Instance.
- Once the AI Conductor shows that training completed successfully, registration of the AI Solution is complete. From that point, training can be requested from the Edge Conductor and the solution can be deployed for inference in the Edge App.