Version: docs v25.02

ALO Release Notes

v3.0.2 (February 26, 2025)

Updates

  • UI Arguments (ui_args) Support
    • Added support for ui_args, which can be used to customize the arguments exposed in the UI (a sketch follows below).
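    As a rough illustration only, a UI-exposed argument could be declared along the following lines. The key names here follow the older ui_args_detail layout shown in the v2.2.0 notes further down this page; the exact v3 schema may differ.

      # Hypothetical sketch: key names follow the v2-era ui_args_detail layout
      ui_args_detail:
        - train_pipeline:
            - step: input
              args:
                - name: x_columns                         # placeholder argument name
                  description: Comma-separated x columns, e.g. x_column1, x_column2
                  type: string
                  default: ''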

Supported version

  • AI Conductor: v2.0.1
  • Edge App: v3.2.0

v3.0.1 (February 19, 2025)

Features

  • Added AloSolutionStreamError
    • Introduced new error classes with codes ALO-SSA-015, ALO-SSA-016, and ALO-SSA-017.
    • These errors handle cleanup issues during solution registration.

Bug Fixed

  • Changed return type in the cleanup method to handle errors more effectively.

Supported version

  • AI Conductor: v2.0.1
  • Edge App: v3.2.0

v3.0.0 (February 12, 2025)

Features

  • Installation via PyPI
    • ALO can now be installed simply using pip install mellerikat-alo, making it more accessible and easier to manage.
  • Introduction of ALO CLI
    • New command-line interface (CLI) for ALO to enhance user interaction and simplify management tasks.
    • Usage examples and documentation are available to guide users through the new interface.
  • Expanded Operational Environment
    • ALO now supports execution in any environment where Python is available.
    • AI Solution registration through ALO is now possible in any environment capable of building Docker images.
  • Removal of ALO API
    • The internal ALO API has been removed to streamline the architecture and simplify maintenance.
  • Improved YAML Structure
    • Enhancements to the YAML configuration structure for better readability and manageability.
    • The new structure is designed to be more intuitive and easier to use, reducing the potential for errors.

Supported version

  • AI Conductor: v2.1.0
  • Edge App: v3.2.0

v2.8.0 (February 12, 2025)

Updates

  • Added support for operating with AI Conductor v2.1.0 on SageMaker.

Supported version

  • AI Conductor: v2.1.0
  • Edge App: v3.4.1

v2.7.0 (December 11, 2024)

Features

  • GPU training Docker images are now supported
    • Usage: set 'train_gpu': True in register-ai-solution.ipynb
    • Version requirements: tensorflow >= 2.14.0, torch >= 2.0.1
  • Added a function for data input/output with GCP's Google Cloud Storage (GCS).

Supported version

  • AI Conductor: v2.0.1
  • Edge App: v3.2.0

v2.6.0 (September 30, 2024)

Features

  • Added 'overview' and 'detail' fields to the 'solution_info' entry in register-ai-solution.ipynb.
  • Deleted the 'solution_type' (private, public) field from the 'solution_info' entry in register-ai-solution.ipynb.
    • Only 'private' is supported from AIC v2.0.0.

Updates

  • Updated the backend to be compatible with the AIC v2.0.0 REST API during the solution registration process.
  • Changed the format to solution metadata v1.2.

Bug Fixed

  • Fixed a bug in the inference process where, after a success followed by a failure, the next successful inference would reset the modified UI args settings back to the original solution values.
  • Fixed a bug where Redis publish messages other than 'booting' were sent during ALO booting while interfacing with the Edge App.

Supported version

  • AI Conductor: v2.0.0
  • Edge App: v3.2.0

v2.5.2 (July 16, 2024)

Updates

  • For the Edge App interface, the Redis publish status is now fixed to 'booting' while in loop-and-boot mode.

Bug Fixed

  • solution_meta.yaml is now created under the ALO project home when registering a solution.
  • Fixed a bug in S3 data download where the sub-directory structure was not maintained.
  • Added defensive code to handle cases where 'args' in experimental_plan.yaml is provided as an empty list.

Supported version

  • AI Conductor: v1.7.0
  • Edge App: v3.1.0

v2.5.1 (June 13, 2024)

Bug Fixed

  • ui_args written as null in experimental_plan.yaml is now skipped (see the sketch below)
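    A minimal sketch of the case this fix covers is shown below; the surrounding user_parameters layout is illustrative and the step and argument names are placeholders.

      # Hypothetical sketch: a step whose ui_args is written as null is now skipped
      user_parameters:
        - train_pipeline:
            - step: input
              args:
                - x_columns: ''       # placeholder argument
              ui_args:                # left as null -> no longer causes an error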

Supported version

  • AI Conductor: v1.7.0
  • Edge App: v3.1.0

v2.5.0 (June 06, 2024)  

Features

  • Starting with Edge App v3.1.0, the Edge App is only compatible with ALO v2.5.0 and later versions.
  • Supports the --index-url option for installing asset dependency packages (see the sketch below).
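    A rough sketch of where the option could be written is shown below. The surrounding asset_source layout, the URL, and the package name are illustrative assumptions, not the exact schema.

      # Hypothetical sketch: a requirements list that pins a custom package index
      asset_source:
        - train_pipeline:
            - step: input
              source:
                code: local                                       # placeholder
                requirements:
                  - pandas==1.5.3                                 # placeholder package
                  - --index-url https://pypi.example.com/simple   # custom index passed to pip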

Updates

  • To optimize storage, backups for history and solution registration now exclude each asset's .git directory (ALO's own .git is still kept)
  • The following functions have been specced out of the solution registration notebook:
    • run train
    • list solution & stream

Bug Fixed

  • Fixed an issue in operational loop mode where the solution metadata would be overwritten with ALO's default experimental_plan.yaml every time; it now correctly overwrites the experimental plan kept in memory

  • Fixed an issue in operational loop mode where the sequence of Redis operations and the creation of inference artifact files would get entangled, leading to ALO errors and subsequent malfunctions with the Edge App

    • The fail message is now sent to Redis after the error artifact zip file is created
    • Error backup history is skipped
    • Fixed a save_artifacts() logic error (for Redis publish)
  • Resolved a bug in the backup history size limit function

    • Fixed an error caused by the history/train directory not being created for single-pipeline runs
    • Improved directory size measurement accuracy by using the Linux du -sb command via a Python subprocess

Supported version

  • AI Conductor: v1.7.0
  • Edge App: v3.1.0

v2.4.0 (May 16, 2024)  

Features

  • Support for running the inference pipeline in SageMaker mode
    • Pass the options together when executing main.py, e.g., --mode inference --computing sagemaker
  • Two keys added to the control section of experimental_plan.yaml
    • save_inference_format: determines the compression file format of inference artifacts stored in save_inference_artifacts_path under external_path (supports tar.gz, zip)
    • check_resource: determines whether to display logs about resource usage such as CPU and memory for each asset (supports True, False)
  • Default values for the control section of experimental_plan.yaml
    • If a key/value pair is omitted from the control section, its default value is applied (see the sketch after this list):
      • get_asset_source: once
      • backup_artifacts: True
      • backup_log: True
      • backup_size: 1000
      • interface_mode: memory
      • save_inference_format: tar.gz
      • check_resource: False
  • Addition of a Redis publish function
    • Added a function to publish ALO's status/fail messages to Redis (used when running with the Edge App)
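    For reference, a control section that relies entirely on the defaults listed above would look roughly like the following. The single-key list layout mirrors the other experimental_plan.yaml excerpts on this page and is shown for illustration only; the values are taken from the list above.

      # Defaults applied when the corresponding keys are omitted from control
      control:
        - get_asset_source: once
        - backup_artifacts: True
        - backup_log: True
        - backup_size: 1000
        - interface_mode: memory
        - save_inference_format: tar.gz
        - check_resource: False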

Updates

  • Removal of the REMOTE_BUILD key from setting/infra_config.yaml
    • Replaced by setting the value of BUILD_METHOD to codebuild
  • Expanded behavior of inference_only=True in solution_info of register-ai-solution.ipynb
    • Single-pipeline support: a solution is treated as a single pipeline unless train_pipeline exists in both user_parameters and asset_source of experimental_plan.yaml

Bug Fixed

  • Fixed incorrect behavior when deleting data that already exists in S3 during solution registration

Supported version

  • AI Conductor: v1.7.0

v2.3.1 (Mar 22, 2024)    

Features

  • Added AWS CodeBuild infrastructure support to establish a stable Docker build and ECR deployment environment. However, it is not available for internal use.
    • It is available for use outside the company and supports AMD-to-ARM Docker cross-builds.
    • During an AI solution version update, most resources within the Docker image are cached, excluding pip packages.

Updates

  • The log formatting system has been updated to follow the Mellerikat structure.
  • After training or inference completes, 'process.log' and 'pipeline.log' are additionally stored in the artifacts path.

Bug Fixed

  • Resolved an issue where the Edge App failed to generate a pipeline ID and forcibly raised an error when booting the inference Docker.

Supported version

  • AI Conductor: v1.7.0

v2.3.0 (Mar 12, 2024)

Updates

  • If an experimental plan does not exist in the solution, the function that copies it into the solution folder is no longer executed; the plan is instead loaded from its current location.
  • Removed the get_external_data function from experimental_plan.yaml
  • Support for contents version & name in experimental_plan.yaml (see the sketch below)
  • The ui_args_detail attribute is now mandatory in experimental_plan.yaml
  • When creating a Docker image, logs are now written to a log file instead of being displayed in the terminal
  • Logs for failed training attempts can now be retrieved
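    A minimal sketch of how the contents name and version might be declared is shown below; the top-level placement and the values are assumptions for illustration only.

      # Hypothetical sketch: contents name & version in experimental_plan.yaml
      name: my-ai-contents      # placeholder name
      version: 1.0.0            # placeholder version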

Supported version

  • AI Conductor: v1.6.0

v2.2.0 (Feb 15, 2024)

Features

  • External Model Loading: Specify a path in load_model_path, including S3 locations, to load models externally. Make sure to grant access permission through an access key file in external_path_permission. (A sketch follows the YAML example below.)

  • Customizable UI for User Parameters: You can now define and describe user parameters to be displayed in the UI. The ui_args_detail section allows you to dictate how arguments will appear.

    See an example below for defining UI presentation methods:

    external_path:
      - load_train_data_path: ./solution/sample_data/train/
      - load_inference_data_path: ./solution/sample_data/test/
      - save_train_artifacts_path:
      - save_inference_artifacts_path:
      - load_model_path:
    external_path_permission:
      - s3_private_key_file:
    ...
    ui_args_detail:
      - train_pipeline:
          - step: input
            args:
              - name: x_columns
                description: Enter the x columns for TCR modeling separated by commas, e.g., x_column1, x_column2
                type: string
                default: ''
                range:
                  - 1
                  - 1000000
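    For instance, an externally stored model on S3 could be referenced roughly as follows; the bucket path and the key-file location are placeholders for illustration.

      # Illustrative only: replace the S3 path and key file with your own
      external_path:
        - load_model_path: s3://my-bucket/models/latest/
      external_path_permission:
        - s3_private_key_file: ./aws_access_key.csv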

Improved

  • Unified Installation Method: We've standardized the installation process for AI Contents. It's now consistent across the board, making setup a breeze. (Refer to the "Develop AI Solution" manual for guidance)

  • Streamlined AI Solution Registration: It's now easier to register your AI Solutions. Just follow the steps in the register-ai-solution.ipynb for a quick setup.

  • Clutter-Free Environment: We've removed non-essential files to declutter and improve readability, creating a more focused workspace.

    The updated file structure:

    ├── main.py
    ├── makefile
    ├── README.md
    ├── register-ai-solution.ipynb
    ├── requirements.txt
    ├── setting
    │   ├── example_infra_config
    │   │   ├── infra_config.customer.yaml
    │   │   ├── infra_config.localtest.yaml
    │   │   ├── infra_config.magna.yaml
    │   │   └── infra_config.meerkat.yaml
    │   ├── infra_config.yaml
    │   └── sagemaker_config.yaml
    └── src
  • Version Syncing: If the versions of ALO and alo-library misalign, we now automatically install the matching version to prevent any conflicts.

    Assets will import from a new folder named ./alolib within the ALO installation directory.

  • Mandatory Inference Outputs: The inference pipeline will now raise an error if essential artifacts (inference summary & output) are not generated, ensuring reliability.

  • Deprecated Features Handling: Running AI Contents created with the previous experimental_plan.yaml will trigger an error to promote the use of updated formats.


v2.1.0 (Nov 22, 2023)

Features

  • Recorded selectable user parameter values in solution metadata for rule checking during publishing.
  • Created a Jupyter notebook to generate solution_description.yaml for solution creators.
  • Tested the TCR registration process on AI Conductor.
  • Concretized ALO's operation method against solution_metadata.yaml ver.6 for AI Conductor.
  • Added external_model_path to experimental_plan.yaml and solution_metadata.yaml.
  • Exit after running only asset init when there is no load_data_path in external_path.
  • Added an always-on mode option in main.py and a boot sequence in EdgeApp to reduce takt time.
  • Input dataset_uri into external_path of experimental_plan.yaml when building containers.
  • Added environment variable controls in alo.py to select train/inference/all.
  • Verified and added support for running the inference pipeline from an AI Conductor model.
  • Relocate & compress artifacts for ai-advisor when solution_metadata is provided as input.
  • Execute wrangler code before the pipeline starts.

Bug Fixed

  • Fixed an issue where log files were not being backed up to .history when errors occurred during a step run.
  • Added logic to reset artifacts folder if train/inference pipelines exist in experimental_plan.yaml.
  • Enabled empty string input for --config in main.py.
  • Added Edge Conductor-related keys to solution_metadata.
  • Refactored alolib's utils.py into logger and asset.py.

v1.1.0 (Oct 25, 2023)

Features

  • Expanded data processing capabilities

    • Support for input asset read_csv encoding
    • Developed a read_parquet function for the input asset
    • Support for absolute paths other than load_train/inference_data_path
    • Support for concat and split modes in the user parameters of the input asset
    • Pass train/inference artifacts externally (to NAS, S3)
  • Efforts to make the code robust

    • An error now occurs if step names in experimental_plan's user_parameters and asset_source do not match
    • An error now occurs if output_data (dict) and config are not included in the slave's return
    • Added save_log()/warning()/error() functions in the slave so Contents creators can write log statements
    • Execute load_external_inference_data after the train pipeline ends
  • Expanding the scope of processable Contents

    • Create MNIST experimental_plan.yaml sample for Vision Contents production
    • Create unstructured Input Asset for IC support

Bug Fixed

  • Do not save inference_artifacts from the train_pipeline in .history (the opposite case is likewise deleted)

v1.0.0 (Oct 16, 2023)

Features

  • The official version of ALO is now easier to install, more user-friendly, and faster than before.
  • Since ALO imports data from external sources into its installed environment for execution, data processing speed and reliability have improved. It is also now easier to check which data has been used for training and inference.
  • With gitlab, you can bring assets scattered in various places into the working environment, and you can also develop new assets within the workspace.
  • If you just write the packages used in an asset to experimental_plan.yaml, they are automatically installed in the virtual environment. Moreover, since you can check the packages of all assets in a single file, you can easily spot dependency errors even if you install additional packages.
  • The results of training and inference are stored in a folder called "artifacts". Results from previous pipelines are also stored in the .history folder, making it possible to compare experiment results at any time.

Internal

  • Supports v1.0.0 algorithms (TCR, GCR, and Biz-forecasting)

Updated

  • Image Classification and PAD are planned to be supported.