
Using ALO-LLM Framework

Updated 2025.08.26

This document guides you through developing a service API in your local environment using the ALO-LLM framework.

Developing an API in a Local Environment

ALO-LLM is a development framework that helps users convert their logic code into FastAPI format, register it with LLMOps, and automatically build and operate the infrastructure environment. LLM logic developers can handle the entire process of Service API development, registration, and operation easily and efficiently, without complex configuration.

Prerequisites

  • It is recommended to use Python 3.12.x.
  • It is recommended to use VS Code. If you use MobaXterm, errors may occur when Korean is used in code and comments.

LLM logic developers can develop a Service API in a local environment by following these two steps.

Step 1: Write LLM Logic Code

Write environment variables in a .env file and read them using load_dotenv. The .env file is necessary because its contents are registered as environment variables in the pod where the AI Pack runs when the service is registered with LLMOps. Specify types when defining functions, use LangChain syntax to write the code, and structure the return value as a dict.

  • If you are using the o11y package, you must use load_env provided by o11y instead of python-dotenv.

Step 2: Convert LLM Logic Code to a Service API

Install ALO-LLM. Create a config.yaml file to define the necessary libraries and API mappings. Then, use the alm api command to convert the logic code to FastAPI.


Step 1: Write LLM logic code

Constraints

  • If the logic code is named main.py or alm.py, or if the function is named main or alm, an error and guide document will be displayed, so please avoid these names.
  • The names of the data loaded in the logic code must be in English.

Please follow the three rules below when writing LLM logic code. Also, be sure to check that your LLM logic code works correctly before proceeding to the next step.

Rule 1: Write all key values to be used (LLM model key, DB endpoint, etc.) in a .env file in the current path, and apply them in the logic code using the load_dotenv and os.getenv functions.

Rule 2: When defining functions in the logic code, you must define the types of the function's arguments.

Rule 3: The variables defined in .env (GPT key, o11y address, DB URI, etc.) are loaded and used as environment variables in each piece of logic code.

####### .env #######
### The values below are referenced by the logic code; replace the examples with your own. ###
# observability
LANGFUSE_SECRET_KEY="xxxxxxxxxxxxxxxx"
LANGFUSE_PUBLIC_KEY="xxxxxxxxxxxxxxxx"
LANGFUSE_HOST="http://langfuse.lge.com"
# LLM model
OPENAI_API_TYPE='azure'
AZURE_OPENAI_API_KEY='xxxxxxxxxxxxxxxxxxxx'
AZURE_OPENAI_ENDPOINT='https://dev.dxengws.apim.lgedx.biz/shared-1000'
AZURE_OPENAI_EMBEDDING='https://dev.dxengws.apim.lgedx.biz/shared-embedding'
OPENAI_API_VERSION='2023-05-15'
OPENAI_MODEL_ID='gpt-4o-2024-08-06'
# DB
DATABASE_LOADER_URI=<DB_URI>
DATABASE_LOADER_TYPE=<DB_name>
# VECTOR DB
VECTOR_LOADER_URI=<DB_URI>
VECTOR_LOADER_TYPE=<DB_name>
  • You can also add other variables under any convenient names.
  • Once the file is created, read the values in the logic code with os.getenv.
import os
from langchain_openai import ChatOpenAI, AzureChatOpenAI
from langchain.prompts import ChatPromptTemplate
from dotenv import load_dotenv

load_dotenv()

# Model settings read from the environment variables defined in .env
OPENAI_API_TYPE = os.getenv("OPENAI_API_TYPE", "openai").lower()
AZURE_OPENAI_API_VERSION = os.getenv("OPENAI_API_VERSION")
OPENAI_MODEL_ID = os.getenv("OPENAI_MODEL_ID")

def get_openai_model() -> ChatOpenAI:
    if OPENAI_API_TYPE == "azure":
        return AzureChatOpenAI(model=OPENAI_MODEL_ID, api_version=AZURE_OPENAI_API_VERSION)
    else:
        return ChatOpenAI(model=OPENAI_MODEL_ID)

def ask_chat(topic: str) -> dict:
    model = get_openai_model()
    # Leave {topic} as a template variable so chain.invoke can fill it in
    prompt_template = ChatPromptTemplate.from_template("Tell me a joke about {topic}.")
    chain = prompt_template | model
    response = chain.invoke({"topic": topic})
    return {
        'response': response.content
    }
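
Before moving on to Step 2, check that the logic runs end to end. A minimal sketch, assuming the .env above is in place and the model endpoint is reachable:

# Quick local check of the logic code before converting it to an API
if __name__ == "__main__":
    result = ask_chat("penguins")
    print(result["response"])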

Tip

If you want to use list or dict as the input type for a function, you can do so as follows:

def basic_analysis(question: list[str], answer: dict[str, int]) -> dict:
    ...

And the config.yaml file should be written with dict and list as follows:

...
service_api:
  path:
    /api/basic_analysis:   # api path
      POST:                # method
        handler: rest_index_api.basic_analysis   # (python file).(function)
        parameter:         # args
          question: list
          answer: dict

Tip

If you want to send the request as a body, you can use it as follows:

from pydantic import BaseModel

class Item(BaseModel):
    name: str
    description: str | None = None
    price: float
    tax: float | None = None

def create_item(item: Item):
    print(item)
    return item

Use object as the value for the parameter.

...
service_api:
  path:
    /api/basic_analysis:   # api path
      POST:                # method
        handler: test.create_item   # (python file).(function)
        parameter:         # args
          item: object
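
Once the service is running, such an endpoint receives the model as a JSON body. A minimal client sketch (the URL is hypothetical; use the address printed by alm api, and check the generated Swagger page for the exact body shape):

import requests

# Hypothetical local URL; substitute the address printed when the service starts
url = "http://127.0.0.1:1333/api/basic_analysis"
body = {"name": "pen", "description": "blue ink", "price": 1.5, "tax": 0.1}
resp = requests.post(url, json=body)
print(resp.status_code, resp.json())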

Step 2: Convert LLM Logic Code to a Service API

User-created files for running ALO-LLM

After installing the ALO-LLM package, the following files must exist in the working path during logic development.

  1. config.yaml: This is the configuration for how to construct the Service API.
  2. model.py (filename is flexible): This is the file where the LLM logic is implemented at the function level.
  3. .env: This is the environment variable setting file for using Azure OpenAI.
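
For reference, a minimal working directory for the simple_chatbot example used below might look like this (chat.py is the flexibly named logic file):

working_dir/
├── config.yaml   # Service API configuration
├── chat.py       # LLM logic implemented at the function level
└── .env          # environment variables (keys, endpoints)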

The Service API is built on the FastAPI framework. The input format of each API must be specified, and the URI must be clearly defined; ALO-LLM automatically generates the API based on these definitions.

2-1. Install ALO-LLM with the pip command

  • Install with the pip install command:

pip install mellerikat-alm

Caution

When registering via the ALO-LLM CLI, all folders and files in the current location are compressed and registered.

2-2. Create config.yaml


  • Create the yaml based on your logic code.
name: simple_chatbot   # Service API name
version: 1.0.0         # Service API version
entry_point: alm api   # command executed to run the service
overview: A framework that easily converts Python code to FastAPI and deploys it to a production environment.
description:
  Agent: ALO-LLM
  Developer: C, Y
  Documents: http://collab.lge.com/main/pages/viewpage.action?pageId=3035388304
  codes: http://mod.lge.com/hub/llmops-aibigdata/llo/llo-dev/-/tree/v1.0.2?ref_type=heads
  version: v1.2.0
  etc: etc..
setting:               # library settings
  pip:
    # requirements: True   # True if requirements.txt exists
    requirements:
      - python-dotenv
      - pandas
      # ...
ai_logic_deployer_url: "https://ald.llm-dev.try-mellerikat.com"  # AI Logic Deployer address
components:
  local_host:
    port: 1333
service_api:           # mapping of the logic code to APIs
  path:
    /api/ask_chat:     # api path
      POST:            # method
        handler: chat.ask_chat   # (python file).(function)
        parameter:     # args
          topic: str
  • name: This field specifies the name of the Service API. Here it is named simple_chatbot.
  • version: Sets the version of the Service API. The current version is 1.0.0.
  • entry_point: The command the user will execute. If you are using ALO-LLM, use 'alm api'; otherwise you can use a custom command.
  • overview: A general description of the Service API.
  • description: A detailed description of the registered Service API, written in key: value format. The top four keys (Agent, Developer, Documents, codes) are required; the rest can be added or deleted by the user.
  • setting: Settings for service configuration.
      • pip: Specifies the list of libraries to install.
          • requirements: Lists the necessary libraries.
  • ai_logic_deployer_url: The address of the AI Logic Deployer used for registration and deployment.
  • components: Defines the port used for the experiment.
      • local_host: Settings for the local experiment environment.
          • port: Port for testing in the local environment. If the set port is unavailable, a random one is assigned. (Specify a value greater than 1024.)
  • service_api: Settings for mapping the written logic code to an API.
      • path: Defines a specific API path and method.
          • /api/ask_chat: The defined API path.
          • POST: The HTTP method. Use GET or POST.
          • handler: Follows the (python file).(function) format. Here it is specified as chat.ask_chat.
          • parameter: Sets the arguments and input types of the function.
              • topic: A string-type argument.
          • ...

2-3. Convert the written Logic code to FastAPI format


  • Execute the CLI command to convert the written logic code to FastAPI.
alm api
  • If executed successfully, the Local URL will be displayed.

2-4. Check if the Service works correctly in the Local environment

Caution

Instead of the 0.0.0.0 address, please access it with the actual server address.
ex) http://{host}:{port}/docs/api/v1 → http://10.158.2.106:8758/docs/api/v1

  • Check for normal operation through Swagger. (Add /docs/api/v1 to the server address written above. → ex: http://10.158.2.106:8758/docs/api/v1)
  • (Optional) To check the observability of the LLM used, connect to Langfuse; a sketch follows below.
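
For the Langfuse option, the LANGFUSE_* keys already defined in .env can be picked up by Langfuse's LangChain callback. A minimal sketch, assuming the langfuse package (v2 Python SDK) is installed and reusing get_openai_model from Step 1; this integration is separate from ALO-LLM itself:

from langfuse.callback import CallbackHandler
from langchain.prompts import ChatPromptTemplate

# CallbackHandler reads LANGFUSE_SECRET_KEY / LANGFUSE_PUBLIC_KEY / LANGFUSE_HOST from the environment
handler = CallbackHandler()
chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}.") | get_openai_model()
response = chain.invoke({"topic": "penguins"}, config={"callbacks": [handler]})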

Reference

Create a Python 3.12 virtual environment

Download the script for installing pyenv.

curl -L -o install_pyenv.sh http://mod.lge.com/hub/dxadvtech/aicontents-framework/alo-guide/-/raw/main/install_pyenv.sh

For external users:

curl -L -o install_pyenv.sh https://raw.githubusercontent.com/meerkat-alo/alo-guide/main/install_pyenv.sh

Install and run pyenv and pipenv.

## Install pyenv
exec bash   ## reload bash
bash install_pyenv.sh (or ./install_pyenv.sh)
pyenv install 3.12
## Set the global Python version managed by pyenv
pyenv global 3.12
## Install pipenv
python3 -m pip install --user pipenv
## Create a directory for the virtual environment and move into it
## (replace virtual_env_dir with a directory name of your choice)
mkdir (virtual_env_dir)
cd (virtual_env_dir)
pipenv --python 3.12
pipenv shell
  • If you get a WARNING or ERROR during pyenv install 3.12, install the dependent packages as follows and try again:

sudo apt install zlib1g zlib1g-dev libssl-dev libbz2-dev libsqlite3-dev libncurses-dev libffi-dev tk-dev liblzma-dev libreadline-dev -y
  • If you have already created a virtual environment, you only need to enter the command below in the terminal.
cd (virtual_env_dir)
pipenv --python 3.12
pipenv shell