EVA App Installation
This guide provides instructions for a fast and stable installation of the EVA App in a Kubernetes environment.
🛠️ Prerequisites
Before starting the installation, please ensure the following tools and environments are ready:
- Kubernetes Cluster: A cluster where EVA will be deployed.
- kubectl: Go to Installation Guide
- Helm: Go to Installation Guide
- AWS CLI Configuration: AWS credentials are required for ECR image access. Configure them by running the following command in your terminal:
```shell
aws configure
# Enter your AWS Access Key ID, Secret Access Key, etc., as prompted.
```
🚀 Installing EVA App
Step 1: Register Helm Repository
First, add the Helm repository for the EVA App deployment.
```shell
helm repo add eva-app https://mellerikat.github.io/eva-app
helm repo update
```
Step 2: Prepare Configuration File (values.yaml)
Download the default configuration template to your current directory.
```shell
helm show values eva-app/eva-app > values.yaml
```
Step 3: Update Settings for Your Environment
Open the downloaded values.yaml file and modify the settings to match your environment. Key configuration values differ between On-premise (self-hosted) and Cloud (AWS) environments; please refer to the tables below.
| Category | Key | Default | Description |
|---|---|---|---|
| vision | app.vision.ml | - | Vision ML service endpoint URL |
| agent | app.agent.llm | - | LLM service endpoint URL |
| agent | app.agent.vlm | - | VLM service endpoint URL |
| agent | app.agent.requestTimeoutSeconds | 30 | Agent request timeout in seconds |
| app | app.backendHost | - | Backend server host |
| app | app.backendPort | - | Backend server port |
| app | app.backendSecure | - | Whether to use HTTPS |
| app | app.browserTitleName | EVA | Browser tab title |
| app | app.infraProvider | onprem | Infrastructure provider type (onprem / ncp / aws) |
| app | app.jwtSecret | Secret | Secret key used for JWT signing |
| lifeCycle | app.lifeCycle.analysisRetentionDays | 30 | Retention period for alert and analysis data (days) |
| license | app.license.activation_mode | online | License activation mode (online or offline) |
| license | app.license.product_code | eva | License product code |
| license | app.license.api_key | ... | License API key |
| license | app.license.shared_key | ... | License shared key |
| pipeline | app.pipeline.detector.workerNum | 16 | Number of concurrent detector pipeline workers |
| pipeline | app.pipeline.streamer.annotationColorMode | GRAY | Annotation color mode |
| pipeline | app.pipeline.streamer.dispatcherFaceAnonymizerEnabled | True | Enable face anonymization in dispatcher |
| pipeline | app.pipeline.ingester.ingesterStreamFPSForGrab | 30.0 | FPS for grabbing stream frames |
| pipeline | app.pipeline.perceptor.queueSize | 100 | Perceptor task queue size |
| pipeline | app.pipeline.perceptor.workerNum | 2 | Number of perceptor workers |
| session | app.accessTokenExpireMinutes | 1440 | Access token expiration time in minutes |
| session | app.uiSessionTimeoutMinutes | 60 | UI session timeout in minutes (0 means no timeout) |
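As a reference, a minimal override for the most commonly changed keys in the table above might look like the sketch below. All values shown are placeholders, not required settings; substitute your own host, port, and secret.

```yaml
# Minimal values.yaml override sketch -- every value here is a placeholder.
app:
  backendHost: "192.0.2.10"        # your backend server host
  backendPort: 32010               # your backend server port
  backendSecure: false             # set true when serving over HTTPS
  infraProvider: onprem            # onprem / ncp / aws
  jwtSecret: "change-me"           # use a strong random value in production
  vision:
    ml: "http://eva-vision.eva-vision:8000"
  agent:
    llm: "http://eva-agent.eva-agent"
    vlm: "http://eva-agent.eva-agent"
```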
EVA App Resource Values
| Category | Key | Default | Description |
|---|---|---|---|
| database | database.name | eva | Name of the database used by the application |
| database | database.type | internal | Database type (internal: in-cluster MySQL, external: external DB) |
| database | database.internal.service.nodePort | 32060 | NodePort for the internal MySQL service |
| database | database.internal.user | ... | Internal database user account |
| database | database.internal.password | ... | Internal database user password |
| database | database.external.host | dbhost | External database host address |
| database | database.external.port | 3306 | External database port |
| database | database.external.user | ... | External database user account |
| database | database.external.password | ... | External database user password |
| image | image.repository | (env-specific) | Container image repository for eva-app |
| image | image.pullPolicy | IfNotPresent | Image pull policy |
| image | image.tag | appVersion | Image tag (defaults to chart appVersion) |
| imagePullSecrets | imagePullSecrets.enabled | true | Whether to use image pull secrets |
| imagePullSecrets | imagePullSecrets.existingSecret | "" | Name of an existing image pull secret |
| imagePullSecrets | imagePullSecrets.create | true | Whether Helm should create the image pull secret |
| imagePullSecrets | imagePullSecrets.nameOverride | "" | Override name for image pull secret |
| imagePullSecrets | imagePullSecrets.account | 339713051385 | AWS ECR account ID |
| imagePullSecrets | imagePullSecrets.region | ap-northeast-2 | AWS ECR region |
| imagePullSecrets | imagePullSecrets.password | ECR Login Password | ECR login password |
| nodeSelector | nodeSelector | {} | Node selection constraints for pod scheduling |
| pv | pv.create | false | Whether to create the PersistentVolume using Helm |
| pv | pv.nameOverride | "" | Override name for the PersistentVolume |
| pv | pv.annotations | {} | Additional annotations for the PersistentVolume |
| pv | pv.csi.driver | efs.csi.aws.com | CSI driver name |
| pv | pv.csi.volumeHandle | fs-00000000000000000 | CSI volume handle |
| pv | pv.storageClassName | efs-sc-eva-app | StorageClass name for the PersistentVolume |
| pv | pv.storage | 30Gi | PersistentVolume storage size |
| persistence | persistence.type | hostPath | Storage type (pvc or hostPath) |
| persistence | persistence.hostPath.path | "" | Mount path for hostPath volume |
| persistence | persistence.pvc.nameOverride | "" | Override name for the PersistentVolumeClaim |
| persistence | persistence.pvc.annotations | {} | Annotations for the PersistentVolumeClaim |
| persistence | persistence.pvc.existingClaim | "" | Name of existing PVC to use |
| persistence | persistence.pvc.storageClassName | efs-sc-eva-app | StorageClass name for the PVC |
| persistence | persistence.pvc.storage | 30Gi | PersistentVolumeClaim storage size |
| replicaCount | replicaCount | 1 | Number of eva-app pod replicas |
| resources | resources | {} | Container resource requests and limits |
| serviceAccount | serviceAccount.create | true | Whether to create a ServiceAccount automatically |
| serviceAccount | serviceAccount.name | "" | ServiceAccount name |
| serviceAccount | serviceAccount.existingServiceAccount | "" | Use an existing ServiceAccount |
| secret | secret.nameOverride | "" | Override name for the Kubernetes Secret |
| secret | secret.data | "" | Kubernetes Secret data |
| service | service.nodePort | 32010 | NodePort for eva-app service |
| service | service.internal.enabled | true | Whether to enable the internal service |
| tolerations | tolerations | [] | Pod toleration settings |
| volumes | volumes.dshm.size | 8Gi | Size of shared memory at /dev/shm |
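For example, the resource and scheduling keys from the table above can be combined in a single override. The request/limit amounts and node group name below are illustrative only; adjust them to your cluster.

```yaml
# Illustrative resource and scheduling overrides -- values are examples.
replicaCount: 1
resources:
  requests:
    cpu: "2"
    memory: 4Gi
  limits:
    nvidia.com/gpu: 1
nodeSelector:
  nodegroup: "ng-an2-eva-app"   # schedule onto a dedicated node group
tolerations: []
volumes:
  dshm:
    size: 8Gi                   # shared memory mounted at /dev/shm
```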
📂 On-premise environment values.yaml example

```yaml
# ============================================================
# values override file for On-Premise environment
#
# Usage:
#   helm install eva-app -n eva-app . -f values.yaml -f platform/onpremise/values.yaml
#   helm upgrade eva-app -n eva-app . -f values.yaml -f platform/onpremise/values.yaml
# ============================================================
app:
  backendHost: "10.158.2.0"  # On-Premise server IP
  backendSecure: false
  # On-Premise environment
  infraProvider: onprem
  jwtSecret: secret
  vision:
    ml: "http://eva-vision.eva-vision:8000"
  agent:
    llm: "http://eva-agent.eva-agent"
    vlm: "http://eva-agent.eva-agent"
  license:
    activation_mode: "online"
    product_code: "eva"
    api_key: ""
    shared_key: ""

## On-Premise deployments use an internal MySQL Pod
database:
  name: "eva"
  type: "internal"
  internal:
    service:
      nodePort: 32060
    password: passwd
    user: root

imagePullSecrets:
  enabled: true
  create: true
  password: "ECR Login Password"  # aws ecr get-login-password --region ap-northeast-2

## On-Premise deployments mount local storage using hostPath
persistence:
  type: hostPath
  hostPath:
    path: ""  # Actual path on the On-Premise server, default is /{RELEASE_NAME}

## On-Premise deployments do not require a separate PersistentVolume (hostPath is used)
pv:
  create: false

## On-Premise deployments require ServiceAccount creation
serviceAccount:
  create: true
```
☁️ Cloud (AWS EKS) environment values.yaml example

```yaml
# ============================================================
# values override file for Cloud (AWS) environment
#
# Usage:
#   helm install eva-app -n eva-app . -f values.yaml -f platform/cloud/values.yaml
#   helm upgrade eva-app -n eva-app . -f values.yaml -f platform/cloud/values.yaml
# ============================================================
app:
  backendHost: "your-alb-or-domain.example.com"
  backendSecure: true  # Set to true when using HTTPS
  # Cloud environment (ncp or aws)
  infraProvider: aws
  jwtSecret: secret
  agent:
    llm: "http://eva-agent.eva-agent"
    vlm: "http://eva-agent.eva-agent"
  vision:
    ml: "http://eva-vision.eva-vision:8000"
  license:
    activation_mode: "online"
    product_code: "eva"
    api_key: ""
    shared_key: ""

## For Cloud environments, using managed databases such as external RDS is recommended
database:
  name: "eva"
  type: "external"
  external:
    host: "your-rds-endpoint.rds.amazonaws.com"  # Replace with your actual RDS endpoint
    port: 3306
    password: passwd
    user: root

## ECR authentication settings
## In EKS (AWS), ECR authentication can be handled via ServiceAccount and IRSA,
## so imagePullSecrets can be disabled.
## In NKS (NCP), imagePullSecrets must be used.
imagePullSecrets:
  enabled: true
  create: true
  password: "ECR Login Password"  # aws ecr get-login-password --region ap-northeast-2

## In Cloud environments, pods can be scheduled onto a specific node group
nodeSelector: {}
  # nodegroup: "ng-an2-eva-app"

## In Cloud environments, PersistentVolumes are created separately and connected to EFS
pv:
  create: true
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-00000000000000000  # Replace with the actual EFS FileSystem ID
  storageClassName: efs-sc-eva-app
  storage: 30Gi

## In Cloud environments, PVC-based storage such as EFS is used
persistence:
  type: pvc
  pvc:
    storageClassName: efs-sc-eva-app
    storage: 30Gi

## In Cloud environments, ServiceAccounts are managed via IRSA or similar mechanisms
## For EKS (AWS), the deployment must be associated with a ServiceAccount that has
## the required permissions. Set create: false and specify existingServiceAccount if needed.
serviceAccount:
  create: false
  existingServiceAccount: ""
```
⚙️ Default values.yaml

```yaml
# Default values for eva-app.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

## For an On-premise installation,
## set database.type to "internal" and serviceAccount.create to true.
## Set persistence.type to "hostPath" to mount the data volume.
##
## Application Settings.
## These settings are used in the application and can be accessed as environment variables.
#
app:
  ## Application Settings
  browserTitleName: "EVA"
  ## Backend host and secure settings
  backendHost: "127.0.0.1"
  backendSecure: false
  ## Determines which infrastructure environment the application is running on: [ onprem | ncp | aws ]
  infraProvider: onprem
  ## JWT_SECRET
  jwtSecret: secret
  ## Vision Configuration
  vision:
    ml: "http://eva-vision.eva-vision:8000"
  ## Agent Configuration
  agent:
    llm: "http://eva-agent.eva-agent"
    vlm: "http://eva-agent.eva-agent"
    requestTimeoutSeconds: 30
  ## Number of days after which analysis data expires
  lifeCycle:
    analysisRetentionDays: 30
  license:
    activation_mode: "online"
    product_code: "eva"
    api_key: ""
    shared_key: ""
  ## Number of concurrent workers for the detector/perceptor jobs in the pipeline
  pipeline:
    detector:
      workerNum: 16
    ingester:
      ingesterStreamFPSForGrab: 30
    perceptor:
      workerNum: 2
      queueSize: 100
    streamer:
      dispatcherFaceAnonymizerEnabled: "true"
      annotationColorMode: "GRAY"  # ["GRAY" | "COLOR"]
  ## session
  session:
    uiSessionTimeoutMinutes: 60
    accessTokenExpireMinutes: 1440
  ## additional configuration for the eva-app environment
  additional: {}
    # foo: bar

##
## Kubernetes Manifest Settings.
## These settings are used in the Kubernetes manifests and can be accessed as Helm template variables.
#
## Configuration for the database.
## Set type to "internal" → deploy a MySQL pod inside the cluster (internal DB).
## Set type to "external" → connect to an external DB using the external settings.
##
database:
  name: "eva"
  type: "internal"  # ["internal", "external"]
  internal:
    service:
      nodePort: 32060
    password: passwd
    user: root
  external:
    host: "dbhost"
    port: 3306
    password: passwd
    user: root

image:
  repository: 339713051385.dkr.ecr.ap-northeast-2.amazonaws.com/mellerikat/release/eva-app
  # This sets the pull policy for images.
  pullPolicy: IfNotPresent
  # Overrides the image tag, whose default is the chart appVersion.
  # tag: "latest"

## Configuration for image pull secrets.
## imagePullSecrets is used when pulling the eva-app image from ECR.
imagePullSecrets:
  # When enabled is true, you can set existingSecret or create a new secret.
  enabled: true
  existingSecret: ""  # eva-app-secret-regcred
  create: true
  ## The name of the image pull secret.
  ## Uses '{{ .Release.Name }}-secret-regcred' by default
  nameOverride: ""
  account: "339713051385"
  region: "ap-northeast-2"
  # regcred: "regcred/regcred.yaml"
  # dockerconfigjson: "regcred/regcred.json"
  ## After completing the following steps, enter the ECR password.
  ## Step 1. AWS configuration
  ## Step 2. Get ECR login password
  ####   $ aws ecr get-login-password --region ap-northeast-2
  password: "ECR Login Password"

nodeSelector: {}
  # nodegroup: "ng-an2-eva-app"

pv:
  create: false
  ## Uses 'pv-{{ .Release.Name }}' by default
  nameOverride: ""
  annotations: {}
    # helm.sh/resource-policy: keep
  # Example CSI configuration for EFS
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-00000000000000000
  storageClassName: efs-sc-eva-app
  storage: 30Gi

persistence:
  type: hostPath  # ["pvc", "hostPath"]
  ## If persistence.type is "hostPath",
  ## local storage is mounted at the path below.
  ##
  hostPath:
    path: ""  # "/eva-app"
  pvc:
    ## Uses 'pvc-{{ .Release.Name }}' by default
    nameOverride: ""
    annotations: {}
      # helm.sh/resource-policy: keep
    existingClaim: ""  # set to use an already created PVC
    storageClassName: efs-sc-eva-app
    storage: 30Gi

replicaCount: 1

resources: {}
  # requests:
  #   nvidia.com/gpu: 1

serviceAccount:
  ## Sets the deployed ServiceAccount name.
  ## The default name is the Helm release name.
  ## The ServiceAccount is created when the onPremise flag and serviceAccount.create are true.
  create: true
  name: ""  # eva-app
  existingServiceAccount: ""

secret:
  nameOverride: ""
  # This data is used in the secret.
  data: {}

service:
  nodePort: 32010
  internal:
    enabled: true

tolerations: []
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  #   tolerationSeconds: 0

volumes:
  ## Shared memory volume for /dev/shm
  dshm:
    size: 8Gi

## For debugging
## Uncomment the line below for debugging; the pod will sleep indefinitely.
# command: ["sleep", "infinity"]
# appVolumeMount:
#   path: /home/$(whoami)/path/to/eva-app/apps/backend/app/app
# webVolumeMount:
#   path: /home/$(whoami)/path/to/eva-app/apps/frontend/dist
```
Step 4: Run Installation Command
Now deploy the EVA App with the configured settings. The commands below retrieve the ECR login password, create a dedicated namespace, and run the Helm installation.

```shell
# 1. Get AWS ECR login password
ecr_password=$(aws ecr get-login-password --region "ap-northeast-2")

# 2. Create dedicated namespace
kubectl create namespace eva-app

# 3. Start Helm installation
helm install eva-app eva-app/eva-app \
  -n eva-app \
  -f values.yaml \
  --set imagePullSecrets.password="${ecr_password}"
```
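For repeated deployments, `helm upgrade --install` is an idempotent alternative to separate install and upgrade commands, and `--create-namespace` replaces the explicit `kubectl create namespace` step. The sketch below only assembles and prints the command for inspection; before executing it for real, append the same `--set imagePullSecrets.password=...` flag shown in Step 4.

```shell
# Sketch: build an idempotent install-or-upgrade command for review.
# Release, namespace, and chart names match Step 4 above.
RELEASE="eva-app"
NAMESPACE="eva-app"
CHART="eva-app/eva-app"
CMD="helm upgrade --install ${RELEASE} ${CHART} -n ${NAMESPACE} --create-namespace -f values.yaml"
echo "${CMD}"
```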
Step 5: Verify Installation
Once the installation is complete, a success message with train-shaped ASCII art and the access URL will be displayed in your terminal.

```
NAME: eva-app
STATUS: deployed
REVISION: 1
NOTES:
. ___ __ __ ___ ___ _ __ _ __
| __| \ \ / / / \ o O O / \ | '_ \ | '_ \
| _| \ V / | - | o | - | | .__/ | .__/
|___| _\_/_ |_|_| TS__[O] |_|_| |_|__ |_|__
_|"""""|_| """"|_|"""""| {======|_|"""""|_|"""""|_|"""""|
"`-0-0-'"`-0-0-'"`-0-0-'./o--000'"`-0-0-'"`-0-0-'"`-0-0-'

eva-app has been installed successfully!
...
Application Details:
  App Version   : 2.2.6
  Chart Version : 1.3.2
  Release Name  : eva-app
  Namespace     : eva-app

You can access "EVA App" using the following URL:
  EVA App : http://10.158.2.75:32055

You can check the helm instance by running:
  $ helm ls -n eva-app

You can check kubernetes objects by running:
  $ kubectl get all -n eva-app

For further information, visit https://mellerikat.com
```
Use the following procedure to confirm that the database migration ran successfully and that the latest schema version was applied.

1. Check the EVA App pod name:

```shell
kubectl get pods -n eva-app
```

2. Check the Alembic revision applied to the current database (replace `{eva-app-xxxx}` with the pod name from step 1):

```shell
kubectl exec -it {eva-app-xxxx} -n eva-app -- alembic current
```

Verify that the Alembic revision shown above matches the revision listed in the Release Notes for the corresponding software version.