Prerequisites
Before installing EVA, please review the system requirements and environment configuration. EVA operates on a Kubernetes-based architecture and leverages various AI models, including Vision Models, LLMs, and VLMs. Therefore, GPU resources are required.
Server Requirements
EVA operates reliably in environments that meet or exceed the minimum specifications. In production environments, server specifications can be adjusted flexibly based on detection frequency, criticality, and concurrency.
EVA App
The EVA App is the central component responsible for managing camera connections and orchestrating AI services by integrating EVA Vision and EVA Agent. It collects detection results from EVA Vision and forwards them to EVA Agent for further contextual analysis and alert processing.
| Item | Minimum Specification |
|---|---|
| CPU | 12 Cores |
| RAM | 64 GB |
| Storage | SSD 2 TB |
| OS | Ubuntu 22.04 |
💡 The EVA App is responsible for fast processing of on-site data such as camera status monitoring, event log management, and dashboard visualization.
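The component interfaces are internal to EVA and are not documented here; the sketch below only illustrates the orchestration flow described above. The names vision_events, agent, and event_log are illustrative assumptions, not the actual EVA App API.

```python
# Illustration only: a minimal sketch of the EVA App's orchestration role.
# "vision_events", "agent", and "event_log" are hypothetical stand-ins,
# not the real EVA App interfaces.
def run_orchestration(vision_events, agent, event_log):
    """Collect detection results from EVA Vision and hand them to EVA Agent."""
    for event in vision_events:    # stream of detections from EVA Vision
        event_log.append(event)    # persisted for dashboards and event history
        agent.analyze(event)       # contextual analysis and alerting in EVA Agent
```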
EVA Vision
EVA Vision performs real-time object detection and analysis on camera streams using various Vision models and delivers the analysis results to the EVA App.
The detection results are subsequently utilized by EVA Agent for scenario-based decision making.
| Item | Minimum Specification |
|---|---|
| GPU | NVIDIA L4 |
| RAM | 32 GB |
| Storage | 128 GB |
| OS | Ubuntu 22.04 |
💡 Higher specifications allow the use of a wider range of models and enable simultaneous processing of a larger number of cameras.
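For illustration, the detection results delivered to the EVA App can be thought of as a simple per-frame payload. The field names below are assumptions made for this sketch, not the actual EVA Vision output schema.

```python
# Illustration only: a hypothetical shape for per-frame detection results.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Detection:
    label: str                    # detected object class, e.g. "person"
    confidence: float             # model confidence in [0, 1]
    bbox: Tuple[int, int, int, int]  # (x, y, width, height) in pixels

@dataclass
class FrameResult:
    camera_id: str                # source camera / stream identifier
    timestamp: float              # capture time (epoch seconds)
    detections: List[Detection] = field(default_factory=list)
```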
EVA Agent
EVA Agent is the component that runs the LLM- and VLM-based AI engines.
It receives event and image data from the EVA App, interprets situations, evaluates scenarios, and generates natural-language alerts.
| Item | Minimum Specification |
|---|---|
| GPU | NVIDIA L40S |
| RAM | 32 GB |
| Storage | 256 GB |
| OS | Ubuntu 22.04 |
Beyond simply receiving detection results, EVA Agent leverages VLMs (Vision Language Models) to understand visual context, and utilizes LLMs (Large Language Models) to generate context-aware natural language explanations and alert scenarios.
💡 EVA Agent is an intelligent module that interprets analysis results from EVA Vision and provides natural language–based insights and alerts.
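As a rough sketch of this two-stage flow, the snippet below separates the VLM step (visual context) from the LLM step (natural-language alert). The function names, event fields, and stub return values are hypothetical placeholders, not the actual EVA Agent interfaces.

```python
# Conceptual sketch only: the names below are illustrative assumptions.

def describe_scene(image_bytes: bytes) -> str:
    # In EVA Agent this step is performed by a VLM that interprets the frame.
    # Stub return value so the sketch runs end to end.
    return "placeholder scene description"

def draft_alert(event: dict, scene: str) -> str:
    # In EVA Agent this step is performed by an LLM that evaluates the scenario
    # and writes a context-aware, natural-language alert.
    return (f"[{event['camera_id']}] {event['label']} detected "
            f"(confidence {event['confidence']:.2f}); context: {scene}")

def handle_event(event: dict, image_bytes: bytes) -> str:
    scene = describe_scene(image_bytes)   # 1) visual context from the VLM
    return draft_alert(event, scene)      # 2) alert text from the LLM

if __name__ == "__main__":
    print(handle_event({"camera_id": "cam-07", "label": "person",
                        "confidence": 0.91}, b""))
```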
All-in-One Server Configuration (Optional)
For small-scale deployments or early PoC stages, EVA App, EVA Vision, and EVA Agent can be deployed together on a single All-in-One server.
The following specifications are the minimum for a single-server configuration supporting approximately 100 cameras.
Minimum Specification (100 Cameras)
| Item | Minimum Specification |
|---|---|
| CPU | 16 Cores or higher |
| RAM | 96 GB or higher |
| GPU | NVIDIA L40S |
| Storage | SSD 2 TB |
| OS | Ubuntu 22.04 |
⚠️ Actual hardware requirements may vary depending on camera resolution, number of cameras, and detection intervals. For large-scale environments, separating components across multiple servers is recommended.
Kubernetes Environment
EVA operates within a Kubernetes environment and requires the following configuration in either cloud or on-premise deployments:
- Kubernetes cluster
- GPU access configuration (e.g., nvidia-device-plugin; see the verification sketch after this list)
- Ubuntu 22.04–based nodes
- CUDA 12 or higher
- Helm CLI (v3.0 or higher)
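As a quick sanity check, the sketch below lists cluster nodes and the GPU resources they advertise. It assumes the official Python Kubernetes client (pip install kubernetes) and a reachable kubeconfig; the nvidia-device-plugin registers GPUs as the nvidia.com/gpu allocatable resource, which is what the script looks for.

```python
# check_cluster.py - confirm that cluster nodes advertise GPU resources.
# Assumes the official Python client: pip install kubernetes
from kubernetes import client, config

def main() -> None:
    config.load_kube_config()   # or config.load_incluster_config() when run inside a pod
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        info = node.status.node_info
        # nvidia-device-plugin exposes GPUs as the extended resource "nvidia.com/gpu"
        gpus = node.status.allocatable.get("nvidia.com/gpu", "0")
        print(f"{node.metadata.name}: os={info.os_image}, "
              f"kubelet={info.kubelet_version}, gpus={gpus}")

if __name__ == "__main__":
    main()
```

CUDA and Helm versions are not visible through the node API, so verify those directly with nvidia-smi on the GPU nodes and helm version on the workstation used for deployment.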
💡 In cloud environments, EVA App, Vision, and Agent can be deployed on separate nodes to ensure scalability and stability.
🧩 In on-premise environments, an All-in-One configuration with shared GPU resources can be used to build a cost-efficient infrastructure.