EVA App Release Notes

v2.2.5 (Oct 31, 2025)

  • Helm Chart Version: 1.3.1
  • Alembic Version: b17e2d45ed01

Features

  • Alert History Enhancements and Expanded Filtering: The previous “Latest Alert” label has been changed to “Alert History”, with added guide text for better usability. Summary information shown in the chat window has been improved, and users can now filter alerts using multiple keyword inputs.

  • Customizable EVA Browser Title: The EVA browser title can now be configured via a separate setting, allowing customization based on the deployment site or customer requirements.
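The multi-keyword alert filtering described above can be sketched roughly as follows; this is an illustration only, and the `summary` field name is an assumption, not the app's actual data model:

```python
def filter_alerts(alerts, keywords):
    """Keep alerts whose summary contains every given keyword (case-insensitive).

    Hypothetical sketch of multi-keyword filtering; field names are illustrative.
    """
    needles = [k.lower() for k in keywords]
    return [a for a in alerts if all(n in a["summary"].lower() for n in needles)]
```

With multiple keywords, only alerts matching all of them are kept, which is what makes the filtering more precise than a single-keyword search.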

Performance Enhancements

  • Performance Optimization for Bounding Box Accuracy and Confidence Display: Resolved issues where bounding boxes were inaccurately drawn on objects from previous frames or misaligned with actual object positions. The algorithm has been optimized to ensure more precise and real-time bounding box placement during object detection. Additionally, fixed a bug where confidence values were displayed even when they were below the configured threshold. Confidence is now shown only when it exceeds the defined threshold, providing more accurate feedback on detection reliability.
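As a rough sketch of the corrected confidence display rule (the function and field names here are assumptions for illustration, not the actual implementation):

```python
def visible_confidences(detections, threshold):
    """Return only detections whose confidence meets the configured threshold,
    so confidence values below it are never rendered on the stream."""
    return [d for d in detections if d["confidence"] >= threshold]
```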

Bug Fixes

  • AI Solution Status Incorrectly Shown as Deployed: Fixed a bug where the AI Solution status was incorrectly shown as Deployed in Edge Conductor before the actual deployment was completed. The status now correctly reflects Deploying while deployment is in progress.

  • Incorrect Navigation When Model Is Not Selected: When the ML model is deployed but not yet selected on the camera, the Camera List screen now correctly navigates to the model selection (Edit) screen instead of the Edge Conductor registration page.

  • Duplicate Alert Display in Chat: Resolved an issue where fast VLM processing caused the same alert to appear twice in the chat window.


v2.2.4 (Oct 21, 2025)

  • Helm Chart Version: 1.2.1
  • Alembic Version: b17e2d45ed01

Features

  • Helm Template Enhancements:
    • Updated the value controlling PV usage to enable it for AWS Cloud and on-premise environments, while disabling it for Naver Cloud Platform.
    • Added a value to control whether a service account should be created, allowing for more flexible configuration.
    • Adjusted the inference deployment YAML to manage credreg usage through the Helm template.

Internal Improvements

  • Solution Deploy Code Update for Non-EKS Environments: Modified the solution deployment code to support credreg usage in non-EKS (AWS) environments such as Naver Cloud Platform, ensuring compatibility across various infrastructure setups.

v2.2.3 (Oct 20, 2025)

  • Helm Chart Version: 1.1.7
  • Alembic Version: b17e2d45ed01

Features

  • Search Filter for Latest Alert in Chat: A search filter has been added to the Latest Alert feature in Chat. Users can now filter alert history not only by date but also by keyword, allowing for more precise and efficient alert tracking.

Performance Enhancements

  • Low-Light Mode Execution Time and Image Quality Optimization: Improved the execution time of low-light mode transitions and enhanced image quality during streaming. This update addresses issues where image clarity was reduced under low-light conditions and optimizes the overall performance of low-light processing.

v2.2.2 (Oct 17, 2025)

  • Helm Chart Version: 1.1.6
  • Alembic Version: b17e2d45ed01

Bug Fixes

  • Fix for Brightness Setting Command in Chat: Resolved an issue where the set brightness to brightness_param command did not function correctly in chat.

v2.2.1 (Oct 17, 2025)

  • Helm Chart Version: 1.1.5
  • Alembic Version: b17e2d45ed01

Features

  • EVA-App UUID Display on Web Setting Page: The Web Setting page now displays the UUID of each EVA-App. This enhancement allows for clearer identification and utilization of EVA-App identifiers within Edge Conductor, improving integration and system management.

  • Night Mode Indicator on Stream Screen: When night mode is automatically applied due to low brightness in the frame, an icon is now displayed on the stream screen. This provides users with a visual cue that brightness enhancement is active, improving transparency and user awareness.

Bug Fixes

  • Preventing Unnecessary VLM Requests in Vision ML: The system has been updated to prevent VLM requests when no object is detected by Vision ML. This reduces unnecessary inference calls and optimizes resource usage.

  • Default Brightness Parameter Adjustment for Camera Registration: Upon camera registration, the default value for the User Brightness Parameter has been changed from 0 to 50, corresponding to a gamma value of 1.5. This adjustment improves initial image clarity and enhances user experience.

  • Webhook Configuration Improvements: Several enhancements have been made to the Webhook setup process:

    • When only one Webhook is configured, the delete (–) button is disabled to prevent accidental removal.
    • Informative messages are now displayed for Webhook entries that have not yet passed validation, helping users complete setup accurately and confidently.

v2.2.0 (Oct 2, 2025)

  • Helm Chart Version: 1.1.1
  • Alembic Version: 1c69e95a8307

Features  

  • Low-Light Image Enhancement Logic Applied: A preprocessing logic has been added to automatically adjust the brightness of images captured in low-light environments. This allows ML and VLM models to infer based on clearer visual information, improving overall recognition accuracy.

  • Expanded Vision ML Capabilities: The architecture has been improved to support multiple ML models within a single Vision Solution. Previously, Vision Solutions were mapped per device, but now they can be mapped per EVA App, allowing flexible selection of models optimized for each device domain.

  • Image Capture Frame Preview on New Device Registration: A new feature allows users to preview image capture frames in real-time during device registration, enabling immediate verification of device connection status and video quality.

  • Quick Button UI/UX Enhancement: The user interface has been improved with a newly designed Quick Button for faster access to key features. Frequently used functions such as Monitoring ON/OFF, Target settings, Alert settings, and viewing recent Alerts are now more intuitive.

  • New Scenario for Enrich Prompt Applied: When the user clicks the Generate button, an appropriate prompt is automatically applied. This generates a high-performance prompt even with minimal user input, reducing the burden of prompt writing and supporting efficient workflows.

  • Enhanced False Positive Feedback Functionality: The feedback feature for false positives (misrecognitions) has been expanded for more precise input.

    • A description field has been added to record detailed explanations of false positive cases.
    • A Similarity Threshold value can be set based on the similarity between the false positive image and the original image, improving the precision and usability of feedback.

  • Device-Specific VLM Inference Interval Setting: A new Alert Interval feature allows setting individual VLM inference intervals per device. This enables optimal inference cycles tailored to each device's characteristics and operating environment.

  • New Webhook URL Pattern Applied: The Webhook URL pattern for integration with external systems has been updated.
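The per-device Alert Interval feature above amounts to a simple per-device scheduling check. A minimal sketch, with illustrative names and timestamp units (seconds):

```python
def due_for_inference(last_run_ts, now_ts, device_interval_s):
    """Trigger VLM inference for a device only when that device's own
    Alert Interval has elapsed since its last inference run."""
    return (now_ts - last_run_ts) >= device_interval_s
```

Each device carries its own `device_interval_s`, so a fast-moving loading dock can run short cycles while a quiet corridor runs long ones.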

Bug Fixes  

  • Alert and Chat Message Lifecycle Improvement: The retention period for Alert and Chat message data can now be configured. Data is automatically deleted after a set period, helping to manage storage efficiently and improve system performance by reducing clutter.

  • Device List Loading Delay Issue Fixed: The issue where the device list loaded slowly has been resolved, allowing faster access to device information upon page entry.

  • New Device List Position Adjustment: Newly registered devices now appear at the top of the list instead of the bottom, making it easier to find and configure recently added devices.

  • Fixed Issue with Teams Webhook Not Sending: An issue where Teams Webhook messages were not being sent in certain scenarios has been resolved. Notifications are now reliably delivered to external systems, ensuring uninterrupted communication flow.

  • Prevented VLM Request Loop Termination on Agent Error: The logic has been updated to prevent the VLM request thread loop from terminating when an HTTP error (e.g., httpx.HTTPStatusError) is returned by the VLM Agent. This enhancement ensures stable system operation even in the presence of external service errors.
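The loop hardening above comes down to catching the agent's HTTP errors inside the loop body instead of letting them propagate out of the thread. A minimal, library-agnostic sketch (the notes mention httpx.HTTPStatusError; the function names below are assumptions for illustration):

```python
def safe_vlm_step(send_request, frame):
    """Run one VLM request; swallow errors (e.g. httpx.HTTPStatusError raised
    by raise_for_status()) so the request-loop thread keeps running."""
    try:
        return send_request(frame)
    except Exception as exc:
        print(f"VLM agent error, skipping frame: {exc}")
        return None

def vlm_request_loop(send_request, frames, handle_result):
    """Iterate over frames; a failing request no longer terminates the loop."""
    for frame in frames:
        result = safe_vlm_step(send_request, frame)
        if result is not None:
            handle_result(result)
```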


v2.1.0 (Aug 26, 2025)

  • Helm Chart Version: 1.0.1
  • Alembic Version: a9526d6688c9

Features  

  • Image Brightness Preprocessing Setting Added: Users can now enable image brightness adjustment through configuration settings.

  • Enrich Detection Scenario Feature Applied: Detection scenarios entered by users are automatically enriched by the LLM based on camera context information, enabling more precise inference.

  • Quick Commands Feature Improved: Frequently used commands such as Monitoring On/Off, Brightness settings, and viewing recent Alerts can now be executed more easily through improved Quick Commands.

  • Web Title Changed: The web page title has been changed from "Edge Vision Agent" to "EVA" to strengthen brand consistency.

Internal Improvements

  • SSE Applied to Camera Detail View: Server-Sent Events (SSE) have been applied to the camera detail view, allowing real-time UI updates when values change.

  • Chat History Sent with LLM Queries: The last 7 chat history entries are now sent with LLM queries to improve context-aware response quality. Only text-based chats are included (Alert, Fewshot, and Bright image data are excluded).

  • VLM and LLM Model Info Managed in DB: Backend structure has been updated to manage VLM and LLM model information in a database. GPT-OSS and Qwen2.5-VL are registered by default, laying the foundation for future model expansion.

  • Response Config Info API Added (Admin Only): A new Dump API allows administrators to retrieve system configuration information, improving operational management efficiency.

  • LLM Server Exception Logging Added: Exception events on LLM and VLM servers are now logged to enhance debugging and operational stability.

Performance Enhancements

  • Streamer Process Architecture Improved: The streamer now operates on a process-based architecture instead of thread-based, improving streaming performance and resolving deadlock issues caused by inter-thread queues.

  • Streamer Annotation Boxing Algorithm Applied: An extrapolation algorithm based on speed prediction has been applied, allowing visualization at predicted positions even when boxing data is unavailable.
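A speed-based extrapolation of this kind can be sketched as follows; the box format `(x, y, w, h)` and the names are assumptions, not the streamer's actual data types:

```python
def extrapolate_box(last_box, velocity, frames_elapsed):
    """Predict a box position from its last known position and estimated
    per-frame velocity, for frames where no fresh boxing data has arrived."""
    x, y, w, h = last_box
    vx, vy = velocity
    return (x + vx * frames_elapsed, y + vy * frames_elapsed, w, h)
```

When real detection data arrives again, the predicted box is replaced by the actual one, so drift from extrapolation stays bounded.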

Bug Fixes  

  • ML Execution Halt on Disconnected Camera Monitoring Fixed: An issue where ML execution for all cameras stopped when monitoring was started on a disconnected camera has been resolved.

  • LLM Error on Detection Scenario Change Fixed: An issue where the LLM failed to operate correctly after changing the detection scenario has been fixed, ensuring stable scenario-based inference.


v2.0.0 (Aug 19, 2025)

  • Helm Chart Version: 1.0.0
  • Alembic Version: 77560c1fffa6

Features  

  • New UX Design Applied: A new UX design has been applied to improve user experience, with a more intuitive layout and feature placement for better accessibility and convenience.

  • User Account Feature Added: A user account system has been introduced to distinguish user permissions within the system. Roles include Admin, Manager, and User, each with different access levels.

  • Enhanced Alert Functionality: The alert feature has been expanded to provide data on operational events and real-time notifications for quick response. Users can also provide feedback on detection images to help improve the model.

  • Improved User Interface: Monitoring start/stop buttons are now available on the device list screen for easier control. RTSP connection status (Conn / Disconn) can be checked in real-time when connecting new devices. A timezone setting has also been added to the EVA settings menu for regional customization.

Internal Improvements  

  • Separated Interfaces for LLM and VLM: The inference structure has been improved by separating the roles of LLM and VLM, enhancing detection performance and allowing independent management for better maintainability and scalability.

  • Data Storage Method Changed: The storage method has been changed from file-based to MySQL DB-based. Key data (e.g., edges, model, target) is now stored in a structured format, making it easier to search and manage.

Performance Enhancements  

  • Streaming Performance Improved: The video streaming protocol has been changed from WebSocket to Server-Sent Events (SSE), improving stability and real-time processing performance.

  • ML Result Visualization Improved: A linear interpolation algorithm has been applied to the annotation box for ML inference results, improving draw performance on the web interface for smoother and more accurate visual representation.

  • Ingester Performance Improved: Target FPS can now be applied differently depending on the camera state (stream, ml, idle). FPS settings can be configured during installation via values, enabling resource efficiency and operational optimization.
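The linear-interpolation smoothing mentioned under ML Result Visualization above can be sketched as follows (box format and names are illustrative assumptions):

```python
def lerp_box(box_a, box_b, t):
    """Linearly interpolate each box coordinate between two consecutive ML
    results (t in [0, 1]) so annotation boxes move smoothly between frames."""
    return tuple(a + (b - a) * t for a, b in zip(box_a, box_b))
```

Rendering intermediate boxes at fractional `t` values between inference results is what makes the web visualization appear smooth even when inference runs at a lower rate than the video.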

Bug Fixes  

  • Target-Specific Threshold Change Not Applied Issue Fixed: An issue where threshold values set per target were not applied has been resolved.

  • Target-Specific Fewshot Change Not Applied Issue Fixed: An issue where fewshot learning values set per target were not reflected has been fixed, improving inference accuracy based on model training.

Deprecated  

  • Predefined VLM Agent Prompt Removed: The fixed prompt structure has been removed and replaced with a more flexible prompt configuration method, enabling support for a wider range of scenarios.

v1.1.3 (July 16, 2025)

  • Helm Chart Version: 0.9.4

Bug Fixes  

  • Chat History Retrieval Bug Fixed: Fixed an issue where the most recent 100 chat records were supposed to be displayed when entering the chat window, but 100 older records were shown instead.

v1.1.2 (July 3, 2025)

  • Helm Chart Version: 0.9.3

Features  

  • LLM Alert Interval Value Managed in DB: Previously, the default Alert Interval was fixed at 10 seconds. From this version, the value can now be managed in the database. For example, Region A can have a 60-second interval configured via DB, allowing flexible settings based on regional operational policies.
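The DB-backed lookup with a fallback to the old fixed default can be sketched as follows; the settings structure and key names are illustrative assumptions:

```python
DEFAULT_ALERT_INTERVAL_S = 10  # the previously fixed default, in seconds

def get_alert_interval(db_settings, region):
    """Return the region's DB-configured alert interval, falling back to
    the default when no value has been set for that region."""
    return db_settings.get(region, {}).get("alert_interval", DEFAULT_ALERT_INTERVAL_S)
```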

Bug Fixes  

  • Alert Status Stuck Issue Fixed: Fixed an issue where the Alert status remained "Yes" under certain conditions, causing the red alert light to stay on. When a detection occurred in Monitoring mode and the Target suddenly disappeared, the system failed to recognize the change and kept the Alert status active. Now, when the Target disappears and the system switches to detecting mode, the Alert status is properly reset.
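The corrected state transition above can be sketched as a small decision rule (names are illustrative, not the actual implementation):

```python
def next_alert_status(alert_active, target_detected):
    """Clear the alert as soon as the target disappears; previously the alert
    stayed latched on after a detection even when the target was gone."""
    if alert_active and not target_detected:
        return False  # back to detecting mode; the red alert light turns off
    return alert_active or target_detected
```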

v1.1.1 (July 1, 2025)

  • Helm Chart Version: 1.1.1
  • Alembic Version: a9526d6688c9

Features  

  • Improved Image Transmission for LLM (VLM) Analysis Requests: Previously, images with annotation boxes were sent for analysis. Now, original images (without boxes) are sent, allowing the model to infer based on more accurate raw data.

  • MySQL-Based DB Execution Environment Added: The system structure has been improved to store key data in a MySQL database.

    • Software version information is now stored in the DB and can be retrieved via a REST API.
    • Chat history storage has been changed from file-based to DB-based, and the frontend interface has been updated accordingly.
    • Alert data is also stored in the DB, enabling history management.
    • Timezone information is now stored in the DB, allowing Teams alert event times to be delivered in local time.

  • Quick Commands for Alert History Lookup Added: A Quick Command feature has been added to allow users to quickly retrieve alert history for improved convenience.

Performance Enhancements  

  • Detector Performance Improved: The detection pipeline has been changed from thread-based to job-based, enabling more efficient use of GPU resources and improving overall inference performance.

Internal Improvements  

  • Helm Repository Applied: A Helm repository has been applied to automate deployment, making installation and updates easier in operational environments.

  • Code Quality Improvements:

    • Unified code style using lint tools to improve readability.
    • Removed unnecessary code from Docker images to optimize image size.
    • Redefined and modified REST APIs between frontend and backend to ensure communication stability and clarity.

v1.0.4 (June 12, 2025)

  • Helm Chart Version: 0.8.2

Features  

  • Web Time Display Based on Local Timezone: Previously, time information on the web interface was displayed in UTC. This update uses the browser's timezone to display local time, reducing confusion and enabling more intuitive operations.

  • Alert Modal Message Info Updated: The message information displayed in the Alert Modal has been revised to be clearer and more intuitive, improving readability and user understanding of warning messages.


v1.0.3 (June 10, 2025)

  • Helm Chart Version: 0.8.1

Bug Fixes  

  • Webhook Image Size Adjustment: Fixed an issue where images sent via Webhook were too large to be delivered to external systems like Microsoft Teams. The image size is now appropriately adjusted for smoother integration.

  • Splunk Forward Interval Setting Added: Added a setting to configure the interval for sending log data to Splunk, allowing adjustment based on operational needs.


v1.0.2 (June 9, 2025)

  • Helm Chart Version: 0.6.3

Features  

  • Splunk Forward Message Updated: Vision Detection log messages have been changed from "vision_detected" to "Vision only". Also, a typo in referencing the DetectorResult variable has been fixed, improving log accuracy.

  • Zeroshot Threshold Value Adjusted: The default threshold value for Zeroshot label detection has been changed to 0.11. The default threshold for Fewshot remains at 0.55. This allows better control over Zeroshot inference sensitivity.

  • Adaptive Prompt Feature Improved: When using a Custom Agent, users can now directly input and configure prompts. The logic for extracting meta information within prompts has also been improved, enabling more accurate context-based inference.
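The per-mode defaults above (0.11 for Zeroshot, 0.55 for Fewshot) can be sketched as a small lookup; the key names are illustrative, not the actual configuration schema:

```python
# Default threshold values taken from the notes above; names are illustrative.
DEFAULT_THRESHOLDS = {"zeroshot": 0.11, "fewshot": 0.55}

def passes_threshold(mode, score, thresholds=DEFAULT_THRESHOLDS):
    """Report whether a detection score clears the mode's default threshold."""
    return score >= thresholds[mode]
```

Lowering the Zeroshot default relative to Fewshot makes Zeroshot label detection more sensitive, which matches the stated goal of better controlling Zeroshot inference sensitivity.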

Bug Fixes  

  • "One More" Feature Not Working After Fewshot Fixed: Fixed an issue where the "One More" feature did not work properly after completing a Fewshot inference.