Version: docs v25.02

EVA App Release Notes

v2.2.0 (Oct 2, 2025)

  • Helm Chart Version: 1.1.1
  • Alembic Version: 1c69e95a8307

Features  

  • Low-Light Image Enhancement Logic Applied: A preprocessing step has been added that automatically adjusts the brightness of images captured in low-light environments. This allows ML and VLM models to infer from clearer visual information, improving overall recognition accuracy.
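The notes do not specify the enhancement algorithm. As a rough illustration only, a gamma-style brightness lift applied to dark frames might look like the sketch below; the `threshold` and `gamma` values and the mean-brightness trigger are assumptions, not the product's actual logic:

```python
def enhance_low_light(pixels, threshold=60.0, gamma=0.5):
    """Gamma-lift a flat list of 8-bit grayscale pixel values when the
    frame looks dark. `threshold` and `gamma` are illustrative only,
    not EVA's actual settings."""
    mean = sum(pixels) / len(pixels)
    if mean >= threshold:
        return list(pixels)  # bright enough; pass through unchanged
    # gamma < 1 lifts dark pixels more than bright ones
    return [round(((p / 255.0) ** gamma) * 255.0) for p in pixels]

dark_frame = [30] * 16          # uniformly dark 4x4 frame
enhanced = enhance_low_light(dark_frame)
```

A gamma curve (rather than a flat brightness offset) avoids blowing out already-bright regions while still lifting shadows.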

  • Expanded Vision ML Capabilities: The architecture has been improved to support multiple ML models within a single Vision Solution. Previously, Vision Solutions were mapped per device, but now they can be mapped per EVA App, allowing flexible selection of models optimized for each device domain.

  • Image Capture Frame Preview on New Device Registration: A new feature allows users to preview image capture frames in real-time during device registration, enabling immediate verification of device connection status and video quality.

  • Quick Button UI/UX Enhancement: The user interface has been improved with a newly designed Quick Button for faster access to key features. Frequently used functions such as Monitoring ON/OFF, Target settings, Alert settings, and viewing recent Alerts are now more intuitive.

  • New Scenario for Enrich Prompt Applied: When the user clicks the Generate button, an appropriate prompt is automatically applied. This generates a high-performance prompt even with minimal user input, reducing the burden of prompt writing and supporting efficient workflows.

  • Enhanced False Positive Feedback Functionality: The feedback feature for false positives (misrecognitions) has been expanded for more precise input.

    • A description field has been added to record detailed explanations of false positive cases.
    • A Similarity Threshold value can be set based on the similarity between the false positive image and the original image, improving the precision and usability of feedback.
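The notes do not define how the Similarity Threshold is evaluated. One plausible reading, sketched here with made-up names, is that a feedback entry suppresses future detections whose image embedding is at least that similar to the reported false positive:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def is_known_false_positive(candidate_vec, fp_vec, similarity_threshold):
    """Treat a detection as a known false positive when its embedding is
    close enough to a previously reported one. All names here are
    hypothetical, not EVA's actual API."""
    return cosine_similarity(candidate_vec, fp_vec) >= similarity_threshold

reported_fp = [1.0, 0.0, 1.0]
near_duplicate = [0.9, 0.1, 1.0]
unrelated = [0.0, 1.0, 0.0]
```

Under this reading, a higher threshold suppresses only near-duplicates of the reported image, while a lower threshold suppresses a broader neighborhood.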

  • Device-Specific VLM Inference Interval Setting: A new Alert Interval feature allows setting individual VLM inference intervals per device. This enables optimal inference cycles tailored to each device's characteristics and operating environment.

  • New Webhook URL Pattern Applied: The Webhook URL pattern for integration with external systems has been updated.

Bug Fixes  

  • Alert and Chat Message Lifecycle Improvement: The retention period for Alert and Chat message data can now be configured. Data is automatically deleted after a set period, helping to manage storage efficiently and improve system performance by reducing clutter.
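A retention policy like the one described usually boils down to a periodic delete of records older than the configured window. A minimal sketch using SQLite, with assumed table and column names (not EVA's actual schema):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def purge_expired(conn, table, retention_days):
    """Delete rows whose created_at is older than the retention window.
    Table and column names are illustrative, not EVA's actual schema."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    cur = conn.execute(
        f"DELETE FROM {table} WHERE created_at < ?", (cutoff.isoformat(),)
    )
    conn.commit()
    return cur.rowcount

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (id INTEGER PRIMARY KEY, created_at TEXT)")
old = (datetime.now(timezone.utc) - timedelta(days=40)).isoformat()
new = datetime.now(timezone.utc).isoformat()
conn.executemany("INSERT INTO alerts (created_at) VALUES (?)", [(old,), (new,)])
deleted = purge_expired(conn, "alerts", retention_days=30)
```

In practice a job like this would run on a schedule (e.g. a Kubernetes CronJob), with `retention_days` coming from configuration.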

  • Device List Loading Delay Issue Fixed: The issue where the device list loaded slowly has been resolved, allowing faster access to device information upon page entry.

  • New Device List Position Adjustment: Newly registered devices now appear at the top of the list instead of the bottom, making it easier to find and configure recently added devices.


v2.1.0 (Aug 26, 2025)

  • Helm Chart Version: 1.0.1
  • Alembic Version: a9526d6688c9

Features  

  • Image Brightness Preprocessing Setting Added: Users can now enable image brightness adjustment through configuration settings.

  • Enrich Detection Scenario Feature Applied: Detection scenarios entered by users are automatically enriched by the LLM based on camera context information, enabling more precise inference.

  • Quick Commands Feature Improved: Frequently used commands such as Monitoring On/Off, Brightness settings, and viewing recent Alerts can now be executed more easily through improved Quick Commands.

  • Web Title Changed: The web page title has been changed from "Edge Vision Agent" to "EVA" to strengthen brand consistency.

Internal Improvements

  • SSE Applied to Camera Detail View: Server-Sent Events (SSE) have been applied to the camera detail view, allowing real-time UI updates when values change.

  • Chat History Sent with LLM Queries: The last 7 chat history entries are now sent with LLM queries to improve context-aware response quality. Only text-based chats are included (Alert, Fewshot, and Bright image data are excluded).
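The selection described above (last 7 entries, text chats only) can be sketched as follows; the message shape and type labels are assumptions, not EVA's actual field names:

```python
def select_llm_context(history, limit=7):
    """Keep only plain-text chats, excluding Alert/Fewshot/Bright entries,
    and return the most recent `limit` of them in chronological order.
    The 'type' labels are illustrative, not EVA's actual schema."""
    excluded = {"alert", "fewshot", "bright"}
    text_only = [m for m in history if m.get("type", "text") not in excluded]
    return text_only[-limit:]

history = (
    [{"type": "text", "content": f"msg {i}"} for i in range(10)]
    + [{"type": "alert", "content": "intruder detected"}]
)
context = select_llm_context(history)
```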

  • VLM and LLM Model Info Managed in DB: The backend structure has been updated to manage VLM and LLM model information in a database. GPT-OSS and Qwen2.5-VL are registered by default, laying the foundation for future model expansion.

  • Response Config Info API Added (Admin Only): A new Dump API allows administrators to retrieve system configuration information, improving operational management efficiency.

  • LLM Server Exception Logging Added: Exception events on LLM and VLM servers are now logged to enhance debugging and operational stability.

Performance Enhancements

  • Streamer Process Architecture Improved: The streamer now operates on a process-based architecture instead of thread-based, improving streaming performance and resolving deadlock issues caused by inter-thread queues.

  • Streamer Annotation Boxing Algorithm Applied: An extrapolation algorithm based on speed prediction has been applied, allowing visualization at predicted positions even when boxing data is unavailable.
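Speed-based extrapolation of the kind described can be sketched by projecting the last known box forward using the velocity between the two most recent detections. This is a simplification of the idea, not the streamer's actual algorithm:

```python
def extrapolate_box(prev_box, last_box, frames_ahead=1):
    """Predict the next annotation box by linear extrapolation.
    Boxes are (x, y, w, h); velocity is taken from the difference between
    the last two detections. Sketch only, not the actual implementation."""
    return tuple(
        last + (last - prev) * frames_ahead
        for prev, last in zip(prev_box, last_box)
    )

prev_box = (100, 100, 50, 50)
last_box = (110, 104, 50, 50)
predicted = extrapolate_box(prev_box, last_box)
```

This keeps a box moving plausibly on intermediate frames where no fresh inference result is available yet.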

Bug Fixes  

  • ML Execution Halt on Disconnected Camera Monitoring Fixed: An issue where ML execution for all cameras stopped when monitoring was started on a disconnected camera has been resolved.

  • LLM Error on Detection Scenario Change Fixed: An issue where the LLM failed to operate correctly after changing the detection scenario has been fixed, ensuring stable scenario-based inference.

---    

v2.0.0 (Aug 19, 2025)

  • Helm Chart Version: 1.0.0
  • Alembic Version: 77560c1fffa6

Features  

  • New UX Design Applied: A new UX design has been applied to improve user experience, with a more intuitive layout and feature placement for better accessibility and convenience.

  • User Account Feature Added: A user account system has been introduced to distinguish user permissions within the system. Roles include Admin, Manager, and User, each with different access levels.

  • Enhanced Alert Functionality: The alert feature has been expanded to provide data on operational events and real-time notifications for quick response. Users can also provide feedback on detection images to help improve the model.

  • Improved User Interface: Monitoring start/stop buttons are now available on the device list screen for easier control. RTSP connection status (Conn / Disconn) can be checked in real-time when connecting new devices. A timezone setting has also been added to the EVA settings menu for regional customization.

Internal Improvements  

  • Separated Interfaces for LLM and VLM: The inference structure has been improved by separating the roles of LLM and VLM, enhancing detection performance and allowing independent management for better maintainability and scalability.

  • Data Storage Method Changed: The storage method has been changed from file-based to MySQL DB-based. Key data (e.g., edges, model, target) is now stored in a structured format, making it easier to search and manage.

Performance Enhancements  

  • Streaming Performance Improved: The video streaming protocol has been changed from WebSocket to Server-Sent Events (SSE), improving stability and real-time processing performance.

  • ML Result Visualization Improved: A linear interpolation algorithm has been applied to the annotation box for ML inference results, improving draw performance on the web interface for smoother and more accurate visual representation.
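Linear interpolation between two ML result boxes, as applied here for smoother drawing, amounts to blending coordinates by the fraction of time elapsed between inference results. A minimal sketch:

```python
def lerp_box(box_a, box_b, t):
    """Blend two (x, y, w, h) boxes: t=0 returns box_a, t=1 returns box_b.
    Intermediate t values give the smoothed positions drawn between
    consecutive inference results. Illustrative only."""
    return tuple(a + (b - a) * t for a, b in zip(box_a, box_b))

box_a = (100.0, 100.0, 50.0, 50.0)
box_b = (200.0, 120.0, 60.0, 50.0)
midpoint = lerp_box(box_a, box_b, 0.5)
```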

  • Ingester Performance Improved: Target FPS can now be applied differently depending on the camera state (stream, ml, idle). FPS settings can be configured during installation via values, enabling resource efficiency and operational optimization.
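State-dependent target FPS, as described, can be modeled as a simple lookup with install-time overrides. The state names (stream, ml, idle) come from the note, but the numbers below are made up and would normally be supplied via Helm values at installation:

```python
# Default target FPS per camera state; state names come from the release
# note, but these numbers are illustrative, not EVA's actual defaults.
TARGET_FPS = {"stream": 15, "ml": 5, "idle": 1}

def target_fps(state, overrides=None):
    """Resolve the target FPS for a camera state, allowing install-time
    overrides (e.g. from Helm values) to replace the defaults."""
    config = dict(TARGET_FPS)
    config.update(overrides or {})
    return config[state]

idle_fps = target_fps("idle")
tuned_ml_fps = target_fps("ml", {"ml": 10})
```

Dropping FPS for idle cameras is where the resource savings come from: frames are only pulled as fast as the current state actually needs them.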

Bug Fixes  

  • Target-Specific Threshold Change Not Applied Issue Fixed: An issue where threshold values set per target were not applied has been resolved.

  • Target-Specific Fewshot Change Not Applied Issue Fixed: An issue where fewshot learning values set per target were not reflected has been fixed, improving inference accuracy based on model training.

Deprecated  

  • Predefined VLM Agent Prompt Removed: The fixed prompt structure has been removed and replaced with a more flexible prompt configuration method, enabling support for a wider range of scenarios.

v1.1.3 (July 16, 2025)

  • Helm Chart Version: 0.9.4

Bug Fixes  

  • Chat History Retrieval Bug Fixed: Fixed an issue where entering the chat window displayed 100 older records instead of the intended 100 most recent chat records.
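A common cause of this kind of bug is sorting ascending before applying the limit, which selects the oldest rows; the fix is to take the newest rows first and then re-sort for display. A sketch with an assumed record shape:

```python
def latest_chats(records, limit=100):
    """Return the most recent `limit` chat records in chronological order.

    The buggy behavior corresponds to sorting ascending and taking the
    first `limit` rows, which yields older records instead. Record shape
    is illustrative, not EVA's actual schema."""
    newest_first = sorted(records, key=lambda r: r["ts"], reverse=True)[:limit]
    return list(reversed(newest_first))

records = [{"ts": i, "text": f"msg {i}"} for i in range(250)]
window = latest_chats(records)
```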

v1.1.2 (July 3, 2025)

  • Helm Chart Version: 0.9.3

Features  

  • LLM Alert Interval Value Managed in DB: Previously, the default Alert Interval was fixed at 10 seconds. From this version, the value can now be managed in the database. For example, Region A can have a 60-second interval configured via DB, allowing flexible settings based on regional operational policies.

Bug Fixes  

  • Alert Status Stuck Issue Fixed: Fixed an issue where the Alert status remained "Yes" under certain conditions, causing the red alert light to stay on. When a detection occurred in Monitoring mode and the Target suddenly disappeared, the system failed to recognize the change and kept the Alert status active. Now, when the Target disappears and the system switches to detecting mode, the Alert status is properly reset.

v1.1.1 (July 1, 2025)

  • Helm Chart Version: 1.1.1
  • Alembic Version: a9526d6688c9

Features  

  • Improved Image Transmission for LLM (VLM) Analysis Requests: Previously, images with annotation boxes were sent for analysis. Now, original images (without boxes) are sent, allowing the model to infer from more accurate raw data.

  • MySQL-Based DB Execution Environment Added: The system structure has been improved to store key data in a MySQL database.

    • Software version information is now stored in the DB and can be retrieved via a REST API.
    • Chat history storage has been changed from file-based to DB-based, and the frontend interface has been updated accordingly.
    • Alert data is also stored in the DB, enabling history management.
    • Timezone information is now stored in the DB, allowing Teams alert event times to be delivered in local time.

  • Quick Commands for Alert History Lookup Added: A Quick Command feature has been added to allow users to quickly retrieve alert history for improved convenience.

Performance Enhancements  

  • Detector Performance Improved: The detection pipeline has been changed from thread-based to job-based, enabling more efficient use of GPU resources and improving overall inference performance.

Internal Improvements  

  • Helm Repository Applied: A Helm repository has been applied to automate deployment, making installation and updates easier in operational environments.

  • Code Quality Improvements:

    • Unified code style using lint tools to improve readability.
    • Removed unnecessary code from Docker images to optimize image size.
    • Redefined and modified REST APIs between frontend and backend to ensure communication stability and clarity.

v1.0.4 (June 12, 2025)

  • Helm Chart Version: 0.8.2

Features  

  • Web Time Display Based on Local Timezone: Previously, time information on the web interface was displayed in UTC. This update uses the browser's timezone to display local time, reducing confusion and enabling more intuitive operations.

  • Alert Modal Message Info Updated: The message information displayed in the Alert Modal has been revised to be clearer and more intuitive, improving readability and user understanding of warning messages.


v1.0.3 (June 10, 2025)

  • Helm Chart Version: 0.8.1

Bug Fixes  

  • Webhook Image Size Adjustment: Fixed an issue where images sent via Webhook were too large to be delivered to external systems like Microsoft Teams. The image size is now appropriately adjusted for smoother integration.

  • Splunk Forward Interval Setting Added: Added a setting to configure the interval for sending log data to Splunk, allowing adjustment based on operational needs.


v1.0.2 (June 9, 2025)

  • Helm Chart Version: 0.6.3

Features  

  • Splunk Forward Message Updated: Vision Detection log messages have been changed from "vision_detected" to "Vision only". Also, a typo in referencing the DetectorResult variable has been fixed, improving log accuracy.

  • Zeroshot Threshold Value Adjusted: The default threshold value for Zeroshot label detection has been changed to 0.11. The default threshold for Fewshot remains at 0.55. This allows better control over Zeroshot inference sensitivity.

  • Adaptive Prompt Feature Improved: When using a Custom Agent, users can now directly input and configure prompts. The logic for extracting meta information within prompts has also been improved, enabling more accurate context-based inference.

Bug Fixes  

  • "One More" Feature Not Working After Fewshot Fixed: Fixed an issue where the "One More" feature did not work properly after completing a Fewshot inference.