EVA App Release Notes
v2.6.1 (Mar 20, 2026)
- Helm Chart Version: 1.7.2
- Alembic Version: 5e1303b49061
Bug Fixes
- Fixed device table search filtering behavior on the main screen: Resolved an issue where filtering via the search box in the device table on the main screen was not applied correctly. This fix ensures accurate device searches based on the entered conditions.
- Fixed UI display error for the number of registered webhooks: Resolved a bug where, when multiple webhooks were registered, the number shown in the UI did not match the actual number registered. The UI now correctly displays only the registered webhooks, regardless of the total number.
- Fixed detection alert display issue after Monitoring OFF: Resolved an issue where detection alerts sent from the Agent were still displayed in the chat window and database even after Monitoring was turned OFF. Detection alerts are now handled consistently according to the Monitoring status.
- Removed unnecessary tooltip text related to quick commands: Removed unnecessary quick-command-related text from the guide tooltip.
v2.6.0 (Mar 16, 2026)
- Helm Chart Version: 1.7.1
- Alembic Version: 5e1303b49061
Features
- Scenario Detection Using Multiple Images: The functionality has been expanded to perform scenario detection based on multiple images. This allows for a more comprehensive reflection of various situations compared to single-frame detection, resulting in improved overall detection accuracy.
- VM Only Detection Process Added: A VM Only detection process has been added, which generates detection notifications using only the Vision Model (VM) without going through the VLM. This allows for a lightweight detection flow and faster notification processing in certain scenarios.
- Scenario Importance Setting: A feature has been added to define the importance of scenarios requiring urgent response in text. Detection notifications from scenarios set as important are marked as urgent notifications, allowing for prioritized awareness.
- Extended Feedback for False/True Positives and Chat Display: The feedback functionality has been extended to allow input for true positives in addition to false positives. The entered feedback can be viewed in the chat window, indicating whether it is a false or true positive. This information is updated upon refresh or re-entry to the screen, not in real-time.
- Main Camera List Column Configuration Changed: The model name and target information columns have been removed from the main screen camera list. New columns have been added to check notification severity, recent notification summary, scenario summary, and webhook settings, improving readability from an operational perspective.
- Multilingual Support for Vision Model Information: The names and descriptions of Vision Models are automatically displayed in the selected language (Korean/English).
- GUI Improvements:
- The color of the annotation box has been changed to gray tones.
- The drawing method for danger zone (Area) rectangles has been improved, and the related logic has been moved from the backend to the frontend, enhancing UI responsiveness and overall performance.
Performance Enhancements
- License Operation Performance Improvement: Communication with the server is now performed only when necessary, eliminating unnecessary server synchronization and duplicate code execution. The API interface has been reorganized to optimize the overall responsiveness of the web interface.
- Notification List Query Performance Improvement: The application of lazy loading has improved the delay that occurred when users clicked the notification icon.
- Backend DB Query Performance Optimization: Device Data Caching has been applied, utilizing in-memory cache data instead of querying the DB for each camera frame processing. This reduces DB I/O load and improves overall processing performance.
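The caching pattern described above can be sketched as follows. This is a minimal illustration, not EVA's actual code: the `DeviceCache` class, the TTL value, and the loader callback are all hypothetical names introduced for the example.

```python
import time

class DeviceCache:
    """Illustrative in-memory cache: serve device rows from memory
    instead of querying the DB on every frame processed."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._store = {}  # device_id -> (expires_at, row)

    def get(self, device_id, load_from_db):
        entry = self._store.get(device_id)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]            # cache hit: no DB I/O
        row = load_from_db(device_id)  # cache miss: one DB query
        self._store[device_id] = (now + self.ttl, row)
        return row

calls = []
def fake_db_load(device_id):
    calls.append(device_id)
    return {"id": device_id, "name": f"cam-{device_id}"}

cache = DeviceCache(ttl_seconds=30)
cache.get(1, fake_db_load)
cache.get(1, fake_db_load)  # served from memory, no second DB call
```

With per-frame processing, repeated lookups within the TTL window hit memory only, which is where the DB I/O reduction comes from.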
- Frontend Streaming Processing Performance Optimization: Canvas and Bitmap-based rendering using browser GPU acceleration have been applied. Stream processing and rendering have been separated into background threads using Web Worker and OffScreenCanvas (Chrome, Edge). This reduces the main thread load and improves streaming UI responsiveness. Additionally, danger zone area rendering has been moved from the backend to the frontend, reducing frame processing overhead.
- Frontend LLM Chat Processing Performance Optimization: Web Worker has been applied to process WebSocket-based messages (including images) in the background. Virtual components have been introduced to minimize the number of chat DOM renderings, optimizing memory usage and scroll performance.
Bug Fixes
- Webhook Validation Logic Fix: Fixed a bug where the newly registered webhook was not being validated when additional webhooks were registered.
- First Alert Delay Issue After App Launch Fix: Fixed an issue where the first alert was delayed due to the download of the InsightFace model for face mosaic processing after app launch. The InsightFace model is now pre-built into the app Docker image, eliminating the initial alert delay.
- Inconsistent Object Detection Sensitivity for the Same Target Fix: Fixed an issue where the same object detection sensitivity was not consistently applied to the same target when it was registered in both common and custom scenarios simultaneously.
v2.5.0 (Feb 11, 2026)
- Helm Chart Version: 1.6.1
- Alembic Version: 3599f5503a55
Features
- License Feature Added: A new licensing system has been introduced to control activation and usage of EVA features. A valid license key must be issued and registered before EVA can be used.
- Automatic Face Blur: Human faces detected in images are now automatically blurred. Blurred images are applied to detection DB entries, chat alerts, and webhook-transmitted images.
- UI & UX Improvements:
- Tooltip Additions: Tooltips have been added to the Edit, Webhook, and Delete buttons on the camera detail page to improve feature clarity.
- Detection Area Setup Improvements: The default label is now set to “Area,” and the streaming view appears even if no label is entered.
- Detection Alert Information Update: The previous image description has been replaced with EVA's response in the detailed detection alert view.
- Object Detection Sensitivity Update: “Detection Sensitivity” has been renamed to “Object Detection Sensitivity,” with additional guidance added. Sensitivity adjustment is now refined to 0.1 intervals, and newly added targets are automatically selected.
- Step-Based Save Buttons: Each step in the camera detail settings page now includes a Save button, enabling step-by-step configuration saving.
- ‘AI Inference Interval’ Terminology Update: The term “Detection Interval” has been updated to “AI Inference Interval” for clearer terminology.
- Feedback Author Display Improvement: The full user ID of the most recent feedback submitter is now displayed instead of an abbreviated form.
- Detection Scenario Guide Added: Guides have been added to both Common and Custom Scenario configuration sections to help users better understand the setup flow.
Performance Enhancements
- Boot-Time Performance Improvement: When 100+ cameras start simultaneously, DB access bottlenecks caused delays. A coalescing technique has been applied to batch event handling, reducing event-loop latency during the boot process.
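The coalescing technique mentioned above can be sketched roughly as follows, using Python's standard `queue` module; the function name and batch limit are illustrative assumptions, not EVA's implementation.

```python
import queue

def coalesce_events(q, max_batch=100):
    """Drain whatever is currently queued and handle it as one batch,
    so many cameras booting at once trigger one DB round trip
    instead of one per event."""
    batch = [q.get()]  # block until the first event arrives
    while len(batch) < max_batch:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
    return batch

q = queue.Queue()
for cam_id in range(5):
    q.put({"camera": cam_id, "event": "boot"})

batch = coalesce_events(q)
# All five pending boot events are handled in a single pass.
```

The key property is that event-loop latency no longer scales with the number of simultaneously booting cameras: pending events are collapsed into one handler invocation.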
- DB Access Optimization: Camera pipeline initialization now uses a single DB access model, making it better suited for service separation and pipeline microservices. This reduces DB load and improves overall processing performance.
Bug Fixes
- Language Setting Transmission Fix: Resolved an issue where the app’s language setting was not correctly passed to the Agent during scenario generation, ensuring correct processing of language-specific prompts.
- User Input Overwrite Fix: Fixed an issue where automatically refreshed data overwrote user input when navigating between pages or returning to screens.
- Camera List Pagination Fix: Fixed a problem where viewing Page 2+ in the camera list → opening camera detail → pressing back returned the user to Page 1 instead of the page they were on.
- Excessive Debug Log Output Fix: Resolved an issue where Vision ML API 422 errors produced excessive logs due to inclusion of image frame data.
- UI Breakage on Zoom Fix: Fixed an issue where tables broke visually on the camera detail page when the browser zoom level was increased.
v2.4.0 (Jan 19, 2026)
- Helm Chart Version: 1.5.1
- Alembic Version: bc696b7c8353
Features
- Language Settings: Supports both Korean and English. You can select your preferred language in the settings menu, and language changes can only be made by accounts with Admin/Manager privileges.
- Device Favorites Feature Added: Users can mark devices as favorites and filter them for quick access.
- Move Device to Top on Alarm: When an alarm is triggered, the corresponding device moves to the top of the list for focused monitoring.
- Logout Session Timeout Configuration: Allows setting the logout session time in the Config according to the installation environment.
- Improved Mobile Image Access: Fixed the issue where repeated login requests occurred when viewing alarm images on smartphones, enabling image access without login.
- Device Registration UI Enhancement: Removed unnecessary metadata fields and expanded the detection scenario area for better readability.
- Edit Button Behavior Change: Clicking the Edit button on the device detail page now navigates directly to the detection scenario step.
- Zoom Feature for Detection Images: Enables viewing detection images in a larger, separate window via the “View Image” option in the chat.
- Device Name Change: Device names can now be modified even after registration.
- View Detection Images in Slack: Previously available only in Teams; now detection alarm images can also be viewed in Slack.
- Duplicate RTSP Registration: Devices with the same RTSP URL can now be registered multiple times.
Bug Fixes
- Device Registration Error: Fixed an error that occurred when deleting a region after it was set during device registration.
- Chat Scroll Position Issue: Resolved the problem where the scroll moved to the latest position after submitting false-positive feedback; now it stays at the previous position.
- Tooltip Text Correction: Corrected incorrectly displayed tooltip text.
- Duplicate Text Fix: Removed duplicate “Prompt Scenario” text in the Camera Registration > Common Scenario Settings step.
- Modal Text Correction: Fixed incorrect text in the camera information modal on the camera detail page.
- VLM and LLM Model List Update: Corrected discrepancies between the actual available model list and the list displayed on the web.
- Common Scenario Deletion Restriction: Common scenarios cannot be deleted if they are currently applied to one or more devices.
v2.3.0 (Dec 12, 2025)
- Helm Chart Version: 1.4.1
- Alembic Version: 7d2596322f4d
Features
- Multi-Scenario Support: Added support for Common Scenarios that can be applied across multiple cameras and Custom Scenarios tailored for individual cameras. Each camera can apply both scenarios simultaneously for more precise configurations.
- Detection Zone Configuration: Moved away from full-screen detection; users can now freely define specific regions and apply customized scenarios to those areas.
- Monitoring State Persistence: When Eva App restarts, the previous monitoring state is automatically restored.
- New Camera Settings: Read-only settings are now available when connecting via RTSP, including: brightness, contrast, saturation, hue, gain, exposure, convert_rgb, white_balance_temperature, rectification.
- Image Guided Detection: Enhanced image-based detection functionality. The previously complex image management process has been streamlined, allowing intuitive image selection, labeling, registration, modification, and deletion. Additionally, embedding vector data saved through Image Guided Detection can now be reused even after restarting Eva App.
- UX/UI Improvements:
- Applied Slider Bar UI for Detection Sensitivity (Target Threshold) settings.
- UI Terminology Updates: Updated terms for better clarity:
- alert interval → detection interval
- feedback_similarity_threshold → false detection cutoff
- threshold → Detection Sensitivity
Performance Enhancements
- RTSP Streaming Memory Optimization: Improved blob handling and leveraged JavaScript GC algorithms to optimize browser memory usage.
- Backend → Agent Metadata Structure Improvement: Refined metadata structure sent from Backend to Agent to enhance LLM and VLM performance.
Bug Fixes
- Fixed issue where the “Sending..” button in the chat window did not disappear: If a type conversion error occurs during Agent response processing, an appropriate error message is now displayed.
v2.2.6 (Nov 21, 2025)
- Helm Chart Version: 1.3.2
- Alembic Version: b17e2d45ed01
Performance Enhancements
- Improved Inter-Process Data Transmission: The previous approach of inserting frame data into a message queue has been replaced with a Shared Memory mechanism, resulting in enhanced data transmission performance. This change also resolved the issue where Annotation Boxes from Vision Model results were not displayed in the Streaming Viewer.
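A shared-memory handoff like the one described above can be sketched with Python's standard `multiprocessing.shared_memory` module. The byte-string frame and segment handling below are illustrative only; EVA's actual mechanism is not shown in these notes.

```python
from multiprocessing import shared_memory

# Producer side: write frame bytes into a shared segment once,
# instead of copying them through a message queue.
frame = bytes(range(256)) * 4  # stand-in for encoded frame data

shm = shared_memory.SharedMemory(create=True, size=len(frame))
shm.buf[:len(frame)] = frame

# Consumer side (normally another process): attach by name and
# read in place, avoiding the extra queue copy.
reader = shared_memory.SharedMemory(name=shm.name)
received = bytes(reader.buf[:len(frame)])

reader.close()
shm.close()
shm.unlink()  # release the segment when both sides are done
```

Because the consumer attaches to the same segment by name, only the segment name travels through IPC; the frame payload itself is never serialized or re-copied per message.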
v2.2.5 (Oct 31, 2025)
- Helm Chart Version: 1.3.1
- Alembic Version: b17e2d45ed01
Features
- Alert History Enhancements and Expanded Filtering: The previous “Latest Alert” label has been changed to “Alert History”, with added guide text for better usability. Summary information shown in the chat window has been improved, and users can now filter alerts using multiple keyword inputs.
- Customizable EVA Browser Title: The EVA browser title can now be configured via a separate setting, allowing customization based on the deployment site or customer requirements.
Performance Enhancements
- Performance Optimization for Bounding Box Accuracy and Confidence Display: Resolved issues where bounding boxes were inaccurately drawn on objects from previous frames or misaligned with actual object positions. The algorithm has been optimized to ensure more precise and real-time bounding box placement during object detection. Additionally, fixed a bug where confidence values were displayed even when they were below the configured threshold. Confidence is now shown only when it exceeds the defined threshold, providing more accurate feedback on detection reliability.
Bug Fixes
- AI Solution Deployment Status Display Fix: Fixed a bug where the AI Solution status was incorrectly shown as Deployed before the actual deployment was completed in Edge Conductor. The status now correctly reflects Deploying during the process.
- Incorrect Navigation When Model Is Not Selected: When the ML model is deployed but not yet selected on the camera, the Camera List screen now correctly navigates to the model selection (Edit) screen instead of the Edge Conductor registration page.
- Duplicate Alert Display in Chat: Resolved an issue where fast VLM processing caused the same alert to appear twice in the chat window.
v2.2.4 (Oct 21, 2025)
- Helm Chart Version: 1.2.1
- Alembic Version: b17e2d45ed01
Features
- Helm Template Enhancements:
- Updated the value controlling PV usage to enable it for AWS Cloud and on-premise environments, while disabling it for Naver Cloud Platform.
- Added a value to control whether a service account should be created, allowing for more flexible configuration.
- Adjusted the inference deployment YAML to manage credreg usage through the Helm template.
Internal Improvements
- Solution Deploy Code Update for Non-EKS Environments: Modified the solution deployment code to support credreg usage in non-EKS(AWS) environments such as Naver Cloud Platform, ensuring compatibility across various infrastructure setups.
v2.2.3 (Oct 20, 2025)
- Helm Chart Version: 1.1.7
- Alembic Version: b17e2d45ed01
Features
- Search Filter for Latest Alert in Chat: A search filter has been added to the Latest Alert feature in Chat. Users can now filter alert history not only by date but also by keyword, allowing for more precise and efficient alert tracking.
Performance Enhancements
- Low-Light Mode Execution Time and Image Quality Optimization: Improved the execution time of low-light mode transitions and enhanced image quality during streaming. This update addresses issues where image clarity was reduced under low-light conditions and optimizes the overall performance of low-light processing.
v2.2.2 (Oct 17, 2025)
- Helm Chart Version: 1.1.6
- Alembic Version: b17e2d45ed01
Bug Fixes
- Fix for Brightness Setting Command in Chat: Resolved an issue where the set brightness to brightness_param command did not function correctly in chat.
v2.2.1 (Oct 17, 2025)
- Helm Chart Version: 1.1.5
- Alembic Version: b17e2d45ed01
Features
- EVA-App UUID Display on Web Setting Page: The Web Setting page now displays the UUID of each EVA-App. This enhancement allows for clearer identification and utilization of EVA-App identifiers within Edge Conductor, improving integration and system management.
- Night Mode Indicator on Stream Screen: When night mode is automatically applied due to low brightness in the frame, an icon is now displayed on the stream screen. This provides users with a visual cue that brightness enhancement is active, improving transparency and user awareness.
Bug Fixes
- Preventing Unnecessary VLM Requests in Vision Model: The system has been updated to prevent VLM requests when no object is detected by the Vision Model. This reduces unnecessary inference calls and optimizes resource usage.
- Default Brightness Parameter Adjustment for Camera Registration: Upon camera registration, the default value for the User Brightness Parameter has been changed from 0 to 50, corresponding to a gamma value of 1.5. This adjustment improves initial image clarity and enhances user experience.
- Webhook Configuration Improvements: Several enhancements have been made to the Webhook setup process:
- When only one Webhook is configured, the delete (–) button is disabled to prevent accidental removal.
- Informative messages are now displayed for Webhook entries that have not yet passed validation, helping users complete setup accurately and confidently.
v2.2.0 (Oct 2, 2025)
- Helm Chart Version: 1.1.1
- Alembic Version: 1c69e95a8307
Features
- Low-Light Image Enhancement Logic Applied: A preprocessing logic has been added to automatically adjust the brightness of images captured in low-light environments. This allows ML and VLM models to infer based on clearer visual information, improving overall recognition accuracy.
- Expanded Vision Model Capabilities: The architecture has been improved to support multiple ML models within a single Vision Solution. Previously, Vision Solutions were mapped per device, but now they can be mapped per EVA App, allowing flexible selection of models optimized for each device domain.
- Image Capture Frame Preview on New Device Registration: A new feature allows users to preview image capture frames in real-time during device registration, enabling immediate verification of device connection status and video quality.
- Quick Button UI/UX Enhancement: The user interface has been improved with a newly designed Quick Button for faster access to key features. Frequently used functions such as Monitoring ON/OFF, Target settings, Alert settings, and viewing recent Alerts are now more intuitive.
- New Scenario for Enrich Prompt Applied: When the user clicks the Generate button, an appropriate prompt is automatically applied. This generates a high-performance prompt even with minimal user input, reducing the burden of prompt writing and supporting efficient workflows.
- Enhanced False Positive Feedback Functionality: The feedback feature for false positives (misrecognitions) has been expanded for more precise input.
- A description field has been added to record detailed explanations of false positive cases.
- A Similarity Threshold value can be set based on the similarity between the false positive image and the original image, improving the precision and usability of feedback.
- Device-Specific VLM Inference Interval Setting: A new Alert Interval feature allows setting individual VLM inference intervals per device. This enables optimal inference cycles tailored to each device's characteristics and operating environment.
- New Webhook URL Pattern Applied: The Webhook URL pattern for integration with external systems has been updated.
Bug Fixes
- Alert and Chat Message Lifecycle Improvement: The retention period for Alert and Chat message data can now be configured. Data is automatically deleted after a set period, helping to manage storage efficiently and improve system performance by reducing clutter.
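The retention behavior described above amounts to pruning rows older than a configured cutoff. The sketch below is an illustrative assumption (function name, record shape, and retention value are invented for the example), not EVA's actual lifecycle code.

```python
from datetime import datetime, timedelta

def prune_expired(messages, retention_days, now):
    """Keep only alert/chat rows newer than the configured retention period."""
    cutoff = now - timedelta(days=retention_days)
    return [m for m in messages if m["created_at"] >= cutoff]

now = datetime(2025, 10, 2)
messages = [
    {"id": 1, "created_at": now - timedelta(days=40)},  # past retention: deleted
    {"id": 2, "created_at": now - timedelta(days=5)},   # within retention: kept
]
kept = prune_expired(messages, retention_days=30, now=now)
```

In practice this kind of pruning would run on a schedule (or as a DB `DELETE ... WHERE created_at < cutoff`), so storage stays bounded by the configured window.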
- Device List Loading Delay Issue Fixed: The issue where the device list loaded slowly has been resolved, allowing faster access to device information upon page entry.
- New Device List Position Adjustment: Newly registered devices now appear at the top of the list instead of the bottom, making it easier to find and configure recently added devices.
- Fixed Issue with Teams Webhook Not Sending: An issue where Teams Webhook messages were not being sent in certain scenarios has been resolved. Notifications are now reliably delivered to external systems, ensuring uninterrupted communication flow.
- Prevented VLM Request Loop Termination on Agent Error: The logic has been updated to prevent the VLM request thread loop from terminating when an HTTP error (e.g., httpx.HTTPStatusError) is returned by the VLM Agent. This enhancement ensures stable system operation even in the presence of external service errors.
v2.1.0 (Aug 26, 2025)
- Helm Chart Version: 1.0.1
- Alembic Version: a9526d6688c9
Features
- Image Brightness Preprocessing Setting Added: Users can now enable image brightness adjustment through configuration settings.
- Enrich Detection Scenario Feature Applied: Detection scenarios entered by users are automatically enriched by the LLM based on camera context information, enabling more precise inference.
- Quick Commands Feature Improved: Frequently used commands such as Monitoring On/Off, Brightness settings, and viewing recent Alerts can now be executed more easily through improved Quick Commands.
- Web Title Changed: The web page title has been changed from "Edge Vision Agent" to "EVA" to strengthen brand consistency.
Internal Improvements
- SSE Applied to Camera Detail View: Server-Sent Events (SSE) have been applied to the camera detail view, allowing real-time UI updates when values change.
- Chat History Sent with LLM Queries: The last 7 chat history entries are now sent with LLM queries to improve context-aware response quality. Only text-based chats are included (Alert, Fewshot, and Bright image data are excluded).
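The selection rule above (text-only entries, most recent 7) can be sketched as follows; the function name, record shape, and type labels are illustrative assumptions, not EVA's actual schema.

```python
def build_history(chats, limit=7):
    """Keep only plain text chats (dropping alert, fewshot, and bright-image
    entries) and return the most recent `limit` of them for the LLM query."""
    text_only = [c for c in chats if c["type"] == "text"]
    return text_only[-limit:]

chats = (
    [{"type": "alert", "text": "intrusion detected"}]
    + [{"type": "text", "text": f"msg{i}"} for i in range(10)]
    + [{"type": "fewshot", "text": "sample image"}]
)
history = build_history(chats)
# history holds the 7 most recent text messages, msg3 through msg9
```

Filtering before truncating matters here: excluded entry types never consume one of the 7 context slots.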
- VLM and LLM Model Info Managed in DB: Backend structure has been updated to manage VLM and LLM model information in a database. GPT-OSS and Qwen2.5-VL are registered by default, laying the foundation for future model expansion.
- Response Config Info API Added (Admin Only): A new Dump API allows administrators to retrieve system configuration information, improving operational management efficiency.
- LLM Server Exception Logging Added: Exception events on LLM and VLM servers are now logged to enhance debugging and operational stability.
Performance Enhancements
- Streamer Process Architecture Improved: The streamer now operates on a process-based architecture instead of thread-based, improving streaming performance and resolving deadlock issues caused by inter-thread queues.
- Streamer Annotation Boxing Algorithm Applied: An extrapolation algorithm based on speed prediction has been applied, allowing visualization at predicted positions even when boxing data is unavailable.
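A speed-based extrapolation of this kind can be illustrated with a constant-velocity sketch: predict the next box position from the displacement between the last two known boxes. The function and box format below are assumptions for illustration, not the streamer's actual algorithm.

```python
def extrapolate_box(prev_box, curr_box, frames_ahead=1):
    """Predict a future box (x1, y1, x2, y2) from the per-frame velocity
    implied by the last two known boxes, so a box can still be drawn
    when detection data for the current frame has not arrived yet."""
    return tuple(
        c + (c - p) * frames_ahead  # constant-velocity assumption
        for p, c in zip(prev_box, curr_box)
    )

# A box moving 10 px to the right per frame:
prev = (100, 50, 200, 150)
curr = (110, 50, 210, 150)
predicted = extrapolate_box(prev, curr)
# predicted continues the motion one frame ahead: (120, 50, 220, 150)
```

The trade-off is that a fast direction change makes the predicted box lag briefly until fresh detection data arrives, which is why this is used only to fill gaps between real results.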
Bug Fixes
- ML Execution Halt on Disconnected Camera Monitoring Fixed: An issue where ML execution for all cameras stopped when monitoring was started on a disconnected camera has been resolved.
- LLM Error on Detection Scenario Change Fixed: An issue where the LLM failed to operate correctly after changing the detection scenario has been fixed, ensuring stable scenario-based inference.
v2.0.0 (Aug 19, 2025)
- Helm Chart Version: 1.0.0
- Alembic Version: 77560c1fffa6
Features
- New UX Design Applied: A new UX design has been applied to improve user experience, with a more intuitive layout and feature placement for better accessibility and convenience.
- User Account Feature Added: A user account system has been introduced to distinguish user permissions within the system. Roles include Admin, Manager, and User, each with different access levels.
- Enhanced Alert Functionality: The alert feature has been expanded to provide data on operational events and real-time notifications for quick response. Users can also provide feedback on detection images to help improve the model.
- Improved User Interface: Monitoring start/stop buttons are now available on the device list screen for easier control. RTSP connection status (Conn / Disconn) can be checked in real-time when connecting new devices. A timezone setting has also been added to the EVA settings menu for regional customization.
Internal Improvements
- Separated Interfaces for LLM and VLM: The inference structure has been improved by separating the roles of LLM and VLM, enhancing detection performance and allowing independent management for better maintainability and scalability.
- Data Storage Method Changed: The storage method has been changed from file-based to MySQL DB-based. Key data (e.g., edges, model, target) is now stored in a structured format, making it easier to search and manage.
Performance Enhancements
- Streaming Performance Improved: The video streaming protocol has been changed from WebSocket to Server-Sent Events (SSE), improving stability and real-time processing performance.
- ML Result Visualization Improved: A linear interpolation algorithm has been applied to the annotation box for ML inference results, improving draw performance on the web interface for smoother and more accurate visual representation.
- Ingester Performance Improved: Target FPS can now be applied differently depending on the camera state (stream, ml, idle). FPS settings can be configured during installation via values, enabling resource efficiency and operational optimization.
Bug Fixes
- Target-Specific Threshold Change Not Applied Issue Fixed: An issue where threshold values set per target were not applied has been resolved.
- Target-Specific Fewshot Change Not Applied Issue Fixed: An issue where fewshot learning values set per target were not reflected has been fixed, improving inference accuracy based on model training.
Deprecated
- Predefined VLM Agent Prompt Removed: The fixed prompt structure has been removed and replaced with a more flexible prompt configuration method, enabling support for a wider range of scenarios.
v1.1.3 (July 16, 2025)
- Helm Chart Version: 0.9.4
Bug Fixes
- Chat History Retrieval Bug Fixed: Fixed an issue where the last 100 chat records were supposed to be displayed when entering the chat window, but instead, 100 older records were shown.
v1.1.2 (July 3, 2025)
- Helm Chart Version: 0.9.3
Features
- LLM Alert Interval Value Managed in DB: Previously, the default Alert Interval was fixed at 10 seconds. From this version, the value can now be managed in the database. For example, Region A can have a 60-second interval configured via DB, allowing flexible settings based on regional operational policies.
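The lookup-with-fallback behavior described above can be sketched as follows; the table, region keys, and function name are illustrative stand-ins for the DB-backed configuration, not EVA's actual schema.

```python
DEFAULT_ALERT_INTERVAL = 10  # seconds, used when no DB override exists

# Stand-in for the DB table of per-region interval overrides.
interval_table = {"region-a": 60}

def alert_interval(region_id):
    """Return the configured alert interval for a region,
    falling back to the fixed default when no row exists."""
    return interval_table.get(region_id, DEFAULT_ALERT_INTERVAL)

alert_interval("region-a")  # configured in the DB: 60 seconds
alert_interval("region-b")  # no override: falls back to 10 seconds
```

The advantage over the previous hard-coded value is that operators can change an interval with a DB update, without redeploying the app.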
Bug Fixes
- Alert Status Stuck Issue Fixed: Fixed an issue where the Alert status remained "Yes" under certain conditions, causing the red alert light to stay on. When a detection occurred in Monitoring mode and the Target suddenly disappeared, the system failed to recognize the change and kept the Alert status active. Now, when the Target disappears and the system switches to detecting mode, the Alert status is properly reset.
v1.1.1 (July 1, 2025)
- Helm Chart Version: 1.1.1
- Alembic Version: a9526d6688c9
Features
- Improved Image Transmission for LLM(VLM) Analysis Requests: Previously, images with annotation boxes were sent for analysis. Now, original images (without boxes) are sent, allowing the model to infer based on more accurate raw data.
- MySQL-Based DB Execution Environment Added: The system structure has been improved to store key data in a MySQL database.
- Software version information is now stored in the DB and can be retrieved via a REST API.
- Chat history storage has been changed from file-based to DB-based, and the frontend interface has been updated accordingly.
- Alert data is also stored in the DB, enabling history management.
- Timezone information is now stored in the DB, allowing Teams alert event times to be delivered in local time.
- Quick Commands for Alert History Lookup Added: A Quick Command feature has been added to allow users to quickly retrieve alert history for improved convenience.
Performance Enhancements
- Detector Performance Improved: The detection pipeline has been changed from thread-based to job-based, enabling more efficient use of GPU resources and improving overall inference performance.
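A job-based pipeline of this kind can be illustrated with a shared worker pool: instead of one long-lived thread per camera, each frame becomes an independent job. The pool size and `run_inference` stand-in below are assumptions for illustration, not the detector's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

def run_inference(frame_id):
    """Stand-in for one detection job dispatched to the GPU."""
    return frame_id * 2  # placeholder for a detection result

# Each incoming frame is submitted as a job to a bounded worker pool,
# so idle cameras consume no workers and busy ones share capacity.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_inference, fid) for fid in range(8)]
    results = [f.result() for f in futures]
```

Collecting futures in submission order keeps results aligned with frames while still letting the pool schedule work onto whichever worker is free.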
Internal Improvements
- Helm Repository Applied: A Helm repository has been applied to automate deployment, making installation and updates easier in operational environments.
- Code Quality Improvements:
- Unified code style using lint tools to improve readability.
- Removed unnecessary code from Docker images to optimize image size.
- Redefined and modified REST APIs between frontend and backend to ensure communication stability and clarity.
v1.0.4 (June 12, 2025)
- Helm Chart Version: 0.8.2
Features
- Web Time Display Based on Local Timezone: Previously, time information on the web interface was displayed in UTC. This update uses the browser's timezone to display local time, reducing confusion and enabling more intuitive operations.
- Alert Modal Message Info Updated: The message information displayed in the Alert Modal has been revised to be clearer and more intuitive, improving readability and user understanding of warning messages.
v1.0.3 (June 10, 2025)
- Helm Chart Version: 0.8.1
Bug Fixes
- Webhook Image Size Adjustment: Fixed an issue where images sent via Webhook were too large to be delivered to external systems like Microsoft Teams. The image size is now appropriately adjusted for smoother integration.
- Splunk Forward Interval Setting Added: Added a setting to configure the interval for sending log data to Splunk, allowing adjustment based on operational needs.
v1.0.2 (June 9, 2025)
- Helm Chart Version: 0.6.3
Features
- Splunk Forward Message Updated: Vision Detection log messages have been changed from "vision_detected" to "Vision only". Also, a typo in referencing the DetectorResult variable has been fixed, improving log accuracy.
- Zeroshot Threshold Value Adjusted: The default threshold value for Zeroshot label detection has been changed to 0.11. The default threshold for Fewshot remains at 0.55. This allows better control over Zeroshot inference sensitivity.
- Adaptive Prompt Feature Improved: When using a Custom Agent, users can now directly input and configure prompts. The logic for extracting meta information within prompts has also been improved, enabling more accurate context-based inference.
Bug Fixes
- "One More" Feature Not Working After Fewshot Fixed: Fixed an issue where the "One More" feature did not work properly after completing a Fewshot inference.