
Target

Target settings determine which objects the Vision ML model will detect in the footage. This is a critical configuration for efficiently utilizing EVA’s system resources.

Basics of Target Settings

When creating a detection scenario, objects mentioned in the scenario are automatically extracted and set as Targets. Users can add or modify objects in this list at any time.
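The automatic extraction step can be pictured with a minimal sketch. The vocabulary and the extraction logic below are illustrative assumptions, not EVA's actual implementation; in EVA the extraction happens automatically when the scenario is created.

```python
# Hypothetical sketch of target extraction from a scenario description.
# KNOWN_OBJECTS is an assumed object vocabulary, not EVA's real one.
KNOWN_OBJECTS = {"person", "helmet", "forklift", "fire", "smoke"}

def extract_targets(scenario: str) -> list[str]:
    """Return scenario words that match the known object vocabulary."""
    words = {w.strip(".,").lower() for w in scenario.split()}
    return sorted(words & KNOWN_OBJECTS)

targets = extract_targets(
    "Alert when a person without a helmet approaches a forklift."
)
# Users can add or modify entries in this list at any time:
targets.append("safety vest")
```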

Model Selection and Threshold Adjustment

EVA provides several Vision ML models; switching to one that detects your specified target objects more accurately can improve analysis quality.

Additionally, you can fine-tune the threshold to control the sensitivity of object detection. Raising the threshold keeps only high-confidence detections, reducing false positives; lowering it admits lower-confidence detections, reducing missed objects (false negatives). Adjust this based on your operational environment to balance the two.
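The effect of the threshold can be sketched as a simple confidence filter. The structure of the detection records and the function name here are assumptions for illustration, not EVA's actual API.

```python
# Illustrative detections as a model might emit them: each carries a
# confidence score. The records and values are made up for the example.
detections = [
    {"label": "person",   "score": 0.91},
    {"label": "person",   "score": 0.48},
    {"label": "forklift", "score": 0.62},
]

def filter_by_threshold(dets, threshold):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in dets if d["score"] >= threshold]

high = filter_by_threshold(detections, 0.6)  # stricter: fewer false positives
low  = filter_by_threshold(detections, 0.3)  # looser: fewer missed objects
```

With a threshold of 0.6, the uncertain second "person" detection is dropped; with 0.3, all three detections pass.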


Utilizing the Image Guided Detection Function

EVA’s Vision ML models are built on foundation models with zero-shot object detection capabilities, enabling excellent performance for detecting common objects. However, domain-specific objects (e.g., equipment with a specific logo or uniquely shaped components) may not be detected accurately.

In such cases, EVA provides the Image Guided Detection function.

  1. Provide Reference Image: Upload a reference image of the domain-specific object to the system.
  2. Object Detection: The Vision ML model uses this reference image to detect the object without additional training.
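The two steps above can be sketched as an embedding-similarity comparison: candidate regions in a frame are matched against an embedding of the reference image, with no retraining involved. This is a minimal conceptual sketch; the real detector's embedding function is replaced here by precomputed toy vectors, and the similarity cutoff of 0.8 is an assumption.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embedding of the uploaded reference image (stand-in values).
reference = [0.9, 0.1, 0.2]

# Toy embeddings of candidate regions found in a video frame.
regions = {
    "region_1": [0.88, 0.12, 0.25],  # visually similar to the reference
    "region_2": [0.05, 0.95, 0.10],  # unrelated object
}

# Accept regions whose similarity to the reference exceeds the cutoff.
matches = [name for name, emb in regions.items()
           if cosine(reference, emb) > 0.8]
```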

💡 The VLM used for situation judgment requires significant computational resources. In contrast, Vision ML models are much lighter, making them efficient for continuous object detection. By configuring Vision ML to detect objects accurately and requesting VLM situation judgment only when the detection conditions are met, you can use system resources efficiently.
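The gating pattern described in this callout can be sketched as follows. Both model calls are stubs, and the trigger condition and function names are assumptions for illustration: the lightweight detector runs on every frame, while the expensive VLM is invoked only when the condition is met.

```python
def detect_objects(frame):
    # Stand-in for the lightweight Vision ML pass on one frame.
    return frame["objects"]

def vlm_judge(frame):
    # Stand-in for the heavyweight VLM situation-judgment call.
    return f"situation report for frame {frame['id']}"

def process(frames, trigger=frozenset({"person", "forklift"})):
    """Run the detector on every frame; call the VLM only on trigger."""
    reports = []
    for frame in frames:
        if trigger.issubset(detect_objects(frame)):  # condition met?
            reports.append(vlm_judge(frame))         # only then call the VLM
    return reports

frames = [
    {"id": 1, "objects": {"person"}},
    {"id": 2, "objects": {"person", "forklift"}},
    {"id": 3, "objects": set()},
]
reports = process(frames)  # VLM invoked for frame 2 only
```

Because the VLM runs on one frame out of three here, the heavy model's cost scales with how often the condition fires rather than with the frame rate.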