
8 posts tagged with "VLM"

Covering Vision-Language Model based techniques and application examples.


EVA x Rebellions: Journey of EVA on NPU

· 4 min read
Gyulim Gu
Tech Leader

The integration and optimization journey between Mellerikat EVA and Rebellions NPU clearly demonstrates the future direction of next-generation AI infrastructure. Through this project, we verified that NPU-based architectures can address the high cost and power consumption challenges of traditional GPU-centric infrastructures. In particular, in Physical AI environments—where real-time perception and reasoning are critical—we confirmed the potential to achieve both significant TCO (Total Cost of Ownership) reduction and high performance simultaneously.

Today, we would like to share the porting process of moving GPU-based models to NPUs, along with the technical challenges behind it, which many people have been curious about.


1. The NPU Porting Process for GPU Models

Since NPUs are designed to accelerate specific types of computations, newly released models cannot be executed immediately without adaptation. To fully utilize the hardware’s capabilities, several essential steps are required.

  • Model Conversion

    The original models developed in PyTorch or TensorFlow must be converted into an executable format that the NPU can understand. Using the ATOM Compiler from Rebellions, the model’s computational graph is analyzed and converted into the .rbln executable format optimized for the NPU architecture.

  • NPU-Optimized Compilation

    The model is compiled into a hardware-optimized executable using the compiler in the Rebellions SDK (RBLN SDK).

    • Graph Optimization: Removes redundant operations and reorganizes the data flow.
    • Operator Fusion: Combines multiple small operations into a single large kernel to reduce memory access and execution overhead.
    • Data Layout Optimization: Adjusts tensor layouts to match the NPU memory architecture, improving data access efficiency.
  • Quantization

    Computational precision is adjusted to match the NPU architecture, improving both performance and memory efficiency. In the case of EVA, we optimized the model to ensure stable performance under an FP16-based inference environment.

  • vLLM Integration and Validation

    The optimized model is deployed within the vLLM-RBLN serving framework. Key metrics such as TTFT (Time To First Token) and throughput are measured and validated against GPU-based environments.
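The four steps above can be sketched as a simple pipeline. This is a hedged illustration only: every function name and dictionary field below (`convert_model`, `compile_for_npu`, and so on) is a placeholder standing in for the corresponding stage, not the actual RBLN SDK API.

```python
# Hypothetical sketch of the GPU-to-NPU porting pipeline described above.
# None of these calls belong to the real Rebellions SDK; they are
# illustrative stand-ins for the four stages.

def convert_model(model: dict) -> dict:
    """Step 1: analyze the computational graph and convert it into an
    NPU-executable representation (analogous to the .rbln format)."""
    return {**model, "format": "rbln"}

def compile_for_npu(model: dict) -> dict:
    """Step 2: graph optimization, operator fusion, layout optimization."""
    passes = ["graph_optimization", "operator_fusion", "layout_optimization"]
    return {**model, "passes": passes}

def quantize(model: dict, precision: str = "fp16") -> dict:
    """Step 3: adjust computational precision (EVA targets FP16)."""
    return {**model, "precision": precision}

def deploy_and_validate(model: dict) -> dict:
    """Step 4: deploy to the serving framework and record the metrics
    compared against the GPU baseline (TTFT, throughput)."""
    return {**model, "serving": "vllm-rbln", "metrics": ["ttft", "throughput"]}

def port_to_npu(model: dict) -> dict:
    for stage in (convert_model, compile_for_npu, quantize, deploy_and_validate):
        model = stage(model)
    return model

ported = port_to_npu({"name": "eva-vlm", "framework": "pytorch"})
```

The point of the sketch is the ordering: conversion must precede compilation, and validation only makes sense once a quantized, hardware-optimized executable exists.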


2. EVA Application Optimization and Technical Challenges

After porting the foundation model, the next step is deploying the actual service layer—the EVA Application. During this stage, we have been implementing the following optimization roadmap.

  • EVA Vision Optimization (1:1 Mapping & Batching)

    We mapped NPU cores and Vision Workers in a 1:1 configuration, eliminating context-switching overhead. In addition, by applying continuous batching techniques, we are building a foundation capable of processing data from hundreds of cameras in real time without latency.

  • EVA Agent Optimization (Reducing VLM Load)

    The input resolution of the Vision-Language Model (VLM) was standardized to 1280×720, and a two-stage reasoning architecture was applied to minimize unnecessary VLM calls. This immediately reduces the computational load on the Vision Encoder, which is one of the most expensive components in the pipeline.

  • System Memory Management and KV Cache Optimization

    In collaboration with Rebellions, we analyzed the memory usage patterns of vLLM-RBLN instances and improved resource utilization using a page-based memory management structure. This optimization allows the system to process a larger volume of visual data reliably within the same hardware environment.

  • Parallel Processing of the VLM Vision Encoder

    We are also improving the parallel execution architecture of the Vision Encoder, which accounts for a large portion of the computation in VLM inference. By optimizing how Vision Encoder operations are distributed across multiple NPU cores, we aim to significantly improve VLM serving throughput.
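The 1:1 core-to-worker mapping described above can be sketched as follows. The core count, worker structure, and batch size are illustrative assumptions, not EVA's actual implementation.

```python
# Minimal sketch of a 1:1 NPU-core-to-vision-worker mapping with simple
# batch draining. All numbers and structures here are illustrative.
from collections import deque

class VisionWorker:
    def __init__(self, core_id: int):
        self.core_id = core_id  # dedicated NPU core: no context switching

    def process(self, frames: list) -> list:
        # Stand-in for NPU inference pinned to this worker's core.
        return [f"core{self.core_id}:{f}" for f in frames]

NUM_CORES = 4
workers = [VisionWorker(core_id=i) for i in range(NUM_CORES)]

# Frames from many cameras share one queue; each worker drains a batch
# at a time, approximating the continuous-batching idea.
queue = deque(f"cam{i}" for i in range(8))
results = []
batch_size = 2
for worker in workers:
    batch = [queue.popleft() for _ in range(min(batch_size, len(queue)))]
    results.extend(worker.process(batch))
```

Because each worker owns exactly one core, no two workers contend for the same hardware context, which is the property that eliminates switching overhead.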


3. Conclusion: Evolving from PoC to a Production-Ready Solution

We are continuously addressing technical challenges discovered during stress testing while refining optimizations that maximize hardware utilization. From parallel processing of the Vision Encoder through close collaboration with Rebellions to the development of an intelligent scheduler within the EVA platform, every step is part of transforming “EVA on NPU” from a simple proof-of-concept (PoC) into a production-ready solution.

Ultimately, the success of AI services depends on meeting three essential conditions: economic efficiency, scalability, and service quality. EVA will continue to actively adopt the latest NPU technologies and present a global standard for Physical AI platforms—delivering the most competitive TCO and outstanding performance for our customers.

Multi-Frame Based VLM Detection: Moving Beyond Single Image Limits to Temporal Context

· 7 min read
Gyulim Gu
Tech Leader
Seongwoo Kong
AI Specialist
Taehoon Park
AI Specialist
Jisu Kang
AI Specialist

Is a Single Frame Enough?

Recently, Vision-Language Models (VLMs) have demonstrated exceptional performance in understanding individual images. Large-scale multimodal models have theoretically expanded the possibilities of multi-frame reasoning by introducing architectures that process multiple images alongside text prompts.

However, real-world industrial detection scenarios are far more complex than controlled research environments. Problems that seem straightforward with a single frame often lead to various false positives and edge cases in production.

Consider a scene where a person is lying on the floor. Looking at that single moment, it is easy to categorize it as a "collapse." But what if the previous frame showed them stretching, or simply changing posture while working?

In nighttime environments, lens flares, light reflections, or glare can mimic the color patterns of fire, leading to false fire detections when based on a single image. When even humans find it difficult to be certain from a single snapshot, providing a model with only one frame inevitably creates structural limitations.



These cases all share a common problem: a "lack of context."




Time is the Most Powerful Context

Many detection scenarios inherently rely on a temporal flow.

For instance, "loitering" can only be defined by observing a pattern of staying in the same space for a certain period. Similarly, "long-term abandonment" requires the condition that an object remains unchanged for a specific duration after being placed.

Attempting to solve these problems with a single frame is structurally difficult because the focus must be on "change," not just "state."

We have categorized this into three levels of context:

  • Single Image-based Judgment
  • Short-term Multi-image Contextual Judgment (Momentary context)
  • Temporal Judgment (Involving long-term flow)

In actual operating environments, these three levels coexist. Some scenarios are sufficient with a single frame, some require consecutive frames at intervals of a few seconds, and others require tracking a flow over tens of seconds.




EVA's Multi Frame Manager

In EVA, user-defined scenarios are not treated as simple text conditions. The system analyzes the "level of context" required by each scenario and determines an appropriate frame collection strategy.

For example, "fainting detection" requires multi-images covering a few seconds before and after the event, rather than a single frame. In contrast, "long-term abandonment" requires continuous frame collection over a specific duration based on a sliding window.

The module responsible for this process is the Multi Frame Manager. This module dynamically determines the following based on the scenario characteristics:

  • Number of frames required
  • Collection intervals
  • Retention time
  • Event trigger expansion

Collected images are not simply listed. They are delivered to the VLM in a clearly sorted chronological order, accompanied by system prompts that guide the model to compare changes between frames.
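A policy resolver in the spirit of the Multi Frame Manager might look like the sketch below. The scenario names and every numeric value (frame counts, intervals, retention times) are illustrative assumptions, not EVA's actual configuration.

```python
# Hedged sketch of a frame-collection policy resolver. Values are
# illustrative; EVA's real Multi Frame Manager determines these
# dynamically from scenario characteristics.
from dataclasses import dataclass

@dataclass
class FramePolicy:
    num_frames: int     # number of frames required
    interval_s: float   # collection interval in seconds
    retention_s: float  # how long collected frames are retained

POLICIES = {
    "single_image": FramePolicy(num_frames=1, interval_s=0.0, retention_s=0.0),
    # Fainting: a few seconds before and after the event.
    "fainting": FramePolicy(num_frames=5, interval_s=1.0, retention_s=10.0),
    # Long-term abandonment: sliding window over a longer duration.
    "abandonment": FramePolicy(num_frames=12, interval_s=5.0, retention_s=60.0),
}

def resolve_policy(scenario: str) -> FramePolicy:
    # Fall back to single-image judgment when no temporal context is needed.
    return POLICIES.get(scenario, POLICIES["single_image"])
```

The three policy shapes correspond directly to the three context levels above: single image, short-term multi-image, and long-term temporal flow.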




Multi-Image Based VLM Inference Strategy

When multi-frame input is received, the VLM does more than just return independent detection results. In EVA, we designed the inference structure to interpret multi-images as a continuous temporal context rather than an independent set of images.

To achieve this, frames are delivered to the model using the following strategies:

  • Chronological Frame Alignment: Constructs time-series data from past to present to understand causality.
  • Comparative System Prompts: Uses instructions like "Identify changes compared to the previous frame" to analyze inter-frame correlations.
  • Temporal Reasoning: Derives logical conclusions based on state changes over time rather than fragmented snapshot judgments.
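The first two strategies can be sketched as a request builder: frames sorted by timestamp, plus a system prompt asking for inter-frame comparison. The chat-message layout below is an OpenAI-style assumption for illustration, not EVA's exact wire format.

```python
# Sketch of assembling a multi-frame VLM request: chronological frame
# alignment plus a comparative system prompt. The message schema is an
# assumed OpenAI-style format, not EVA's actual API.

def build_request(frames: list[dict], question: str) -> list[dict]:
    ordered = sorted(frames, key=lambda f: f["timestamp"])  # past -> present
    system = (
        "The images are consecutive frames in chronological order. "
        "Identify changes compared to the previous frame and reason about "
        "state changes over time, not individual snapshots."
    )
    content = [{"type": "image_url", "image_url": f["url"]} for f in ordered]
    content.append({"type": "text", "text": question})
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": content},
    ]

frames = [
    {"timestamp": 2.0, "url": "frame_t2.jpg"},
    {"timestamp": 0.0, "url": "frame_t0.jpg"},
    {"timestamp": 1.0, "url": "frame_t1.jpg"},
]
messages = build_request(frames, "Has the person collapsed?")
```

Sorting before assembly matters: if frames arrive out of order from the collection pipeline, an unsorted request would invert the causality the model is asked to reason about.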

Case Study: The Power of Temporal Context in Reducing False Positives

The following case demonstrates how fragmented information from a single frame is accurately corrected through the "context" of multiple frames.



  • Single Image: A person is stationary in a low, prone position. A VLM looking only at this moment is highly likely to misinterpret the situation as "Collapse."
  • Multi-Image: In the subsequent frames, subtle movements are captured—the person moves their arms to operate a phone and tilts their head to look at the screen.
  • Result: Through Temporal Reasoning, EVA correctly concludes this is "Sitting and using a phone detected."

The core idea is to guide the model to understand the situation by comparing differences between frames, rather than judging each frame individually.

For high-risk detections like fainting, the model undergoes a process of Progressive Situation Refinement:

  1. Initial State Identification: Identifying the target object and initial visual features (e.g., prone posture).
  2. Dynamic Change Detection: Tracking meaningful changes in body angles or voluntary movements compared to previous frames.
  3. Consistency Verification: Determining if the posture is a forced freeze due to impact or involves intentional actions.
  4. Final Context Determination: Distinguishing between visual noise with similar patterns and actual events.

This Temporal Reasoning structure significantly reduces false positives in edge cases that plague single-image systems, providing much more stable results in real-world operations.


| Category | Accuracy (Single) | Precision (Single) | Recall (Single) | Accuracy (Multi) | Precision (Multi) | Recall (Multi) |
|---|---|---|---|---|---|---|
| No PPE | 0.66 | 0.87 | 0.68 | 0.76 | 0.87 | 0.82 |
| No Mask (Working) | 0.94 | 0.69 | 0.54 | 0.93 | 0.76 | 0.52 |
| Loitering | 0.49 | 0.92 | 0.33 | 0.63 | 0.85 | 0.64 |
| Fainting | 0.87 | 1.0 | 0.36 | 0.96 | 1.0 | 0.82 |

Ultimately, EVA’s multi-frame inference structure is not just about increasing the number of input images—it is an approach that directly integrates temporal change into the model's reasoning process.




The Cost of Multi-Frame: Computational Overload

Improvements in accuracy come with a price.

While multi-frame reasoning allows for more visual information, it also leads to increased computational costs. In multimodal models, image inputs are generally converted into embeddings via a Vision Encoder before being passed to the LLM, a process that is relatively resource-intensive.

Specifically, multi-frame analysis often encounters the following:

  • Identical or very similar images repeating in a sequence.
  • Multiple requests referencing the same camera frame.
  • Multiple queries performed on the same set of images.

In these cases, if the Vision Encoder processes the same image repeatedly, it creates unnecessary overhead.

In EVA, we solved this by building a structure that makes full use of the Encoder Cache feature provided by vLLM. vLLM offers an Encoder Cache Manager that caches and reuses Vision Encoder results during multimodal processing.

By leveraging this, we can reuse previously generated encoder embeddings for identical image inputs, eliminating the need to repeat Vision Encoder operations. EVA applies a request management structure at the Agent Layer to effectively utilize this caching.


The Agent coordinates requests in the following ways:

  • Organizing requests so that identical image inputs can be reused.
  • Managing requests based on image units to enable cache hits.
  • Optimizing request flow to prevent redundant encoding.
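The reuse idea can be illustrated with a toy cache keyed by image content hash, so the expensive encoder runs once per unique image. This is a conceptual analogue of vLLM's encoder caching, not its real implementation.

```python
# Conceptual sketch of encoder-result reuse: identical image inputs are
# keyed by content hash, so the (expensive) Vision Encoder runs only once
# per unique image. This mimics the idea behind vLLM's Encoder Cache
# Manager; it is not the actual implementation.
import hashlib

class EncoderCache:
    def __init__(self):
        self._cache = {}
        self.encoder_calls = 0  # counts actual (non-cached) encoder runs

    def _encode(self, image_bytes: bytes) -> list:
        self.encoder_calls += 1
        # Stand-in for the Vision Encoder producing an embedding.
        return [b / 255 for b in image_bytes[:4]]

    def embed(self, image_bytes: bytes) -> list:
        key = hashlib.sha256(image_bytes).hexdigest()
        if key not in self._cache:      # cache miss: run the encoder
            self._cache[key] = self._encode(image_bytes)
        return self._cache[key]         # cache hit: reuse the embedding

cache = EncoderCache()
frame_a, frame_b = b"camera-1-frame", b"camera-2-frame"
for img in (frame_a, frame_b, frame_a, frame_a):  # frame_a repeats
    cache.embed(img)
```

Four requests touch only two unique frames, so the encoder runs twice instead of four times; this is exactly the saving the Agent Layer's request organization is designed to expose.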

This allows us to minimize Vision Encoder operations and utilize GPU resources more efficiently, even in a multi-frame analysis environment.




Conclusion

Multi-frame based VLM inference is an approach that significantly improves situational understanding and detection accuracy compared to single-image analysis.

However, as the number of frames increases, the computational load on the Vision Encoder grows significantly. Therefore, it is crucial to design a system that balances performance gains with computational efficiency and infrastructure costs.

EVA addresses this by actively utilizing vLLM's Encoder Cache and managing requests through the Agent Layer. Through this architecture, we maintain high inference performance while reducing unnecessary computations, continuously improving GPU efficiency and infrastructure operating costs.

This feature is available starting from EVA v2.6.0.

Teaching VLMs to Multitask: Enhancing Situation Awareness through Scenario Decomposition

· 8 min read
Hyunchan Moon
AI Specialist

At the core of EVA lies the ability to truly understand critical situations that occur simultaneously within a single scene—such as fires, people falling, or traffic accidents—without missing any of them. However, no matter how capable a Vision-Language Model (VLM) is, asking it to reason about too many things at once leads to a sharp degradation in cognitive performance.[2,3]

In this post, inspired by the recent text-to-video retrieval research Q₂E (Query-to-Event Decomposition)[1], we introduce Scenario Decomposition, a technique that enables VLMs to deeply understand complex, multi-scenario situations within a single frame.

Performance Enhancement through Instruction Tuning Based on User Feedback Data

· 12 min read
Jaechan Lee
POSTECH
Yura Shin
AI Specialist

This work is a collaborative research effort with Minjoon Son (advised by Prof. Youngmyung Ko) as part of the "Campus: Beyond Safety to Intelligence – Postech Living Lab Project with EVA".


🎯 Introduction: Shifting Feedback from 'Retrospective Correction' to 'Cognitive Enhancement'

When EVA makes judgments based on images, operators often provide specific feedback like: "This is indeed a safety vest. Why did it get confused?" or "Shouldn't there be an alert here?" This feedback contains not just the right or wrong answer, but also the human reasoning and context behind the judgment.

Previously, EVA utilized this feedback by storing it in a separate Vector DB and using it to adjust the Alert status when similar situations occurred. While this approach offered the advantage of quick application, it had a structural limitation: it did not improve the model's intrinsic reasoning capability and merely retrospectively filtered errors.

To fundamentally address this issue, we completely changed our approach. We reconstructed user feedback not as simple error reports, but as Instruction Data that the model can directly use in its inference process to strengthen its Visual Reasoning capability.

This article will focus on how VLM-based Instruction Tuning utilizing user feedback data overcomes the limitations of the previous Vector DB-centric approach and improves the model's visual reasoning performance.

From Image to Language, From Language to Reasoning: Boosting VLM Performance with Camera Context

· 7 min read
Minjun Son
POSTECH
Jisu Kang
AI Specialist

This work is a collaborative research effort with Minjoon Son (advised by Prof. Youngmyung Ko) as part of the "Campus: Beyond Safety to Intelligence – Postech Living Lab Project with EVA".


📝 Introduction: Making User Queries Smarter: Enhancing Language with Image Context

EVA is a system that detects anomalies using hundreds to thousands of smart cameras. We used a VLM/LLM to automatically infer the camera context and embedded it into the prompt, creating a camera-context-aware anomaly detection pipeline that reflects the situation of the target image. By leveraging the camera context extracted from a single frame as prior knowledge for the VLM, we confirmed a meaningful improvement in accuracy and deeper interpretability compared to the existing baseline.

From One-Shot Decisions to Two-Stage Reasoning

· 7 min read
Seongwoo Kong
AI Specialist
Jisu Kang
AI Specialist
Keewon Jeong
Solution Architect

Instead of Making a Single Decision, Be Cautious Step-by-Step

The process by which an AI makes a decision from a single camera image is more complex than most people think. Users may simply ask, “Notify me if someone falls down” or “Alert me when a worker isn’t wearing a mask.” But the AI has to analyze the image, check the requested conditions, consider exceptions, make the final decision, and explain the reasoning — all in a single pass.

In EVA, we introduced an Enriched Input structure that separates the user’s requirements into Detection conditions and Exception conditions, which significantly improved performance. However, even with structured input, the AI still made contradictory judgments in multi-condition scenarios.

The issue was not only about structuring the conditions — but also about forcing the AI to perform multiple judgments all at once. So EVA moved beyond the limitations of the existing one-shot approach and introduced a new Two-Stage Reasoning process.
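Splitting the judgment into two stages can be sketched as below: stage one produces a neutral scene description, and stage two judges the structured conditions against that description. The `vlm` function is a stub standing in for a real model call, and its canned responses are invented for illustration.

```python
# Hedged sketch of two-stage reasoning. `vlm` is a stub; a real system
# would call the serving endpoint. The canned responses are hypothetical.

def vlm(prompt: str, image: str) -> str:
    canned = {
        "describe": "A worker is kneeling and tying a shoelace.",
        "judge": "no_alert: posture change, not a fall",
    }
    return canned["describe" if "Describe" in prompt else "judge"]

def two_stage_decision(image: str, detection: str, exceptions: str) -> str:
    # Stage 1: perception only -- no conditions mixed in yet.
    description = vlm("Describe the scene factually.", image)
    # Stage 2: reasoning over the description plus structured conditions.
    verdict = vlm(
        f"Scene: {description}\nDetect: {detection}\nExceptions: {exceptions}\n"
        "Decide whether to alert.",
        image,
    )
    return verdict

result = two_stage_decision("cam7.jpg", "person falls down", "kneeling, crouching")
```

The design point is that stage one never sees the conditions, so its description cannot be biased toward a premature verdict; stage two then reasons over text alone, where contradictions are easier to avoid.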

In this post, we cover:

  • Why structured input alone could not solve the problem
  • The fundamental limits of one-shot decision-making
  • Why AI works better when decisions are split into two stages
  • Performance improvements validated by real experiments

Turning Simple User Requests into AI-Understandable Instructions

· 11 min read
Seongwoo Kong
AI Specialist
Jisu Kang
AI Specialist
Keewon Jeong
Solution Architect

Expanding User Queries So AI Can Clearly Understand Intent

EVA is a system that operates based on user-issued commands. For EVA to make stable and accurate decisions, it is crucial that user requests are delivered in a form that AI can clearly understand.

However, even if the natural language expressions we use daily seem simple and clear to humans, they can be ambiguous from an AI model’s perspective, or they may require excessive implicit reasoning. This gap is exactly what often leads to AI system malfunctions or inaccurate decisions.

To fundamentally address this, EVA uses a Few-Shot prompting technique to automatically expand simple user requests into a structured query representation.
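Few-shot query expansion can be sketched as a prompt builder. The example pairs and the structured fields (`detection`, `exceptions`) below are illustrative assumptions about the technique, not EVA's actual prompt.

```python
# Sketch of few-shot prompt construction for query expansion. Example
# pairs and field names are hypothetical, not EVA's real prompt.
FEW_SHOT_EXAMPLES = [
    (
        "Notify me if someone falls down",
        '{"detection": "a person transitions to lying on the ground and '
        'stays motionless", "exceptions": "stretching, sitting, working low"}',
    ),
    (
        "Alert me when a worker isn't wearing a mask",
        '{"detection": "a worker\'s face is visible with no mask covering '
        'the nose and mouth", "exceptions": "face turned away, mask in hand"}',
    ),
]

def build_expansion_prompt(user_request: str) -> str:
    parts = ["Expand the request into a structured query.\n"]
    for request, expansion in FEW_SHOT_EXAMPLES:
        parts.append(f"Request: {request}\nQuery: {expansion}\n")
    parts.append(f"Request: {user_request}\nQuery:")
    return "\n".join(parts)

prompt = build_expansion_prompt("Tell me if a car blocks the gate")
```

The examples do the heavy lifting: by showing the model how vague requests map to explicit detection and exception conditions, the same expansion pattern transfers to unseen requests.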

In this post, we focus on:

  • Why simple natural-language requests are difficult for AI
  • How query expansion can improve AI’s understanding
  • How much performance improved in actual field deployments

and share practical methods and their impact for helping AI understand user intent more clearly.

Complete Mastery of vLLM: Optimization for EVA

· 17 min read
Taehoon Park
AI Specialist

In this article, we will explore how we optimized LLM service in EVA. We will walk through the adoption of vLLM to serve LLMs tailored for EVA, along with explanations of the core serving techniques.




1. Why Efficient GPU Resource Utilization is Necessary

Most people first encounter LLMs through cloud services such as GPT, Gemini, or Claude. These deliver the best available performance with no model operations to worry about: you only need a URL and an API key. But API usage incurs ongoing costs, and data must be transmitted externally, which introduces security risks for personal or internal corporate data. As usage scales up, a natural question arises:

“Wouldn’t it be better to just deploy the model on our own servers…?”

There are many local LLMs available such as Alibaba’s Qwen and Meta’s LLaMA. As the open-source landscape expands, newer high-performance models are being released at a rapid pace, and the choices are diverse. However, applying them to real services introduces several challenges.

Running an LLM as-is results in very slow inference, a consequence of the autoregressive decoding used by modern LLMs. Optimizations such as KV Cache and Paged Attention dramatically reduce inference time, and several open-source serving engines implement them; EVA uses vLLM. Each engine differs in model support and ease of use, so let's explore why EVA chose vLLM.
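The KV Cache idea can be illustrated with a toy operation count: without a cache, each decoding step re-processes every token generated so far, while with a cache each step handles only the new token. This is a conceptual toy, not vLLM's internals.

```python
# Toy illustration of KV caching in autoregressive decoding. We count
# "key/value projections" computed; real engines like vLLM cache these
# tensors (and page them, in Paged Attention) instead of recomputing.

def decode_without_cache(num_steps: int) -> int:
    projections = 0
    for step in range(1, num_steps + 1):
        projections += step   # re-project every token seen so far
    return projections

def decode_with_cache(num_steps: int) -> int:
    projections = 0
    for _ in range(num_steps):
        projections += 1      # only the newly generated token
    return projections

no_cache = decode_without_cache(100)   # quadratic growth: 5050
with_cache = decode_with_cache(100)    # linear growth: 100
```

The gap widens with sequence length, which is why KV caching is table stakes for any production serving engine.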