
5 posts tagged with "VLM"

Covers Vision-Language Model (VLM) based technologies and application cases.


Performance Enhancement through Instruction Tuning Based on User Feedback Data

· 12 min read
Jaechan Lee
POSTECH
Yura Shin
AI Specialist

This work is a collaborative research effort with Minjoon Son (advised by Prof. Youngmyung Ko) as part of the "Campus: Beyond Safety to Intelligence – POSTECH Living Lab Project with EVA".


🎯 Introduction: Shifting Feedback from 'Retrospective Correction' to 'Cognitive Enhancement'

When EVA makes judgments based on images, operators often provide specific feedback like: "This is indeed a safety vest. Why did it get confused?" or "Shouldn't there be an alert here?" This feedback contains not just the right or wrong answer, but also the human reasoning and context behind the judgment.

Previously, EVA utilized this feedback by storing it in a separate Vector DB and using it to adjust the Alert status when similar situations occurred. While this approach offered the advantage of quick application, it had a structural limitation: it did not improve the model's intrinsic reasoning capability and merely retrospectively filtered errors.
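As an illustration of that earlier approach, here is a minimal sketch of a retrieval-based correction layer: feedback embeddings are stored and, when a similar case reappears, the stored override is applied on top of the model's decision. The names (`FeedbackIndex`, `lookup`) and the cosine-similarity threshold are hypothetical, not EVA's actual implementation.

```python
import numpy as np

class FeedbackIndex:
    """Hypothetical in-memory stand-in for the feedback Vector DB."""

    def __init__(self):
        self.embeddings = []   # normalized embeddings of past feedback cases
        self.overrides = []    # e.g. "suppress_alert" or "force_alert"

    def add(self, embedding: np.ndarray, override: str) -> None:
        self.embeddings.append(embedding / np.linalg.norm(embedding))
        self.overrides.append(override)

    def lookup(self, embedding: np.ndarray, threshold: float = 0.85):
        """Return a stored override if a sufficiently similar past case exists."""
        if not self.embeddings:
            return None
        query = embedding / np.linalg.norm(embedding)
        sims = np.stack(self.embeddings) @ query   # cosine similarities
        best = int(np.argmax(sims))
        return self.overrides[best] if sims[best] >= threshold else None
```

Because the override is applied only after the model has already produced its answer, the model's own reasoning never improves; that is the retrospective-filtering limitation described above.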

To fundamentally address this issue, we completely changed our approach. We reconstructed user feedback not as simple error reports, but as Instruction Data that the model can directly use in its inference process to strengthen its Visual Reasoning capability.
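As a rough sketch of what "reconstructing feedback as Instruction Data" might look like, the function below packages one operator correction into a visual instruction-tuning sample. The LLaVA-style image/conversations schema and all field names are illustrative assumptions, not EVA's actual data format.

```python
def feedback_to_instruction(image_path: str, question: str,
                            model_answer: str, operator_feedback: str,
                            corrected_answer: str) -> dict:
    """Package one operator correction as a visual instruction-tuning sample.

    The target response keeps the operator's reasoning, not just the label,
    so the model is trained on the judgment process as well as the answer.
    """
    return {
        "image": image_path,
        "conversations": [
            {"from": "human", "value": f"<image>\n{question}"},
            {"from": "gpt",
             "value": f"{corrected_answer} Reasoning: {operator_feedback}"},
        ],
        # Kept for analysis only; not used as a training target.
        "meta": {"original_model_answer": model_answer},
    }

sample = feedback_to_instruction(
    image_path="cam_017/frame_001.jpg",
    question="Is the worker wearing a safety vest?",
    model_answer="No safety vest detected.",
    operator_feedback="The orange garment is a safety vest; reflective strips are visible.",
    corrected_answer="Yes, the worker is wearing a safety vest.",
)
```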

This article will focus on how VLM-based Instruction Tuning utilizing user feedback data overcomes the limitations of the previous Vector DB-centric approach and improves the model's visual reasoning performance.

From Image to Language, From Language to Reasoning: Boosting VLM Performance with Camera Context

· 7 min read
Minjun Son
POSTECH
Jisu Kang
AI Specialist

This work is a collaborative research effort with Minjoon Son (advised by Prof. Youngmyung Ko) as part of the "Campus: Beyond Safety to Intelligence – POSTECH Living Lab Project with EVA".


📝 Introduction: Making User Queries Smarter by Enhancing Language with Image Context

EVA is a system that detects anomalies using hundreds to thousands of smart cameras. We used a VLM/LLM to automatically infer each camera's context and embedded it into the prompt, creating a camera-context-aware anomaly detection pipeline that reflects the situation of the target image. By leveraging the camera context extracted from a single frame as prior knowledge for the VLM, we confirmed a meaningful improvement in accuracy and deeper interpretability compared to the existing baseline.
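A minimal sketch of this idea, assuming the camera context is inferred once per camera and then prepended to every detection prompt; the prompt wording and function names are illustrative, not EVA's actual pipeline.

```python
CONTEXT_PROMPT = (
    "Describe this camera's fixed context: location type, viewpoint, "
    "typical activity, and lighting. Answer in two or three sentences."
)

def build_detection_prompt(camera_context: str, user_rule: str) -> str:
    """Prepend the inferred camera context as prior knowledge for the VLM."""
    return (
        f"Camera context: {camera_context}\n"
        f"Rule to check: {user_rule}\n"
        "Based on the image and the context above, decide whether the rule is "
        "violated. Answer ALERT or NORMAL with a one-sentence justification."
    )

# camera_context would be produced once per camera by sending CONTEXT_PROMPT
# and a representative frame to the VLM, then cached and reused afterwards.
camera_context = ("An indoor warehouse aisle seen from a ceiling-mounted camera; "
                  "forklifts and workers in vests pass through during day shifts.")
prompt = build_detection_prompt(camera_context, "Alert me when a worker isn't wearing a mask")
```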

From One-Shot Decisions to Two-Stage Reasoning

· 7 min read
Seongwoo Kong
AI Specialist
Jisu Kang
AI Specialist
Keewon Jeong
Solution Architect

Instead of Making a Single Decision, Be Cautious Step-by-Step

The process by which AI makes a decision from a single camera image is more complex than most people think. Users may simply ask: “Notify me if someone falls down” or “Alert me when a worker isn’t wearing a mask.” But the AI has to analyze the image, check the requested conditions, consider exceptions, make the final decision, and explain its reasoning, all in a single pass.

In EVA, we introduced an Enriched Input structure that separates the user’s requirements into Detection conditions and Exception conditions, which significantly improved performance. However, even with structured input, the AI still made contradictory judgments in multi-condition scenarios.

The issue was not only how the conditions were structured, but also that the AI was forced to perform multiple judgments all at once. So EVA moved beyond the limitations of the existing one-shot approach and introduced a new Two-Stage Reasoning process.
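A minimal sketch of the two-stage idea, assuming an Enriched Input already split into detection and exception conditions; `vlm_call` stands in for whatever inference client EVA actually uses, and the prompt wording is illustrative.

```python
from dataclasses import dataclass

@dataclass
class EnrichedInput:
    detection_conditions: list   # e.g. ["a person is lying on the ground"]
    exception_conditions: list   # e.g. ["the person is resting on a designated bench"]

def two_stage_decision(image, enriched: EnrichedInput, vlm_call) -> dict:
    # Stage 1: verify each detection condition with its own focused question.
    stage1 = {
        cond: vlm_call(image, f"Does the image show: {cond}? Answer yes or no.")
        for cond in enriched.detection_conditions
    }
    if not all(a.strip().lower().startswith("yes") for a in stage1.values()):
        return {"alert": False, "stage1": stage1, "stage2": None}

    # Stage 2: only when the detections hold, check whether any exception applies.
    stage2 = {
        exc: vlm_call(image, f"Does this exception apply: {exc}? Answer yes or no.")
        for exc in enriched.exception_conditions
    }
    alert = not any(a.strip().lower().startswith("yes") for a in stage2.values())
    return {"alert": alert, "stage1": stage1, "stage2": stage2}
```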

In this post, we cover:

  • Why structured input alone could not solve the problem
  • The fundamental limits of one-shot decision-making
  • Why AI works better when decisions are split into two stages
  • Performance improvements validated by real experiments

Turning Simple User Requests into AI-Understandable Instructions

· 11 min read
Seongwoo Kong
AI Specialist
Jisu Kang
AI Specialist
Keewon Jeong
Solution Architect

Expanding User Queries So AI Can Clearly Understand Intent

EVA is a system that operates based on user-issued commands. For EVA to make stable and accurate decisions, it is crucial that user requests are delivered in a form that AI can clearly understand.

However, even if the natural language expressions we use daily seem simple and clear to humans, they can be ambiguous from an AI model’s perspective, or they may require excessive implicit reasoning. This gap is exactly what often leads to AI system malfunctions or inaccurate decisions.

To fundamentally address this, EVA uses a Few-Shot prompting technique to automatically expand simple user requests into a structured query representation.
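A minimal sketch of such a few-shot expansion prompt; the example requests and the JSON schema (detection_conditions / exception_conditions) are illustrative assumptions rather than EVA's exact format.

```python
FEW_SHOT_EXAMPLES = """\
Request: Notify me if someone falls down.
Expanded: {"detection_conditions": ["a person is lying or collapsed on the ground"],
           "exception_conditions": ["the person is intentionally sitting or exercising"]}

Request: Alert me when a worker isn't wearing a mask.
Expanded: {"detection_conditions": ["a worker's face is visible",
                                    "the worker is not wearing a mask"],
           "exception_conditions": ["the worker is eating in a designated break area"]}
"""

def build_expansion_prompt(user_request: str) -> str:
    """Few-shot prompt asking an LLM to rewrite a request as structured JSON."""
    return (
        "Expand each monitoring request into detection and exception conditions, "
        "following the examples below.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f"Request: {user_request}\nExpanded:"
    )
```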

In this post, we focus on:

  • Why simple natural-language requests are difficult for AI
  • How query expansion can improve AI’s understanding
  • How much performance improved in actual field deployments

and share practical methods for helping AI understand user intent more clearly, along with their impact in practice.

Complete Mastery of vLLM: Optimization for EVA

· 17 min read
Taehoon Park
AI Specialist

In this article, we will explore how we optimized LLM serving in EVA. We will walk through why we adopted vLLM to serve LLMs tailored to EVA, along with the core serving techniques behind it.




1. Why Efficient GPU Resource Utilization is Necessary

Most people initially interact with cloud-based LLMs such as GPT, Gemini, or Claude. These services deliver the best performance available without requiring you to worry about model operations: all you need is a URL and an API key. But API usage incurs continuous costs, and data must be transmitted externally, which introduces security risks for personal or internal corporate data. When usage scales up, a natural question arises:

“Wouldn’t it be better to just deploy the model on our own servers…?”

There are many local LLMs available such as Alibaba’s Qwen and Meta’s LLaMA. As the open-source landscape expands, newer high-performance models are being released at a rapid pace, and the choices are diverse. However, applying them to real services introduces several challenges.

Running an LLM as-is results in very slow inference, due to the autoregressive nature of modern LLMs. Optimizations such as KV Cache and PagedAttention dramatically reduce inference time, and several open-source serving engines implement these ideas; EVA uses vLLM. Each engine differs in model support and ease of use. Let's explore why EVA chose vLLM.
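For reference, a minimal offline-inference example using vLLM's Python API; the model name is illustrative, not necessarily what EVA deploys.

```python
# Minimal vLLM offline-inference example.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")          # weights download on first run
params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = [
    "Summarize the safety rules for forklift operation in one sentence.",
    "List three reasons a camera frame might trigger a false alert.",
]
# vLLM batches these requests and manages the KV cache with PagedAttention,
# which is where most of the throughput gain over naive inference comes from.
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```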