
EVA Release v2.6

· 6 min read
Danbi Lee
Product Leader

EVA v2.6: Understanding Situations More Deeply, Enabling Clearer Operational Decisions

EVA v2.6 is a release that begins with a real operational question: "How should detection results be interpreted, and which alerts should be handled first?" As accurate detection performance has become the baseline, what matters more in the field today is the clarity of operational judgment—questions such as "Is this alert truly critical?" and "Does this situation require immediate action?" This v2.6 update goes beyond what AI sees, focusing instead on how operators understand and utilize detection results.

  • Clearer prioritization criteria as the number of alerts increases
  • A structure that accumulates detection results as operational metrics, not just isolated incidents
  • Detection that understands the flow of a situation, not just a single image
  • Selectable detection processes that do not treat all scenarios equally

v2.6 represents a step forward for EVA—from a simple detection system to an AI operations platform that supports operator decision-making.



Emergency Alert Settings: Making Critical Alerts Impossible to Miss

As the number of scenarios increases, so does the volume of alerts. When all alerts are displayed with the same level of importance, operators are once again forced to rely on human judgment to determine priorities. To reduce this burden, v2.6 introduces a feature that allows text-based definition of alert types requiring urgent response. Alerts designated as emergency alerts are clearly distinguished from general alerts, enabling operators to immediately recognize situations that must be checked first—even among a large number of notifications.

As alerts increase, what matters is not the number of alerts, but which alert requires action first. Emergency alert settings in v2.6 make operational judgment criteria clearer and more actionable.
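A minimal sketch of how text-defined emergency types might separate alert priorities. The `Alert` fields, the type names, and the matching logic below are illustrative assumptions, not EVA's actual API:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    scenario: str
    message: str

# Operator-defined emergency alert types (free text, matched
# case-insensitively in this sketch).
EMERGENCY_TYPES = {"fire detected", "person collapsed", "intrusion"}

def classify(alert: Alert) -> str:
    """Return 'EMERGENCY' for alerts matching an operator-defined type."""
    return "EMERGENCY" if alert.scenario.lower() in EMERGENCY_TYPES else "GENERAL"

alerts = [Alert("Fire detected", "Smoke in zone A"),
          Alert("Loitering", "Person near gate")]
# Emergency alerts sort ahead of general ones in the list view.
ordered = sorted(alerts, key=lambda a: classify(a) != "EMERGENCY")
```

The point of the sketch is the ordering: however many notifications arrive, the ones matching an operator-defined emergency type surface first.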




Expanded False/True Detection Feedback: Managing Feedback as Operational Records and Metrics

Previously, feedback could only be recorded for false detections. In v2.6, feedback can now be entered for true detections as well. Operators can explicitly record whether each detection result is false or true, and this data is accumulated and managed as scenario-level true detection rate metrics. As a result, detection results evolve from simple event logs into quantitative data used to evaluate and improve operational quality.
In addition, feedback history functions as an operational record, allowing teams to trace which alerts were actually reviewed and what decisions and responses were made. UI improvements also ensure that operators can immediately see whether feedback has been entered directly from the alert list in the chat window—without navigating to a detail page.
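As an illustration of the kind of metric described here, the small sketch below aggregates operator feedback into per-scenario true detection rates; the data shape is an assumption for the example, not EVA's real data model:

```python
from collections import defaultdict

def true_detection_rates(feedback):
    """feedback: iterable of (scenario, is_true_detection) pairs."""
    counts = defaultdict(lambda: [0, 0])  # scenario -> [true count, total]
    for scenario, is_true in feedback:
        counts[scenario][0] += int(is_true)
        counts[scenario][1] += 1
    return {s: t / n for s, (t, n) in counts.items()}

rates = true_detection_rates([
    ("intrusion", True), ("intrusion", False),
    ("helmet_check", True), ("helmet_check", True),
])
```

With both true and false feedback recorded, each scenario gets a quantitative quality signal rather than a bare event log.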



Main Screen Improvements: Only the Information Operators Truly Need, Presented More Intuitively

The main screen is where operators spend most of their time. In v2.6, the list structure has been reorganized based on actual operational workflows. Information with low immediate decision-making value—such as model names or detection targets—has been removed. Instead, columns have been restructured around operation-focused information, including alert severity, recent alert summaries, scenario summaries, and webhook configuration status. This enables operators to grasp what is happening right now and what actions are required at a glance, allowing for faster and more confident responses.




Multi-Image-Based Detection: Understanding Situations as a Flow, Not a Single Image

Most real-world situations do not occur in a static moment, but within a sequence of continuous changes. Traditional single-image-based detection evaluates only a single frame, making it difficult to accurately assess scenarios where context before and after an action is critical.
For example, a person lying on the floor in a gym could be stretching, resting, or experiencing a real medical emergency.
This distinction is difficult to determine without considering preceding and subsequent frames. To address this limitation, v2.6 introduces multi-image-based detection. By analyzing multiple consecutive images together, EVA can comprehensively assess motion flow and state changes. This significantly improves detection accuracy for scenarios where the process of an action—such as intrusion, assault, or collapse—is essential.
EVA is no longer reacting to isolated moments, but evolving toward an approach that understands how situations unfold over time.
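Conceptually, multi-image detection can be pictured as a sliding window of consecutive frames evaluated together. The sketch below is illustrative only; `vlm_assess` and the window size of 4 stand in for EVA's internal design:

```python
from collections import deque

WINDOW = 4  # number of consecutive frames assessed together (illustrative)

def make_buffer():
    return deque(maxlen=WINDOW)

def on_frame(buffer, frame, vlm_assess):
    """Push a frame; run assessment only once the window is full."""
    buffer.append(frame)
    if len(buffer) == WINDOW:
        return vlm_assess(list(buffer))  # temporal context, not one image
    return None

buf = make_buffer()
# A dummy assessor that just reports how many frames it sees.
results = [on_frame(buf, f, lambda frames: len(frames)) for f in range(6)]
```

No assessment fires on the first few frames; once the window fills, every new frame is judged together with its recent predecessors, which is what makes "lying down vs. collapsing" distinguishable.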

💡 For a detailed explanation of the technical design and decision logic behind multi-image-based detection, please refer to our Tech Blog.
Tech Blog | Multi-Frame-based VLM Detection: Beyond Single-Image Limitations Toward Temporal Context




VM Only Detection: When Faster Alerts Matter More Than Complex Interpretation

Not all detections require complex scenario interpretation. In some environments, "the presence of a specific object" alone is immediately meaningful.
Previously, EVA followed a pipeline of Vision Model (VM) → Vision Language Model (VLM), performing scenario interpretation after object detection. While effective for complex judgments, this approach could introduce unnecessary latency for simpler scenarios.
To address this, v2.6 introduces a new VM Only detection process. Alerts are triggered immediately upon object detection, bypassing the VLM stage and thereby improving both detection speed and resource efficiency.
For example, in scenarios such as "Notify me when an ambulance appears," faster responses are possible through object detection alone, without complex interpretation. Operators can now choose detection methods based on scenario characteristics—distinguishing between cases where accurate interpretation is critical and those where rapid response is the priority.
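The choice between the two detection processes can be sketched as follows; `run_vm` and `run_vlm` are hypothetical placeholders, not EVA's real interfaces:

```python
def detect(frame, scenario, run_vm, run_vlm, vm_only=False):
    """VM Only mode alerts on object presence; full mode adds VLM interpretation."""
    objects = run_vm(frame)            # Vision Model: object detection
    if not objects:
        return None
    if vm_only:
        # Skip the VLM stage entirely: lower latency, lower resource use.
        return {"alert": True, "objects": objects}
    verdict = run_vlm(frame, objects, scenario)  # VLM: scenario interpretation
    return {"alert": verdict, "objects": objects}

# "Notify me when an ambulance appears": presence alone is meaningful.
result = detect("frame", "ambulance appears",
                run_vm=lambda f: ["ambulance"],
                run_vlm=lambda f, o, s: True,
                vm_only=True)
```

The same entry point serves both cases; the `vm_only` flag is simply a per-scenario choice between speed and interpretive depth.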




EVA is not simply a system that generates alerts. It aims to be an AI operations platform where operator judgment and experience are continuously accumulated, and v2.6 most clearly represents this direction: beyond improving detection accuracy, it enhances operational judgment flows, prioritization, records, and metrics together. EVA will continue to evolve to better understand complex real-world environments and support operators in making faster, clearer decisions. Your usage experiences and feedback are the most important foundation for taking EVA to the next level.


🚀 Coming Soon: A Preview of v2.7

  • Scenario-based automatic detection configuration
  • Polygon-based detection area settings
  • Change history alerts for detection scenarios and targets

EVA Signs Supply Agreement with KOCOM and Golden Seoul Hotel – A New Milestone for Expansion into Construction and Hospitality Markets

· 3 min read
Andy Yun
Business Leader


EVA has taken its first meaningful step toward strategic collaboration with KOCOM, a leading smart home solutions provider in Korea. The two companies have signed an EVA license supply agreement and will officially launch a project to apply AI-based video analytics in real-world operational environments.

This agreement goes beyond a simple solution supply contract. It represents a strategic partnership aimed at expanding AI-driven safety management systems across diverse industries.


A Validation-Driven Collaboration Beginning with KOCOM Headquarters and Golden Seoul Hotel

Under this agreement, EVA will first be deployed at KOCOM headquarters and Golden Seoul Hotel. By leveraging AI video analytics in real operational environments, the system will detect hazardous situations, identify abnormal behaviors, and manage operational risks in real time—raising on-site safety standards to a new level.

Hotels and office facilities accommodate large numbers of people daily, making proactive risk detection and response systems essential. This deployment is expected to serve as a key validation reference, demonstrating EVA’s stability and operational effectiveness in real-world settings.

💡 Expected Impact: Moving beyond simple monitoring, AI will recognize situations and proactively manage risks to create an intelligent safety operation environment.


Securing Strategic References for Expansion into Construction and Hospitality Markets

KOCOM brings extensive experience and market networks in smart homes, intercom systems, CCTV, and construction-focused security solutions. By combining EVA’s AI analytics technology with KOCOM’s industry expertise, a scalable model for AI-driven safety solutions—centered on construction and hospitality markets—is expected to gain momentum.

This project will go beyond individual deployments and serve as a standard reference model applicable to multiple domains, including construction site safety management, hotel and commercial facility operations, and integrated smart building management.

🏗️ Market Significance: This initiative marks a starting point in establishing the perception that “AI safety is not an option, but infrastructure.”


A Partnership That Grows Beyond Technology

This collaboration is not a one-time supply agreement but a long-term partnership focused on jointly developing the market.

EVA is a proven Physical AI solution across various industries, enabling rapid and efficient AI adoption using existing CCTV infrastructure. Combined with KOCOM’s customer touchpoints and industry network, the pace of real market expansion is expected to accelerate.

🤝 Partnership Direction: Building markets together based on technological excellence and delivering tangible value to customers.


A New Leap in the AI Safety Solutions Market

Starting with deployments at KOCOM headquarters and Golden Seoul Hotel, this project is set to become a flagship reference demonstrating EVA’s competitiveness in the construction and hospitality sectors.

Building on this foundation, the two companies plan to further expand their collaboration in AI-based safety management and continuously introduce solutions applicable across diverse industrial environments.

This partnership is expected to become a meaningful reference for the adoption of AI safety solutions in construction and hospitality markets, while creating a mutually beneficial growth opportunity for both companies in the broader AI solutions landscape.

EVA will continue to drive Physical AI innovation together with its partners, building safer and smarter physical environments.

Beyond GPUs to NPUs: EVA Achieves End-to-End Service Validation on Rebellions ATOM™-Max

· 3 min read
Gyulim Gu
Tech Leader

In a previous post, I shared our commitment to collaborating with Rebellions on NPUs to enable 24/7 “always-on AI” for industrial environments.

https://mellerikat.com/en/blog/News/rebellions

Today, I’m pleased to announce that this commitment has resulted in a tangible technical milestone.

mellerikat’s EVA (Evolved Vision Agent) has successfully completed end-to-end service validation on Rebellions’ latest server-grade NPU, ATOM™-Max, integrating Vision models, LLMs, and VLMs into a unified production pipeline.


🛠️ Beyond Running a Model — Executing the Entire Service Pipeline

Running a single model on an NPU is fundamentally different from operating an entire production service reliably. Through this validation, EVA demonstrated uninterrupted execution of the full pipeline on ATOM™-Max:


Camera Input → Object Detection (Vision) → Scenario Interpretation (VLM) → Situation Assessment (LLM) → Alert & Control Dispatch

This result confirms that complex AI pipelines required in real-world operations — beyond isolated model benchmarks — can be fully orchestrated on NPUs.
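The pipeline stages above can be pictured as a simple chain of callables. This is a conceptual sketch only; the stage functions are placeholders for the models actually served on ATOM™-Max:

```python
def pipeline(frame, detect, interpret, assess, dispatch):
    """Camera input through alert dispatch, one stage feeding the next."""
    objects = detect(frame)               # Vision: object detection
    scene = interpret(frame, objects)     # VLM: scenario interpretation
    decision = assess(scene)              # LLM: situation assessment
    if decision["alert"]:
        dispatch(decision)                # alert & control dispatch
    return decision

sent = []
decision = pipeline(
    "frame",
    detect=lambda f: ["person"],
    interpret=lambda f, o: {"objects": o, "activity": "collapse"},
    assess=lambda s: {"alert": s["activity"] == "collapse", "scene": s},
    dispatch=sent.append,
)
```

Validating end-to-end means every one of these hand-offs, not just a single model call, runs reliably on the NPU.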

Rebellions has also recognized this milestone as “the first real-world operation of a VLM-based AI service on a commercial NPU platform,” expressing strong expectations for future adoption.


📈 Next Phase: Quantifying TCO Innovation Through Stress Testing

Following successful end-to-end validation, EVA now enters the stress testing phase, simulating real factory environments.

We will analyze system stability, throughput, and power efficiency under extreme conditions where multiple cameras generate simultaneous input streams. The insights gained will be delivered to customers as actionable guidance, including:

  1. Optimal NPU Configuration Standards: Cost-efficient hardware configuration guidelines based on camera count and required inference performance.

  2. Quantified TCO Reduction vs. GPUs: Practical economic analysis including power consumption and operational costs — not just hardware pricing.

  3. Minimized Deployment Risk: Standardized NPU configurations that shorten deployment time and accelerate large-scale adoption.


✨ Conclusion: Reducing GPU Dependence and Enabling Sustainable AI

The key takeaway from this validation is clear: Multimodal industrial AI has reached a level where real-world operations are possible using NPUs alone.

For organizations that have hesitated to adopt AI due to high GPU costs, the combination of EVA and Rebellions offers a practical and powerful alternative.

By breaking the high-cost barrier and enabling safer, higher-quality, and more productive operations at lower cost, EVA and Rebellions are working together to establish a new standard for sustainable industrial AI.

EVA Release v2.5

· 5 min read
Danbi Lee
Product Leader

EVA v2.5: Providing a clearer, more convenient, and more stable operating experience

EVA v2.5 focuses on eliminating recurring inconveniences found in real-world operations and helping users more easily understand and manage the system’s status. This version introduces new features such as automatic face blurring to reduce exposure risks, license‑based operational visibility, and improved usability during configuration. Operators can now experience a more predictable and stable EVA environment.



Automatic Face Blur: Clear in real time, safe when recorded or shared

Faces detected in images are now automatically blurred, minimizing unnecessary exposure even when images are shared via chat notifications or Webhook. Importantly, the real‑time streaming view during monitoring remains clear, just like before. This means you maintain the same ability to make fast, accurate decisions on site, while only the stored or externally shared images are blurred—reducing the burden during the record‑and‑share stages.
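The policy described here (live view untouched, stored or shared copies blurred) can be sketched as follows. `detect_faces` and the pixel-level "blur" are stand-ins for the real face detection and blurring:

```python
def blur_region(image, box):
    """Return a copy of the image with one rectangular region obscured."""
    x0, y0, x1, y1 = box
    out = [row[:] for row in image]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = None  # stand-in for actual pixel blurring
    return out

def prepare_outputs(image, detect_faces):
    """Live view stays clear; the shared/stored copy gets faces blurred."""
    blurred = image
    for box in detect_faces(image):
        blurred = blur_region(blurred, box)
    return {"live": image, "shared": blurred}

img = [[1, 2], [3, 4]]  # toy 2x2 "image"
outs = prepare_outputs(img, lambda im: [(0, 0, 1, 1)])
```

The key design point is that blurring is applied on a copy at the record-and-share boundary, so the monitoring stream is never degraded.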




License Status at a Glance: Instantly check how long and how much you can use

After registering a license key, you can immediately check the remaining validity period, the number of cameras allowed, and whether the license is Trial or Standard. Advance notifications alert you before expiration, preventing accidental lapses, and EVA automatically stops operation once the license expires—avoiding unintended continued operation. Users can now easily verify the installed EVA’s license status without additional inquiries or internal checks, enabling more planned and convenient management.




Object Detection Sensitivity: Adjust with numbers for clearer control of results

The previous term “Detection Sensitivity” has been updated to the more precise “Object Detection Sensitivity”, and instead of “Low/High,” it is now adjusted using a numeric value between 0 and 1, allowing you to intuitively understand how each setting affects detection results and tune it quickly. Lower values detect more objects across a wider range, while higher values detect only more confident cases. This value is used as the threshold applied by the Vision Model (VM) when detecting objects, and the detected object information is then passed to the Vision‑Language Model (VLM), directly affecting final scenario evaluation. With clearer terminology and detailed guidance, you can select the appropriate value for your environment and achieve the detection performance you want.

In addition, the next version will introduce an Agent feature that automatically sets object detection sensitivity based on detection results, adjusting sensitivity for each camera according to its configured scenario—providing even more convenient and stable detection performance for users.
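How a 0-1 sensitivity value acts as a confidence threshold can be illustrated with a small sketch; the detection records below are invented for the example:

```python
def filter_detections(detections, sensitivity):
    """Keep only detections whose VM confidence meets the threshold.

    Lower sensitivity values pass more objects; higher values keep
    only high-confidence ones, as described above.
    """
    return [d for d in detections if d["conf"] >= sensitivity]

dets = [{"label": "person", "conf": 0.35},
        {"label": "person", "conf": 0.82}]
low = filter_detections(dets, 0.2)    # broad: both pass
high = filter_detections(dets, 0.7)   # strict: confident case only
```

Whatever survives the threshold is what gets handed to the VLM, which is why this one number directly shapes the final scenario evaluation.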




Understand EVA Features More Easily Through Conversation

If you have questions about the various features and configuration options provided by EVA, you can check them directly through the in-app chat window. The existing guidance capability has been further enhanced, delivering clearer and more structured explanations—including how specific features work, the meaning of configuration values, and how to apply them. You can ask about any EVA-related feature or setting at any time.

There is no need to navigate through complex menus one by one. Simply enter your question in natural language to receive immediate guidance for faster and more efficient operations. Beyond basic feature descriptions, EVA provides an enhanced support experience that helps you quickly understand features and configure them with confidence.




Camera Setup: Faster and easier with step navigation + instant save

The inconvenience of having to navigate back and forth through multiple steps to modify or save values within the four‑step camera configuration screen has been resolved. You can now save at any step, eliminating the need to move to the final step and return again. By clicking the step number at the top of the configuration screen, you can jump directly to the desired step, and the save button in each step allows you to quickly apply only the necessary changes. When configuring multiple cameras consecutively, fewer clicks are required, making the overall setup process significantly faster and more convenient.



EVA continues to evolve based on real operational experiences from our customers. The EVA development team reflects customer needs and on‑site feedback faster than anyone and continues to advance our technology to enhance accuracy and stability. We will keep delivering innovations that transform expectations of a “properly built product” into clear value. Your feedback is the most important driving force that makes EVA a stronger and more complete solution!



🚀 Coming Soon: Preview of v2.6.0

  • Improved accuracy with multi‑image‑based detection
  • Lightweight Vision Model detection mode for optimized operations
  • Significantly enhanced UX for feedback submission and review

POSTECH Future Technology Center Leaps Forward as the Heart of ‘AI That Reads the World’ with EVA

· 3 min read
Daniel Cho
Mellerikat Leader


POSTECH (Pohang University of Science and Technology), the epicenter of engineering education and research in Korea, has officially confirmed the adoption of mellerikat’s EVA at its Future Technology Center. This collaboration goes far beyond enhancing campus security—it marks a major milestone in building a Data Living Lab, where all visual data generated on campus is transformed into valuable research assets.


🏛️ The Entire Campus as a Massive AI Laboratory

An Integrated Indoor & Outdoor Sensor Network

The POSTECH Future Technology Center will unify its existing campus-wide CCTV infrastructure through the EVA platform. By integrating sensors across the entire campus, the university establishes an infrastructure capable of real-time perception and reasoning—understanding and interpreting events in the physical world without human intervention.


🧪 A New Fuel for R&D

From “Discarded Footage” to “High-Value Data”

Until now, campus video data has been treated as ephemeral information, deleted after a certain retention period. EVA fundamentally changes this paradigm by turning video into long-term research assets.

  • A Catalyst for LLM & VLM Innovation: EVA automatically converts unstructured video into structured data. This creates high-quality multimodal datasets for training next-generation Large Language Models (LLMs) and Vision-Language Models (VLMs), allowing researchers to focus on insight generation rather than data preprocessing.

  • Spatiotemporal Metadata Enrichment: Instead of storing video alone, EVA enriches data with contextual information—such as congestion levels during exam periods (Time) or in library spaces (Space)—significantly increasing data density and analytical value.


🔒 Privacy-by-Design

Secure Research Assets, Completed at the Moment of Collection

As massive volumes of visual data are transformed into research assets, privacy protection stands as the highest priority of the POSTECH Living Lab.

  • Immediate Data Processing: Raw video captured by cameras is analyzed instantly at the point of collection, before being stored on servers.

  • Automatic De-identification: Any personally identifiable information detected during analysis is immediately de-identified. This ensures that researchers can securely access high-quality research datasets without privacy concerns.


🚀 A Research Ecosystem Where Students Become Creators

Student as a Creator

The POSTECH Living Lab redefines students not as passive service users, but as active producers of technology.

  • AI Training with Real-World Data: Instead of textbook examples, students train AI models using real data generated from their own campus environment, implementing advanced decision-making logic firsthand.

  • Development of On-Site, Practical Services: By deploying AI services that solve everyday problems—such as cafeteria queue prediction or classroom energy optimization—students directly experience the full potential of a Physical AI platform.


“Reading the real world as data, interpreting it with AI, and transforming reality for the better.”

The virtuous cycle that POSTECH and EVA are building together is a compelling example of how a university campus can become a hub for future technologies. As everyday life at POSTECH turns into data—and that data, in turn, shapes the technologies of tomorrow—we invite you to follow this journey into the future.

2026 Government Economic Growth Strategy Announcement – A Turning Point for AI Safety Investment

· 5 min read
Sunmi Choi
Solution Architect

The government’s “2026 Economic Growth Strategy” is reshaping the safety management paradigm across industrial sites

The core message is clear: generous tax incentives will be provided for the adoption of safety facilities leveraging new technologies such as AI, while severe penalties—at a level that could threaten corporate survival—will be imposed for violations of safety obligations.

This policy is not a recommendation. By presenting both clear incentives for safety investment and devastating penalties in the event of accidents, the government is making safety investment a mandatory requirement rather than an optional choice for companies.


Three Key Pillars of the Policy

  • Three-Tier Tax Incentive Package
    Expansion of tax credits up to 12–40% for investments in AI-powered safety facilities
    (R&D tax credit rates: General 2–25%, New Growth & Core Technologies 20–40% / Investment tax credit rates: General 1–10%, New Growth & Core Technologies 3–12%)

  • Stronger Accountability
    Introduction of new penalty fines equivalent to 5% of operating profit or 3% of revenue in the event of fatal accidents (Legislation and amendments planned for the first half of 2026)

  • Intensified Oversight
    Expansion of industrial safety inspectors to 2,095 personnel and introduction of restricted bidding for high-risk construction projects
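A quick worked example of what the quoted credit ranges imply for an illustrative investment. The figures below are hypothetical, and actual eligibility and rates depend on the legislation planned for 2026:

```python
def tax_credit(investment_krw, rate):
    """Credit amount for a given investment and credit rate."""
    return round(investment_krw * rate)

facility_investment = 1_000_000_000  # hypothetical KRW 1.0B facility spend
# New Growth & Core Technologies, top investment (facility) rate: 12%
facility_credit = tax_credit(facility_investment, 0.12)
# Hypothetical KRW 500M R&D spend at the top New Growth & Core R&D rate: 40%
rd_credit = tax_credit(500_000_000, 0.40)
```

Even at the facility-investment end of the range, the credit is a double-digit percentage of the outlay, which is the economic lever the policy is built on.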




Safety Investment Is Now State-Supported

The government has begun recognizing AI-based safety facilities as “New Growth & Core Technologies.”

This signals more than regulatory relaxation—it reflects a shift in perception, where safety technology investment is now regarded as a core pillar of national competitiveness.

  • AI Monitoring Systems Included in Integrated Investment Tax Credits

Tax credit eligibility is significantly expanded to include AI-powered intelligent CCTV monitoring systems, even beyond legally mandated facilities. The EVA solution represents the standard for next-generation safety facilities recommended by the government.

  • Preferential Treatment for R&D and Facility Investments Under New Growth & Core Technology Designation

Investments in advanced safety technologies utilizing AI and robotics are eligible for tax credits ranging from up to 12% for facilities to 40% for R&D (R&D tax credit rates: General 2–25%, New Growth & Core Technologies 20–40%; investment tax credit rates: General 1–10%, New Growth & Core Technologies 3–12%). Now is the optimal time to deploy high-performance EVA solutions at the lowest possible cost.




A Single Accident Can Undermine an Entire Company

Regulatory scrutiny has become significantly more stringent.
The introduction of a punitive penalty framework starting in 2026 makes reactive, after-the-fact responses no longer viable.

  • Penalty fines of 5% of operating profit or 3% of revenue in the event of fatal accidents (cap: KRW 100 billion)

Under amendments to the Occupational Safety and Health Act and the Construction Safety Special Act, companies facing repeated fatal accidents may be fined up to 5% of operating profit or 3% of revenue (legislation and amendments planned for the first half of 2026).

This goes beyond a simple fine—it poses a direct threat to corporate management and sustainability.

EVA intelligently monitors worksites 24/7, proactively identifying and blocking fleeting risk factors that human operators are likely to miss.
Accident prevention is now the most reliable cost-saving strategy.




Why EVA: A Full-Stack AI Safety Solution

More than just an AI solution, EVA delivers a full-stack AI experience optimized for each customer’s business environment. Enjoy unparalleled operational efficiency and flexibility—free from dependence on any single technology stack.

1. Freedom to Choose AI Models Optimized for Your Environment

Select the AI stack best suited to your operational environment while maintaining powerful safety monitoring performance.

Global leading AI models such as HyperCLOVA X, Qwen, Gemma, LLaMA, ChatGPT, and EXAONE, along with cutting-edge vision models including ultralytics YOLO, OWL-ViT, RT-DETR, and OmDet, can all be freely deployed on the EVA platform.

2. One-Quarter of the Total Cost of Ownership (TCO) Compared to Competitors

High deployment costs and slow maintenance hinder growth. EVA delivers overwhelming cost efficiency—from initial investment through ongoing operations.

Comparison of mellerikat EVA with conventional AI CCTV solutions:

  • Initial server requirements (100 CCTV cameras): mellerikat EVA runs on 1 high-performance GPU server (NVIDIA L40S), initial cost approx. KRW 25M; conventional solutions require 4 such servers, initial cost approx. KRW 100M.
  • Additional cost for scenario changes: mellerikat EVA requires no additional license and no ongoing maintenance cost; conventional solutions incur an additional cost for every scenario change, approx. KRW 200K per update.
  • Scenario configuration & modification: directly editable by operators in under 30 seconds with mellerikat EVA; conventional solutions require specialized personnel and approx. 1–2 weeks.
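Checking the headline "one-quarter" claim against the initial-cost figures in the comparison (approximate KRW values taken from the comparison; ongoing operational costs are not modeled here):

```python
# Approximate initial hardware costs from the comparison, in KRW.
eva_initial = 25_000_000           # 1 x NVIDIA L40S server
conventional_initial = 100_000_000  # 4 x NVIDIA L40S servers

ratio = eva_initial / conventional_initial  # EVA's share of the conventional cost
```

On initial hardware alone the ratio is exactly one quarter; the full TCO comparison additionally includes the license and maintenance differences listed above.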


Now Is the Right Time. Let EVA Be Your Trusted Safety Partner.


Cisco Korea Manufacturing Summit 2026: EVA at the Center of Manufacturing Innovation

· 4 min read
Andy Yun
Business Leader
Byungmoon Lee
Solution Architect
Sunmi Choi
Solution Architect

EVA Meets Cisco Meraki: Strong On-Site Engagement Through the Ringnet Booth and Customer Case Presentation

On January 29, 2026, the Cisco Korea Manufacturing Summit 2026, a key event focused on digital transformation in the manufacturing industry, was successfully held. At this event, Mellerikat showcased its innovative Physical AI solution, EVA, presenting concrete ways to maximize safety and operational efficiency on manufacturing floors. In particular, the co-hosted booth with partner Ringnet and the live customer case presentation left a strong impression on attendees. Here are highlights from the vibrant event floor.




1. Capturing Attention on the Floor: “EVA x Meraki” at the Ringnet Booth

At this summit, Mellerikat demonstrated a real-time Physical AI solution that combines Cisco Meraki with EVA at the Ringnet booth.

The booth featured a live demonstration integrating Meraki cameras, backed by robust cloud-based infrastructure, with EVA, which analyzes visual data much like the human eye and brain. Visitors were able to see firsthand how simply connecting EVA to an existing CCTV infrastructure transforms it into an intelligent system capable of understanding and assessing complex physical environments on its own.

💡 On-site feedback: Many attendees showed strong interest in the ability to build an advanced AI analytics environment using their existing Meraki infrastructure—without investing in additional high-cost hardware. In particular, real-time dashboard visualizations for worker safety detection and restricted-area intrusion alerts, both critical in manufacturing environments, drew enthusiastic reactions from practitioners.




2. A Customer Case Presentation Filled with Laughter and Empathy

Starting at 1:00 PM, the Case Study session was held under the theme “An Intelligent Platform Opening the Era of Physical AI.” Rather than a simple feature walkthrough, the session vividly illustrated how EVA is actually being used in real manufacturing sites.

The presentation covered concrete examples of how EVA has been deployed in production environments to protect workers and improve operational efficiency. As demonstrated in locations such as Pyeongtaek Digital Park, scenarios where EVA checks compliance with safety gear requirements or immediately alerts operators to dangerous situations resonated strongly with the audience as practical, real-world solutions.

🗣️ Presentation atmosphere: Although it could have been a dry technical session, the speaker explained how EVA addresses on-site challenges with wit and clarity, keeping the audience engaged throughout. After the session, many attendees followed up with comments like “We want to apply this to our factory right away,” clearly demonstrating that EVA is not just theoretical technology, but a “living technology” ready for immediate deployment.




3. Powerful Technology Designed for Ease of Use: EVA

The core value of EVA emphasized throughout the summit was “professional AI that anyone can use easily.”

  • Natural language–based control: No complex coding is required. By entering scenarios in everyday language—such as “Find workers not wearing safety helmets”—EVA understands the intent and begins analysis. This enables on-site managers to operate AI systems without being IT experts.
  • Cost-efficient architecture: Through its unique architecture, EVA significantly reduces the operational costs typically associated with high-priced multi-modal LLMs. As a result, enterprises can deploy high-performance AI across manufacturing sites without excessive infrastructure burden.
  • Beyond detection to action (Physical Action Trigger): EVA does more than just observe. When a risk is detected, it can trigger real-world actions—such as turning on warning lights or controlling gates—by integrating with physical devices, delivering a complete closed-loop system.



4. Moving Forward with Continuous Innovation

The Cisco Korea Manufacturing Summit 2026 clearly demonstrated the value that Mellerikat EVA can create in the manufacturing industry. The strong interest at the Ringnet booth and the lively interaction during the presentation sessions reinforced our confidence in the direction we are heading.

Together with trusted partners such as Cisco Meraki and Ringnet, Mellerikat will continue to lead innovation in Physical AI, helping manufacturing sites become safer, smarter, and more efficient.

Rebellions x EVA – Full-Stack Integration from Hardware to Service

· 3 min read
Daniel Cho
Daniel Cho
Mellerikat Leader

👉 Industrial AI Powered by EVA: Finding the Answer in 'Economic Efficiency' Beyond Performance

In industrial environments, AI is no longer a function to be called only when needed.

It must be an 'Always-on' system that performs continuous inference 24/7, processing the constant stream of data generated by cameras and sensors.

However, operating massive AI models like Vision-Language Models (VLMs) on expensive GPU-based infrastructure around the clock imposes a significant cost burden on customers.

To break down this cost barrier and accelerate the adoption of industrial AI, mellerikat EVA has joined forces with Rebellions.


👍 1. NPU: Surpassing GPU Limits in Price Competitiveness and Efficiency

Innovation in infrastructure costs (TCO) is essential for the widespread adoption of industrial VLM services. Rebellions' NPU meets these demands with superior efficiency compared to general-purpose GPUs.

  • Unrivaled Inference Performance: Rebellions' ATOM NPU has demonstrated processing speeds up to 3x faster than comparable GPUs in the vision domain in global benchmarks such as MLPerf.
  • Superior Performance-per-Watt and Cost Savings: In a 24/7 operational environment, the NPU reduces power consumption by more than 50% compared to GPUs while maintaining high throughput (TPS), directly reducing customers' operational cost burden.
  • VLM Optimization: The latest large-model optimization techniques, such as FlashAttention and PagedAttention, are implemented at the hardware level, allowing heavy VLMs to run efficiently and quickly.
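To put the ">50% power reduction in 24/7 operation" claim above in concrete terms, here is a back-of-the-envelope calculation. The wattage figures and electricity price are assumptions chosen purely for illustration, not measured values for any specific hardware.

```python
# Back-of-the-envelope estimate of annual energy-cost savings for one
# always-on device. All input numbers are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365     # always-on, 24/7 operation
PRICE_PER_KWH = 0.12          # assumed electricity price, USD per kWh

gpu_watts = 400               # assumed GPU board power
npu_watts = gpu_watts * 0.5   # ">50% reduction" -> at most half

def annual_energy_cost(watts: float) -> float:
    """Annual electricity cost in USD for a device drawing `watts` 24/7."""
    return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

saving = annual_energy_cost(gpu_watts) - annual_energy_cost(npu_watts)
print(round(saving, 2))  # annual savings per device, USD -> 210.24
```

Under these assumed numbers, a single device saves roughly $210 per year in electricity alone; across hundreds of always-on cameras, the energy component of TCO scales linearly.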

EVA running on Rebellions NPU means customers can own cutting-edge Physical AI at the most economical cost.


👍 2. VLM Optimization: Extensive Model Testing and Technical Validation

mellerikat is currently testing a range of state-of-the-art VLMs on the EVA platform in the Rebellions NPU environment.

We are going beyond simply replacing hardware; we are advancing our software stack to ensure that complex industrial scenarios are processed at optimal speeds on the NPU.

Through this process, we are finalizing the 'NPU-based VLM Service, EVA' at a level ready for immediate field deployment.


👍 3. From LG Electronics Deployment to Global Supply: A Strategic Roadmap

The collaboration between mellerikat and Rebellions aims for substantial business expansion beyond a simple technical partnership.

  1. Advanced VLM Optimization: We are in the validation stage, testing a variety of the latest VLMs on Rebellions NPU to ensure optimal power efficiency and processing speed.
  2. Deployment at LG Electronics Production Sites: Based on validated VLM models and NPU optimization technology, we are prioritizing the introduction of NPU-based AI into major production processes and safety monitoring systems within LG Electronics.
  3. External Supply of EVA+NPU Package: Leveraging successful references at LG Electronics, we plan to provide the EVA platform in an integrated solution package with Rebellions NPU for future external clients.

By adopting EVA, customers can enjoy a full-stack service that combines optimized hardware and a robust platform at a reasonable price, without the hassle of selecting complex hardware themselves.


👏 Conclusion: The Standard for Sustainable Industrial AI

The combination of EVA and Rebellions NPU is a strategic choice to transform AI from a simple technology adoption into a service structure that operates continuously at a realistic cost.

Through this powerful collaboration spanning from hardware to the service platform, mellerikat will become a partner that accelerates AI transformation in industrial fields and fundamentally resolves the infrastructure burden for our customers.

EVA Release v2.4

· 5 min read
Jeongjun Park
Jeongjun Park
Product Developer

EVA v2.4: Taking Detection Monitoring Efficiency to the Next Level

The EVA v2.4 release reflects the requirements raised in real operational environments, significantly improving monitoring efficiency and user experience. This update focuses on Alarm Priority Sorting, Device Favorites, Expanded Language Support, and Enlarged View of Detection Result Images, delivering convenience that can be felt immediately in monitoring operations.




Alarm Priority Sorting – Never Miss Real-Time Responses Among Numerous Devices

In a monitoring environment with numerous connected devices, the most important thing is immediate identification and response to alarm-triggered points. When a detection alarm occurs, EVA automatically moves the corresponding device to the top of the list, allowing you to quickly and easily identify the alarm-triggered device even in a complex list. This feature is designed to display the latest events at the top in order, even when multiple alarms occur simultaneously, so you can see the most critical screen at a glance without scrolling or searching. As a result, monitoring efficiency and response speed are greatly improved, and when combined with the Favorites feature, you can prioritize monitoring of key devices where alarms occur.
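The sorting rule described above can be sketched in a few lines: devices with active alarms float to the top, newest alarm first, while alarm-free devices keep their existing order. The dictionary fields used here are assumptions for illustration, not EVA's actual data model.

```python
# Minimal sketch of alarm-priority sorting: most recent alarm first,
# alarm-free devices (last_alarm_ts is None) sink to the bottom.
# Field names are illustrative assumptions.
def sort_devices(devices):
    # Python's sort is stable, so devices without alarms (key 0)
    # keep their original relative order at the bottom of the list.
    return sorted(devices, key=lambda d: -(d.get("last_alarm_ts") or 0))

devices = [
    {"name": "cam-A", "last_alarm_ts": None},        # no alarm
    {"name": "cam-B", "last_alarm_ts": 1700000200},  # newest alarm
    {"name": "cam-C", "last_alarm_ts": 1700000100},  # older alarm
]
print([d["name"] for d in sort_devices(devices)])
# -> ['cam-B', 'cam-C', 'cam-A']
```

Using a stable sort keyed only on alarm recency is one simple way to get "latest events at the top" without disturbing the rest of the device list.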




Device Favorites – Focused Monitoring of Key Points by Operator

The more cameras you monitor, the more important it is to quickly check critical points. With this update, EVA allows users to register key devices they manage as favorites and filter them for instant access. This feature is especially useful in environments where device responsibility varies by operator. Each operator can manage their critical devices as a favorites group to ensure no important point is missed and maintain focused monitoring. Grouping priority monitoring cameras such as entrances, high-value equipment storage areas, and safety management zones will further enhance monitoring efficiency.
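Per-operator favorites filtering as described above amounts to a simple set lookup. The sketch below uses hypothetical operator and camera names; the structures are assumptions, not EVA's actual schema.

```python
# Illustrative per-operator favorites filter. Each operator keeps a set
# of key devices; filtering shows only those. All names are assumptions.
FAVORITES = {
    "operator_kim": {"cam-entrance", "cam-vault"},
    "operator_lee": {"cam-safety-zone"},
}

def filter_favorites(operator, devices):
    """Return only the devices the given operator marked as favorites,
    preserving the order of the full device list."""
    favs = FAVORITES.get(operator, set())
    return [d for d in devices if d in favs]

all_devices = ["cam-entrance", "cam-vault", "cam-safety-zone", "cam-dock"]
print(filter_favorites("operator_kim", all_devices))
# -> ['cam-entrance', 'cam-vault']
```

Keeping favorites as a per-operator set means grouping entrances, high-value storage areas, and safety zones is just a matter of which names each operator registers.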




Expanded Language Support – EVA Accessible to Everyone

EVA now supports both Korean and English. You can select your preferred language in the settings menu, and language changes can only be made by accounts with administrator or manager privileges. This update makes EVA easy to use even for users unfamiliar with AI by providing all screens and functions in Korean and simplifying technical terms. In addition, detection analysis results can be provided in the desired language to minimize communication errors, enabling quick adaptation for new users and consistent information sharing across teams.




Enlarged View of Detection Result Images – Faster and More Accurate Situational Assessment

In real-world monitoring environments, the quality of response depends heavily on how quickly and accurately detection result images can be reviewed after an alert is triggered. With EVA v2.4, the image enlargement experience—previously available only through external notifications such as Microsoft Teams or Slack—has been extended directly into EVA’s built-in chat interface. Detection result images delivered via the chat window can now be opened in a separate, enlarged view, allowing users to inspect high‑resolution images and freely zoom in with a single click. This eliminates the need to rely on small, embedded thumbnails within the chat area and enables immediate, clearer analysis of detection results on a larger screen. As a result, operators can closely examine critical detection elements—such as people, vehicles, and object locations—making it easier to distinguish false positives and significantly reducing the time required to decide on follow-up actions.




Support for Duplicate Camera RTSP Registration – Run Multiple Scenarios from a Single Camera

In many real-world environments, a single camera feed often needs to support multiple detection scenarios simultaneously. For example, a parking lot camera may need to detect illegal parking, pedestrian safety risks, vehicle wrong-way driving, loitering, or intrusion — each as separate monitoring scenarios. To improve this flexibility, EVA now supports duplicate RTSP registration. You can register the same RTSP address to multiple virtual cameras, and configure different detection scenarios independently for each one. With this enhancement, you can monitor various situations in parallel using a single live video stream, while greatly improving scenario configuration flexibility without adding hardware or modifying the network setup.
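The "one stream, many scenarios" setup described above can be pictured as several virtual cameras sharing one RTSP address, each with its own detection scenario. The registry and field names below are assumptions for illustration, not EVA's API, and the address uses a documentation-reserved IP range.

```python
# Illustrative sketch of duplicate RTSP registration: three virtual
# cameras share one physical stream, each running its own scenario.
# The structures here are assumptions, not EVA's actual configuration.
RTSP_URL = "rtsp://192.0.2.10/parking-lot"  # example address (RFC 5737 range)

virtual_cameras = [
    {"name": "parking-illegal",    "rtsp": RTSP_URL, "scenario": "illegal parking"},
    {"name": "parking-pedestrian", "rtsp": RTSP_URL, "scenario": "pedestrian safety risk"},
    {"name": "parking-wrongway",   "rtsp": RTSP_URL, "scenario": "vehicle wrong-way driving"},
]

# All virtual cameras resolve to the same physical stream, so adding a
# scenario needs no extra hardware and no network change.
unique_streams = {cam["rtsp"] for cam in virtual_cameras}
print(len(virtual_cameras), len(unique_streams))  # -> 3 1
```

Three independent scenario configurations, one live stream: that is the flexibility gain this feature provides.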




Other Key Improvements

This release also includes the following enhancements:

  • Session logout time can now be configured to match the installation environment.
  • Fixed repeated login prompts when viewing detection images on mobile.
  • Enhanced the device registration UI and expanded the detection scenario input area.
  • Added the ability to edit device names.
  • Detection alarm images can now be viewed in Slack.

EVA Listens to Your Feedback and Improves Faster

This release was implemented based on customer feedback. The EVA team is committed to quickly improving inconveniences through short release cycles and continuously enhancing accuracy and responsiveness based on data collected from PoC and commercial environments. UI/UX improvements prioritize on-site usability, and we will continue to deliver features that reflect customer needs quickly. Your feedback is the driving force that makes EVA a better product.

Experience the Updated EVA v2.4 Now

If you have any feature requests or improvement suggestions, please feel free to share them with us.

EVA’s Next Leap Forward with Ringnet

· 2 min read
Daniel Cho
Daniel Cho
Mellerikat Leader

We held strategic discussions with Ringnet CEO Lee Jeong-min regarding the expansion of the EVA business.

As the domestic distributor of Cisco Meraki, Ringnet is a specialized partner with powerful network/security infrastructure and on-site implementation capabilities. Through collaboration with LG Electronics, Ringnet will play a crucial role in expanding EVA's Physical AI technology across various industries, including manufacturing, logistics, and the public sector.

During this meeting, we discussed in-depth strategies for nationwide and global expansion, building on the success of the "Meraki CCTV + EVA" integration PoC (Proof of Concept) verified at the LG Electronics Pyeongtaek Digital Park. Based on the success factors of the Pyeongtaek case, we explored ways to seamlessly integrate EVA into the Meraki ecosystem, making it easier for domestic manufacturing companies to adopt the solution and establishing a foundation for long-term global market expansion.

Regarding the direction of the partnership, we agreed to promote mutual growth through various business models, such as proposing an integrated package of "EVA + Meraki CCTV + Action Trigger" sensors, as well as standalone EVA sales structures. Ringnet plans to identify key targets within its existing customer base in manufacturing, distribution, logistics, and the public sector through consultation with its sales force, which will further accelerate EVA's market penetration.

Notably, at the Manufacturing Summit hosted by Cisco Korea on January 29, EVA will be featured and demonstrated as the main solution at the Ringnet booth. Moving beyond a simple CCTV exhibition, we are co-developing real-time demo scenarios designed to capture customer interest. These collaborative efforts are opening new possibilities for EVA to lead innovation in safety, security, and operations at domestic manufacturing sites by combining with Meraki's global cloud infrastructure.

We look forward to the exciting developments ahead.