
2 posts tagged with "AI Agent"

We share the adoption, operation, and real-world results of AI Agents.


Beyond GPUs to NPUs: EVA Achieves End-to-End Service Validation on Rebellions ATOM™-Max

· 3 min read
Gyulim Gu
Tech Leader

In a previous post, I shared our commitment to collaborating with Rebellions NPUs to enable 24/7 “always-on AI” for industrial environments.

https://mellerikat.com/en/blog/News/rebellions

Today, I’m pleased to announce that this commitment has resulted in a tangible technical milestone.

mellerikat’s EVA (Evolved Vision Agent) has successfully completed end-to-end service validation on Rebellions’ latest server-grade NPU, ATOM™-Max, integrating Vision models, LLMs, and VLMs into a unified production pipeline.


🛠️ Beyond Running a Model — Executing the Entire Service Pipeline

Running a single model on an NPU is fundamentally different from operating an entire production service reliably. Through this validation, EVA demonstrated uninterrupted execution of the full pipeline on ATOM™-Max:


Camera Input → Object Detection (Vision) → Scenario Interpretation (VLM) → Situation Assessment (LLM) → Alert & Control Dispatch

This result confirms that complex AI pipelines required in real-world operations — beyond isolated model benchmarks — can be fully orchestrated on NPUs.
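The staged pipeline above can be sketched as a chain of functions, one per stage. This is a minimal illustrative sketch only: the stage functions, data types, and outputs below are hypothetical placeholders, not EVA's actual APIs or models.

```python
# Hypothetical sketch of a camera -> Vision -> VLM -> LLM -> dispatch pipeline.
# Every function here is a stub standing in for a real model call.
from dataclasses import dataclass


@dataclass
class Frame:
    camera_id: str
    data: bytes


def detect_objects(frame: Frame) -> list[str]:
    # Vision stage: object detection; returns detected labels (stubbed).
    return ["forklift", "person"]


def interpret_scene(frame: Frame, objects: list[str]) -> str:
    # VLM stage: natural-language description of the scene (stubbed).
    return f"A person is standing near a moving forklift ({', '.join(objects)})."


def assess_situation(description: str) -> str:
    # LLM stage: severity assessment from the description (stubbed rule).
    risky = "person" in description and "forklift" in description
    return "ALERT" if risky else "OK"


def dispatch(camera_id: str, verdict: str) -> str:
    # Alert & control dispatch: route the decision downstream.
    return f"[{camera_id}] {verdict}"


def run_pipeline(frame: Frame) -> str:
    objects = detect_objects(frame)
    description = interpret_scene(frame, objects)
    verdict = assess_situation(description)
    return dispatch(frame.camera_id, verdict)


print(run_pipeline(Frame(camera_id="cam-01", data=b"")))  # e.g. "[cam-01] ALERT"
```

The point of the structure, as with the validated service, is that each stage consumes the previous stage's output, so the whole chain must run end to end on the same accelerator for the service to be considered operational.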

Rebellions has also recognized this milestone as “the first real-world operation of a VLM-based AI service on a commercial NPU platform,” expressing strong expectations for future adoption.


📈 Next Phase: Quantifying TCO Innovation Through Stress Testing

Following successful end-to-end validation, EVA now enters the stress testing phase, simulating real factory environments.

We will analyze system stability, throughput, and power efficiency under extreme conditions where multiple cameras generate simultaneous input streams. The insights gained will be delivered to customers as actionable guidance, including:

  1. Optimal NPU Configuration Standards: cost-efficient hardware configuration guidelines based on camera count and required inference performance.

  2. Quantified TCO Reduction vs. GPUs: practical economic analysis including power consumption and operational costs, not just hardware pricing.

  3. Minimized Deployment Risk: standardized NPU configurations that shorten deployment time and accelerate large-scale adoption.
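A stress test of the kind described, with many cameras feeding the system at once, can be approximated with a simple concurrent load harness. This is a hedged sketch, not EVA's actual test setup: `infer` is a placeholder for one full pipeline pass, and the camera counts are arbitrary.

```python
# Hypothetical multi-camera load test: measure end-to-end throughput
# when several simulated streams submit frames concurrently.
import time
from concurrent.futures import ThreadPoolExecutor


def infer(frame_id: int) -> int:
    # Placeholder for one full pipeline pass on the accelerator.
    time.sleep(0.001)
    return frame_id


def stress_test(num_cameras: int, frames_per_camera: int) -> float:
    """Return measured throughput in frames per second."""
    total_frames = num_cameras * frames_per_camera
    start = time.perf_counter()
    # One worker per simulated camera stream.
    with ThreadPoolExecutor(max_workers=num_cameras) as pool:
        results = list(pool.map(infer, range(total_frames)))
    elapsed = time.perf_counter() - start
    return len(results) / elapsed


fps = stress_test(num_cameras=8, frames_per_camera=25)
print(f"throughput: {fps:.1f} fps")
```

Sweeping `num_cameras` upward in a harness like this is one way to locate the point where throughput plateaus, which is the kind of data the configuration guidelines above would be derived from.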


✨ Conclusion: Reducing GPU Dependence and Enabling Sustainable AI

The key takeaway from this validation is clear: Multimodal industrial AI has reached a level where real-world operations are possible using NPUs alone.

For organizations that have hesitated to adopt AI due to high GPU costs, the combination of EVA and Rebellions offers a practical and powerful alternative.

By breaking the high-cost barrier and enabling safer, higher-quality, and more productive operations at lower cost, EVA and Rebellions are working together to establish a new standard for sustainable industrial AI.

Cisco Live 2025

· 5 min read
Byungmoon Lee
Solution Architect
Andy Yun
Business Leader

EVA Showcases Innovation with Multi-Modal LLM-Based AI Services at Cisco Live 2025

Cisco Live 2025, June 8-12, 2025, San Diego

Mellerikat participated in Cisco Live 2025, seizing a valuable opportunity to present its innovative AI service, Mellerikat EVA, powered by Multi-Modal Large Language Models (LLMs), to global customers and partners. At this premier event focused on networking, security, and AI technologies, Mellerikat showcased a demo featuring a unique architecture that implements cost-efficient AI solutions, earning enthusiastic responses from attendees.

As the first major event following Cisco’s acquisition of Splunk, Cisco Live 2025 highlighted the integrated future of AI and data analytics. At its demo booth, Mellerikat unveiled a Multi-Modal LLM-based solution combining Mellerikat EVA, a Cisco Meraki camera, and a Splunk instance, demonstrating practical AI applications in industrial settings. Notably, our innovative architecture, which significantly reduces the operational costs of Multi-Modal LLMs, left a lasting impression on attendees.