
Research

At APMIC, we are committed to advancing AI technology through research and innovation, with a strong focus on private Large Language Models (LLMs) and Fine-tuning. Our mission goes beyond developing customized AI solutions for businesses; we strive to make our research accessible to the public, simplifying complex AI concepts into powerful, bite-sized tools that anyone can leverage effectively.

ACE-1 Series

APMIC's Traditional Chinese Inference and Multimodal Models

The ACE-1-3B is Taiwan's first 3-billion-parameter Traditional Chinese language model capable of running on mobile devices. With the integration of the Model Context Protocol (MCP), it seamlessly connects internal and external enterprise systems to enable natural language-driven information queries, process operations, and decision support, creating a new kind of natural-language human-machine interface while supporting integrations with external APIs, ERP systems, and IoT devices.
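The tool-calling pattern behind an MCP integration like the one described above can be sketched in a few lines: a server registers enterprise functions as model-callable "tools", and the model invokes them through structured requests. This is a simplified, hypothetical illustration in plain Python, not APMIC's actual implementation or the official MCP SDK; the tool name, the in-memory inventory table, and the request shape are all assumptions made for the example.

```python
# Hypothetical sketch of an MCP-style tool registry: an enterprise function
# (here, a stand-in ERP inventory lookup) is registered as a tool, and the
# model calls it via a structured JSON request. Names are illustrative only.
import json

ERP_INVENTORY = {"PUMP-01": 42, "VALVE-07": 3}  # stand-in for a real ERP system

TOOLS = {}

def tool(fn):
    """Register a function so the model can call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def query_inventory(part_id: str) -> dict:
    """Return the stock level for a part, as a natural-language query would resolve to."""
    return {"part_id": part_id, "stock": ERP_INVENTORY.get(part_id, 0)}

def handle_request(raw: str) -> str:
    """Dispatch a JSON tool call {"tool": ..., "arguments": {...}} to the registry."""
    req = json.loads(raw)
    result = TOOLS[req["tool"]](**req["arguments"])
    return json.dumps({"result": result})

if __name__ == "__main__":
    print(handle_request('{"tool": "query_inventory", "arguments": {"part_id": "PUMP-01"}}'))
```

In a real deployment the registry would be served over the MCP transport rather than called in-process, but the division of labor is the same: the model decides which tool to call and with what arguments, while the server mediates access to ERP, API, or IoT backends.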

Powered by APMIC's S1 model fine-tuning and distillation service, businesses can easily develop their own on-premises models that respond in real time and run smoothly on mid- to low-tier GPUs or embedded platforms. This significantly lowers both the technical and cost barriers to adopting AI, serving industrial equipment manufacturers, technology companies, and other enterprises.


CaiGunn 34B

Traditional Chinese Language Model for All Use Cases

In January 2024, CaiGunn 34B ranked 64th globally on the Open LLM Leaderboard hosted by Hugging Face, with an average score of 71.19. CaiGunn 34B also claimed the top position in Taiwan, a significant milestone for Traditional Chinese language models in the APAC region.

CaiGunn 34B is built on APMIC's Brainformers architecture, integrating the LLaMA model foundation with Mamba dynamic computation flows, the Transformer framework, and the Mixture of Decoding Experts (MoDE) technology. This combination delivers high precision and flexible generative capabilities. Running entirely on the NVIDIA NeMo Framework, the model features efficient distributed training and inference deployment, supporting enterprises' needs for on-premises integration and private applications.


Media Coverage


Welcome to the Gemmaverse: The History of Gemma and a Comparison with Gemma 3n, the Latest Model from Google I/O

Jerry Wu, APMIC Founder and CEO

At the I/O Extended Taipei event, APMIC founder and CEO Jerry Wu gave an in-depth talk on the evolution of the Gemma models, analyzing the newly released Gemma 3n and its sibling models (TxGemma, SignGemma, DolphinGemma) and comparing their applications and potential across different scenarios.


Rapidly Prototyping Multimodal Applications with Google AI

Simon Liu, APMIC MLOps Engineer

APMIC MLOps engineer Simon Liu was invited to Google Cloud Summit Taipei, where he demonstrated how to rapidly build multimodal application prototypes through natural language using Google's Gemini models. Whether working with images, text, or structured data, a single conversation can turn a concept into a working prototype, helping innovation land more efficiently.


Knowledge Distillation in Enterprise AI

Jerry Wu, APMIC Founder and CEO

APMIC CEO Jerry Wu was invited to speak at the 2025 Generative AI Developers Conference, sharing how model fine-tuning and distillation techniques address the challenges of enterprise deployment and enable truly sovereign AI. The talk drew enthusiastic responses and discussion from developers on site.


Improving LLM Accuracy and Compute Efficiency through Knowledge Distillation and Test-Time Scaling

Jerry Wu, APMIC Founder and CEO

APMIC CEO Jerry Wu was also invited to speak at GTC Taipei, sharing first-hand experience in refining AI models and bringing them to commercial applications. The talk sparked a lively response on site, opening a deeper dialogue with the international technical community and new opportunities for collaboration.


How Do You Deploy a Private LLM? APMIC x Advantech Reveal the Keys to Enterprise AI Deployment in Practice

Eli, APMIC Co-Founder and Head of Product

At Advantech's invitation, APMIC joined a DIGITIMES seminar to share first-hand experience deploying private LLMs in manufacturing, finance, and government, with a deep dive into two questions: why is knowledge management the best first step for an LLM, and how does APMIC use self-trained models, data distillation, and fine-tuning to strengthen response quality?


APMIC × Twinkle AI | Three Traditional Chinese Reasoning Datasets Released as Open Source | Building Taiwan's Strongest LLM Corpus

APMIC Marketing Team

APMIC partnered with the Twinkle AI community on a dataset-building initiative, producing three Traditional Chinese reasoning datasets that together cover mathematical and logical reasoning, everyday reasoning, and tool-use instructions.


APMIC x Twinkle AI jointly launched Taiwan's first 3B Traditional Chinese Language Model: Formosa-1

APMIC Team

APMIC and Twinkle AI jointly launched Taiwan's first 3B Traditional Chinese language model, Formosa-1, which can run on mobile devices, a major milestone for on-device AI. Building on this success, APMIC recently launched a self-developed Model Context Protocol (MCP) integration that extends Formosa-1's capabilities to connect with both internal and external enterprise systems.


The Lightest Traditional Chinese Inference Model: Formosa-1 (Llama-3.2-3B-F1)

Simon Liu

The Twinkle AI community and APMIC collaborated to launch Llama-3.2-3B-F1, a lightweight model tailored for Traditional Chinese contexts and Taiwan-specific tasks, combining efficiency with practicality. This article explores its features and application potential.


AI's New Era: Insights from GTC 2025 on the Future of Blackwell and AI Agent Deployment Solutions

Jerry Wu

At GTC 2025, NVIDIA showcased groundbreaking innovations, including the Blackwell architecture and the rise of Agentic AI. Join Will’s Tech Exchange livestream as APMIC founder Jerry Wu breaks down the key announcements and insights from the conference, exploring the future of AI development and deployment.
