
Research

At APMIC, we are committed to advancing AI technology through research and innovation, with a strong focus on private Large Language Models (LLMs) and Fine-tuning. Our mission goes beyond developing customized AI solutions for businesses; we strive to make our research accessible to the public, simplifying complex AI concepts into powerful, bite-sized tools that anyone can leverage effectively.

ACE-1 Series

APMIC's Traditional Chinese Inference and Multimodal Models

The ACE-1-3B is Taiwan's first 3-billion-parameter Traditional Chinese language model capable of running on mobile devices. With the integration of the Model Context Protocol (MCP), it seamlessly connects internal and external enterprise systems to enable natural language-driven information queries, process operations, and decision support. This creates an innovative smart human-machine interaction interface while supporting integrations with external APIs, ERP systems, and IoT devices.
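The MCP-based integration described above boils down to a tool-dispatch loop: the model emits a structured tool call, a host routes it to an enterprise backend (ERP, IoT, external API), and the result flows back into the conversation. The sketch below is purely illustrative; the tool name (`query_inventory`), the registry, and the dispatcher are hypothetical stand-ins, not APMIC's actual implementation or the official MCP SDK.

```python
# Illustrative tool-dispatch pattern behind MCP-style integration:
# the model emits a structured call, the host routes it to a backend,
# and the structured result is returned to the model.
# All names here (query_inventory, TOOLS, dispatch) are hypothetical.

TOOLS = {}

def tool(fn):
    """Register a function so the model can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def query_inventory(part_id: str) -> dict:
    # Stand-in for a real ERP lookup.
    return {"part_id": part_id, "stock": 42}

def dispatch(call: dict) -> dict:
    """Route a model-emitted tool call to the registered function."""
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

result = dispatch({"name": "query_inventory",
                   "arguments": {"part_id": "A-100"}})
```

In a real deployment the registry would live in an MCP server and the model's tool calls would arrive over the protocol rather than as a local dict.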

Powered by APMIC's S1 model fine-tuning and distillation service, businesses can easily develop their own on-premises models that respond in real time and run smoothly on mid- to low-tier GPUs or embedded platforms. This significantly lowers both the technical and cost barriers to adopting AI, meeting the needs of industrial equipment manufacturers, technology companies, and other sectors.


CaiGunn 34B

Traditional Chinese Language Model for All Use Cases

In January 2024, CaiGunn 34B ranked 64th globally on the Open LLM Leaderboard hosted by Hugging Face, with an average score of 71.19. In Taiwan, CaiGunn 34B also claimed the top position, marking a significant milestone for the Traditional Chinese language model in the APAC region.

CaiGunn 34B is built on APMIC's Brainformers architecture, integrating the LLaMA model foundation with Mamba dynamic computation flows, the Transformer framework, and the Mixture of Decoding Experts (MoDE) technology. This combination delivers high precision and flexible generative capabilities. Running entirely on the NVIDIA NeMo Framework, the model features efficient distributed training and inference deployment, supporting enterprises' needs for on-premises integration and private applications.
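The Mixture of Decoding Experts component mentioned above follows the general mixture-of-experts idea: a gate scores several expert sub-networks and blends their outputs. APMIC has not published MoDE's internals, so the sketch below is only the generic gating mechanism for intuition; the toy experts and gate scores are hypothetical.

```python
import math

# Generic mixture-of-experts gating, for intuition only.
# APMIC's MoDE details are not public; these experts are toy functions.

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe(x, experts, gate_scores):
    """Combine expert outputs, weighted by a softmax over gate scores."""
    weights = softmax(gate_scores)
    outputs = [expert(x) for expert in experts]
    return sum(w * o for w, o in zip(weights, outputs))

experts = [lambda x: 2 * x, lambda x: x + 1]
# Equal gate scores give equal weights, so the result is the average
# of the two expert outputs (6.0 and 4.0 for x = 3.0).
y = moe(3.0, experts, gate_scores=[0.0, 0.0])
```

In a real decoder the gate scores come from a learned router per token, and typically only the top-scoring experts are evaluated to save compute.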


Media Coverage


APMIC × Twinkle AI | Open-Source Release of Three Traditional Chinese Reasoning Datasets | Building Taiwan's Strongest LLM Corpus

APMIC Marketing Team

APMIC and the Twinkle AI community launched a dataset-building initiative, creating three Traditional Chinese reasoning datasets that comprehensively cover mathematical and logical reasoning, everyday reasoning, and tool-instruction applications.


APMIC x Twinkle AI jointly launched Taiwan's first 3B Traditional Chinese Language Model: Formosa-1

APMIC Team

APMIC and Twinkle AI jointly launched Taiwan's first 3B Traditional Chinese language model, Formosa-1, which can run on mobile devices and marks a major milestone for on-device AI. Following this success, APMIC has recently launched a self-developed Model Context Protocol (MCP), which extends Formosa-1's capabilities to connect with both internal and external enterprise systems.


The Lightest Traditional Chinese Inference Model: Formosa-1 (Llama-3.2-3B-F1)

Simon Liu

The Twinkle AI community and APMIC have collaborated to launch Llama-3.2-3B-F1, a lightweight model tailored for Traditional Chinese contexts and Taiwan-specific tasks, combining efficiency with practicality. This article explores its features and application potential.


AI's New Era: Insights from GTC 2025 on the Future of Blackwell and AI Agent Deployment Solutions

Jerry Wu

At GTC 2025, NVIDIA showcased groundbreaking innovations, including the Blackwell architecture and the rise of Agentic AI. Join Will’s Tech Exchange livestream as APMIC founder Jerry Wu breaks down the key announcements and insights from the conference, exploring the future of AI development and deployment.


APMIC and Twinkle AI Launched Taiwan's First Traditional Chinese 3B Inference Model and Evaluation Tool

APMIC Team

APMIC has recently joined forces with Twinkle AI to launch Taiwan's first 3B Traditional Chinese Inference Model: Formosa-1, promoting on-premises AI development and helping businesses create their private AI solutions with PrivAI.


Twinkle Eval, an Evaluation Tool for Large Language Model Inference

Simon Liu

Twinkle Eval is a community-driven development project designed to provide engineers and businesses with an efficient and accurate evaluation tool for large language models (LLMs). This article offers a comprehensive introduction to the Twinkle Eval project, focusing on its technical features and practical applications.


Tech Talk IC: What's Hot in the Industry This Year (Part 2) - The Computing Power Shortage Ft. Jerry Wu, Founder & CEO of APMIC

Jerry Wu

Invited onto the 'Tech Talk IC' program, APMIC founder Jerry Wu discusses the AI computing power shortage facing the industry and its impact on the supply of resources and services. AI assistants are the most intuitive application people expect from AI; which industries are currently most eager to develop them?


Industry Experts Discuss: Partnering with NVIDIA to Build an AI Era ft. Jerry, Founder of APMIC | Statement Dog Podcast 344

Jerry Wu

APMIC is one of the partners showcased by NVIDIA in the COMPUTEX 2024 keynote. Founder and CEO Jerry Wu shared insights on the various business applications of large language models, collaboration with NVIDIA, and experiences in helping enterprises localize AI during the Statement Dog Podcast.


Enjoying the benefits of LLMs without knowing the costs? Let Jerry show you the real price tag of working with large language models!

Jerry Wu

Do you know how much it costs to develop and maintain LLMs or generative AI? Tech Exchange Center livestream has invited APMIC’s Jerry to break down the costs of training, fine-tuning, and inference for large language models (LLMs). He’ll also provide insights into NVIDIA DGX machines and discuss machine evaluations. Don’t miss this deep dive into the world of cutting-edge AI technology!
