# Overview

Actiq combines on-device AI, distillation-on-demand (DoD), and a decentralized knowledge layer, the Knowledge Cache Graph (KCG), to create Membria: a fast, private, and evolving AI platform that learns from user interaction and scales through distributed memory built on a Decentralized Knowledge Graph (DKG).

The rapid adoption of lightweight, on-device Large Language Models (Tiny LLMs) has created a critical bottleneck in personalization, knowledge freshness, and reasoning capability. While millions of users deploy these models for everyday tasks, their ability to learn, update, and reason remains limited, and improving it requires costly, slow interventions through centralized Big LLM APIs or fine-tuning.

We introduce the Knowledge Cache Graph (KCG) and Cache-Augmented Generation (CAG): a decentralized, verifiable, and efficient framework for knowledge reasoning. Instead of relying on retraining or direct weight manipulation, KCG+CAG enables Distillation on Demand (DoD), in which knowledge is distilled, validated, and recorded in a public, immutable knowledge cache. This empowers Tiny LLMs to dynamically retrieve, reason over, and consume high-quality, validated knowledge through fast Selective Contextual Reasoning (SCR) pipelines, dramatically reducing inference cost, latency, and vendor lock-in.
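To make this lifecycle concrete, below is a minimal Python sketch of the DoD-plus-CAG loop described above. All names here (`KnowledgeEntry`, `KnowledgeCache`, `distill_on_demand`, `answer`) are hypothetical illustrations, not Actiq APIs: a query first checks the knowledge cache; on a miss, a one-time Big LLM distillation is validated and recorded; every later query is served from the cache and reasoned over by the local model.

```python
from dataclasses import dataclass, field


@dataclass
class KnowledgeEntry:
    """A distilled, validated answer recorded in the knowledge cache."""
    question: str
    answer: str
    validated: bool = False


@dataclass
class KnowledgeCache:
    """Stand-in for the KCG: an append-only store of validated entries."""
    entries: dict = field(default_factory=dict)

    def lookup(self, question: str) -> KnowledgeEntry | None:
        return self.entries.get(question)

    def record(self, entry: KnowledgeEntry) -> None:
        # Only validated knowledge is committed to the shared cache.
        if entry.validated:
            self.entries[entry.question] = entry


def distill_on_demand(question: str) -> KnowledgeEntry:
    """Placeholder for a one-time Big LLM call plus validation."""
    answer_text = f"<distilled answer for: {question}>"
    return KnowledgeEntry(question, answer_text, validated=True)


def answer(question: str, cache: KnowledgeCache) -> str:
    """CAG flow: serve from the cache when possible, distill on a miss."""
    entry = cache.lookup(question)
    if entry is None:
        entry = distill_on_demand(question)  # slow path, happens once
        cache.record(entry)                  # knowledge becomes reusable
    # Fast path: the Tiny LLM reasons over the cached context locally
    # (simulated here as a string; a real SCR pipeline would run the model).
    return f"tiny_llm(context={entry.answer!r}, query={question!r})"


cache = KnowledgeCache()
print(answer("What is CAG?", cache))  # cache miss -> distillation
print(answer("What is CAG?", cache))  # cache hit  -> local reasoning only
```

The point of the design, under these assumptions, is that the expensive distillation step happens at most once per question; every subsequent Tiny LLM can answer it from the shared, validated cache without touching a centralized API.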

By creating an ecosystem where knowledge grows, self-validates, and becomes reusable, KCG+CAG transforms AI reasoning into an open, democratic, and self-improving infrastructure for millions of Tiny LLMs.

This document outlines the architecture, purpose, and benefits of Cache-Augmented Generation (CAG) and on-demand distillation within the Actiq ecosystem.