# Membria: Knowledge Cache Graph (KCG) for Cache-Augmented Generation (CAG)

# Introduction
This documentation describes the architecture, purpose, and benefits of Cache-Augmented Generation (CAG) and Distillation on Demand (DoD) within the Actiq ecosystem. The system combines on-device AI, on-demand distillation, and a decentralized knowledge layer to create a fast, private, and evolving intelligence system: one that learns from user interaction and scales through distributed memory built on a Decentralized Knowledge Graph (DKG).
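At a high level, the CAG + DoD loop described above works as follows: a query is first checked against the local knowledge cache; on a miss, it is escalated for distillation, and the distilled result is stored so that future queries are served locally. The sketch below illustrates this flow conceptually; all names (`KnowledgeCache`, `distill_on_demand`, and so on) are hypothetical placeholders, not part of any published Membria API.

```python
# Conceptual sketch of the Cache-Augmented Generation (CAG) loop with
# Distillation on Demand (DoD). All names are illustrative, not a real API.

class KnowledgeCache:
    """In-memory stand-in for the Knowledge Cache Graph (KCG)."""

    def __init__(self):
        self._entries = {}

    def lookup(self, query):
        # Cache hit: the tiny model can answer from distilled knowledge.
        return self._entries.get(query)

    def store(self, query, answer):
        # In the real system this would be persisted to the decentralized
        # knowledge layer rather than kept in process memory.
        self._entries[query] = answer


def distill_on_demand(query):
    # Placeholder for escalating a cache miss to a larger model and
    # distilling its answer for later reuse.
    return f"distilled answer for: {query}"


def cache_augmented_generate(cache, query):
    answer = cache.lookup(query)
    if answer is None:  # cache miss -> Distillation on Demand
        answer = distill_on_demand(query)
        cache.store(query, answer)  # subsequent queries are served locally
    return answer


cache = KnowledgeCache()
first = cache_augmented_generate(cache, "what is CAG?")   # miss -> distill
second = cache_augmented_generate(cache, "what is CAG?")  # hit -> cached
```

The key property this sketch captures is that distillation happens at most once per query: the second call returns the cached answer without re-invoking the larger model.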
# Table of Contents
- Overview
- Problem Statement
- 2.1 Key Challenges
- Training Methods for Large and Tiny Models
- 3.1 Centralized Foundation Model Training
- 3.2 Decentralized & Self-Hosted Approaches
- 3.3 Current Limitations
- 3.4 Role of Distillation on Demand (DoD)
- Alternative Knowledge Learning Methods
- What Are Tiny Language Models
- Proposed Solution
- Membria for Tiny LMs
- Technical Architecture Overview
- 8.1 Storage Layer
- 8.2 Index & Graph Layer
- 8.3 Access & Query Layer
- 8.4 Distillation & Validation Layer
- 8.5 Economic Layer
- 8.6 Governance & Reputation Layer
- 8.7 Architectural Principles
- Workflow Overview
- CAG Storage, Query & Privacy Architecture
- 10.1 CAG Storage in Arweave
- 10.2 Fast Querying and Relationship Analysis (Graph Layer)
- 11.1 Storage Layer: Arweave
- 11.2 Index & Query Layer: The Graph Subgraph
- 11.3 GraphQL API Interface
- 11.4 Sync & Update Flow
- 11.5 Advantages Over Centralized Graph Databases
- 11.6 Architecture
- 13.1 SCR Reasoning Layer as the Default Inference Path
- Roles of Validators, Gateways, and DoD Agents
- 14.1 DoD Agents: Knowledge Distillation Layer
- 14.2 Gateways: Infrastructure & Access Layer
- 14.3 Validators: Quality & Consensus Layer
- 14.4 Role Comparison
- 14.5 Synergy of Roles
- Validator Infrastructure & Placement
- 14.1 Off-chain Validator Nodes
- 14.2 Consensus & Proof Submission (On-chain Interaction)
- 14.3 Validator Deployment Environments
- 14.4 Architectural Positioning
- 14.5 Benefits of Off-chain Validation
- KCG Entry Format for Arweave Storage
- 15.1 Entity Entry Example (JSON format)
- 15.2 Relation Entry Example (JSON format)
- 15.3 Arweave Transaction Tags
- 15.4 Integrity and Linking
- 15.5 Storage Optimizations
- Tokenomics & Deflation Design
- 16.1 Sources of Token Demand
- 16.2 Reward Distribution (Emission Side)
- 16.3 Deflationary Burn Mechanisms
- 16.4 Emission Control Levers
- 16.5 Sustainability & Scarcity Model
- 16.6 Summary Flow
- Token Flow and Reward Distribution
- 17.1 Internal Validator Allocation
- 17.2 Dynamic Complexity Adjustment
- 17.3 DoD Agents Performance Bonuses
- Integrations
- Advantages of the KCG+CAG Approach
- Conclusion
# Purpose of Documentation
This documentation is intended for developers, researchers, and stakeholders who want to understand the architecture and functionality of the Knowledge Cache Graph (KCG) and Cache-Augmented Generation (CAG) system. It covers the technical architecture, the economic model, and the benefits of the approach for building a scalable, efficient, and verifiable knowledge augmentation system for Tiny Language Models.
# How to Use This Documentation
The documentation is organized into logical sections, each focusing on a specific aspect of the KCG+CAG system. You can read the documentation sequentially or navigate to specific sections of interest using the table of contents above.
To get started, read the Overview and Problem Statement sections first to understand the core concepts and challenges the KCG+CAG system addresses.