
Understanding the System

A comprehensive technical deep dive into Wake Intelligence's 3-layer temporal brain, architecture patterns, and implementation decisions.

This guide helps developers, contributors, and engineers understand why Wake Intelligence is built the way it is. We'll explore the 3-layer brain architecture, examine key technical decisions, review implementation details, and walk through real challenges we solved.


🎯 What You'll Learn

  • 3-layer brain architecture - Past (causality), Present (memory), Future (propagation)
  • Design rationale - Why we chose each approach and what trade-offs we made
  • Technical implementation - Real code examples with algorithms
  • Testing philosophy - How the suite of 109 comprehensive tests is organized
  • Problem-solving - Real challenges (prediction optimization, dependency detection)
  • Semantic intent patterns - Observable anchoring and intent preservation

Project Overview

What is Wake Intelligence?

Wake Intelligence is an MCP server implementing a 3-layer temporal intelligence brain for AI agents: Past (causality tracking), Present (memory management), and Future (predictive pre-fetching).

Core capabilities:

  • AI agents learn from history through causal chain tracking
  • 109 passing tests demonstrate comprehensive coverage
  • Deploys to Cloudflare Workers (global edge computing)
  • Reference implementation of semantic intent + hexagonal architecture

Value proposition:

  • AI agents remember WHY decisions were made (causality)
  • Automatic memory optimization with 4-tier LRU system
  • Proactive pre-fetching based on composite prediction scoring
  • Production-ready with deterministic, explainable algorithms

Tech stack: TypeScript, Cloudflare Workers, D1 Database, Workers AI, MCP SDK, Vitest


The 3-Layer Temporal Intelligence Brain

┌──────────────────────────────────────────────────────────────┐
│                   WAKE INTELLIGENCE BRAIN                    │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  LAYER 3: PROPAGATION ENGINE (Future - WHAT)                 │
│  ┌──────────────────────────────────────────────────────┐    │
│  │ • Predicts WHAT will be needed next                  │    │
│  │ • Composite scoring (40% temporal + 30% causal +     │    │
│  │   30% frequency)                                     │    │
│  │ • Pre-fetching optimization                          │    │
│  │ • Pattern-based next access estimation               │    │
│  └──────────────────────────────────────────────────────┘    │
│                            ▲                                 │
│  LAYER 2: MEMORY MANAGER (Present - HOW)                     │
│  ┌──────────────────────────────────────────────────────┐    │
│  │ • Tracks HOW relevant contexts are NOW               │    │
│  │ • 4-tier memory classification                       │    │
│  │   (ACTIVE/RECENT/ARCHIVED/EXPIRED)                   │    │
│  │ • LRU tracking + automatic tier updates              │    │
│  │ • Expired context pruning                            │    │
│  └──────────────────────────────────────────────────────┘    │
│                            ▲                                 │
│  LAYER 1: CAUSALITY ENGINE (Past - WHY)                      │
│  ┌──────────────────────────────────────────────────────┐    │
│  │ • Tracks WHY contexts were created                   │    │
│  │ • Causal chain tracking                              │    │
│  │ • Dependency auto-detection                          │    │
│  │ • Reasoning reconstruction                           │    │
│  │ • Action type taxonomy                               │    │
│  └──────────────────────────────────────────────────────┘    │
│                                                              │
└──────────────────────────────────────────────────────────────┘

Why 3 Layers?

Layer 1: Causality (Past - WHY)

  • Tracks WHY contexts were created
  • Builds causal chains showing decision history
  • Enables reasoning reconstruction
  • Example: "Why did I make this architectural decision?"

Layer 2: Memory (Present - HOW)

  • Manages HOW relevant contexts are NOW
  • 4-tier system based on access recency
  • LRU tracking with automatic tier recalculation
  • Example: "What contexts are actively being worked on?"

Layer 3: Propagation (Future - WHAT)

  • Predicts WHAT will be needed next
  • Composite scoring (temporal + causal + frequency)
  • Pre-fetching optimization
  • Example: "What contexts should we load ahead of time?"

Why this structure?

  • Progressive enhancement - Each layer builds on previous
  • Temporal completeness - Past informs present, present informs future
  • Observable at each layer - No black-box predictions
  • Explainable decisions - Every prediction has traceable reasoning
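The layering above can be pictured as one context record carrying metadata for each temporal layer. A minimal TypeScript sketch; the field and type names here are illustrative assumptions, not the actual Wake Intelligence schema:

```typescript
// Illustrative shape only: field and type names are assumptions,
// not the actual Wake Intelligence schema.

type MemoryTier = "ACTIVE" | "RECENT" | "ARCHIVED" | "EXPIRED";

interface CausalityMetadata {        // Layer 1 (Past): WHY this context exists
  causedBy: string | null;           // parent context in the causal chain
  actionType: string;                // e.g. "decision", "refactor"
}

interface MemoryMetadata {           // Layer 2 (Present): HOW relevant it is now
  tier: MemoryTier;
  lastAccessed: string | null;       // ISO timestamp, null if never accessed
  accessCount: number;
}

interface PropagationMetadata {      // Layer 3 (Future): WHAT to pre-fetch
  predictionScore: number;           // composite score in [0, 1]
}

interface ContextSnapshot {
  id: string;
  project: string;
  timestamp: string;
  causality: CausalityMetadata;
  memory: MemoryMetadata;
  propagation: PropagationMetadata;
}

// A freshly created root context: no parent, never accessed yet.
const example: ContextSnapshot = {
  id: "ctx-1",
  project: "demo",
  timestamp: "2024-01-01T10:00:00Z",
  causality: { causedBy: null, actionType: "decision" },
  memory: { tier: "ACTIVE", lastAccessed: null, accessCount: 0 },
  propagation: { predictionScore: 0 },
};
```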

Technical Architecture

Hexagonal Architecture (Ports & Adapters)

┌────────────────────────────────────────────┐
│      Presentation Layer (MCPRouter)        │
│        HTTP Request Routing                │
└─────────────────┬──────────────────────────┘
                  │
┌─────────────────▼──────────────────────────┐
│        Application Layer                   │
│   • ToolExecutionHandler                   │
│   • MCPProtocolHandler                     │
└─────────────────┬──────────────────────────┘
                  │
┌─────────────────▼──────────────────────────┐
│           Domain Layer                     │
│   • PropagationService (Layer 3)           │
│   • MemoryManagerService (Layer 2)         │
│   • CausalityService (Layer 1)             │
│   • ContextService (Orchestrator)          │
│   • ContextSnapshot (Entity)               │
└─────────────────┬──────────────────────────┘
                  │ (Ports: Interfaces)
┌─────────────────▼──────────────────────────┐
│      Infrastructure Layer                  │
│   • D1ContextRepository                    │
│   • CloudflareAIProvider                   │
│   • CORSMiddleware                         │
└────────────────────────────────────────────┘

Why Hexagonal?

  1. Testability - Domain logic has zero infrastructure dependencies
  2. Flexibility - Could swap D1 for PostgreSQL by changing only Infrastructure layer
  3. Maintainability - Clear boundaries, changes localized to specific layers
  4. Composition root - Only 74 lines, roughly 85% smaller than the monolithic version
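The "ports" that make this swap possible are plain interfaces. Below is a sketch of the repository port using the method names that appear in the examples later in this guide (findById, findByProject, findRecent, updateAccessTracking); exact signatures in the codebase may differ. Because the domain depends only on this interface, replacing D1 means writing one new adapter, and tests can use a trivial in-memory one:

```typescript
// Sketch of the repository port; method names mirror the examples in this
// guide, but exact signatures in the codebase may differ.

interface Snapshot { id: string; project: string; timestamp: string; }
interface AccessPatch { lastAccessed: string; accessCount: number; memoryTier: string; }

interface ContextRepository {
  findById(id: string): Promise<Snapshot | null>;
  findByProject(project: string): Promise<Snapshot[]>;
  findRecent(project: string, limit: number, hours: number): Promise<Snapshot[]>;
  updateAccessTracking(id: string, patch: AccessPatch): Promise<void>;
}

// A minimal in-memory adapter: enough for domain tests, no D1, no network.
class InMemoryContextRepository implements ContextRepository {
  private rows = new Map<string, Snapshot>();
  private tracking = new Map<string, AccessPatch>();

  seed(snapshot: Snapshot): void { this.rows.set(snapshot.id, snapshot); }

  async findById(id: string): Promise<Snapshot | null> {
    return this.rows.get(id) ?? null;
  }
  async findByProject(project: string): Promise<Snapshot[]> {
    return [...this.rows.values()].filter(s => s.project === project);
  }
  async findRecent(project: string, limit: number, hours: number): Promise<Snapshot[]> {
    const cutoff = Date.now() - hours * 3600000;
    return (await this.findByProject(project))
      .filter(s => new Date(s.timestamp).getTime() >= cutoff)
      .slice(0, limit);
  }
  async updateAccessTracking(id: string, patch: AccessPatch): Promise<void> {
    this.tracking.set(id, patch);
  }
}
```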

Key Design Decisions

Why 3-Layer Brain vs Traditional Context Management?

Decision: Temporal intelligence with Past/Present/Future layers

Rationale:

  • Causality (Past) - Understand WHY contexts exist (decision history)
  • Memory (Present) - HOW relevant is it NOW (LRU + tiers)
  • Propagation (Future) - WHAT will be needed next (predictive)

Trade-off analysis:

  • ✅ Rich temporal understanding
  • ✅ Proactive optimization
  • ✅ Explainable predictions
  • ❌ More complex than simple key-value storage
  • ❌ Additional database columns

Alternative considered: Simple key-value context store

  • ✅ Simpler implementation
  • ❌ No temporal intelligence
  • ❌ No prediction capability

Why Composite Prediction Scoring?

Decision: 40% temporal + 30% causal + 30% frequency

Algorithm:

typescript
predictionScore =
  0.4 * temporalScore +      // Recency (exponential decay)
  0.3 * causalStrength +     // Position in causal chains
  0.3 * frequencyScore       // Access frequency (log scale)

Why these weights?

  • 40% temporal - Recency is strongest signal (most recent = most likely next)
  • 30% causal - Causal roots often re-accessed (important contexts)
  • 30% frequency - High-use contexts likely needed again

Each component explained:

Temporal Score (exponential decay):

typescript
const hoursSince = (now - lastAccessedMs) / 3600000;  // both in epoch milliseconds
const score = Math.exp(-hoursSince / 24);             // 24-hour decay constant (score ≈ 0.37 after a day)

Causal Strength (position in chains):

typescript
if (isRoot && hasDependents) return 0.5;  // High importance (baseline; chain depth can add more)
if (hasDependents) return 0.3;            // Moderate
return 0.2;                               // Leaf node

Frequency Score (logarithmic):

typescript
const score = Math.log(accessCount + 1) / Math.log(101);

Trade-offs:

  • ✅ Balanced multi-factor prediction
  • ✅ Deterministic (not black-box ML)
  • ✅ Each component is explainable
  • ❌ Weights are heuristic (could be tuned with ML later)

Why 4-Tier Memory System?

Decision: ACTIVE (< 1hr) / RECENT (1-24hr) / ARCHIVED (1-30d) / EXPIRED (> 30d)

Implementation:

typescript
calculateMemoryTier(lastAccessed: string | null, timestamp: string): MemoryTier {
  const referenceTime = lastAccessed ?? timestamp;  // fall back to creation time
  const now = Date.now();
  const hoursSince = (now - new Date(referenceTime).getTime()) / 3600000;

  if (hoursSince < 1) return MemoryTier.ACTIVE;
  if (hoursSince < 24) return MemoryTier.RECENT;
  if (hoursSince < 720) return MemoryTier.ARCHIVED;  // 30 days
  return MemoryTier.EXPIRED;
}

Benefits:

  • Observable tiers based on time since last access
  • Auto-recalculation as contexts age
  • Pruning candidates (EXPIRED tier)
  • Search prioritization (ACTIVE/RECENT ranked higher)

Trade-offs:

  • ✅ Simple, observable logic
  • ✅ Automatic memory optimization
  • ✅ Prevents database bloat
  • ❌ Time thresholds are fixed (could be configurable)

Why Cloudflare Workers vs Traditional Server?

Decision: Deploy to Cloudflare Workers (edge computing)

Rationale:

  • Global edge deployment - Low latency worldwide
  • Serverless - No servers to manage
  • D1 + Workers AI integration - Native Cloudflare ecosystem
  • Auto-scaling - Handles traffic spikes

Trade-offs:

  • ✅ Fast (edge-deployed globally)
  • ✅ Scalable (auto-scale)
  • ✅ Cost-effective (pay-per-use)
  • ❌ Platform lock-in (Cloudflare-specific)
  • ❌ Cold start latency (first request)

Implementation Highlights

Composition Root (~85% Reduction)

Location: src/index.ts

What it does: Wires all dependencies in 74 lines (down from 483, roughly an 85% reduction)

typescript
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Infrastructure
    const repository = new D1ContextRepository(env.DB);
    const aiProvider = new CloudflareAIProvider(env.AI);

    // Domain services (3-layer brain)
    const causalityService = new CausalityService(repository);
    const memoryService = new MemoryManagerService(repository);
    const propagationService = new PropagationService(repository, causalityService);

    // Orchestrator
    const contextService = new ContextService(
      repository,
      aiProvider,
      causalityService,
      memoryService,
      propagationService
    );

    // Application + Presentation
    const toolHandler = new ToolExecutionHandler(contextService);
    const protocolHandler = new MCPProtocolHandler(toolHandler);
    const router = new MCPRouter(protocolHandler);

    return router.handle(request);
  }
};

Benefits:

  • Single source of truth for dependency graph
  • Roughly 85% smaller than the previous monolithic approach
  • Explicit dependencies make testing easy

Layer 1: Causality Engine

Auto-dependency detection:

typescript
async detectDependencies(project: string): Promise<string[]> {
  // Find contexts created in last 24 hours
  const recent = await this.repository.findRecent(project, 5, 24);

  // Auto-detect dependencies from temporal proximity
  return recent
    .filter(ctx => {
      const hoursSince = (Date.now() - new Date(ctx.timestamp).getTime()) / 3600000;
      return hoursSince < 1;  // Created within last hour
    })
    .map(ctx => ctx.id);
}

Causal chain building:

typescript
async buildCausalChain(targetId: string): Promise<ContextSnapshot[]> {
  const chain: ContextSnapshot[] = [];
  let current = await this.repository.findById(targetId);

  // Walk parent links back to the root (guarding against missing records)
  while (current?.causality?.causedBy) {
    chain.unshift(current);
    current = await this.repository.findById(current.causality.causedBy);
  }

  if (current) chain.unshift(current);  // Add root
  return chain;
}

Why this matters:

  • Temporal proximity heuristic for dependency detection
  • Reconstruct decision history for "Why did I do this?"
  • Observable causal relationships
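The chain walk can be exercised against a plain in-memory lookup. This is a synchronous sketch; the real repository is async and backed by D1:

```typescript
// Synchronous sketch of the causal chain walk over an in-memory lookup.

interface Node { id: string; causedBy: string | null; }

function buildChain(byId: Map<string, Node>, targetId: string): string[] {
  const chain: string[] = [];
  let current = byId.get(targetId);
  while (current) {
    chain.unshift(current.id);  // root ends up first
    current = current.causedBy ? byId.get(current.causedBy) : undefined;
  }
  return chain;
}

const nodes = new Map<string, Node>([
  ["root", { id: "root", causedBy: null }],
  ["mid",  { id: "mid",  causedBy: "root" }],
  ["leaf", { id: "leaf", causedBy: "mid" }],
]);
// buildChain(nodes, "leaf") → ["root", "mid", "leaf"]
```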

Layer 2: Memory Manager

LRU tracking:

typescript
async trackAccess(contextId: string): Promise<void> {
  const context = await this.repository.findById(contextId);
  // Just accessed: with lastAccessed = now, the tier recalculates to ACTIVE
  const newTier = this.calculateMemoryTier(new Date().toISOString(), context.timestamp);

  await this.repository.updateAccessTracking(contextId, {
    lastAccessed: new Date().toISOString(),
    accessCount: context.accessCount + 1,
    memoryTier: newTier
  });
}

Why this matters:

  • Observable time-based tiers
  • Fire-and-forget access tracking (don't block responses)
  • Automatic tier recalculation

Layer 3: Propagation Engine

Composite scoring:

typescript
calculatePropagationScore(context: ContextSnapshot, causalStrength: number): number {
  const temporal = this.calculateTemporalScore(context);
  const frequency = this.calculateFrequencyScore(context);

  return 0.4 * temporal + 0.3 * causalStrength + 0.3 * frequency;
}

Temporal score (exponential decay):

typescript
private calculateTemporalScore(context: ContextSnapshot): number {
  if (!context.lastAccessed) {
    // Never accessed - use tier-based default
    return context.memoryTier === 'ACTIVE' ? 0.3 :
           context.memoryTier === 'RECENT' ? 0.2 :
           context.memoryTier === 'ARCHIVED' ? 0.1 : 0.0;
  }

  const hoursSince = (Date.now() - new Date(context.lastAccessed).getTime()) / 3600000;
  return Math.exp(-hoursSince / 24);  // 24-hour decay constant (score ≈ 0.37 after a day)
}

Why this matters:

  • Explainable predictions (not black-box ML)
  • Deterministic algorithm (same inputs = same outputs)
  • Composite multi-factor scoring

Testing Strategy

Test Distribution

Total: 109 tests (all passing ✅)

| Layer | Tests | Strategy |
| --- | --- | --- |
| Domain | 20 | Pure logic, no mocks |
| Application | 10 | Mock domain services |
| Infrastructure | 20 | Mock D1/AI |
| Presentation | 12 | HTTP routing tests |
| Integration | 13 | End-to-end flows |
| Specialized Services | 34 | Causality, Context, Memory, Propagation |

Testing Philosophy

Domain Layer - No Mocks:

typescript
describe('CausalityService', () => {
  it('should detect dependencies from temporal proximity', async () => {
    // Seeded into an in-memory test repository during setup (omitted here)
    const recentContexts = [
      { id: 'ctx-1', timestamp: '2024-01-01T10:00:00Z' },
      { id: 'ctx-2', timestamp: '2024-01-01T10:30:00Z' }
    ];

    const deps = await causalityService.detectDependencies('project-1');
    expect(deps).toContain('ctx-2');  // Created within 1 hour
  });
});

Why no mocks? Pure business logic, no infrastructure dependencies

Infrastructure Layer - Mock External:

typescript
describe('CloudflareAIProvider', () => {
  it('should use fallback when AI throws error', async () => {
    const mockAI = {
      run: vi.fn().mockRejectedValue(new Error('AI unavailable'))
    };

    const provider = new CloudflareAIProvider(mockAI);
    const summary = await provider.generateSummary(longContent);

    expect(summary).toHaveLength(203);  // Truncated to 200 + '...'
  });
});

Real-World Challenges & Solutions

Challenge: Temporal Proximity Dependency Detection

Problem: How to auto-detect which contexts are related without explicit user input?

Solution: Temporal proximity heuristic

typescript
// Contexts created within 1 hour of each other are likely related
const hoursSince = (now - new Date(context.timestamp).getTime()) / 3600000;
if (hoursSince < 1) {
  dependencies.push(context.id);
}

Why this works:

  • Observable signal (time is measurable)
  • Reasonable assumption (recent contexts likely related)
  • Simple heuristic (no complex inference)

Trade-offs:

  • ✅ Works without user input
  • ✅ Simple, deterministic
  • ❌ May miss long-running projects
  • ❌ May create false positives

Future improvement: Add semantic similarity (embeddings) to complement temporal proximity


Challenge: Prediction Weight Tuning

Problem: How to balance temporal, causal, and frequency scores?

Solution: Start with heuristic weights (40/30/30), plan for meta-learning

Current approach:

typescript
const score = 0.4 * temporal + 0.3 * causal + 0.3 * frequency;

Rationale:

  • Temporal dominant (40%) - Recency is strongest signal
  • Causal + Frequency balanced (30% each)
  • Simple starting point for validation

Future improvement (Layer 4 concept):

typescript
// Could add meta-learning to tune weights
interface PredictionOutcome {
  predicted: number;
  actuallyAccessed: boolean;
}

// Tune weights based on accuracy
function optimizeWeights(outcomes: PredictionOutcome[]) {
  // Gradient descent or similar optimization
}
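One concrete way that meta-learning could start is a simple grid search over candidate weight triples, scored by prediction accuracy on recorded outcomes. A sketch only; the recorded component scores and the 0.5 pre-fetch cutoff are assumptions for illustration, not part of the codebase:

```typescript
// Grid-search sketch for weight tuning. The 0.5 cutoff and the idea of
// recording component scores per outcome are illustrative assumptions.

interface PredictionOutcome {
  temporal: number;          // component scores recorded at prediction time
  causal: number;
  frequency: number;
  actuallyAccessed: boolean; // was the context actually needed?
}

function accuracy(w: [number, number, number], outcomes: PredictionOutcome[]): number {
  let correct = 0;
  for (const o of outcomes) {
    const score = w[0] * o.temporal + w[1] * o.causal + w[2] * o.frequency;
    if ((score >= 0.5) === o.actuallyAccessed) correct++;  // 0.5 = pre-fetch cutoff
  }
  return correct / outcomes.length;
}

function bestWeights(outcomes: PredictionOutcome[]): [number, number, number] {
  let best: [number, number, number] = [0.4, 0.3, 0.3];    // current heuristic
  let bestAcc = accuracy(best, outcomes);
  // Enumerate weight triples in 0.1 steps that sum to 1.0
  for (let a = 0; a <= 10; a++) {
    for (let b = 0; b <= 10 - a; b++) {
      const w: [number, number, number] = [a / 10, b / 10, (10 - a - b) / 10];
      const acc = accuracy(w, outcomes);
      if (acc > bestAcc) { best = w; bestAcc = acc; }
    }
  }
  return best;
}
```

Gradient descent would scale better than a grid, but a grid over 66 candidate triples is cheap, deterministic, and easy to audit, in keeping with the project's explainability principle.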

Challenge: Fire-and-Forget Access Tracking

Problem: Don't want to slow down context retrieval with access tracking

Solution: Fire-and-forget pattern

typescript
async loadContext(project: string): Promise<ContextSnapshot[]> {
  const contexts = await repository.findByProject(project);

  // Fire-and-forget access tracking (don't await!)
  contexts.forEach(ctx => {
    memoryManager.trackAccess(ctx.id).catch(err => {
      console.error(`Failed to track access for ${ctx.id}:`, err);
    });
  });

  return contexts;
}

Why this matters:

  • Fast responses (don't block on tracking)
  • Best-effort tracking (log errors, continue)
  • Acceptable trade-off (tracking is optimization, not critical)

Engineering Principles

1. Semantic Intent as Single Source of Truth

Every decision is based on meaning, not technical characteristics.

Example:

typescript
// โŒ Bad: Technical characteristic
if (content.length > 1000) { /* summarize */ }

// โœ… Good: Semantic intent
if (exceedsHumanReadableSize(content)) {
  summary = generateConciseSummary(content);
}

2. Observable Property Anchoring

All behavior anchored to directly observable semantic markers.

Example (Layer 2 Memory Tiers):

typescript
// Observable: Time since last access (measurable, no interpretation)
const hoursSince = (now - lastAccessed) / (1000 * 60 * 60);

// Semantic tiers based on observable time
if (hoursSince < 1) return MemoryTier.ACTIVE;
if (hoursSince < 24) return MemoryTier.RECENT;

3. Deterministic Algorithms

All predictions use deterministic, explainable algorithms.

Not a black-box ML model - every score component is traceable:

typescript
predictionScore =
  0.4 * exponentialDecay(hoursSinceAccess) +  // Temporal
  0.3 * causalChainStrength +                  // Causal
  0.3 * log(accessCount + 1) / log(101)        // Frequency

4. Progressive Enhancement

Each layer builds on the previous, adding intelligence.

Layer 1 (Past) → Track causality
   ↓
Layer 2 (Present) → Manage memory based on access
   ↓
Layer 3 (Future) → Predict using causality + memory patterns

Technical Deep Dive: Common Questions

Q: How does dependency auto-detection work?

A: Temporal proximity heuristic - contexts created within 1 hour are likely related

Algorithm:

typescript
async detectDependencies(project: string): Promise<string[]> {
  // Find recent contexts (limit 5, within the last 24 hours)
  const recent = await repository.findRecent(project, 5, 24);

  // Filter by temporal proximity (< 1 hour)
  const now = Date.now();
  const dependencies = recent
    .filter(ctx => {
      const hoursSince = (now - new Date(ctx.timestamp).getTime()) / 3600000;
      return hoursSince < 1;
    })
    .map(ctx => ctx.id);

  return dependencies;
}

Why 1 hour threshold?

  • Observable - Time is measurable
  • Reasonable assumption - Developer likely working on related tasks
  • Simple heuristic - No complex inference needed

Q: How does the 4-tier memory system work?

A: Observable time-based classification with automatic recalculation

| Tier | Time Range | Search Priority | Auto-Actions |
| --- | --- | --- | --- |
| ACTIVE | < 1 hr | Highest | Top of results |
| RECENT | 1-24 hr | High | Include in searches |
| ARCHIVED | 1-30 days | Low | De-prioritize |
| EXPIRED | > 30 days | Lowest | Pruning candidate |

Automatic tier updates:

typescript
async trackAccess(contextId: string): Promise<void> {
  const context = await repository.findById(contextId);
  const newTier = this.calculateMemoryTier(new Date().toISOString(), context.timestamp);

  await repository.update(contextId, {
    lastAccessed: new Date().toISOString(),
    accessCount: context.accessCount + 1,
    memoryTier: newTier  // Auto-update tier
  });
}

Q: Why Cloudflare Workers for edge deployment?

A: Global performance and serverless benefits

Benefits:

  • Edge deployment - Deployed to 275+ locations worldwide
  • Low latency - Runs close to users
  • Auto-scaling - Handles traffic automatically
  • D1 + Workers AI - Native integration

Trade-offs:

  • ✅ Fast, scalable, cost-effective
  • ❌ Platform-specific (Cloudflare)
  • ❌ Cold start latency on first request

Design for edge constraints:

  • Lazy prediction refresh (don't recalculate every time)
  • Batch operations where possible
  • Stateless design (each request independent)
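"Lazy prediction refresh" can be sketched as a staleness check: reuse a stored score while it is fresh, and recompute only once it has aged past a window. The field names and the 15-minute window below are assumptions for illustration:

```typescript
// Lazy refresh sketch: recompute a prediction score only when stale.
// Field names and the freshness window are illustrative assumptions.

interface ScoredContext {
  id: string;
  predictionScore: number;
  scoreComputedAt: number;  // epoch ms when the score was last computed
}

const STALE_AFTER_MS = 15 * 60 * 1000;  // assumed 15-minute freshness window

function freshScore(
  ctx: ScoredContext,
  nowMs: number,
  recompute: (id: string) => number
): ScoredContext {
  if (nowMs - ctx.scoreComputedAt < STALE_AFTER_MS) return ctx;  // still fresh
  return { ...ctx, predictionScore: recompute(ctx.id), scoreComputedAt: nowMs };
}
```

Because each Worker invocation is stateless, the staleness timestamp lives with the row in D1 rather than in process memory.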

Learning Resources

Key Files to Study

Architecture:

Domain Layer (3-layer brain):

Governance:

Part of Semantic Intent ecosystem:

All demonstrate similar patterns applied to different domains.


Quick Reference Stats

  • 109 passing tests (100% pass rate)
  • 3-layer brain (Past/Present/Future)
  • 4-tier memory (ACTIVE/RECENT/ARCHIVED/EXPIRED)
  • 74-line composition root (~85% reduction)
  • Composite prediction (40% temporal + 30% causal + 30% frequency)
  • Edge deployment (Cloudflare Workers, 275+ locations)
  • TypeScript 5.8 with strict types

Contributing

Understanding the system architecture and design decisions is the first step to contributing effectively.

Next steps:

  1. Read ARCHITECTURE.md
  2. Review SEMANTIC_ANCHORING_GOVERNANCE.md
  3. Study the composition root in src/index.ts
  4. Run tests: npm test
  5. Check CONTRIBUTING.md

This guide helps you understand not just what Wake Intelligence does, but why it's built this way and how the engineering decisions were made. 🧠