Compiler Structure

Detailed documentation of the ManifoldScript compiler architecture and pipeline.

Compiler Pipeline Overview

ManifoldScript Compiler Pipeline

    graph TD
        A[Source Code] --> B[Lexical Analysis]
        B --> C[Syntax Parsing]
        C --> D[AST Generation]
        D --> E[Semantic Analysis]
        E --> F[Type Checking]
        F --> G[Optimization Passes]
        G --> H[IR Generation]
        H --> I{Target Platform}
        I -->|NVIDIA CUDA| J[CUDA Backend]
        I -->|Apple Metal| K[Metal Backend]
        I -->|AMD ROCm| L[ROCm Backend]
        J --> M[Object Code]
        K --> M
        L --> M
        M --> N[Linking]
        N --> O[Executable]

        classDef frontend fill:#e1f5fe
        classDef middleend fill:#f3e5f5
        classDef backend fill:#e8f5e9
        classDef final fill:#fff3e0
        class B,C,D,E,F frontend
        class G,H middleend
        class I,J,K,L backend
        class M,N,O final
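Conceptually, the driver threads the output of each stage into the next. The C++ sketch below mirrors the ordering in the diagram; every type and function name in it (Token, Ast, TypedAst, Ir, compile, and so on) is a hypothetical placeholder for illustration, not the actual ManifoldScript compiler API.

    // Minimal sketch of the pipeline order shown above. All names are
    // hypothetical placeholders, not the real ManifoldScript compiler API.
    #include <string>
    #include <vector>

    enum class Target { Cuda, Metal, Rocm };

    struct Token {};      // produced by lexical analysis
    struct Ast {};        // produced by parsing / AST generation
    struct TypedAst {};   // produced by semantic analysis + type checking
    struct Ir {};         // produced by optimization passes + IR generation
    struct Binary { std::vector<unsigned char> bytes; };

    // Stage stubs: each stands in for one box in the diagram.
    std::vector<Token> lex(const std::string&)          { return {}; }
    Ast                parse(const std::vector<Token>&) { return {}; }
    TypedAst           analyze(const Ast&)              { return {}; }
    Ir                 optimize(const TypedAst&)        { return {}; }
    Binary             codegen(const Ir&, Target)       { return {}; }
    Binary             link(const Binary& object)       { return object; }

    Binary compile(const std::string& source, Target target) {
        // Frontend -> middle-end -> backend -> linking, as in the diagram.
        return link(codegen(optimize(analyze(parse(lex(source)))), target));
    }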

Frontend: Parsing and Analysis

Compiler Frontend Architecture

    graph LR
        subgraph "Frontend Components"
            A[Lexer] -->|Tokens| B[Parser]
            B -->|Parse Tree| C[AST Builder]
            C -->|AST| D[Symbol Resolver]
            D -->|Symbols| E[Type Checker]
            E -->|Typed AST| F[Semantic Analyzer]
        end
        subgraph "Data Structures"
            G[Token Stream]
            H[Abstract Syntax Tree]
            I[Symbol Table]
            J[Type Environment]
        end
        A --> G
        B --> H
        D --> I
        E --> J
        classDef process fill:#e3f2fd
        classDef data fill:#f3e5f5
        class A,B,C,D,E,F process
        class G,H,I,J data
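As a concrete, purely illustrative picture of the first stage, the sketch below shows a token stream that carries source locations, which is what later lets the compiler report errors at precise positions. The TokenKind, SourceLocation, and lex names are assumptions made for this example.

    // Illustrative-only token representation; the compiler's real data
    // structures are not shown in this document, so names here are assumptions.
    #include <cctype>
    #include <cstddef>
    #include <string>
    #include <vector>

    enum class TokenKind { Identifier, Number, Symbol, EndOfFile };

    struct SourceLocation { std::size_t line = 1, column = 1; };

    struct Token {
        TokenKind kind;
        std::string text;
        SourceLocation loc;   // kept so later stages can report precise errors
    };

    // A deliberately tiny lexer: splits identifiers, numbers, and
    // single-character symbols while tracking line/column positions.
    std::vector<Token> lex(const std::string& src) {
        std::vector<Token> out;
        SourceLocation loc;
        for (std::size_t i = 0; i < src.size();) {
            char c = src[i];
            if (c == '\n') { ++loc.line; loc.column = 1; ++i; continue; }
            if (std::isspace(static_cast<unsigned char>(c))) { ++loc.column; ++i; continue; }
            Token tok{TokenKind::Symbol, {}, loc};
            if (std::isalpha(static_cast<unsigned char>(c))) {
                tok.kind = TokenKind::Identifier;
                while (i < src.size() && std::isalnum(static_cast<unsigned char>(src[i])))
                    tok.text += src[i++];
            } else if (std::isdigit(static_cast<unsigned char>(c))) {
                tok.kind = TokenKind::Number;
                while (i < src.size() && std::isdigit(static_cast<unsigned char>(src[i])))
                    tok.text += src[i++];
            } else {
                tok.text = c;
                ++i;
            }
            loc.column += tok.text.size();
            out.push_back(tok);
        }
        out.push_back({TokenKind::EndOfFile, "", loc});
        return out;
    }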

Middle-End: Optimization

Middle-End Optimization Passes

    graph TD
        A[Typed AST] --> B[Constant Folding]
        B --> C[Dead Code Elimination]
        C --> D[Inline Functions]
        D --> E[Loop Optimization]
        E --> F[Vectorization]
        F --> G[Tensor Optimization]
        G --> H[Memory Optimization]
        H --> I[GPU-Specific Opt]
        I --> J[IR Representation]
        subgraph "Tensor Optimizations"
            K[Tensor Fusion] --> G
            L[Shape Inference] --> G
            M[Memory Layout] --> G
        end
        subgraph "GPU Optimizations"
            N[Thread Mapping] --> I
            O[Memory Coalescing] --> I
            P[Kernel Fusion] --> I
        end
        classDef standard fill:#e1f5fe
        classDef tensor fill:#f3e5f5
        classDef gpu fill:#e8f5e9
        class B,C,D,E,F standard
        class G,K,L,M tensor
        class I,N,O,P gpu
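The earliest of these passes, constant folding, is easy to show in miniature. The toy expression tree below is an assumption made for illustration and is not the compiler's real IR; the point is the structure of the pass: fold the children first, then collapse any node whose operands have become constants.

    // Sketch of a constant-folding pass over a toy expression tree.
    #include <memory>

    struct Expr {
        enum Kind { Const, Add, Mul } kind;
        double value = 0.0;                 // valid when kind == Const
        std::unique_ptr<Expr> lhs, rhs;     // valid for Add / Mul
    };

    std::unique_ptr<Expr> makeConst(double v) {
        auto e = std::make_unique<Expr>();
        e->kind = Expr::Const;
        e->value = v;
        return e;
    }

    // Fold child subtrees first, then collapse this node if both sides
    // are now constants.
    std::unique_ptr<Expr> foldConstants(std::unique_ptr<Expr> e) {
        if (e->kind == Expr::Const) return e;
        e->lhs = foldConstants(std::move(e->lhs));
        e->rhs = foldConstants(std::move(e->rhs));
        if (e->lhs->kind == Expr::Const && e->rhs->kind == Expr::Const) {
            double v = (e->kind == Expr::Add) ? e->lhs->value + e->rhs->value
                                              : e->lhs->value * e->rhs->value;
            return makeConst(v);
        }
        return e;
    }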

Backend: Code Generation

NVIDIA CUDA Backend

CUDA Backend

    graph TD
        A[IR] --> B[CUDA CodeGen]
        B --> C[Kernel Generation]
        C --> D[PTX Assembly]
        D --> E[CUBIN Binary]
        E --> F[CUDA Runtime]
        G[Memory Manager] --> H[Unified Memory]
        H --> I[Device Memory]
        I --> F
        classDef codegen fill:#e1f5fe
        classDef memory fill:#f3e5f5
        class B,C,D,E,F codegen
        class G,H,I memory
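On the host side, loading the PTX or CUBIN image produced by this backend can be done with the CUDA driver API, roughly as sketched below. Error checking is omitted, and both the entry-point name kernel_main and the in-memory image handoff are assumptions made for the example.

    // Hedged sketch: load a backend-emitted PTX/CUBIN image and launch its
    // entry point through the CUDA driver API.
    #include <cuda.h>
    #include <vector>

    void runCudaBinary(const std::vector<char>& image) {
        cuInit(0);

        CUdevice device;
        cuDeviceGet(&device, 0);

        CUcontext context;
        cuCtxCreate(&context, 0, device);

        // Load the image produced by the backend and look up the entry point.
        CUmodule module;
        cuModuleLoadData(&module, image.data());

        CUfunction kernel;
        cuModuleGetFunction(&kernel, module, "kernel_main");

        // Launch with a fixed 1-D grid; real launch parameters would come from
        // the compiler's thread-mapping pass. This kernel takes no arguments.
        cuLaunchKernel(kernel,
                       /*grid*/  256, 1, 1,
                       /*block*/ 128, 1, 1,
                       /*sharedMemBytes*/ 0, /*stream*/ nullptr,
                       /*kernelParams*/ nullptr, /*extra*/ nullptr);

        cuCtxSynchronize();
        cuModuleUnload(module);
        cuCtxDestroy(context);
    }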

Apple Metal Backend

Metal Backend

    graph TD
        A[IR] --> B[Metal CodeGen]
        B --> C[MSL Generation]
        C --> D[Metal Library]
        D --> E[Metal Runtime]
        F[Memory Manager] --> G[Shared Storage]
        G --> H[Buffer Objects]
        H --> E
        classDef codegen fill:#e1f5fe
        classDef memory fill:#f3e5f5
        class B,C,D,E codegen
        class F,G,H memory
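The "MSL Generation" step amounts to lowering IR operations into Metal Shading Language source text. The sketch below emits one hypothetical element-wise operation; the ElementwiseOp struct and the generated kernel signature are illustrative assumptions, not the real backend.

    // Emit MSL source for a single element-wise operation.
    #include <sstream>
    #include <string>

    struct ElementwiseOp {
        std::string name;   // e.g. "vector_add"
        std::string expr;   // e.g. "a[i] + b[i]"
    };

    std::string emitMsl(const ElementwiseOp& op) {
        std::ostringstream msl;
        msl << "#include <metal_stdlib>\n"
            << "using namespace metal;\n\n"
            << "kernel void " << op.name << "(\n"
            << "    device const float* a [[buffer(0)]],\n"
            << "    device const float* b [[buffer(1)]],\n"
            << "    device float* out     [[buffer(2)]],\n"
            << "    uint i [[thread_position_in_grid]])\n"
            << "{\n"
            << "    out[i] = " << op.expr << ";\n"
            << "}\n";
        return msl.str();
    }

The generated source would then be compiled into a Metal library and loaded by the Metal runtime, as the diagram indicates.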

AMD ROCm Backend

ROCm Backend

    graph TD
        A[IR] --> B[HIP CodeGen]
        B --> C[HIP Source]
        C --> D[HSACO Assembly]
        D --> E[ISA Binary]
        E --> F[ROCm Runtime]
        G[Memory Manager] --> H[Fine-Grained]
        H --> I[LDS Memory]
        I --> F
        classDef codegen fill:#e1f5fe
        classDef memory fill:#f3e5f5
        class B,C,D,E,F codegen
        class G,H,I memory
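The ROCm path mirrors the CUDA one: the backend emits an HSACO code object, and the host loads it through the HIP module API. The sketch below is illustrative only; error handling is omitted and the kernel_main symbol is an assumed placeholder.

    // Hedged sketch: stage device memory and launch a backend-emitted HSACO
    // code object through the HIP module API.
    #include <hip/hip_runtime.h>
    #include <vector>

    void runRocmBinary(const std::vector<char>& codeObject, std::vector<float>& data) {
        float* deviceBuf = nullptr;
        hipMalloc(reinterpret_cast<void**>(&deviceBuf), data.size() * sizeof(float));
        hipMemcpy(deviceBuf, data.data(), data.size() * sizeof(float),
                  hipMemcpyHostToDevice);

        // Load the HSACO image and resolve the compiled kernel, mirroring the
        // CUDA driver-API flow shown earlier.
        hipModule_t module;
        hipModuleLoadData(&module, codeObject.data());

        hipFunction_t kernel;
        hipModuleGetFunction(&kernel, module, "kernel_main");

        void* args[] = { &deviceBuf };
        hipModuleLaunchKernel(kernel,
                              /*grid*/  256, 1, 1,
                              /*block*/ 128, 1, 1,
                              /*sharedMemBytes*/ 0, /*stream*/ nullptr,
                              args, nullptr);

        hipMemcpy(data.data(), deviceBuf, data.size() * sizeof(float),
                  hipMemcpyDeviceToHost);
        hipModuleUnload(module);
        hipFree(deviceBuf);
    }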

Runtime System

Runtime System Architecture

    graph TB
        subgraph "Runtime Components"
            A[GPU Context] --> B[Memory Manager]
            B --> C[Kernel Launcher]
            C --> D[Event Manager]
            D --> E[Profiler]
        end
        subgraph "Memory Management"
            F[Host Memory] --> G[Device Memory]
            G --> H[Unified Memory]
            H --> I[Memory Pools]
        end
        subgraph "Execution Model"
            J[Command Queue] --> K[Stream Manager]
            K --> L[Dependency Graph]
            L --> M[Schedule Executor]
        end
        B --> F
        C --> J
        E --> M
        classDef runtime fill:#e3f2fd
        classDef memory fill:#f3e5f5
        classDef execution fill:#f3e5f5
        class A,B,C,D,E runtime
        class F,G,H,I memory
        class J,K,L,M execution
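One detail worth illustrating is the memory-pool idea from the diagram: the runtime keeps freed device blocks grouped by size and reuses them before going back to the driver. The sketch below is a toy version; the alloc/free callbacks stand in for whichever backend calls (for example cudaMalloc or hipMalloc) the real runtime would use, and all names are assumptions.

    // Toy size-bucketed memory pool: reuse freed blocks before allocating.
    #include <cstddef>
    #include <functional>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    class MemoryPool {
    public:
        using RawAlloc = std::function<void*(std::size_t)>;
        using RawFree  = std::function<void(void*)>;

        MemoryPool(RawAlloc allocFn, RawFree freeFn)
            : alloc_(std::move(allocFn)), free_(std::move(freeFn)) {}

        void* acquire(std::size_t bytes) {
            auto& bucket = freeBlocks_[bytes];
            if (!bucket.empty()) {          // reuse a previously freed block
                void* p = bucket.back();
                bucket.pop_back();
                return p;
            }
            return alloc_(bytes);           // otherwise fall through to the driver
        }

        void release(void* ptr, std::size_t bytes) {
            freeBlocks_[bytes].push_back(ptr);   // keep it for the next request
        }

        ~MemoryPool() {
            for (auto& [size, bucket] : freeBlocks_)
                for (void* p : bucket) free_(p);
        }

    private:
        RawAlloc alloc_;
        RawFree  free_;
        std::unordered_map<std::size_t, std::vector<void*>> freeBlocks_;
    };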

Data Flow Through Compiler

End-to-End Compiler Data Flow

    sequenceDiagram
        participant Source as Source Code
        participant Frontend as Compiler Frontend
        participant Middle as Middle-End
        participant Backend as Backend
        participant GPU as GPU Runtime

        Source->>Frontend: manifoldscript source.ms
        Frontend->>Frontend: Lexical Analysis
        Frontend->>Frontend: Syntax Parsing
        Frontend->>Frontend: AST Generation
        Frontend->>Frontend: Type Checking
        Frontend->>Middle: Typed AST
        Middle->>Middle: Optimization Passes
        Middle->>Middle: Tensor Optimizations
        Middle->>Middle: GPU-Specific Opt
        Middle->>Backend: Optimized IR
        Backend->>Backend: Target CodeGen
        Backend->>Backend: Binary Generation
        Backend->>GPU: Load Binary
        GPU->>GPU: Execute Kernels
        GPU->>Source: Results & Metrics

Error Handling and Diagnostics

Compile-Time Errors

  • Syntax errors with precise source locations (a sketch of such a diagnostic follows this list)
  • Type mismatch detection
  • Semantic validation
  • GPU capability checks
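A diagnostic that supports the points above needs little more than an error kind, a message, and a precise source location. The sketch below shows one possible shape; the field names and the report helper are illustrative assumptions.

    // Possible shape of a compile-time diagnostic with a precise location.
    #include <iostream>
    #include <string>

    struct Diagnostic {
        enum class Kind { SyntaxError, TypeMismatch, SemanticError, GpuCapability } kind;
        std::string file;
        unsigned line = 0;
        unsigned column = 0;
        std::string message;
    };

    void report(const Diagnostic& d) {
        std::cerr << d.file << ":" << d.line << ":" << d.column
                  << ": error: " << d.message << "\n";
    }

    // Example:
    // report({Diagnostic::Kind::TypeMismatch, "source.ms", 12, 8,
    //         "operand types do not match"});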

Runtime Errors

  • Memory allocation failures
  • Kernel execution errors
  • Device synchronization failures
  • Multi-GPU communication errors (a status-check sketch follows this list)
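At runtime, these failure classes are typically surfaced as status codes that the host checks after every GPU call. The Status enum and check helper below are illustrative assumptions, not the actual runtime API.

    // Turn a runtime status code into a descriptive exception.
    #include <stdexcept>
    #include <string>

    enum class Status { Ok, OutOfMemory, LaunchFailed, SyncTimeout, CommFailure };

    void check(Status s, const std::string& what) {
        if (s == Status::Ok) return;
        switch (s) {
            case Status::OutOfMemory:
                throw std::runtime_error(what + ": device allocation failed");
            case Status::LaunchFailed:
                throw std::runtime_error(what + ": kernel execution failed");
            case Status::SyncTimeout:
                throw std::runtime_error(what + ": device synchronization timed out");
            default:
                throw std::runtime_error(what + ": multi-GPU communication error");
        }
    }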