Waves
AI is not a single technology shift — it is a series of overlapping waves, each redefining what enterprises must understand and act on. We track 28 distinct waves across the AI landscape, from foundational model infrastructure to governance and cost economics, using them to frame and filter the signals we surface across the companies we profile.
Large Language Models (LLMs)
Neural network models trained on large corpora of text and code to predict and generate language. LLMs serve as the foundational reasoning and generation layer for modern AI applications, enabling tasks like summarization, translation, planning, and code synthesis.
Vector Databases
Datastores optimized for storing and querying high-dimensional embeddings. They enable semantic search, similarity matching, and contextual retrieval by comparing vector representations rather than exact keywords.
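A minimal sketch of the core operation, using plain Python lists as stand-in embeddings; production vector databases use approximate-nearest-neighbor indexes (such as HNSW) rather than a brute-force scan like this.

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors by angle rather than exact values."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def nearest(query, store):
    """Return the stored key whose vector is most similar to the query."""
    return max(store, key=lambda key: cosine_similarity(query, store[key]))

# Toy "database": document labels mapped to (illustrative) embeddings.
store = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}
result = nearest([0.85, 0.15, 0.05], store)  # -> "refund policy"
```

Semantic search falls out of this: two texts with similar meaning land close together in embedding space even when they share no keywords.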
Generative Pre-trained Transformer (GPT)
A class of transformer-based language models pre-trained on broad datasets and fine-tuned for specific tasks. GPTs are designed to generate coherent, context-aware text and are commonly used as general-purpose AI engines.
Open-Source LLMs
Language models whose weights, architectures, or training code are publicly available. They enable self-hosting, customization, transparency, and ecosystem innovation outside proprietary platforms.
Artificial General Intelligence (AGI)
A hypothetical form of AI capable of understanding, learning, and applying knowledge across a wide range of tasks at a human or superhuman level. AGI implies general reasoning ability rather than task-specific competence.
Coding Assistants
AI tools embedded in development environments that assist with writing, refactoring, debugging, and understanding code. They translate natural language intent into executable code and provide real-time developer feedback.
Prompt Engineering
The practice of designing inputs, instructions, and examples to guide model behavior. Prompt engineering shapes outputs without changing model weights, acting as a lightweight control layer over model capabilities.
Context Engineering
The discipline of designing and managing the full context provided to a model — including system prompts, retrieved documents, tool outputs, conversation history, memory, and structured metadata. Context engineering extends prompt engineering into a holistic practice of shaping everything a model sees before it generates a response.
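The assembly step can be sketched as a function that concatenates the context sources in a fixed order; the section labels here are illustrative assumptions, and real systems also budget tokens and prioritize among sources.

```python
def assemble_context(system, retrieved, history, user_message):
    """Concatenate everything the model will see, in a fixed order."""
    parts = [
        f"[system]\n{system}",        # behavioral instructions
        f"[retrieved]\n{retrieved}",  # RAG or tool output
        f"[history]\n{history}",      # prior conversation turns
        f"[user]\n{user_message}",    # the current request
    ]
    return "\n\n".join(parts)

context = assemble_context(
    system="Be concise.",
    retrieved="Doc A: returns accepted within 30 days.",
    history="user: hi / assistant: hello",
    user_message="What is the return window?",
)
```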
Retrieval-Augmented Generation (RAG)
An architectural pattern that combines external data retrieval with language model generation. Retrieved context is injected into prompts to ground responses in up-to-date or domain-specific information.
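The pattern in miniature, with a naive keyword-overlap retriever standing in for a vector store and the prompt template as an assumption; a real pipeline would embed the query, search an index, and send the prompt to a model.

```python
def _tokens(text):
    """Lowercase and split on non-alphanumeric characters."""
    return set("".join(c if c.isalnum() else " " for c in text.lower()).split())

def retrieve(query, documents, k=1):
    """Rank documents by keyword overlap with the query; keep the top k."""
    query_tokens = _tokens(query)
    ranked = sorted(documents, key=lambda d: len(query_tokens & _tokens(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Inject retrieved context so the model answers from it, not memory."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Our warranty lasts 12 months.", "We ship worldwide from two hubs."]
prompt = build_prompt("How long is the warranty?", docs)
```

The grounding property comes from the instruction plus the injected context: the model is steered toward the retrieved facts instead of whatever its training data contained.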
Model Routing / Orchestration
Systems that dynamically select, sequence, or combine multiple models and tools to fulfill a task. Routing optimizes for cost, latency, accuracy, or capability by choosing the right model at the right time.
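A toy router makes the trade-off concrete; the model names and heuristics here are illustrative assumptions, whereas production routers weigh measured accuracy, cost, and latency per task type.

```python
def route(task: str) -> str:
    """Pick a model tier from a cheap heuristic over the request text."""
    # Multi-step work goes to an expensive reasoning tier.
    if any(word in task.lower() for word in ("prove", "plan", "architect")):
        return "reasoning-model"
    # Short, simple requests go to a cheap, low-latency tier.
    if len(task) < 80:
        return "small-model"
    # Everything else gets the general-purpose default.
    return "general-model"
```

Even this crude split captures the economics: sending every request to the largest model wastes money, while sending every request to the smallest one sacrifices capability.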
Small Language Models (SLMs)
Compact language models optimized for efficiency, speed, and on-device or edge deployment. SLMs trade raw capability for lower cost and greater controllability, making them suitable for constrained or embedded environments.
Reasoning Models
Models explicitly optimized to perform multi-step reasoning, planning, and problem decomposition. They emphasize structured thought over raw text generation to improve correctness on complex tasks.
Copilots
AI assistants designed to work alongside humans within specific workflows. Copilots provide suggestions, automation, and contextual assistance while keeping humans in the decision loop.
MCP (Model Context Protocol)
A protocol for exposing tools, APIs, and data sources to models in a structured, machine-readable way. MCP standardizes how models discover, invoke, and reason about external capabilities.
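An illustrative, simplified tool manifest in the spirit of MCP; the real protocol defines its own JSON-RPC message formats, so treat this as a sketch of the idea (a machine-readable description a model can discover) rather than the specification itself.

```python
import json

# A tool described declaratively: name, purpose, and a JSON Schema for
# its inputs, so a model can discover and invoke it without custom glue.
tool = {
    "name": "get_weather",
    "description": "Fetch current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# What a server might advertise and a model-side client would "discover".
manifest = json.dumps({"tools": [tool]})
```

The standardization is the point: once tools are self-describing, any compliant model or agent can use them without per-integration adapters.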
Agents
Autonomous or semi-autonomous systems that use models, tools, and memory to pursue goals over time. Agents can plan, act, observe outcomes, and iterate without continuous human prompting.
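The plan-act-observe cycle reduces to a loop; here a string match stands in for planning and a lambda stands in for a tool, both assumptions for illustration, where a real agent would plan with an LLM over richer observations.

```python
def agent_loop(goal, tools, max_steps=5):
    """Act, observe, and iterate until a tool reports the goal done."""
    history = []
    for _ in range(max_steps):
        # "Planning" stand-in: pick the first tool named in the goal text.
        tool = next((name for name in tools if name in goal), None)
        if tool is None:
            break
        observation = tools[tool]()           # act
        history.append((tool, observation))   # observe and remember
        if observation == "done":             # goal reached, stop iterating
            break
    return history

steps = agent_loop("search the docs, then stop", {"search": lambda: "done"})
```

The `max_steps` bound matters even in a sketch: without it, an agent that never observes success loops forever, which is exactly the failure mode guardrails exist to catch.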
Moltbook
A notebook-centric development pattern where code, prompts, data, and execution co-evolve. Moltbooks emphasize rapid iteration and mutation, often blurring the line between experimentation and production.
Gastown
A metaphor for early-stage AI infrastructure: dense, experimental, and rapidly evolving. Gastown systems prioritize velocity and exploration over polish, governance, or long-term maintainability.
Ralph Wiggum
A shorthand for AI systems that are earnest but unreliable — confidently producing outputs without true understanding. The term highlights failure modes where models appear capable but lack grounding or judgment.
OpenClaw / Clawdbot
Experimental, community-driven AI tools and bots that emerge from hackathons, side projects, and open experimentation. They represent the playful, exploratory edge of AI development — often more theater than production — but occasionally surface patterns or capabilities that inform more serious enterprise work.
Skills
Discrete, reusable capabilities exposed to models or agents as callable functions. Skills encapsulate business logic, integrations, or workflows that extend model usefulness beyond text generation.
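One common shape for this is a registry of named callables; the decorator pattern and the `get_order_status` skill below are hypothetical examples, where a real system would also expose input schemas to the model.

```python
SKILLS = {}

def skill(name, description):
    """Register a function as a named, model-callable capability."""
    def decorator(fn):
        SKILLS[name] = {"description": description, "fn": fn}
        return fn
    return decorator

@skill("get_order_status", "Look up an order's shipping status by id")
def get_order_status(order_id: str) -> str:
    # Stand-in for a real integration (database lookup, API call, etc.).
    return f"order {order_id}: shipped"

def invoke(name, **kwargs):
    """Dispatch a model's tool-call request to the registered skill."""
    return SKILLS[name]["fn"](**kwargs)
```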
Memory Systems
Mechanisms for storing, retrieving, and updating information across interactions. Memory systems enable personalization, long-running tasks, and continuity beyond a single prompt or session.
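A dict-backed sketch of the store/recall contract, assuming facts keyed by user and name; real memory systems add retrieval ranking, summarization, and expiry on top of this.

```python
class Memory:
    """Per-user key-value memory that persists across interactions."""

    def __init__(self):
        self._facts = {}

    def remember(self, user, key, value):
        """Store or update a fact for a user."""
        self._facts.setdefault(user, {})[key] = value

    def recall(self, user, key, default=None):
        """Retrieve a fact in a later session, or a default if unknown."""
        return self._facts.get(user, {}).get(key, default)

memory = Memory()
memory.remember("ada", "preferred_language", "Python")
```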
Fine-Tuning & Model Customization
The process of further training pre-trained models on domain-specific or proprietary data to improve performance on targeted tasks. Fine-tuning sits between prompt engineering and training from scratch, offering deeper behavioral customization while inheriting general capabilities from the base model.
Multimodal AI
Models and systems capable of processing and generating across multiple data types — text, images, audio, video, and documents. Multimodal capabilities extend AI beyond language-only tasks into visual inspection, document understanding, meeting transcription, and cross-modal reasoning.
Evaluation & Benchmarking
The practice of systematically measuring model and system performance across accuracy, reliability, safety, and task completion. Evaluation frameworks include model-level benchmarks, RAG quality scoring, agent success rates, hallucination detection, and enterprise-specific acceptance criteria.
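At its simplest, an evaluation harness is a scored loop over a test set; exact-match scoring and the toy system here are assumptions, where production evals layer on rubric grading, RAG quality metrics, and hallucination checks.

```python
def evaluate(system, cases):
    """Return the fraction of cases where the system's answer matches."""
    passed = sum(1 for question, expected in cases
                 if system(question) == expected)
    return passed / len(cases)

def toy_system(question):
    # Stand-in for a model call.
    return question.upper()

score = evaluate(toy_system, [("hi", "HI"), ("no", "nope")])  # 0.5
```

Running this on every model or prompt change turns "seems better" into a number that can gate a release.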
Governance & Compliance
Organizational and regulatory frameworks for managing AI risk, accountability, and transparency. This includes model registries, risk classification, bias auditing, explainability requirements, and adherence to emerging regulation like the EU AI Act and executive orders.
Cost Economics & FinOps
The discipline of understanding, forecasting, and optimizing the financial cost of AI operations — including inference spend, GPU procurement, token-level pricing, training costs, and model hosting. AI FinOps applies cloud cost management principles to a new and often less predictable cost surface.
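The token-level arithmetic is simple enough to sketch; the per-million-token prices below are illustrative defaults, not any provider's actual rates, and real bills also reflect caching, batching, and tiered discounts.

```python
def monthly_cost(requests_per_day, input_tokens, output_tokens,
                 input_price_per_m=3.00, output_price_per_m=15.00, days=30):
    """Estimate monthly inference spend in dollars.

    Prices are expressed per million tokens, as providers typically
    quote them; output tokens usually cost several times input tokens.
    """
    per_request = (input_tokens * input_price_per_m
                   + output_tokens * output_price_per_m) / 1_000_000
    return requests_per_day * per_request * days

# 10k requests/day at 1,000 input + 500 output tokens each -> $3,150/month.
estimate = monthly_cost(10_000, 1_000, 500)
```

The asymmetry between input and output pricing is why verbose model responses, not long prompts, often dominate the bill.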
Supply Chain & Dependency Risk
The set of dependencies enterprises take on when building with AI — model providers, chip manufacturers, API pricing stability, open-source model licensing, and the risk of deprecation or breaking changes. Supply chain risk in AI mirrors software supply chain concerns but with less mature tooling and higher concentration among a small number of providers.
Data Centers
The physical and cloud infrastructure housing the compute required for AI training and inference. Data centers encompass GPU clusters, power and cooling capacity, geographic placement for data sovereignty, and the massive capital expenditure underpinning AI capabilities. They are the material foundation that every other wave depends on.