Meet Moderne Prethink

Semantic Code Context for Coding Agents

Schedule Demo

Faster reasoning. Lower LLM costs. More consistent results.

Moderne Prethink gives coding agents compiler-accurate understanding of your codebase so they can reason from resolved structure, dependencies, and architecture instead of inferring from raw files and prompts.

Structured semantic context for agents

Your codebase’s full structure, dependencies, configurations, and conventions become resolved context, so agents reason from facts, not inferences.

Faster results with fewer tokens

Agents stop reconstructing the repository on every query and instead work from an authoritative understanding of the code, consuming fewer tokens.

Fully customizable knowledge

Knowledge is generated programmatically with Moderne recipes, so teams can customize Prethink output.

What is Moderne Prethink?

Prethink is a structured, machine-readable representation of how your codebase actually works, built directly from deep analysis of your code. It captures relationships, dependencies, conventions, and architecture so agents can reason from resolved context instead of inferring from raw files and prompts.

This isn't RAG, embeddings, or prompt engineering.

It's Moderne Prethink.

| Approach | How context is built | Result |
| --- | --- | --- |
| MCP / RAG | Retrieved snippets | Partial, token-heavy inference |
| Embeddings | Probabilistic vectors | No semantic guarantees |
| Prompt engineering | Manual curation | Brittle, inconsistent |
| Prethink | Deterministic analysis of semantic code models | Compiler-accurate, reusable context |

How Moderne Prethink works: From code to context

Prethink builds a shared, system-level understanding of your codebase using deterministic analysis and customizable recipes, then configures your agents to reference that context directly.

Build LST code models per repo

An LST (Lossless Semantic Tree) is a fully resolved, compiler-accurate model of the repository, capturing both the structure and the meaning of the code.

  • Fully resolved types, symbols, and class hierarchies
  • Code structure and execution paths
  • Cross-repo dependency relationships
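To make "fully resolved" concrete, consider the hypothetical fragment below (the classes and method are illustrative, not Moderne output). Where raw source text contains only the token `List`, an LST-style model attributes it with its fully qualified type and symbol information, so a tool never has to guess which `List` is meant.

```java
import java.util.List;

class Order {}

class OrderService {
    // Source text says only "List<Order>"; a resolved model attributes the
    // type as java.util.List<Order>, along with the method's symbol and its
    // place in the class hierarchy, so tools reason over types, not tokens.
    List<Order> findOpen() {
        return List.of();
    }
}
```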

Run Moderne Prethink recipes to build context

Prethink is generated using a set of recipes that run on the LSTs, bootstrapping repository-level knowledge quickly and consistently—structure, intent, constraints, and relationships.

  • Architectural patterns and conventions
  • Call graphs, usage patterns, and service boundaries
  • Transitive dependency chains and upgrade impact
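As a sketch of what such recipe output could look like, here is a hypothetical dependency-intelligence artifact in CSV form. The file name, column names, and rows are illustrative assumptions, not Moderne's actual output schema:

```csv
repository,dependency,declared_version,resolved_version,depth,introduced_by
payments-service,com.fasterxml.jackson.core:jackson-databind,2.15.2,2.15.2,1,direct
payments-service,org.yaml:snakeyaml,,1.33,2,jackson-dataformat-yaml
```

A row like the second one is what lets an agent reason about transitive upgrade impact without scanning the build itself.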

Put context where agents work

Prethink recipes output inspectable CSV, Markdown, and CALM artifacts to a repo or context registry; agent configurations are updated to point to Prethink.

  • Works with your existing agents and internal tools
  • Versioned with your code or stored centrally
  • Refreshable via CI or scheduled runs
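Concretely, the artifacts can live alongside the code. The layout below is a hypothetical example of what "versioned with your code" might look like, not a prescribed structure:

```text
payments-service/
  .prethink/
    architecture.calm.json   # CALM model of services, boundaries, relationships
    dependencies.csv         # resolved direct and transitive dependencies
    conventions.md           # patterns and conventions detected by recipes
```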

Agents access Prethink in their workflow

Agents reference Prethink for resolved knowledge about the code without needing to parse it themselves—shifting agent effort away from inference and toward execution.

  • Agents reason from resolved structure, not raw files
  • No context reconstruction on every interaction
  • Faster execution with fewer tokens consumed
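For example, an agent's instruction file can point at the Prethink artifacts instead of the raw source tree. The excerpt below is a hypothetical instructions snippet; the file paths are assumptions carried over from the illustrative layout, not a fixed convention:

```markdown
<!-- excerpt from a hypothetical agent instructions file -->
Before reading source files, consult the Prethink context:

- Architecture and service boundaries: `.prethink/architecture.calm.json`
- Dependency facts (versions, transitive chains): `.prethink/dependencies.csv`
- Conventions and patterns to follow: `.prethink/conventions.md`
```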

Keep context up-to-date continuously

Teams control exactly when and how that context is refreshed:

  • As part of the CI pipeline
  • On scheduled cadences
  • After major dependency updates or refactors
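One way to wire the refresh into CI is sketched below as a hypothetical GitHub Actions workflow. The CLI commands and the recipe placeholder are illustrative and would need to be adapted to your Moderne setup:

```yaml
# Hypothetical workflow: regenerate Prethink context after merges to main,
# with a weekly scheduled run as a backstop.
name: refresh-prethink
on:
  push:
    branches: [main]
  schedule:
    - cron: "0 5 * * 1"
jobs:
  prethink:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Illustrative: build the LST and run your Prethink recipes,
      # then commit the refreshed artifacts back to the repository.
      - run: mod build .
      - run: mod run . --recipe <your-prethink-recipe>
      - run: |
          git add .prethink
          git commit -m "chore: refresh Prethink context"
          git push
```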

What agents can understand with Moderne Prethink

Semantic structure and patterns

Agents learn how code is meant to be written and organized in your environment, reducing rework caused by violating conventions or patterns.

Service interfaces and integrations

Agents see resolved service endpoints, external calls, and integration points, making it easier to reason about the impact and downstream effects of changes.

Dependency intelligence

Agents understand compatibility, risk, and upgrade impact across direct and transitive dependencies without relying on incomplete scans or guesswork.

Security & configuration context

Agents can reason about runtime behavior and security posture based on real configuration, not inferred assumptions.

Architecture & system relationships (CALM)

Architecture expressed in CALM (the FINOS Common Architecture Language Model) gives agents an explicit model of system structure, boundaries, and relationships.

Test coverage & intent

Agents can see how code is validated and what tests cover, making it easier to propose safe changes and identify gaps with confidence.

Less guesswork. Lower costs. Better results.

When agents can’t see the whole story, they burn time and tokens piecing together context. Moderne Prethink gives them a shared understanding from the start.


Less token waste

Prethink eliminates repeated context reconstruction, cutting token usage and time spent across sessions and workflows.

Fewer hallucinations

With real structure and relationships in context, agents hallucinate less and produce more reliable output.

More consistent output

Teams get predictable results because agents work from the same trusted context every time.

Give your agents the context they’ve been missing.

Dive Deeper

Blog

How Moderne Prethink accelerates coding agents and reduces token use

AI tools promise speed, but context is the bottleneck. Moderne Prethink changes how coding agents understand real-world codebases.

moderne docs

Dive deeper into Moderne Prethink Documentation

See how Prethink works under the hood and how to tailor context for your repositories and agent workflows.

technology

Lossless Semantic Tree (LST)

The compiler-accurate, format-preserving code model that makes safe, automated modernization possible across thousands of repositories.

Frequently Asked Questions

How do you give LLMs accurate context for millions of lines of code?
How do you prevent hallucinations in large codebases?
Why is raw code a poor input for LLM reasoning?
What is semantic code context?
Why does sending code to LLMs get so expensive?
How can structured context reduce token usage?
How do AI agents understand service boundaries?
How can architecture be represented for machines, not just humans?
How do you ensure AI recommendations are grounded in reality?