Finding every way a Spring controller commits a response too early

Jonathan Schneider

A few weeks ago the Spring Security team disclosed CVE-2026-22732. The one-line version is that if a controller writes the response body before Spring Security's filter chain has a chance to add its lazy security headers, those headers get silently dropped. The longer version is more interesting: there is no single call site to grep for, Spring Security's contract leaves the responsibility for not committing too early with the application, and the set of ways an application can commit too early is larger than it looks. This is a walk through how we built a recipe that finds all of them, what it can and can't do, and how it composes with remediation.

The bug in one small example

Spring Security builds an ordered chain of Filter instances around every request. One of the filters it adds late in the chain — the one responsible for X-Frame-Options, X-Content-Type-Options, Strict-Transport-Security, and the cache-control headers that keep authenticated responses out of shared caches — writes its headers lazily. “Lazily” here has a specific meaning: the filter delays HttpServletResponse.setHeader until just before the response is committed, so it has the latest state of the application’s own header decisions to layer its defaults on top of. That delay is an optimization, and like most optimizations it has a corner case.

The corner case is that if the application commits the response — flushes the buffer, writes body bytes past the buffer threshold, or explicitly sets Content-Length — before that last filter runs, the container has already sent the headers over the wire. Spring Security’s lazy setHeader call is a no-op against an already-committed response. Nothing throws. Nothing logs. The response simply goes out without the security headers it was supposed to have.
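The silent-drop behavior is easy to model in a few lines of plain Java. FakeResponse below is an invented stand-in, not the servlet API; the one behavior it reproduces is that header writes against a committed response simply vanish:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of commit semantics. The dangerous property it reproduces:
// setHeader against a committed response is a silent no-op.
public class LazyHeaderDemo {

    static class FakeResponse {
        private final Map<String, String> headers = new LinkedHashMap<>();
        private boolean committed;

        void setHeader(String name, String value) {
            if (committed) {
                return; // containers ignore header writes after commit; nothing throws or logs
            }
            headers.put(name, value);
        }

        void flushBuffer() {
            committed = true; // the headers as they stand now go out over the wire
        }

        boolean hasHeader(String name) {
            return headers.containsKey(name);
        }
    }

    public static void main(String[] args) {
        FakeResponse response = new FakeResponse();
        response.flushBuffer();                        // the application commits early
        response.setHeader("X-Frame-Options", "DENY"); // Spring Security's lazy write arrives too late
        System.out.println(response.hasHeader("X-Frame-Options")); // false
    }
}
```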

A minimal vulnerable controller looks like this:

// requires: import static java.nio.charset.StandardCharsets.UTF_8;
@GetMapping("/vuln/stream")
public void stream(HttpServletResponse response) throws IOException {
    response.setContentType("text/plain");
    response.getOutputStream().write("hello".getBytes(UTF_8));
    response.getOutputStream().flush();
}

Three lines of boring code, and depending on which version of Spring Security is on the classpath the response goes out without X-Frame-Options, without X-Content-Type-Options, and without the cache headers that keep your authenticated API responses out of a shared CDN. The Spring Security team’s advisory covers this in more detail, and the patched versions — 5.7.22, 5.8.24, 6.3.15, 6.4.15, 6.5.9, and 7.0.4 — restructure the filter to flush headers eagerly rather than lazily. But “upgrade” is the remediation once you know which repositories are affected, and if you have more than a handful of Spring applications you probably don’t.

Why finding it is interesting

The advisory describes the server-side behavior. It doesn't tell you which of your 300 Spring repositories actually contain code paths that commit the response early. That's the detection problem — and it's the part that the advisory can't answer for you, because it depends on what your code does, not what Spring Security's did.

The three canonical patterns

Semgrep published a small demo repository a few days after the disclosure that enumerates three canonical patterns. The demo is a minimal Spring Boot app with three vulnerable endpoints and a JUnit test that hits each one and asserts on the missing headers. It’s an admirably direct way to communicate “this is the shape of the bug,” and we leaned on it as a starting point. The three patterns are:

The first is the one shown above: the controller calls response.getOutputStream() and then write or flush on the resulting ServletOutputStream. Once the buffer fills or flush is called explicitly, the response is committed.

The second is the print-writer variant: response.getWriter().write(...) or .println(...), followed sometimes by response.flushBuffer(). Functionally the same story — the writer wraps the same underlying stream, and writing to it past the buffer commits the response.

The third is more subtle. An application that computes Content-Length in its own code and sets it explicitly — either through the dedicated setContentLength(int) / setContentLengthLong(long) APIs, or through setIntHeader("Content-Length", n), or through setHeader("Content-Length", "n") — gives the container enough information to commit the response as soon as the body reaches that length. Servlet containers optimize for this case, which is exactly what makes the pattern hard to catch: there is no flush, no write past a threshold, no explicit commit call. The response commits because it can.
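The same kind of toy model captures the third pattern. CountingResponse is invented; real containers are more elaborate, but the commit condition has this shape:

```java
// Toy model of the Content-Length case: the container commits as soon as the
// written bytes reach the declared length, with no flush anywhere in the
// application code.
public class ContentLengthCommitDemo {

    static class CountingResponse {
        private int declaredLength = -1;
        private int written;
        private boolean committed;

        void setContentLength(int n) { declaredLength = n; }

        void write(byte[] body) {
            written += body.length;
            // The container knows the body is complete: commit eagerly.
            if (declaredLength >= 0 && written >= declaredLength) {
                committed = true;
            }
        }

        boolean isCommitted() { return committed; }
    }

    public static void main(String[] args) {
        CountingResponse response = new CountingResponse();
        byte[] body = "hello".getBytes();
        response.setContentLength(body.length);
        response.write(body); // no flush, no buffer overflow: committed anyway
        System.out.println(response.isCommitted()); // true
    }
}
```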

These three patterns are the headline cases, and any detection worth building has to cover them. The question is what happens when the code doesn’t look exactly like the demo.

Where pattern matching starts to run out

If you read Semgrep’s demo as a specification — these three patterns, and nothing else — you can write a perfectly reasonable pattern-matching rule for it. Semgrep’s rule language is expressive, and for well-scoped problems the lightweight model of “write down the AST shape, write down the autofix” works. The value proposition is real, and I don’t want to wave it away. For a great many security rules, text-and-structure patterns are the right tool.

But the patterns above are easy cases. The interesting cases are the ones where the same underlying vulnerability shows up without the pattern on the surface. Consider what happens when someone factors the stream acquisition out into a helper:

private OutputStream streamFor(HttpServletResponse response) throws IOException {
    return response.getOutputStream();
}

@GetMapping("/vuln/helper")
public void helper(HttpServletResponse response) throws IOException {
    streamFor(response).write("hello".getBytes(UTF_8));
}

The call to response.getOutputStream() is still there, but the write call site is a method invocation on the return value of streamFor, not on response directly. A pattern rule keyed on “$RESPONSE.getOutputStream().$SINK(...)” won’t match this. A pattern rule keyed on “$STREAM.write(...) where $STREAM comes from getOutputStream” has to reason about return types across procedure boundaries, which is exactly the kind of thing pattern engines punt on by default. Semgrep’s interprocedural analysis is explicit about its scope: “by default, Semgrep Code can analyze interactions beyond a single function but within a single file,” with cross-file analysis available as an opt-in feature that takes longer to run. For a vulnerability that can span helper methods, utility classes, and wrapper subclasses in the same codebase, “within a single file” is a real constraint.

Now consider what happens when someone stashes the stream on a field:

class Exporter {
    private OutputStream out;

    void init(HttpServletResponse response) throws IOException {
        this.out = response.getOutputStream();
    }

    void send(byte[] bytes) throws IOException {
        out.write(bytes);
    }
}

The tainted value isn’t flowing through a return — it’s flowing through an instance field. The write that commits the response is in a different method from the getOutputStream that obtained the stream. To flag this, the analysis has to track that this.out carries a property acquired somewhere else and then used somewhere else again. That’s the textbook definition of an interprocedural data-flow problem with heap support, and it’s not something a pattern language handles directly.

One more — constant propagation across a local. This one looks trivial but trips up pattern rules surprisingly often:

final String header = "Content-Length";
response.setIntHeader(header, body.length);

If your rule checks the literal first argument of setIntHeader, this doesn’t match. If your rule checks “the first argument is a variable whose definition is the string literal Content-Length,” you need constant propagation across the local binding — which is fine for a compiler, and also fine for a data-flow framework, and exactly not what pattern matching is designed for.
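The step a data-flow framework performs here can be sketched as a single substitution. The resolve helper and the names are invented for illustration; real constant propagation runs over the full binding graph:

```java
import java.util.Map;

// Toy constant propagation: resolve a variable back to the literal it was
// bound to before checking the rule's argument condition.
public class ConstPropDemo {

    static String resolve(String arg, Map<String, String> constantBindings) {
        return constantBindings.getOrDefault(arg, arg);
    }

    public static void main(String[] args) {
        // final String header = "Content-Length";
        Map<String, String> bindings = Map.of("header", "Content-Length");

        // response.setIntHeader(header, body.length): the first argument is "header"
        String firstArg = resolve("header", bindings);
        System.out.println("Content-Length".equalsIgnoreCase(firstArg)); // true: the rule fires
    }
}
```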

The last variation worth mentioning is wrapper subclasses. The Servlet API ships HttpServletResponseWrapper and a well-behaved codebase occasionally has one or more subclasses of it, sometimes deep in a logging filter or a rate-limiting interceptor. A subclass of HttpServletResponseWrapper that overrides getOutputStream — or that exposes any method whose name happens to match a sink pattern — will be missed by a rule that matches the declared type of the receiver exactly. Covering subclasses requires the detection tool to reason about the type hierarchy and about method overrides; grepping for “jakarta.servlet.http.HttpServletResponse” as a type name won’t find them.

Taint flow as the right abstraction

The shared property of all five variations — direct call, helper indirection, instance-field stash, constant-propagated literal, wrapper subclass — is that there is a value obtained from one API call that is eventually consumed by another. In the direct case, it’s obtained and consumed on the same expression. In the helper case, it’s obtained and returned and then consumed. In the field case, it’s obtained, stashed, read back, and consumed across method boundaries. In the constant-propagation case, the thing being tracked isn’t the stream but the string literal "Content-Length", but the shape is the same: a value from one place ends up influencing a call somewhere else.

This is what taint analysis is for. A taint analysis formalizes “obtained here, reaches there” as a flow over a program’s data dependence graph. You define sources (APIs that produce a value you care about), sinks (APIs that consume it in a way that matters), and the analysis tells you which source flows reach which sink. OpenRewrite’s rewrite-program-analysis library provides a taint framework that is interprocedural, field-sensitive, and follows method summaries — the three properties that make the five variations above tractable.

For CVE-2026-22732, the recipe defines three taint specs — one for the output-stream flow, one for the writer flow, and one for the Content-Length literal flow — plus a fourth structural recipe that handles flushBuffer() and the dedicated setContentLength APIs (which don’t have a tainted value to track, so taint flow isn’t the right shape for them). Two of the three taint specs share nearly all of their code, so we extract a small abstract base class:

abstract class AbstractHttpResponseReceiverCommitSpec implements TaintFlowSpec {

    protected abstract MethodMatcher[] sourceMethods();
    protected abstract MethodMatcher[] sinkMethods();
    protected abstract TaintType taintType();

    @Override
    public Set<? extends TaintType> matchSource(Cursor cursor) {
        return TaintFlowSpec.anyMatchedCall(cursor, sourceMethods(), taintType());
    }

    @Override
    public Set<? extends TaintType> matchSink(Cursor cursor, SinkContext context,
                                                   Set<? extends TaintType> flowing) {
        if (!context.isReceiver() || !flowing.contains(taintType())) return emptySet();
        // ... match the invocation against sinkMethods() ...
    }
}

The interesting constraint is the receiver check — context.isReceiver(). A method invocation has a receiver (the thing to the left of the dot) and zero or more arguments. For this CVE we care when the tainted stream is the receiver of the write call, not when it’s passed as an argument to some unrelated method. The framework surfaces that position as part of the sink context, so the spec can be precise. A pattern language can express the same constraint with enough metavariables, but keeping the noise down at pattern-matching scope often means writing the same guard several times across several rules.

Critically, each MethodMatcher is constructed with matchOverrides=true:

new MethodMatcher("jakarta.servlet.ServletResponse getOutputStream()", true)

That flag is what covers the wrapper-subclass case. HttpServletResponse inherits getOutputStream from ServletResponse; the matcher is written against the declaring type, and the matchOverrides flag tells the framework to also match any subclass that overrides this method — which is exactly what HttpServletResponseWrapper subclasses do. There is no enumeration of wrapper classes, no allowlist, no hand-curated type hierarchy. OpenRewrite knows the type graph because the LST is fully type-attributed; we just ask the matcher to use it.
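The idea behind matchOverrides can be sketched with JDK types standing in for the servlet hierarchy. The matches helper below is invented; OpenRewrite's actual matcher works over the type-attributed LST rather than reflection:

```java
import java.io.FilterOutputStream;
import java.io.OutputStream;
import java.util.Arrays;

// Sketch: a matcher keyed on the declaring type also fires when the receiver
// is a subtype that inherits or overrides the method.
public class MatchOverridesDemo {

    static boolean matches(Class<?> declaringType, String methodName, Class<?> receiverType) {
        return declaringType.isAssignableFrom(receiverType)
                && Arrays.stream(receiverType.getMethods())
                         .anyMatch(m -> m.getName().equals(methodName));
    }

    public static void main(String[] args) {
        // FilterOutputStream overrides write(..) from OutputStream, the way a
        // response wrapper overrides getOutputStream() from ServletResponse.
        System.out.println(matches(OutputStream.class, "write", FilterOutputStream.class)); // true
        System.out.println(matches(OutputStream.class, "write", String.class));             // false
    }
}
```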

The field-sensitivity and interprocedural flow, similarly, aren’t things the recipe code enables or configures. They’re on by default in the framework. When the analysis encounters this.out = response.getOutputStream() in one method and out.write(bytes) in another, the method summaries computed over the enclosing class tie them together. The recipe author doesn’t write that logic — it’s already there, the same way a compiler’s liveness analysis is already there when you write a compiler pass that consumes it.

Stage by stage, across one trace

It’s worth slowing down for a moment and walking the analysis through a finding that exercises all of the machinery at once. Consider a slightly elaborated version of the earlier field-stash example, the kind of class that actually shows up in a real codebase:

class JsonExporter {
    private OutputStream out;

    private OutputStream stream(HttpServletResponse r) throws IOException {
        return r.getOutputStream();
    }

    void init(HttpServletResponse r) throws IOException {
        OutputStream s = stream(r);
        this.out = s;
    }

    void emit(byte[] data) throws IOException {
        out.write(data);
    }
}

The getOutputStream call is not adjacent to the write call. It’s routed through a private helper, assigned to a local, stored on a field, and finally used from a different method. The two ends of the flow are connected by four intervening stages, none of which contain the pattern a rule would grep for. To catch this, the analysis has to traverse all four.

It goes like this, one stage at a time. At the first call site, stage one is the source match. The spec declares jakarta.servlet.ServletResponse getOutputStream() (with matchOverrides=true) as a source, and the method invocation inside stream matches it. The framework records that the return of that call carries the taint type OUTPUT_STREAM_COMMIT. Up to this point, everything looks like a pattern match.

The interesting part is stage two. Because the tainted value is the return of an enclosing method — the private stream — the framework computes a summary for stream that says “the return value carries OUTPUT_STREAM_COMMIT taint whenever the input is a HttpServletResponse.” This is not something the recipe author asks for; it’s the standing behavior of TaintAnalysis. The summary is attached to the method, so every call site of stream will see the return as tainted without re-analyzing the body.

At the call site inside init, stage three consumes that summary. The expression stream(r) is evaluated, the summary fires, and the return flows into the local variable s. Local binding propagation is the analysis’s most elementary step — the thing every data-flow analysis does for free — and it carries the taint onto s.

Stage four is the field store. The assignment this.out = s records that the out field of this holds a value of type OUTPUT_STREAM_COMMIT. Field-sensitivity here is what distinguishes this analysis from the intraprocedural kind that Semgrep CE ships by default. The analysis doesn’t conflate JsonExporter.out with some other class’s out field; it tracks taint on the specific field, keyed by the JavaType.Variable that OpenRewrite attaches to the AST during type attribution. (I wrote about why we don’t have to choose between field-sensitive and field-insensitive in the design notes for rewrite-program-analysis; the short version is that the type system gives us field identity for free, so it’s cheaper to always be field-sensitive than to hedge.)

In emit, the flow picks up again. Stage five is the field load: out is read, the taint is pulled off the field and onto the local receiver of the next invocation. Stage six is the sink match. The spec’s matchSink implementation checks that the cursor is at a method invocation, that the invocation’s receiver position matches (context.isReceiver() returns true), that the taint on the receiver includes OUTPUT_STREAM_COMMIT, and that the invoked method matches java.io.OutputStream write(..) under matchOverrides. All four checks pass. A TaintFlowTable row is emitted, with the source line number, the sink line number, and the taint type.

Each of those six stages is a transfer function inside the same analysis. None of them are specific to this CVE. The recipe author wrote one line identifying the source (“this method produces the taint”), one line identifying the sink (“this method consumes it on the receiver”), and one data table binding. The framework handled the summary computation, the local propagation, the field sensitivity, and the receiver check. The value of a layered analysis like this is exactly the shared middle: once the framework exists, every recipe that needs any subset of those stages gets them at no marginal cost.
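The bookkeeping across those six stages reduces to a small ledger. This is only an illustration of the record-keeping, not the rewrite-program-analysis API; the keys are invented identifiers:

```java
import java.util.HashSet;
import java.util.Set;

// Toy ledger of what the analysis records as it walks the JsonExporter trace.
public class TraceLedgerDemo {

    // Stage 6 in miniature: the sink fires only if the receiver's taint set
    // carries the taint recorded at stage 4's field store.
    static boolean sinkMatches(Set<String> tainted, String receiverKey) {
        return tainted.contains(receiverKey);
    }

    public static void main(String[] args) {
        Set<String> tainted = new HashSet<>();

        tainted.add("stream#return");    // stages 1-2: source match; summary attached to stream(..)
        tainted.add("init#s");           // stage 3: the summary fires at the call site; local binding
        tainted.add("JsonExporter#out"); // stage 4: field store, keyed by field identity

        // stages 5-6: the field load puts the taint on the receiver of out.write(data)
        System.out.println(sinkMatches(tainted, "JsonExporter#out")); // true -> emit a TaintFlowTable row
    }
}
```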

Stage by stage, across five detections

There’s a second kind of stitching, at a different layer. Inside a single trace the stages compose into one finding. Across the whole recipe, five independent pipelines compose into one actionable output. They don’t feed each other directly — each one scans the same source code for a different shape — but their outputs join, and the join is what makes the result usable at estate scale.

The outer stages are:

Stage one is the OutputStream commit taint flow. Source: ServletResponse.getOutputStream() (servlet and jakarta, plus any subclass that overrides it). Sink: receiver calls to write, flush, close, print, or println on the resulting stream. Emits into TaintFlowTable, tagged OUTPUT_STREAM_COMMIT.

Stage two is the Writer commit taint flow. Source: ServletResponse.getWriter(). Sink: receiver calls to write, print, println, format, or close. Emits into TaintFlowTable, tagged WRITER_COMMIT. Structurally identical to stage one — they share a base class — but with a different taint type so that a downstream consumer can tell which flow produced the finding.

Stage three is the Content-Length literal taint flow. Source: the string literal "Content-Length" (case-insensitive). Sink: argument 0 of setIntHeader, setHeader, or HttpHeaders.set. This is the stage where pattern matching consistently misses things — the literal might be bound to a local or a constant field, and the taint analysis follows it across either. Emits into TaintFlowTable, tagged CONTENT_LENGTH_HEADER_NAME.

Stage four is the structural direct-commit detection. No taint is needed, because the sink call doesn’t consume a value from another source: the call itself commits the response. This covers setContentLength(int), setContentLengthLong(long), flushBuffer(), and the reactive ServerHttpResponse.setComplete() and writeWith(...). A JavaIsoVisitor walks the AST, matches method invocations against the right matchers, and emits into HttpResponseDirectCommitTable.

Stage five is the project-level Spring Security version lookup. Unlike the first four, it operates at the project boundary rather than the source file. It runs as the aggregator’s getScanner, which means it fires once per source file but uses a ConcurrentHashMap.newKeySet() to record each project identity only once. For each unique project it reads the MavenResolutionResult or GradleProject marker attached to the build file, finds the resolved spring-security-web or spring-security-core coordinate, and emits a single row into SpringSecurityVersionByProject. One row per project, regardless of how many source files the project has.
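The once-per-project guard in stage five is a one-liner over a concurrent set; Set.add returns true only for the first caller to insert a key. The method and names below are invented for illustration:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the once-per-project guard: the scanner fires per source file,
// but the version row is emitted exactly once per project.
public class OncePerProjectDemo {

    private static final Set<String> seenProjects = ConcurrentHashMap.newKeySet();

    static boolean shouldEmitVersionRow(String projectId) {
        return seenProjects.add(projectId); // thread-safe: exactly one caller wins
    }

    public static void main(String[] args) {
        System.out.println(shouldEmitVersionRow("billing-api")); // true: emit the row
        System.out.println(shouldEmitVersionRow("billing-api")); // false: already recorded
    }
}
```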

The five stages produce three data tables. That’s where the stitching happens. TaintFlowTable rows from stages one, two, and three share a schema, so a triager can filter by taintType to get any subset. HttpResponseDirectCommitTable rows from stage four use a different schema because the finding shape is different — there’s no source-to-sink flow, just a single call site with a kind tag. SpringSecurityVersionByProject rows from stage five are the join key: every finding has a project identity, every project has a version row, and the join of finding-rows against version-rows is the answer to “show me every suppression flow in a project that is also on an affected Spring Security version.” That intersection is what actually needs to be remediated.
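The join itself is ordinary collection code. The rows below are invented; the real ones come out of the three data tables:

```java
import java.util.List;
import java.util.Map;

// Sketch of the triage join: findings against per-project versions.
public class TriageJoinDemo {

    record Finding(String project, String taintType) {}

    // Only the 6.5 line is modeled here; a real check covers every patched line.
    static boolean affected(String version) {
        String[] p = version.split("\\.");
        return p[0].equals("6") && p[1].equals("5") && Integer.parseInt(p[2]) < 9;
    }

    public static void main(String[] args) {
        List<Finding> findings = List.of(
                new Finding("billing-api", "OUTPUT_STREAM_COMMIT"),
                new Finding("report-svc", "WRITER_COMMIT"));
        Map<String, String> versionByProject = Map.of(
                "billing-api", "6.5.8",   // below the patched 6.5.9
                "report-svc", "6.5.9");   // already patched

        List<String> toRemediate = findings.stream()
                .filter(f -> affected(versionByProject.get(f.project())))
                .map(Finding::project)
                .distinct()
                .toList();
        System.out.println(toRemediate); // [billing-api]
    }
}
```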

Why the stages are independent

None of the detection stages read each other's output. They scan the same source code in parallel, each for its own shape, and each emits into its own table. The stitching is deferred to the data-table consumer — which is the right place for it, because the join is where the triage decision actually gets made. It also means adding a sixth detection (say, for a new API that's discovered to have the same liability) is a drop-in: write the spec, wire it into the aggregator's buildRecipeList, and it emits into the same tables without anyone else having to know it exists.

Once the join exists, the remediation stage — an UpgradeDependencyVersion recipe composed into the same recipe list — runs only against the filtered subset. Projects with no findings, or findings on already-patched versions, are left alone. The stitching is what makes the remediation targeted instead of blanket.

What autofix means for this CVE

Semgrep has an autofix feature that pairs a detection pattern with a replacement pattern. For simple substitutions — replacing a deprecated API call with its recommended successor, fixing a misnamed parameter, wrapping a bare SQL literal in a parameterized-query builder — autofix is a genuine productivity win. Semgrep’s more recent autofix work extends this with LLM-driven contextual guidance. For pattern-to-pattern rewrites inside a single file, the value is real.

For CVE-2026-22732, though, “autofix” is not a two-line text substitution. The canonical fix for this CVE is upgrade Spring Security past the patched version. The patched versions (5.7.22, 5.8.24, 6.3.15, 6.4.15, 6.5.9, 7.0.4) restructure the filter to flush headers eagerly rather than lazily, so the application’s commit timing stops mattering. The fix, in other words, is a dependency upgrade — which is exactly the kind of thing OpenRewrite’s catalog has as a first-class recipe.
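A minimal version of the "is this project on an affected version" check, with the patched floors copied from the advisory. The parsing is deliberately naive (no qualifiers, no version ranges), and the helper names are invented:

```java
import java.util.Map;

// Naive sketch: is a resolved Spring Security version below the patched floor
// for its release line? Floors come from the advisory's list of fixed versions.
public class AffectedVersionDemo {

    private static final Map<String, Integer> PATCH_FLOOR = Map.of(
            "5.7", 22, "5.8", 24, "6.3", 15, "6.4", 15, "6.5", 9, "7.0", 4);

    static boolean isAffected(String version) {
        String[] parts = version.split("\\.");
        Integer floor = PATCH_FLOOR.get(parts[0] + "." + parts[1]);
        if (floor == null) {
            return false; // release line not listed in the advisory
        }
        return Integer.parseInt(parts[2]) < floor;
    }

    public static void main(String[] args) {
        System.out.println(isAffected("6.5.8")); // true: below 6.5.9
        System.out.println(isAffected("6.5.9")); // false: patched
        System.out.println(isAffected("7.0.4")); // false: patched
    }
}
```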

What this means is that the “detect and remediate” story for this CVE is a composition, not a single rule. A pipeline that actually does something useful looks like: find the repositories that commit the response in a way that triggers the CVE, and inside those repositories upgrade Spring Security to a patched version. In OpenRewrite that composition is written directly:

---
type: specs.openrewrite.org/v1beta/recipe
name: io.moderne.cve202622732.FindAndPatch
displayName: Find and patch CVE-2026-22732
recipeList:
  # 1. Identify suppressed-header flows.
  - io.moderne.recipe.cve202622732.FindSpringSecurityHeaderSuppression

  # 2. Upgrade Spring Security wherever this project’s version is affected.
  - org.openrewrite.maven.UpgradeDependencyVersion:
      groupId: org.springframework.security
      artifactId: spring-security-*
      newVersion: 6.5.9
  - org.openrewrite.gradle.UpgradeDependencyVersion:
      groupId: org.springframework.security
      artifactId: spring-security-*
      newVersion: 6.5.9

The detection recipe emits findings as data-table rows, and those rows include the resolved Spring Security version per project alongside the individual suppression findings. A team running this at estate scale doesn’t need to upgrade every repository — they need to upgrade the repositories that (a) contain one of the suppression patterns and (b) are on an affected Spring Security version. The data table gives them exactly that filter. The remediation recipe then runs as a second pass, gated by whatever criterion the team picks.

There’s a version of this workflow in which the team does something more targeted than a dependency upgrade — rewriting the controller to use a ResponseEntity<T> rather than a raw stream, say, or switching to Spring’s StreamingResponseBody. That’s also composable: OpenRewrite recipes for refactoring controller signatures exist, and a composition that chains detection to a structural fix is the same pattern. The point isn’t that dependency upgrade is always the right fix — it’s that the remediation vocabulary lives at the same abstraction level as the detection vocabulary, so chaining them is straightforward rather than ad hoc.

The WebFlux variant

Servlet-based Spring MVC applications are the obvious target, but Spring also ships a reactive web stack (WebFlux) that uses ServerHttpResponse and DataBuffer instead of HttpServletResponse and ServletOutputStream. The reactive equivalents are response.writeWith(Publisher), response.writeAndFlushWith(Publisher), and response.setComplete(). The same underlying issue applies: if the application commits the response before the reactive security filter has written its headers, the headers get dropped.

The WebFlux coverage is structurally the same recipe. Add a couple more method matchers to the sink set, add a UsesType precondition for org.springframework.web.server.ServerHttpResponse, and the taint spec works. The only genuine difference is that the reactive receiver is a ServerHttpResponse rather than an HttpServletResponse, which is type information the recipe expresses directly rather than encoding as a pattern. The taint analysis infrastructure doesn’t need to care what framework the receiver belongs to.

How fast this was to build

It’s tempting, when describing a piece of software, to leave out how long it took — either because you’re embarrassed by how long it took, or because you don’t want to make it sound trivial. I’ll share the number anyway: this recipe was built in an afternoon. That includes reading the CVE advisory, reading the Semgrep demo, sketching the taint-spec shape against two existing recipes in rewrite-program-analysis, writing the three concrete specs, writing the structural recipe for flushBuffer and setContentLength, building the aggregator and the per-project Spring Security version lookup, and writing 39 unit tests across six test files.

The tally: 12 Java source files, 39 unit tests, ~800 lines of main source, and 0 new framework code.

The “0 new framework code” stat is the one that matters. Every novel thing in the recipe — interprocedural taint, constant propagation, method summaries, wrapper-subclass handling, field sensitivity — was already there in rewrite-program-analysis. The taint framework has been in production use for FindXxeVulnerability, FindProcessControlInjection, FindUnsafeReflectionInjection, and the cryptography-flow recipes for months. The CVE-2026-22732 work was entirely composition: naming the sources, naming the sinks, wiring them to a precondition and a data table. The per-project version lookup reused the same pattern that other recipes in the catalog use for reporting dependency information. The license header, the build configuration, the GitHub Actions workflows — all copied verbatim from a sibling repository and lightly edited.

This is the compounding property of a recipe library built on a shared analysis foundation. You don’t pay for the analysis infrastructure on each new rule — you pay for it once, in a shared module, and every subsequent recipe that wants taint flow, type-aware matching, or build-file traversal gets it for free. The incremental cost of “one more CVE” converges toward the cost of writing down the sources, the sinks, and the test cases.

What the recipe doesn’t do

It’s worth being explicit about scope. The recipe does not do any of the following:

It doesn’t cover interprocedural flow that crosses project boundaries. If your controller calls into a shared response-writer utility that lives in a different JAR and that JAR isn’t part of the same build, the taint analysis has seen the call but not the body. This is the same scope limit every large-scale static analysis ultimately runs into, and the CVE detection is no different. In practice it matters less than it sounds, because the controller-to-commit distance inside a typical Spring service is usually short.

It doesn’t auto-remediate the controller itself. For the “rewrite the response writing to use ResponseEntity” case, the recipe flags the site but doesn’t rewrite it. The rewrite is composable — the recipes exist — but we didn’t bundle them as part of this release because the right structural fix depends on what the controller is actually trying to do, and that judgment belongs with the team that owns the code, not with a blanket rule.

It doesn’t try to distinguish a well-behaved streaming endpoint (for instance, an SSE endpoint that correctly opts out of security headers) from a vulnerable one. The recipe flags any commit-triggering flow that is not gated by a Spring Security exception. A team that has legitimate streaming endpoints will want to triage those by inspection. This is a choice — we could be cleverer about it — and the balance point is that false positives on legitimate streaming are cheap to dismiss, whereas false negatives on vulnerable endpoints are the thing we’re trying to avoid.

Running it

The recipe is available as io.moderne.recipe.cve202622732.FindSpringSecurityHeaderSuppression. To run it against a single repository:

mod run . --recipe io.moderne.recipe.cve202622732.FindSpringSecurityHeaderSuppression
mod study . --last-recipe-run --data-table TaintFlowTable
mod study . --last-recipe-run --data-table HttpResponseDirectCommitTable
mod study . --last-recipe-run --data-table SpringSecurityVersionByProject

The output is three data tables: TaintFlowTable for the stream, writer, and Content-Length literal flows; HttpResponseDirectCommitTable for the setContentLength and flushBuffer cases; and SpringSecurityVersionByProject for the dependency-version context. Joining the three gives you the filtered list of suppression findings alongside the Spring Security version of each project — which is the input to whatever remediation decision you’re making.

For estate-scale runs, the same recipe runs against Moderne DX with the same output. The data tables persist across runs, so “show me every repository that exhibits this CVE and is on an affected version” is a single SQL-ish query against the stored results rather than a re-scan.

Closing thought

There’s a recurring pattern in how security advisories get absorbed by an engineering organization. A CVE drops. A small team reads the advisory and writes a pattern-matching rule — usually in whatever tool they already have — that matches the headline case. The rule ships. The team moves on. A few weeks later, someone asks whether the rule caught the helper-factored variant in the data-ingest service, or the wrapper-subclass case in the legacy filter chain, and the answer is “probably not, but we can write another rule.” That cycle has a cost that compounds, and it’s the cost of treating every security signal as a bespoke regex rather than as an instance of a smaller set of data-flow problems.

The alternative is to build detection on a foundation that already knows how to follow values across methods, across fields, across type hierarchies — because that foundation pays back the moment the code you’re looking for stops matching the pattern you wrote down. Every variation of this CVE that shows up in the wild — and they will show up — is cheap to catch if the detection is written at the data-flow level. And, just as importantly, every remediation that composes with detection — dependency upgrades, structural rewrites, configuration changes — is cheap to chain in, because it’s written in the same vocabulary. The recipe we shipped this week for CVE-2026-22732 is one instance of that. The next one will be cheaper.
