HTTP/1.1's weak request separation enables widespread HTTP request smuggling (desync) attacks that have been used to compromise user sessions, poison caches, and take over millions of sites by exploiting parser discrepancies across front-end/back-end chains. The paper documents multiple novel desync classes (including 0.CL, CL.0, H2.TE, Expect-based attacks), case studies affecting Akamai, Cloudflare, Netlify and others, and calls for upstream HTTP/2 adoption as the long-term fix. #Akamai #Cloudflare
Keypoints
- HTTP/1.1's ambiguous request-length semantics allow tiny parser discrepancies between front-end and back-end servers to cause request desynchronization, enabling request smuggling and site takeover.
- Common mitigations and WAF rules have masked classic detection techniques, creating a "desync endgame" where hidden flaws remain exploitable despite appearing patched.
- New exploit classes demonstrated include 0.CL, CL.0, CL.TE, H2.TE and Expect-based desyncs, with practical weaponization (double-desyncs, RQP, cache poisoning) against major CDNs and vendors.
- Case studies showed critical impacts: an internal Cloudflare desync exposing ~24M sites, Expect-based desyncs affecting Akamai/Netlify, and multiple vulnerabilities in IIS behind ALB and other proxies.
- An open-source Burp extension (HTTP Request Smuggler v3.0) and supporting Turbo Intruder scripts improved reliable detection of parser discrepancies (V-H / H-V) and aided high-value bug bounty discoveries.
- The root cause is a protocol design flaw: HTTP/1.1's text-based, multi-length semantics make small implementation bugs disproportionately dangerous; upstream HTTP/2 (binary framing) is recommended as the long-term fix.
- If upstream HTTP/2 cannot be enabled, the paper recommends strict normalization/validation on front-ends/back-ends, disabling upstream connection reuse, and regular desync-focused scanning as mitigations.
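To make the first keypoint concrete, here is a minimal sketch (not code from the paper; the two parser functions are hypothetical simplifications) of the classic CL.TE discrepancy: a front-end that trusts `Content-Length` and a back-end that trusts `Transfer-Encoding: chunked` disagree on where the same request body ends, so trailing bytes can be left on the reused upstream connection and prepended to the next user's request.

```python
# Hypothetical, deliberately naive parsers illustrating a CL.TE desync.
# A request carrying both length headers is read differently by each side.

raw = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 6\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"GET /admin HTTP/1.1\r\n\r\n"   # smuggled prefix from the attacker's view
)

def body_by_content_length(msg: bytes) -> bytes:
    """Front-end view: honor Content-Length, ignore Transfer-Encoding."""
    head, _, rest = msg.partition(b"\r\n\r\n")
    for line in head.split(b"\r\n"):
        if line.lower().startswith(b"content-length:"):
            n = int(line.split(b":")[1])
            return rest[:n]
    return b""

def body_by_chunked(msg: bytes) -> bytes:
    """Back-end view: honor Transfer-Encoding: chunked."""
    _, _, rest = msg.partition(b"\r\n\r\n")
    body = b""
    while True:
        size_line, _, rest = rest.partition(b"\r\n")
        size = int(size_line, 16)
        if size == 0:
            return body          # terminal "0" chunk: body ends here
        body += rest[:size]
        rest = rest[size + 2:]   # skip chunk data plus trailing CRLF

# The two sides disagree on where the body ends:
print(body_by_content_length(raw))  # front-end reads 6 bytes of body
print(body_by_chunked(raw))         # back-end sees an empty chunked body
```

In the back-end's view the body ends at the `0` chunk, so the leftover `GET /admin` bytes sit unconsumed on the shared connection and are interpreted as the start of the next request, which is the desynchronization the paper's attack classes build on. Real parsers are far stricter; the point is only that any disagreement about message length, however small, is exploitable.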