
From Kernel to Browser

How PQ Crypta's Six-Tool Suite Exposes the Real-World Gap Between QUIC/WebTransport and TCP

QUIC & HTTP/3 · TCP Analysis · Post-Quantum TLS
Allan Riddel · Architect, PQ Crypta / PQPDF · March 2026 · Updated April 2026
Abstract

PQ Crypta includes a suite of six independent diagnostic tools built to answer a deceptively simple question: what is actually happening at the protocol layer, versus what servers claim? The Speed Test uses genuine QUIC datagrams and WebTransport streams to measure throughput, latency, jitter, and packet loss against TCP — directly in the browser. The HTTP/3 QUIC & WebTransport Scanner performs native QUIC probing via quinn/h3 Rust libraries to extract transport parameters, fingerprint server implementations, and grade HTTP/3 deployment quality across five tiers. The PQC Readiness Scanner performs real TLS handshake analysis to detect NIST-standardized post-quantum algorithm support. The pqcrypta-proxy is an open-source Rust reverse proxy providing the infrastructure layer: native HTTP/3, WebTransport, hybrid PQC TLS (X25519MLKEM768), and production-grade operations. The Streaming Encryption Tester validates WebTransport as a live cryptographic operation channel — running key generation, encryption, and decryption of 31 post-quantum algorithms over real QUIC bidirectional streams and measuring per-algorithm throughput, latency, and compression performance in the browser. The WebTransport Telemetry Wall makes QUIC stream isolation directly observable: six independent server-driven streams run side by side while real impairments (delay, loss, bandwidth caps, jitter, disconnects) are injected into a single target stream. Each tool was built because the last one exposed a gap that required direct measurement to close. This paper explains the architecture, motivation, and methodology behind each tool — and what the measurements actually show.

Why this matters: Most ‘HTTP/3 support’ amounts to CDN edge termination — Cloudflare, Fastly, or Akamai terminates QUIC at the edge, converts to TCP, and forwards to a TCP-only origin. The user gets an HTTP/3 badge in DevTools. The origin never sees a QUIC packet. The tools built here were designed to cut through that.

Origin and Motivation

PQ Crypta did not begin with a blueprint. It began with a problem. The experiment was simple: build something real with WebTransport over QUIC, then measure what actually happens. What followed was a cascade of tools, each one built because the previous measurement revealed a gap that required direct evidence to close.

Building the pqcrypta-proxy — a Rust reverse proxy with native HTTP/3, WebTransport, and hybrid post-quantum TLS — required a way to verify that post-quantum key exchange was actually being negotiated, not just advertised. So the PQC Readiness Scanner was built: a tool that performs a real TLS handshake and inspects what the server actually selects.

Verifying the WebTransport/QUIC HTTPS stack externally required something beyond checking response headers. Alt-Svc headers can be injected by CDNs, cached by intermediaries, and left pointing at endpoints that no longer speak QUIC. So the HTTP/3 QUIC & WebTransport Scanner was built: a server-side native QUIC probe using the quinn and h3 Rust libraries that sends real QUIC Initial packets and reports what actually happens at the transport layer.

And then the throughput numbers were not what the QUIC specification or the surrounding industry hype suggested they should be. To understand why, the Speed Test was built: a browser-based measurement that runs genuine QUIC datagrams and WebTransport streams side-by-side with TCP, eliminating server-side variables and exposing what is actually happening on the user's connection.

Confirming that the transport and handshake layers worked still left a gap: nobody had measured whether WebTransport could carry real post-quantum cryptographic workloads at acceptable latency, or how per-algorithm performance varied over an actual QUIC connection. So the Streaming Encryption Tester was built: a browser-based tool that runs key generation, encryption, and decryption of 31 post-quantum algorithms over live QUIC bidirectional streams and measures throughput, latency, and compression ratios on every operation.

The six tools collectively answer a question the industry has largely avoided asking directly: what does the protocol stack actually look like at the transport, implementation, cryptographic, and application layers — versus what servers claim?

Kernel-Level Architecture: Why QUIC Behaves Differently

Understanding why TCP and QUIC produce different measurements requires understanding where each protocol lives in the operating system. This is not an abstract distinction — it determines throughput ceilings, latency floors, and hardware acceleration potential.

TCP: Kernel Space

TCP lives in net/ipv4/tcp.c (≈30,000 lines of C compiled into the kernel binary on Linux). Packet processing occurs entirely in kernel space: segmentation, reassembly, retransmission, ACK generation, congestion control, and flow control all execute without userspace context switches.

This enables critical hardware optimisations: TSO pushes segmentation to the NIC; GSO extends this in software; GRO coalesces incoming segments; kTLS enables AES-NI hardware acceleration at the kernel level; sendfile() performs zero-copy transfer. Jaeger et al. [1] measured TCP achieving ≈8,000 Mbit/s on 10 Gbps links with NIC offloading and AES-NI enabled.

QUIC: Userspace

QUIC runs entirely in userspace atop UDP sockets. Every QUIC operation — AEAD encryption/decryption, header protection, packet numbering, congestion control, loss detection, stream multiplexing — executes in userspace library code.

Every packet requires a context-switch boundary crossing. The kernel cannot parse encrypted QUIC headers and cannot offload QUIC-specific operations. Jaeger et al. measured QUIC throughput varying from 90 Mbit/s to 4,900 Mbit/s on the same 10 Gbps hardware depending solely on QUIC implementation — a 50× range. Default Linux UDP buffer sizes (212,992 bytes) compound the problem at high packet rates.

Aspect | TCP (Kernel Space) | QUIC (Userspace)
Implementation | Kernel (net/ipv4/tcp.c, tcpip.sys) | Userspace library (quinn, quiche, MsQuic)
TLS integration | Bolt-on via OpenSSL/BoringSSL, separate handshake | Mandatory TLS 1.3, fused with transport handshake
Handshake RTTs | 2–3 RTT (TCP SYN + TLS) | 1 RTT combined; 0-RTT resumption available
Hardware offload | Full: TSO, GRO, GSO, checksum, kTLS/AES-NI | Partial: UDP send/recv only; crypto in userspace
Congestion control | Kernel-managed (CUBIC, BBR) | Library-managed, pluggable
Max measured goodput (10G) [1] | ≈8,000 Mbit/s | 90–4,900 Mbit/s (implementation-dependent)
Stream multiplexing | None (1 byte-stream per connection) | Native independent streams, no HOL blocking
Buffer tuning | Minimal — kernel auto-tunes | Critical — 10× default UDP buffer increase required

Tool 1: PQ Crypta Speed Test

Built because: The QUIC throughput numbers were not matching expectations. Rather than assume the numbers were wrong, a measurement tool was needed — one that runs on real connections, in real browsers, without synthetic lab conditions.

The Speed Test is a browser-based diagnostic that directly compares QUIC/WebTransport performance against TCP on the user's actual connection. It measures five metrics — latency (RTT), jitter (σ), packet loss (%), download throughput (Mbps), and upload throughput (Mbps) — using genuine QUIC datagrams and WebTransport streams, not TCP round-trips, HTTP polling, or simulated UDP.

Latency and Packet Loss: True UDP Datagram Echo

RTT and jitter are measured via real QUIC/UDP datagram echo using QUIC DATAGRAM frames as defined in RFC 9221. Each ping embeds a high-resolution timestamp and sequence number. The server echoes immediately; the client computes round-trip time with sub-millisecond accuracy. This is not an HTTP-level ping — it is the true cost of a single UDP round trip through the user's ISP, middleboxes, and WAN path.

Packet loss probing fires 200 datagrams concurrently. Any datagram failing to echo within 2 seconds counts as lost. This concurrent burst approach reveals bursty loss caused by shallow buffers, ISP traffic policing, or wireless interference — patterns that sequential probing misses. TCP retransmissions mask loss at the transport layer; QUIC datagrams expose it directly.
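The burst-probe logic can be sketched as a minimal asyncio model. Here `fake_echo` stands in for the real QUIC datagram round trip; the 5% drop rate is an arbitrary simulation parameter, not a measured value:

```python
import asyncio
import random

PROBES = 200    # datagrams fired concurrently, as in the Speed Test
TIMEOUT = 2.0   # seconds before an unanswered probe counts as lost

async def fake_echo(seq: int) -> int:
    """Stand-in for the QUIC datagram echo path; drops ~5% of probes."""
    if random.random() < 0.05:
        await asyncio.sleep(TIMEOUT + 1)                 # never echoes in time
    else:
        await asyncio.sleep(random.uniform(0.01, 0.05))  # plausible RTT
    return seq

async def probe_loss() -> float:
    async def one(seq: int) -> bool:
        try:
            await asyncio.wait_for(fake_echo(seq), timeout=TIMEOUT)
            return True
        except asyncio.TimeoutError:
            return False

    # Fire all probes concurrently: bursty loss patterns that sequential
    # probing would miss show up as clustered timeouts here.
    echoed = await asyncio.gather(*(one(i) for i in range(PROBES)))
    return 100.0 * echoed.count(False) / PROBES
```

The concurrent burst is the point: all 200 probes contend for the path at once, so shallow-buffer drops and policing show up in the loss figure rather than being smoothed out over time.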

Download: Trimmed Steady-State Measurement

QUIC Download: 12 concurrent WebTransport streams download data over a time-bounded window. The first 1.5 seconds are discarded to exclude congestion-control ramp-up. Only the steady-state window is averaged, eliminating slow-start artifacts. Deep Test mode extends the window to 10 seconds (standard: 5 seconds), allowing QUIC to fully saturate the link.
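The trimming logic reduces to a few lines. This sketch assumes throughput samples arrive as (timestamp, bytes) pairs at a fixed tick; the function name and tick interval are illustrative:

```python
def steady_state_mbps(samples, trim_s=1.5, window_s=5.0):
    """Average throughput over the steady-state window only.

    samples: (t_seconds, bytes_received_in_tick) pairs. Everything before
    trim_s is discarded to exclude congestion-control ramp-up, so slow-start
    artifacts never enter the average.
    """
    steady = [b for t, b in samples if trim_s <= t <= window_s]
    if not steady:
        return 0.0
    duration = window_s - trim_s
    return sum(steady) * 8 / duration / 1e6   # bytes -> bits -> Mbps
```

Deep Test mode would simply pass window_s=10.0 for the longer measurement window.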

TCP Download: 6 parallel HTTP/1.1 streams, each on an independent TCP connection. HTTP/1.1 is explicitly negotiated — not HTTP/2. HTTP/2 would coalesce all streams onto a single TCP pipe via multiplexing, meaning one stalled HTTP/2 stream would drag all streams to zero simultaneously (head-of-line blocking). The server sends pseudo-random bytes to defeat ISP compression inflation.

Upload: Server-Measured Steady-State

QUIC Upload: 16 WebTransport streams each send data to the server. The server independently times each stream from first byte to FIN, discards the first 2 seconds, and reports steady-state Mbps. The 16 per-stream values are summed. Chrome serialises writes within a single QUIC stream, so parallel streams are required to fill the shared congestion window. Deep Test mode extends to 12 seconds (standard: 8 seconds).

TCP Upload: 6 parallel XHR streams each POST a 50 MB body over independent HTTP/1.1 connections. XHR is used instead of fetch() because Chrome throws a TypeError when a streaming ReadableStream body with duplex:'half' is sent over HTTP/1.1 — a browser implementation constraint, not a protocol limitation.

Upload formula: Up_Mbps = Σ B(u,i) × 8 / T(ms) / 1000 — where B(u,i) is bytes uploaded by stream i, and T(ms) is test duration in milliseconds (8,000 standard; 12,000 deep). For QUIC, the server measures each of 16 WebTransport streams independently with the first 2s discarded. For TCP, 6 XHR streams are measured client-side via xhr.upload.onprogress.
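As a worked check of the formula, assuming (hypothetically) that each of the 16 streams moved 5 MB during a standard 8,000 ms test:

```python
def upload_mbps(stream_bytes, duration_ms):
    """Up_Mbps = sum(B(u,i)) * 8 / T(ms) / 1000, per the formula above."""
    return sum(stream_bytes) * 8 / duration_ms / 1000

streams = [5_000_000] * 16          # 5 MB per WebTransport stream (illustrative)
print(upload_mbps(streams, 8_000))  # 80.0 Mbps
```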
Metric | QUIC / WebTransport Method | TCP Method
Latency (RTT) | True QUIC datagram echo with timestamps (RFC 9221) | TCP SYN timing
Packet Loss | 200 concurrent datagrams, 2s timeout | N/A — TCP retransmits mask loss
Download | 12 WebTransport streams, 1.5s trim, steady-state avg | 6 HTTP/1.1 streams, independent TCP connections
Upload | 16 WT streams, server-measured, 2s trim | 6 XHR streams, 50 MB POST, client-measured
Deep Test DL / UL | 10s download / 12s upload | 10s download / 12s upload
0-RTT | Disabled — pure 1-RTT baseline measurement | N/A
Anti-inflation | N/A | Pseudo-random bytes defeat ISP compression inflation

Live Path Traceroute and Network Fingerprint

A background traceroute runs four probe methods concurrently from the selected server — ICMP, UDP/53, TCP/80, and UDP/4433 (the live QUIC port) — and streams hops to the UI as each method completes, without waiting for the slowest probe. Running all four in parallel recovers hop data from paths where ICMP or DNS probes are filtered but QUIC-port traffic passes through. Each visible hop is reverse-geolocated via the GeoLite2 database to show city and ISP. A composite stability score classifies the link as suitable for real-time gaming, video conferencing, or general browsing. Percentile comparison against recent real users provides context for what the numbers mean. When multiple server locations are available, the traceroute re-runs automatically after each location switch.

Critical Design Decisions

0-RTT is disabled. Every QUIC session is a full 1-RTT handshake. The RTT displayed is the true baseline cost — not an artificially low number produced by session resumption.

Diagnostic rule: If QUIC throughput measures more than 30–35% below TCP on either direction, the likely cause is ISP protocol differentiation — the ISP is deprioritising UDP traffic relative to TCP. This is increasingly common on mobile networks and some residential broadband providers. Extreme cases (>90% delta on upload) have been observed on Charter/Spectrum residential paths, where upload over QUIC measured 32 Mbps against 465 Mbps over TCP on the same physical link — a 93% deficit strongly indicative of UDP deprioritisation; congestion control and pacing differences may contribute but cannot account for a gap of this magnitude.
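The diagnostic rule can be expressed as a small classifier. The thresholds come from the text above; the function name and return strings are ours:

```python
def classify_quic_deficit(quic_mbps: float, tcp_mbps: float) -> str:
    """Heuristic from the diagnostic rule: large QUIC-vs-TCP deficits on the
    same physical link suggest ISP UDP deprioritisation."""
    if tcp_mbps <= 0:
        return "no TCP baseline"
    deficit = (tcp_mbps - quic_mbps) / tcp_mbps
    if deficit > 0.90:
        return "extreme deficit: strong indication of UDP deprioritisation"
    if deficit > 0.35:
        return "likely ISP protocol differentiation"
    return "within normal variance"
```

The Charter/Spectrum example from the text (32 Mbps QUIC against 465 Mbps TCP) lands squarely in the extreme band: a deficit of roughly 93%.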

Server-Side Infrastructure: Built Into the Proxy

Every server-side endpoint the Speed Test calls is implemented as a native handler inside pqcrypta-proxy — not a separate backend service. The QUIC DATAGRAM echo handler, the WebTransport stream endpoints for download and upload measurement, the parallel HTTP/1.1 TCP handlers, and the ICMP traceroute worker all run within the same Rust process that terminates TLS and speaks HTTP/3.

This architecture is what makes the comparison valid. Both QUIC and TCP measurements hit the same physical server, the same network interface, the same kernel receive buffers, and the same CPU. The only variable is the protocol stack. A QUIC test contacting one server and a TCP test contacting another would introduce uncontrolled hardware and routing variance; co-locating both inside the proxy eliminates that entirely.

It also means the proxy must speak all three simultaneously: HTTP/3 over QUIC (UDP/443) for the WebTransport streams and datagram echo, HTTP/1.1 over TCP (TCP/443) for the parallel download and upload streams, and raw sockets for ICMP traceroute probes. pqcrypta-proxy handles all three in a single tokio async runtime, with each protocol path running as an independent task pool sharing the same connection state.

Why this matters: The Speed Test was not bolted onto an existing service. The measurement requirements — genuine QUIC datagrams, concurrent WebTransport streams, raw ICMP access, and server-timed upload measurement — drove specific implementation decisions in pqcrypta-proxy itself. The proxy exists partly because the Speed Test needed infrastructure that no off-the-shelf reverse proxy provided.

Tool 2: HTTP/3 QUIC & WebTransport Scanner

Built because: Alt-Svc headers lie. They can be injected by CDNs and cached long after the underlying server changes. The Scanner was built to verify what the transport layer actually does.

The HTTP/3 Scanner is a server-side native QUIC probe. It determines whether a target server actually supports HTTP/3, QUIC, and WebTransport — not inferred from HTTP response headers, but verified through actual QUIC transport-layer connections using the quinn and h3 Rust libraries.

How It Works

The Scanner sends real QUIC Initial packets to the target server and attempts a full QUIC handshake. If the handshake succeeds, it negotiates HTTP/3 via ALPN and opens HTTP/3 streams. If it fails, it records exactly why — timeout, TLS alert, version negotiation failure, connection refused. This is fundamentally different from checking whether an HTTP/2 response includes an Alt-Svc: h3=":443" header. The header is a promise. The Scanner verifies whether the promise is kept.

What It Extracts

  • QUIC transport parameters: MTU, idle timeout, datagram support, congestion window, max streams, initial flow control limits
  • Connection metrics: handshake completion time, TTFB, RTT as observed by the QUIC stack, packets sent and lost during the probe
  • TLS extension analysis: ALPN protocols offered and selected, key share groups, ECH (Encrypted Client Hello) support detection
  • Server implementation fingerprinting: identifies Cloudflare (quiche), Google GFE, Facebook (mvfst), Fastly (H2O), LiteSpeed (LSQUIC) via QUIC transport parameter signature analysis
  • Alt-Svc analysis and 0-RTT replay attack risk assessment
  • WebTransport capability testing with security validation on ports 443, 4433, or custom
  • JA3/JA4 TLS fingerprinting with 3-tier confidence scoring and ML-enhanced recommendation engine

Five-Tier Grading System

Grade | Condition | Meaning
A++ | HTTP/3 + QUIC + 0-RTT disabled + WebTransport | Maximum security and features. Native QUIC with replay protection and bidirectional streaming.
A+ | HTTP/3 + QUIC, 0-RTT disabled | Excellent. Core QUIC transport is genuine, 0-RTT disabled for maximum security.
A | HTTP/3 + QUIC, 0-RTT enabled | Good. Genuine QUIC, but 0-RTT enabled introduces replay attack risk.
C | HTTP/3 claimed but not reachable at QUIC layer | Misconfigured. Server claims HTTP/3 via headers but the QUIC handshake fails at the transport layer. The most diagnostic result: it exposes CDN masking.
F | No HTTP/3 support detected | Legacy HTTP/2 or HTTP/1.1 only.

The “C” grade is the most revealing result. A typical scenario: the CDN injects an Alt-Svc: h3=":443" header, but the origin server's firewall blocks UDP port 443. The browser attempts a QUIC connection, times out, falls back to TCP, and the user never knows. The Scanner reports the failure directly.
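The five-tier decision reduces to a short function. This is a sketch of the table's logic, not the Scanner's actual code, and the argument names are hypothetical:

```python
def grade_http3(claims_h3: bool, quic_ok: bool,
                zero_rtt_disabled: bool, webtransport: bool) -> str:
    """Five-tier grade from probe results: quic_ok means the real QUIC
    handshake succeeded; claims_h3 means an Alt-Svc h3 header was seen."""
    if quic_ok:
        if zero_rtt_disabled and webtransport:
            return "A++"
        if zero_rtt_disabled:
            return "A+"
        return "A"          # genuine QUIC, but 0-RTT replay risk remains
    return "C" if claims_h3 else "F"   # C = the promise was not kept
```

Note that the "C" branch only fires when the header promise and the transport reality disagree, which is exactly the CDN-masking scenario described above.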

Tool 3: PQC Readiness Scanner

Built because: Building the pqcrypta-proxy with hybrid PQC TLS required external verification that PQC key exchange was actually being negotiated end-to-end — not just configured at the proxy and then stripped or ignored by intermediaries.

The PQC Readiness Scanner performs real-time TLS handshake analysis to detect whether a server supports NIST-standardized post-quantum cryptography algorithms. It is separate from the HTTP/3 Scanner — the HTTP/3 Scanner probes QUIC transport capabilities; the PQC Scanner probes cryptographic algorithm support at the TLS layer. A server can score “A++” on the HTTP/3 Scanner and “F” on the PQC Scanner, or vice versa. They measure different things.

How It Works

The Scanner performs an actual TLS 1.3 handshake with the target server and inspects the negotiated algorithms. It offers PQC key share groups in its ClientHello and observes whether the server selects them. If the server negotiates X25519MLKEM768, the Scanner detects ML-KEM support. If the server falls back to X25519, it records the absence of PQC. This is not a header check, certificate analysis, or DNS lookup — it is a real cryptographic exchange.
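The detection step amounts to checking which key share group the server selects. A sketch, with the hybrid group names taken from this section (the function shape is ours):

```python
HYBRID_PQC_GROUPS = {
    "X25519MLKEM768", "SecP256r1MLKEM768",
    "SecP384r1MLKEM1024", "X448MLKEM1024",
}

def pqc_grade(negotiated_group: str, tls13_only: bool) -> str:
    """Three-tier grade from the negotiated TLS key share group."""
    if negotiated_group in HYBRID_PQC_GROUPS:
        # PQC negotiated; TLS 1.2 fallback still allows downgrade attacks
        return "A+" if tls13_only else "A"
    return "F"   # classical only: exposed to harvest-now-decrypt-later
```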

What It Detects

  • ML-KEM-1024 (Kyber) and ML-DSA-87 (Dilithium) algorithm support — NIST FIPS 203/204 standardised algorithms
  • Hybrid mode compatibility: X25519MLKEM768, SecP256r1MLKEM768, SecP384r1MLKEM1024, X448MLKEM1024
  • TLS version support: TLS 1.3 only versus TLS 1.2 fallback — critical for downgrade attack exposure assessment
  • Quantum resistance assessment with detailed remediation recommendations tailored to the server’s current configuration

Three-Tier Grading System

Grade | Condition | Security Posture
A+ | PQC ready, TLS 1.3 only | Highest achievable posture. Fully protected against quantum threats and protocol downgrade attacks. No TLS 1.2 fallback exposed.
A | PQC ready, older TLS versions still supported | PQC is negotiated, but TLS 1.2 remains as a fallback. An active attacker can force a downgrade, bypassing PQC entirely.
F | No PQC detected | Classical cryptography only. Vulnerable to harvest-now-decrypt-later attacks.

Real-World Deployment Results (March 2026)

  • 8 A+ sites: pqcrypta.com, pqpdf.com, …
  • 177 A sites: Google, Cloudflare, Baidu, …
  • 357 F sites: fbi.gov, cia.gov, khanacademy, …

Fewer than 5% of scanned sites achieve the highest security posture. Government and intelligence agency sites — the organisations that should be leading PQC migration per NSA CNSA 2.0 mandates — overwhelmingly score “F.” Announcements are not deployments.

Tool 4: pqcrypta-proxy

The pqcrypta-proxy is an open-source, production-ready Rust reverse proxy. It is the infrastructure layer of the PQ Crypta ecosystem — the implementation that “walks the walk” after the diagnostic tools expose gaps. It is not a diagnostic tool; it is the fix.

Core Architecture

Built on the tokio asynchronous runtime, using the quinn crate for QUIC transport and h3 for HTTP/3 framing. Supports HTTP/1.1, HTTP/2, HTTP/3 (QUIC), and native WebTransport bidirectional streaming on a unified UDP listener. Domain-based routing maps incoming requests to backend services. Zero unsafe code in application logic.

Three TLS Modes

  • Terminate — Decrypt at the proxy, forward plain HTTP to the backend. Standard reverse proxy behaviour.
  • Re-encrypt — Decrypt at the proxy, re-encrypt with HTTPS to the backend. Supports mutual TLS (mTLS) for zero-trust architectures.
  • Passthrough — SNI-based routing without decryption. The proxy reads the SNI from the TLS ClientHello and forwards the encrypted stream untouched. Zero visibility, zero overhead, zero certificate management at the proxy layer.

Post-Quantum TLS: X25519MLKEM768

The proxy implements hybrid PQC key exchange via OpenSSL 3.5+ with native ML-KEM support. The default algorithm is X25519MLKEM768 — the IETF hybrid combining classical X25519 with ML-KEM-768, providing NIST Level 3 security (192-bit equivalent). Algorithm negotiation is automatic: clients that support PQC negotiate PQC; clients without PQC support fall back to classical X25519 without service interruption. This is not bolted on — the proxy was designed PQC-first.

Security Features

  • JA3/JA4 TLS fingerprinting: Extracts ClientHello signatures during handshake, enabling client identification even behind NAT and corporate proxies.
  • Advanced multi-dimensional rate limiting: Composite keys combining IP address, JA3/JA4 fingerprint, URL path, JWT subject extraction, and X-Forwarded-For trust chains. IPv6 /64 subnet grouping prevents per-host evasion.
  • Adaptive baseline anomaly detection: ML-inspired model that learns normal traffic patterns and flags statistical deviations with configurable standard deviation threshold.
  • Circuit breaker: Backend health monitoring with automatic recovery. Configurable failure and success thresholds prevent cascading failures.
  • WAF: Injection pattern detection covering SQLi, XSS, path traversal, SSRF, command injection (CMDi), and XXE.
  • GeoIP blocking via MaxMind GeoLite2 for country-level access control.
  • Security headers: HSTS, X-Frame-Options, CSP, COEP, COOP, CORP — configurable per domain.

Load Balancing

Six algorithms: least_connections (default), round_robin, weighted_round_robin, random, ip_hash, and least_response_time (exponential moving average tracking). Session affinity via cookie, IP hash, or custom header. Health-aware routing bypasses unhealthy backends. Slow start prevents overwhelming a recovering backend. Connection draining enables graceful maintenance.
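Two of the six algorithms can be sketched compactly: least_connections picks the backend with the fewest in-flight connections, while least_response_time keeps an exponential moving average of observed response times. The class shape and alpha value are illustrative, not the proxy's actual implementation:

```python
class Backend:
    def __init__(self, name: str):
        self.name = name
        self.active = 0        # in-flight connections
        self.ema_ms = None     # exponential moving average of response time

    def record(self, rt_ms: float, alpha: float = 0.2) -> None:
        """Fold one observed response time into the EMA."""
        self.ema_ms = rt_ms if self.ema_ms is None else (
            alpha * rt_ms + (1 - alpha) * self.ema_ms)

def least_connections(backends):
    return min(backends, key=lambda b: b.active)

def least_response_time(backends):
    # Backends with no samples yet sort first, so new backends get traffic
    return min(backends, key=lambda b: b.ema_ms or 0.0)
```

Health-aware routing would simply filter the candidate list before either selector runs.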

HTTP/3 Advanced Features

  • Early Hints (103): Preload CSS, JavaScript, and fonts via Link headers before the main response is ready.
  • Priority Hints: RFC 9218 Extensible Priorities for HTTP/3 resource scheduling.
  • Request coalescing: Deduplicates identical in-flight GET/HEAD requests.
  • Alt-Svc advertisement: Automatic HTTP/3 upgrade headers on all ports.
  • PROXY Protocol v2 for client IP preservation behind other load balancers.

Operations

Hot reload of configuration and TLS certificates without restart — no connection drops. ACME automation for Let’s Encrypt provisioning and renewal via HTTP-01 or DNS-01 challenges. OCSP stapling with caching. Prometheus metrics covering TLS handshake durations, connection counts, request rates, backend latencies, error rates, and PQC algorithm negotiation statistics. Admin API for health checks, configuration reload, and graceful shutdown. Runs on Linux, macOS, and Windows.

Tool 5: PQ Crypta Streaming Encryption Tester

Built because: Verifying that a QUIC connection exists and a TLS handshake negotiates PQC does not answer the harder question: can WebTransport carry real application cryptography at production latency? The Streaming Encryption Tester was built to answer that directly — by running key generation, encryption, and decryption of 31 post-quantum algorithms over live QUIC bidirectional streams and measuring the results in the browser.

The other five tools diagnose the transport layer and infrastructure. The Streaming Encryption Tester goes one layer up: it uses WebTransport as the delivery channel for real cryptographic operations and measures how each of PQ Crypta’s 31 supported algorithms performs over that channel. Latency, throughput, compression ratio, and per-operation timing are reported live as the operations execute.

Protocol Architecture: Bidirectional QUIC Streams per Operation

Every cryptographic operation — key generation, encryption, decryption, or ping — opens a dedicated bidirectional QUIC stream to the server. JSON is written to the send side; the server reads, processes, and writes the JSON response to the same stream’s recv side, then closes it. Concurrent operations open concurrent streams over a single shared QUIC connection. This is not HTTP request–response over an existing session — it is multiplexed stream-per-operation with no head-of-line blocking between operations.
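The concurrency consequence of stream-per-operation can be modelled in a few lines of asyncio. `run_op` stands in for one bidirectional QUIC stream (serialise JSON, await the server, parse the reply); the field names are hypothetical, since the real wire schema is internal to PQ Crypta:

```python
import asyncio
import json
import time

async def run_op(request: dict, server_ms: float) -> dict:
    """Stand-in for one bidirectional QUIC stream per operation."""
    wire = json.dumps(request).encode()      # write side of the stream
    await asyncio.sleep(server_ms / 1000)    # simulated server processing
    reply = json.dumps({"op": request["op"], "ok": True, "bytes": len(wire)})
    return json.loads(reply)                 # recv side of the stream

async def main():
    # Three operations on three independent streams: the slow keygen does
    # not delay the fast pings (no head-of-line blocking between ops).
    t0 = time.monotonic()
    results = await asyncio.gather(
        run_op({"op": "generate_keys", "algorithm": "ML-KEM-1024"}, 300),
        run_op({"op": "ping"}, 10),
        run_op({"op": "ping"}, 10),
    )
    return results, time.monotonic() - t0
```

Over HTTP/2 on one TCP connection, a transport-level stall in the keygen response would delay the pings too; with independent QUIC streams the total time tracks the slowest single operation.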

The server-side handler (quinn_webtransport.rs) accepts QUIC bidirectional streams inside an established WebTransport session, dispatching to the same Rust crypto engine that backs the main API — encrypt_handler(), decrypt_handler(), and generate_keys_handler() — and returns per-operation timing, throughput (Mbps), and compression ratio alongside the cryptographic result.

Connection Establishment

The client connects to https://api.pqcrypta.com:4433/webtransport. The server advertises SETTINGS_ENABLE_CONNECT_PROTOCOL = 1, SETTINGS_H3_DATAGRAM = 1, and SETTINGS_WEBTRANSPORT_MAX_SESSIONS = 100 in its HTTP/3 SETTINGS frame. The client sends an HTTP/3 CONNECT request with :protocol=webtransport; the server responds 200, and the WebTransport session is live. The QUIC connection is persistent: subsequent key generation, encryption, and decryption operations reuse it without a new handshake. Automatic reconnection (2 s delay) handles unexpected disconnects.

Aspect | WebTransport (Streaming Tester) | Standard REST (HTTP/2)
Protocol | HTTP/3 + QUIC bidirectional streams | HTTP/2 over TCP
Connection | Single persistent QUIC session | New TCP connection per session (keep-alive pool)
Per-operation overhead | Open QUIC stream (<1 ms) | HTTP request setup (∼5–20 ms)
Head-of-line blocking | None — each operation on independent stream | HTTP/2 streams share one TCP flow
Concurrent operations | Native (multiple QUIC streams in parallel) | Requires multiple HTTP/2 requests
Metrics returned | Throughput (Mbps), processing_time_ms, compression_ratio | HTTP response body only
Crypto engine | Same Rust handlers as main API | Same Rust handlers as main API

What the Tester Measures

  • Key generation latency: Time to generate a key pair for each of 31 PQC algorithms over a QUIC stream — exposes the real cost of lattice, code-based, and hash-based key generation at the wire level, not in a local benchmark.
  • Encryption throughput (Mbps): Server-reported bytes-per-millisecond for each encrypt operation, normalised to Mbps. Reveals per-algorithm overhead under real QUIC transport conditions.
  • Decryption throughput and integrity: Decryption result includes an integrity verification field (data_matches_original) confirming the round-trip is cryptographically correct — not just that data was returned.
  • ML compression ratio: Adaptive ZSTD/LZ4/Brotli compression is applied before encryption when enabled. The server reports the compression algorithm selected and ratio achieved, making the cost/benefit trade-off directly observable.
  • QUIC connection latency: Ping operation sends a timestamped datagram and measures QUIC round-trip latency independently of crypto processing time, separating network cost from computation cost.
  • Session statistics: Cumulative bytes sent/received, request count, average operation latency, and error count over the persistent session.

31 Algorithms Over the Wire

The Streaming Tester supports all 31 PQC algorithms available in PQ Crypta’s main encryption platform — from classical (X25519 + AES-256-GCM) through hybrid (ML-KEM-1024 + ML-DSA-87), the HQC code-based series (NIST 2025 standardisation), FN-DSA variants, Max Secure series, and experimental multi-algorithm stacks. Each can be tested individually over a WebTransport stream, allowing direct comparison of key sizes, ciphertext sizes, and operation latency at the protocol level rather than in a synthetic benchmark.

This matters for transport design. An ML-KEM-1024 public key (1,568 B) fits within one or two QUIC packets, depending on the path MTU. SLH-DSA-SHA2-256s signatures (29,792 B) must be fragmented across dozens of QUIC packets. The Streaming Tester makes this observable: the difference in operation latency between a compact algorithm and a large-signature algorithm is the direct cost of QUIC fragmentation and reassembly under real network conditions.
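The packet count is simple arithmetic. Assuming a conservative 1,200-byte per-packet payload budget (the pre-path-MTU-discovery QUIC floor; real budgets vary by path):

```python
import math

def quic_packets(size_bytes: int, max_payload: int = 1200) -> int:
    """Packets needed to carry a value at a given per-packet payload budget."""
    return math.ceil(size_bytes / max_payload)

print(quic_packets(29_792))   # SLH-DSA-SHA2-256s signature: 25 packets
```

Every additional packet is another unit that can be lost and retransmitted independently, which is why large-signature algorithms show disproportionate latency under lossy conditions.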

Browser requirement: The Streaming Tester requires a Chromium-based browser (Chrome 97+ or Edge 97+). Safari does not implement the WebTransport API, and Firefox's implementation is not complete enough for the Tester. This is a browser limitation, not a server limitation — the server endpoint is available to any WebTransport-capable client, including native Rust/Python/Go implementations.

Why This Completes the Tool Suite

The Speed Test answers: does my network transport QUIC datagrams efficiently? The HTTP/3 Scanner answers: does the server actually speak QUIC? The PQC Scanner answers: does the server negotiate post-quantum TLS? The pqcrypta-proxy provides the infrastructure that makes all three possible. The Streaming Tester answers: can WebTransport carry real post-quantum cryptographic workloads at acceptable latency, and which algorithms perform best over the wire? The Telemetry Wall answers the remaining question: does QUIC stream isolation actually hold under real impairment conditions?

Together, the six tools cover the complete stack: transport performance, server implementation quality, handshake cryptography, infrastructure deployment, application-layer cryptographic throughput, and live stream isolation under real impairment conditions.

Tool 6: WebTransport Telemetry Wall

Built because: Stream isolation is QUIC’s defining advantage over TCP, but no existing tool made it directly observable. Numbers alone — throughput, RTT, loss rate — do not show why the other streams are unaffected when one degrades. The Telemetry Wall was built to make that isolation tangible: inject a real impairment, watch the target stream react, confirm the others do not.

The WebTransport Telemetry Wall opens six independent QUIC unidirectional streams from the server simultaneously, each running in its own tokio task at 20 Hz sending 32 KB frames — approximately 5 Mbps of virtual throughput per channel. A bidirectional control stream accepts JSON impairment commands. A 60 Hz datagram echo loop drives the RTT oscilloscope and jitter heatmap. A dedicated stats stream delivers server-side QUIC congestion window, CPU, and memory readings at 10 Hz. Everything runs over a single WebTransport session to api.pqcrypta.com:4433 or api2.pqcrypta.com:4433, auto-selected by racing both QUIC handshakes and taking the faster response.

Stream Architecture: Six Independent tokio Tasks

Each throughput channel is a QUIC unidirectional stream opened by a dedicated tokio async task on the server. The tasks share no state. An impairment applied to CH1 — delay, packet loss, bandwidth cap, jitter, or disconnect — modifies only that task’s write loop. The other five tasks continue writing frames at full rate, completely unaware of CH1’s condition. This is not emulated: it is the direct consequence of QUIC’s per-stream sequence spaces and independent flow control.

Stream / Channel | Protocol | Rate | Purpose
CH1–CH6 | QUIC unidirectional (server → client) | 20 Hz × 32 KB | Throughput channels; impairment targets
Control stream | QUIC bidirectional | On demand | Accepts impair/heal JSON commands
Stats stream | QUIC unidirectional (server → client) | 10 Hz | CWND, CPU%, MEM%, server RTT
Datagram loop | QUIC unreliable datagrams | 60 Hz | RTT oscilloscope and jitter heatmap

Impairment Types and Patterns

Five impairment types are available, each implemented server-side inside the target stream’s tokio task:

  • Delay (0–2000 ms): tokio::time::sleep before each frame write. Frames arrive late; the stream stalls visibly while adjacent channels run at full speed.
  • Loss (0–80%): Frames are randomly skipped. A dropped:true marker is sent so the client can visualise loss without breaking stream framing.
  • Bandwidth cap (100–5000 Kbps): Token-bucket rate limiter holds frames until the budget refills. Simulates a constrained link while leaving all other streams at full rate.
  • Jitter (0–500 ms): Random per-frame sleep drawn uniformly from [0, intensity]. RTT oscilloscope and jitter heatmap react immediately.
  • Disconnect: Breaks the stream’s write loop. The server auto-heals after 600 ms by reopening the stream. All other channels are unaffected.
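The bandwidth-cap impairment above is a token-bucket limiter. A minimal Python sketch of the mechanism (the server's actual implementation lives in Rust inside the stream's tokio task; the names and refill granularity here are illustrative):

```python
import time

class TokenBucket:
    """Byte-budget limiter: a frame waits until enough tokens have refilled."""

    def __init__(self, rate_kbps: float, burst_bytes: int):
        self.rate = rate_kbps * 1000 / 8        # budget refill, bytes per second
        self.capacity = burst_bytes             # maximum saved-up budget
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def acquire(self, nbytes: int) -> float:
        """Block until nbytes of budget are available; return seconds waited."""
        waited = 0.0
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return waited
            deficit = (nbytes - self.tokens) / self.rate
            time.sleep(deficit)
            waited += deficit

# A 500 Kbps cap refills 62,500 bytes/s, so each 32 KB frame waits ~0.5 s.
bucket = TokenBucket(rate_kbps=500, burst_bytes=32 * 1024)
```

Because only the capped channel's write loop calls `acquire`, the other five streams never see the delay; that locality is the whole point of the per-task design.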

Seven temporal patterns modulate intensity over time: fixed, random, sine, square, burst, ramp, and cascade — allowing waveform-shaped impairments that drive visible patterns in the sparklines and oscilloscope.
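A sketch of how such temporal patterns can modulate impairment intensity over a cycle. The Telemetry Wall's exact waveforms are not published; these envelopes are assumptions for illustration:

```python
import math
import random

def intensity(pattern: str, base: float, t: float, period: float = 4.0) -> float:
    """Illustrative intensity envelope for one impairment pattern at time t."""
    phase = (t % period) / period               # position within the cycle, [0, 1)
    if pattern == "fixed":
        return base
    if pattern == "random":
        return base * random.random()
    if pattern == "sine":
        return base * 0.5 * (1 + math.sin(2 * math.pi * phase))
    if pattern == "square":
        return base if phase < 0.5 else 0.0
    if pattern == "burst":
        return base if phase < 0.1 else 0.0     # short spike, long quiet
    if pattern == "ramp":
        return base * phase                     # climb, then reset
    if pattern == "cascade":
        return base * (int(phase * 4) + 1) / 4  # staircase through the cycle
    raise ValueError(f"unknown pattern: {pattern}")
```

Sampling `intensity` once per frame and feeding the result into the delay, loss, or jitter knob is enough to produce the waveform-shaped traces visible in the sparklines.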

What the Panel Grid Shows

  • RTT Oscilloscope: Each point is a datagram echo round-trip at 60 Hz. A delay impairment shifts the baseline upward; jitter spreads the trace vertically. Healthy streams hold a flat line alongside the impaired one.
  • Jitter Heatmap: Standard deviation (σ) of RTT over a 10-sample sliding window, rendered as a column heatmap. Low jitter = cyan; high jitter = amber.
  • Loss Radar: Polar chart of datagram loss rate over 60 measurements. Green <1%, amber 1–5%, red >5%.
  • Stream Throughput: Six sparklines (one per channel) showing frame rate and instantaneous Mbps. An impaired channel’s sparkline collapses or jitters while the other five remain flat at peak rate.
  • CWND + Server Stats: Three-lane canvas chart showing QUIC congestion window bytes (cyan area), CPU% (amber line), and MEM% (purple line), each auto-scaled to its own range so small fluctuations are visible.
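The jitter heatmap's σ is a sliding-window population standard deviation; a minimal reference implementation of that computation:

```python
import statistics
from collections import deque

def jitter_series(rtts_ms, window: int = 10):
    """Sliding-window population std-dev of RTT, one sigma per new sample."""
    buf = deque(maxlen=window)
    out = []
    for rtt in rtts_ms:
        buf.append(rtt)
        out.append(statistics.pstdev(buf) if len(buf) > 1 else 0.0)
    return out

# A stable stream, then a jitter impairment kicks in.
trace = [20.0] * 10 + [20.0, 45.0, 22.0, 60.0, 25.0]
print([round(s, 1) for s in jitter_series(trace)])
```

On the trace above, σ stays at zero through the stable prefix and jumps as soon as the impaired samples enter the window, which is exactly the cyan-to-amber transition the heatmap renders.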

Why This Matters: TCP Cannot Do This

Under TCP, a single delayed or lost segment stalls every byte behind it in the same connection — head-of-line blocking. WebSocket multiplexing layers on top of a TCP stream and inherits this property. QUIC gives each stream its own retransmission and flow-control state, so a 500 ms delay on CH1 has no effect on the sequence numbers, acknowledgement windows, or delivery timing of CH2–CH6. The Telemetry Wall makes this difference directly visible in a browser, with no install, no account, and no synthetic conditions — the impairments are applied to a live connection to a production server.

Browser requirement: The Telemetry Wall requires Chrome 97+ or Edge 97+ for WebTransport support. Firefox and Safari do not implement the WebTransport API as of 2026.

Post-Quantum Cryptography in Transport: Research Context

The integration of post-quantum cryptography into transport protocols is happening now. This section situates the academic research against what PQ Crypta’s tools measure and enforce. All cited papers have been verified against their published venues. Where academic findings intersect directly with what the Streaming Encryption Tester measures over live QUIC streams, those connections are noted explicitly.

QUIC’s Cryptographic Architecture Advantage

QUIC’s mandatory TLS 1.3 makes PQC integration architecturally cleaner than TCP’s bolt-on TLS. In QUIC, TLS 1.3 is not optional — every connection uses it, fused with the transport handshake. When you replace X25519 with X25519MLKEM768, you change one configuration parameter; the handshake structure, packet format, and loss recovery all remain unchanged. In TCP, the TLS library, the TCP stack’s interaction with it, and the application configuration all require coordination.

Cryptographic Overhead: Verified Research Findings

Kempf et al. [2] (IFIP Networking 2024)

Quantified QUIC’s per-packet cryptographic overhead by experimentally removing AEAD packet protection. Result: throughput increased 10–20% without encryption. Header protection overhead was negligible for AES-based ciphers (AES-128-GCM, AES-256-GCM) thanks to AES-NI acceleration. This quantifies the cost of QUIC’s always-on encryption design: a 10–20% throughput tax on every packet.

Raavi et al. [3] (ISC 2022)

Demonstrated that Dilithium 2 and Falcon 512 handshakes are faster than RSA 3072 in both QUIC and TCP/TLS configurations. The common assumption that post-quantum is slower is wrong for many key exchange and signature operations — lattice-based operations on modern CPUs are competitive with or faster than RSA at equivalent security levels.

Zheng et al. [4] (ESORICS 2024)

Showed ML-KEM with AVX-512 achieves a 1.64× speedup over AVX2 implementations, with batch key generation 3.5–4.9× faster. Hardware acceleration makes PQC overhead negligible on modern server CPUs with AVX-512 support, which includes current Intel Xeon and AMD EPYC processors.

Gómez-Cambronero et al. [5] (arXiv 2025)

Conducted layered performance analysis of TLS 1.3 handshakes across classical, hybrid, and pure ML-KEM configurations. Key finding: all configurations showed Glass’s Delta below 0.33, indicating no practically meaningful performance difference at the TLS handshake layer. Measurable overhead, when present, appears at the network layer due to larger key shares requiring more packets.

Performance Formulas

The following formulas are the theoretical foundations behind what the Speed Test measures empirically. They are standard transport-layer performance models with notation aligned to PQ Crypta’s measurement methodology.

Bandwidth-Delay Product

BDP = Bandwidth × RTT

The BDP determines optimal buffer and congestion window size. The Speed Test’s steady-state trimming (discarding the first 1.5–2 seconds) exists to let the congestion window grow to BDP before measurement begins.
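As a worked example of the formula, with the Mbps and ms inputs converted to bytes:

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    # Mbps -> bytes/s is x1e6/8; ms -> s is /1000; combined divisor 8000.
    return bandwidth_mbps * 1e6 * rtt_ms / 8000.0

# A 100 Mbps link at 40 ms RTT needs a 500 KB window to stay full.
print(bdp_bytes(100, 40))  # 500000.0
```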

TCP Throughput — Mathis Formula

Throughput ≈ (MSS / RTT) × (C / √p)     [C ≈ 1.22, p = packet loss rate]

TCP throughput degrades with the square root of loss rate. At 1% loss, throughput drops to roughly 12% of theoretical maximum. The Speed Test’s 200-datagram loss probing quantifies the “p” in this formula for the user’s actual connection.
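A worked example, assuming a 1,460-byte MSS on a 25 ms path:

```python
import math

def mathis_mbps(mss_bytes: float, rtt_s: float, loss_rate: float,
                c: float = 1.22) -> float:
    """Mathis et al. steady-state TCP throughput bound, in Mbps."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate)) / 1e6

# Quartering the loss rate doubles the achievable throughput.
print(mathis_mbps(1460, 0.025, 0.01))    # ~5.7 Mbps at 1% loss
print(mathis_mbps(1460, 0.025, 0.0025))  # ~11.4 Mbps at 0.25% loss
```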

QUIC Effective Goodput

Goodput(QUIC) = (Payload_bytes / Total_bytes) × Link_rate × (1 − loss_rate)

Total_bytes includes: QUIC header (1–20 B) + AEAD tag (16 B) + UDP header (8 B) + IP header (20–40 B). On small packets (voice, gaming), the AEAD overhead is proportionally large. On large packets (bulk transfer), it is negligible — which is why the Speed Test’s bulk measurements operate in the large-packet regime.
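The payload fraction can be computed directly. A 12-byte short header is assumed here (an illustrative point within the 1–20 B range above), with IPv4:

```python
def wire_efficiency(payload: int, quic_hdr: int = 12, aead_tag: int = 16,
                    udp_hdr: int = 8, ip_hdr: int = 20) -> float:
    """Payload fraction of total on-wire bytes for a single QUIC/IPv4 packet."""
    total = payload + quic_hdr + aead_tag + udp_hdr + ip_hdr
    return payload / total

# Fixed 56 B of overhead dominates small packets, vanishes on large ones.
print(f"{wire_efficiency(120):.1%}")   # 68.2% for a voice-sized frame
print(f"{wire_efficiency(1200):.1%}")  # 95.5% for a bulk-transfer frame
```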

PQC Handshake Overhead

Total_handshake = RTT_base + T_keygen + T_encaps + T_sign + T_verify + Σ(fragmentation_delay)

For X25519MLKEM768 hybrid: T_keygen ≈ 0.02 ms, T_encaps ≈ 0.03 ms (combined classical + PQ). Total PQC overhead: ∼1–4 ms — negligible on most links. For SLH-DSA-256s: T_sign ≈ 50 ms, plus multi-packet fragmentation for the 29,792-byte signature. FALCON-512’s 666-byte signature fits in a single packet and avoids fragmentation entirely.
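The fragmentation term is driven by how many packets the handshake artefact spans. Assuming ~1,200 B of usable QUIC payload per packet (a common conservative figure, not a measured value):

```python
import math

def handshake_packets(artefact_bytes: int, max_payload: int = 1200) -> int:
    """QUIC packets needed to carry one handshake artefact, e.g. a signature."""
    return math.ceil(artefact_bytes / max_payload)

# FALCON-512 fits a single packet; SLH-DSA-SHA2-256s fragments across ~25.
print(handshake_packets(666))     # 1
print(handshake_packets(29_792))  # 25
```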

BBR Congestion Control

target_inflight = max_bw × min_rtt    pacing_rate = pacing_gain × max_bw    cwnd = cwnd_gain × BDP

BBR probes bandwidth and RTT independently, unlike CUBIC which relies on loss signals to infer congestion. A lost packet on QUIC stream 5 does not trigger a congestion response that throttles streams 1–4. BBR’s model works naturally with QUIC’s multiplexed stream architecture.
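The three BBR quantities follow from the two measured inputs. The gain values below are BBRv1-style (ProbeBW pacing gain 1.25, cwnd gain 2) and are phase-dependent in the real algorithm; treat them as illustrative:

```python
def bbr_targets(max_bw_mbps: float, min_rtt_ms: float,
                pacing_gain: float = 1.25, cwnd_gain: float = 2.0) -> dict:
    """BBR operating points derived from its two measured quantities."""
    # target_inflight = max_bw x min_rtt (the BDP), expressed in bytes.
    bdp_bytes = max_bw_mbps * 1e6 * min_rtt_ms / 8000.0
    return {
        "target_inflight_bytes": bdp_bytes,
        "pacing_rate_mbps": pacing_gain * max_bw_mbps,
        "cwnd_bytes": cwnd_gain * bdp_bytes,
    }

# 100 Mbps bottleneck, 40 ms min RTT.
print(bbr_targets(100, 40))
```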

Algorithm | Type | Public Key | Sig/Ciphertext | QUIC Handshake Impact | NIST Level
X25519 (classical) | KEM | 32 B | 32 B | Baseline | N/A
ML-KEM-768 (Kyber) | KEM | 1,184 B | 1,088 B | Low (+1–3 ms) | 3
ML-KEM-1024 | KEM | 1,568 B | 1,568 B | Low-moderate (+2–5 ms) | 5
X25519MLKEM768 (hybrid) | Hybrid KEM | 1,216 B | 1,120 B | Low (+1–4 ms) | 3
ML-DSA-65 (Dilithium) | Signature | 1,952 B | 3,309 B | Moderate | 3
ML-DSA-87 | Signature | 2,592 B | 4,627 B | Moderate-high | 5
FALCON-512 | Signature | 897 B | 666 B | Low (fits one packet) | 1
SLH-DSA-SHA2-256s | Signature | 64 B | 29,792 B | High — multi-packet fragmentation | 5

What the Measurements Actually Show

Handshake Latency: QUIC Wins

  • 2 RTT: TCP + TLS 1.3 connection establishment
  • 1 RTT: QUIC combined handshake
  • ~1 RTT: QUIC advantage (scales with base RTT)
  • 1–4 ms: PQC overhead (X25519MLKEM768)

TCP + TLS 1.3 requires a minimum of 2 RTT for connection establishment: one RTT for the TCP three-way handshake, one for the TLS handshake. QUIC 1-RTT combines transport and cryptographic handshakes into a single round trip, saving one full base-RTT on every cold connection. The Speed Test consistently shows lower observed application-level latency for QUIC than for TCP. Two factors contribute: (1) the QUIC measurement uses a bare DATAGRAM echo — a 64-byte UDP round trip with minimal protocol overhead; (2) the TCP measurement uses an HTTP fetch over an established keep-alive connection, which adds HTTP framing overhead and is subject to TCP delayed-ACK effects. The observed advantage reflects both the QUIC handshake saving on cold connections and application-level overhead differences on warm connections. Observed range across tested residential and cloud paths: 5–35 ms.

With PQC (X25519MLKEM768), the handshake adds 1–4 ms of computational overhead. On a 25 ms RTT link this represents a 4–16% increase — measurable in a lab, unnoticeable to a user. On a 100 ms RTT intercontinental link, PQC overhead is 1–4% — noise.
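The percentages above follow directly from the ratio of compute overhead to round-trip time:

```python
def pqc_overhead_pct(pqc_ms: float, rtt_ms: float) -> float:
    """PQC handshake compute cost as a percentage of one round trip."""
    return 100.0 * pqc_ms / rtt_ms

# X25519MLKEM768 adds 1-4 ms of compute; compare against typical link RTTs.
for rtt in (25, 100):
    lo, hi = pqc_overhead_pct(1, rtt), pqc_overhead_pct(4, rtt)
    print(f"{rtt} ms RTT: +{lo:.0f}% to +{hi:.0f}%")
```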

Throughput: TCP Wins on Clean Links

TCP with kernel offloading approaches line rate on clean, low-loss links. Six parallel HTTP/1.1 streams with TSO/GSO doing the heavy lifting can saturate most residential connections. QUIC throughput is bounded by userspace packet processing and per-packet crypto overhead.

Key insight: TCP runs in kernel space, QUIC runs in the browser. Every QUIC packet cycles through the browser’s JavaScript event loop, the WebTransport API, Chromium’s internal QUIC implementation, userspace AEAD encryption, and then a sendmsg() syscall to the kernel’s UDP socket. TCP skips all of this. QUIC’s advantages are not throughput on clean links — they are latency, stream multiplexing without HOL blocking, and connection migration for mobile clients.

The HTTP/3 Deployment Gap

Many CDN-fronted sites report HTTP/3 support via Alt-Svc headers in their HTTP/2 responses. Native QUIC probing with quinn/h3 reveals the reality: the CDN edge terminates QUIC, converts to TCP, and proxies to a TCP-only origin. The server “supports” HTTP/3 in the same way a person who hires a translator “speaks” Mandarin.

PQC Deployment Reality

The PQC Scanner’s data shows fewer than 5% of scanned sites achieve A+ (PQC with TLS 1.3 only). Major platforms — Google, Cloudflare, Baidu — show “A” grades: PQC is supported, but TLS 1.2 remains enabled as a fallback. This creates a downgrade attack surface: an active man-in-the-middle can strip the TLS 1.3 ClientHello extensions and force a TLS 1.2 connection, bypassing PQC entirely.

Government and institutional sites overwhelmingly score “F” — no PQC whatsoever. This includes fbi.gov and cia.gov, despite NSA CNSA 2.0 mandating PQC migration timelines. The gap between announced PQC readiness and actual deployment is as wide as the HTTP/3 gap. Policy has outrun deployment by years.

WebTransport as a Crypto Channel: What the Streaming Tester Shows

The academic literature quantifies PQC overhead at the handshake and packet levels. The Streaming Encryption Tester adds a third measurement point: overhead at the application operation level, over a persistent WebTransport session. Several findings emerge from running all 31 algorithms over live QUIC bidirectional streams:

  • Compact algorithms have near-zero QUIC overhead. Classical (X25519 + AES-256-GCM) and FN-DSA-512 operations complete within a single QUIC packet exchange. The operation latency tracks the network RTT with <2 ms of crypto computation added — consistent with Raavi et al.’s finding that lattice-based operations are competitive with RSA at equivalent security levels [3].
  • Large-signature algorithms expose fragmentation cost directly. SLH-DSA-SHA2-256s (29,792-byte signature) requires multi-packet QUIC fragmentation and reassembly. The operation latency is measurably higher than compact algorithms on the same connection — the latency difference is the direct cost of QUIC fragment reassembly, which benchmarks do not capture.
  • ML compression amplifies throughput for compressible payloads. Text and structured data compress 2–4× before encryption, reducing ciphertext size and transmission time. The Streaming Tester reports compression algorithm (ZSTD, LZ4, Brotli), ratio, and whether it exceeded the 150-byte minimum threshold for net benefit. Random or already-compressed data correctly bypasses compression.
  • Persistent session reuse eliminates the dominant overhead. After the initial QUIC handshake (∼50–100 ms), subsequent operations over the same session take only the crypto processing time plus one QUIC stream RTT. For sequences of encrypt/decrypt operations, the amortised per-operation cost of WebTransport is lower than REST, which pays TCP connection and TLS overhead on every request.
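The amortisation argument in the last point can be made concrete with illustrative numbers (assumed figures: a ~75 ms QUIC handshake, a 25 ms RTT, and REST paying 2 RTT of setup plus 1 RTT per request; these are not measurements from the tester):

```python
def total_time_ms(n_ops: int, setup_ms: float, per_op_ms: float) -> float:
    """Total wall time: one-time setup plus n per-operation costs."""
    return setup_ms + n_ops * per_op_ms

RTT_MS = 25.0
for n in (1, 10, 100):
    # WebTransport: one QUIC handshake, then roughly one RTT per operation.
    wt = total_time_ms(n, 75.0, RTT_MS)
    # Per-request REST: 2 RTT of TCP+TLS setup plus the request RTT, every time.
    rest = total_time_ms(n, 0.0, 3 * RTT_MS)
    print(f"n={n:>3}: WebTransport {wt:.0f} ms, REST {rest:.0f} ms")
```

Under these assumptions REST wins only for a single operation (75 ms vs 100 ms); by ten operations the persistent session is already more than twice as fast, and the gap widens linearly from there.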

How the Tools Work Together

Although each tool is independent — separate codebases, separate URLs, separate purposes — they form a complementary diagnostic chain covering the full stack from transport performance to cryptographic readiness to infrastructure deployment:

  • Speed Test answers: “What is the real-world performance difference between QUIC and TCP on my specific connection, right now?”
  • HTTP/3 Scanner answers: “Does this target server actually implement QUIC at the transport layer, or is it claiming support it doesn’t have?”
  • PQC Scanner answers: “Is this server’s cryptography quantum-resistant, and is the TLS 1.2 downgrade path closed?”
  • pqcrypta-proxy answers: “Here is open-source infrastructure that implements all of the above natively, with production-grade operations.”
  • Streaming Encryption Tester answers: “Can WebTransport carry real post-quantum cryptographic workloads at acceptable latency, and which of the 31 algorithms performs best over the wire?”

Example Workflow

An operator runs the HTTP/3 Scanner on their production site and discovers a “C” grade — the CDN is masking a TCP-only origin. They deploy pqcrypta-proxy as their origin reverse proxy, configuring HTTP/3, WebTransport, and PQC TLS. Re-scanning with the HTTP/3 Scanner yields “A++.” The PQC Scanner confirms A+ (PQC negotiated, TLS 1.2 disabled). The Speed Test establishes baseline QUIC performance metrics for ongoing monitoring. Finally, the Streaming Encryption Tester validates that the WebTransport endpoint correctly carries post-quantum cryptographic operations end-to-end — confirming the application layer works, not just the transport layer.

The tools are independently useful. A developer benchmarking their home network uses only the Speed Test. A security auditor assessing PQC readiness uses only the PQC Scanner. An infrastructure team evaluating HTTP/3 deployment uses only the HTTP/3 Scanner. A DevOps engineer deploying a reverse proxy uses only pqcrypta-proxy. A developer evaluating which post-quantum algorithm to adopt for a QUIC-based application uses only the Streaming Encryption Tester. Complementary value emerges when used together; no tool requires another to function.

Architectural Recommendations

Based on measurements from all six tools and the verified research literature, the following recommendations are ordered by impact:

  1. Tune UDP buffer sizes immediately. Set net.core.rmem_max and net.core.wmem_max to at least 2.5 MB (10× the Linux default) for any QUIC deployment. This is the single highest-impact, lowest-effort QUIC performance optimisation. Failing to do this guarantees packet drops under load.
  2. Deploy PQC now. X25519MLKEM768 hybrid key exchange adds 1–4 ms per handshake — negligible in practice. Waiting creates harvest-now-decrypt-later exposure. The pqcrypta-proxy provides a drop-in solution with automatic algorithm negotiation and classical fallback.
  3. Measure at the transport layer, not the HTTP layer. Alt-Svc headers lie. Use native QUIC probing to verify actual protocol support. If you cannot complete a QUIC handshake with the server, the server does not support QUIC, regardless of what its HTTP headers claim.
  4. Choose QUIC implementations carefully. The 50× throughput variance between implementations (90–4,900 Mbit/s on 10G links) [1] is the largest single QUIC performance variable. Quinn (Rust) and MsQuic (C) consistently benchmark at the top.
  5. Expect TCP to win on raw throughput — and plan accordingly. Use QUIC where its architectural advantages matter: multiple concurrent streams, lossy links, mobile clients, and latency-sensitive applications.
  6. Disable 0-RTT in security-sensitive deployments. The replay attack surface is real. 0-RTT data can be replayed by an attacker who captures the initial flight. For financial services, healthcare, and government applications, the latency savings do not justify the replay risk.
  7. Enforce TLS 1.3-only on PQC deployments. The PQC Scanner shows that A vs A+ is the difference between having PQC and having PQC that cannot be downgrade-attacked. If you deploy PQC but leave TLS 1.2 enabled, an active attacker can force a downgrade and bypass your post-quantum protection entirely.
  8. Benchmark algorithm selection over the wire, not in a local benchmark. Per-algorithm throughput and latency measurements change under real QUIC transport conditions — QUIC fragmentation for large keys, congestion control interaction with bulk crypto payloads, and compression effectiveness vary with data type and algorithm. Use the Streaming Encryption Tester to measure your target algorithm on your actual connection before committing to an implementation.
Conclusion

Six tools, six layers, one mission: expose the gap between what protocol specifications promise and what production deployments deliver.

The Speed Test proves QUIC’s latency advantage while honestly showing TCP’s throughput advantage on clean links. Both results are architecturally expected — QUIC saves round trips at handshake time but pays a per-packet userspace tax on bulk transfer. Neither result is a failure; they are the predictable consequences of where each protocol lives in the operating system.

The HTTP/3 Scanner reveals that most “HTTP/3 support” is CDN veneer over TCP-only origins. The “C” grade — servers claiming HTTP/3 but failing at the QUIC transport layer — is the most common surprising result. Real HTTP/3 deployment requires QUIC capability at the origin, not just at the CDN edge.

The PQC Readiness Scanner shows fewer than 5% of scanned sites are properly quantum-ready with TLS 1.3-only enforcement. Government agencies — the organisations most vocal about PQC migration — overwhelmingly score “F.” Policy has outrun deployment by years.

The pqcrypta-proxy provides the open-source infrastructure to close all three gaps simultaneously: native HTTP/3 and WebTransport, hybrid PQC TLS with X25519MLKEM768, and the operational features required for production deployment.

The Streaming Encryption Tester closes the question the other tools leave open: does WebTransport actually work as a delivery channel for post-quantum cryptographic workloads? Running 31 algorithms over real QUIC bidirectional streams, with server-reported throughput and integrity verification on every decryption, it answers that question with measured evidence rather than architectural assertion.

The WebTransport Telemetry Wall makes QUIC stream isolation tangible. Six independent server-side streams run continuously at 20 Hz each. Injecting 500 ms of delay into CH1 leaves CH2–CH6 visibly unaffected. The RTT oscilloscope, jitter heatmap, and loss radar update at 60 Hz over datagrams — showing the transport layer reacting in real time. No other tool in the suite shows stream isolation this directly.

Core Finding

The industry needs more tools that probe at the protocol level, not the HTTP level. Headers can be forged. Alt-Svc can be injected by intermediaries. TLS version claims can be misleading. The only reliable measurement is a real handshake, a real datagram, a real QUIC connection — and real cryptographic operations over real WebTransport streams. PQ Crypta’s six tools deliver exactly that — transport-layer truth, measured and graded, with zero accounts and zero data retention.

References

1. B. Jaeger, J. Zirngibl, M. Kempf, K. Ploch, and G. Carle, “QUIC on the Highway: Evaluating Performance on High-rate Links,” Proc. IFIP Networking Conference, Barcelona, June 2023. [Verified: IFIP Networking 2023 proceedings, dblp, arXiv:2309.16395]
2. M. Kempf, N. Gauder, B. Jaeger, J. Zirngibl, and G. Carle, “A Quantum of QUIC: Dissecting Cryptography with Post-Quantum Insights,” Proc. IFIP Networking Conference, 2024. [Verified: IFIP Networking 2024 proceedings, ResearchGate]
3. M. Raavi, S. Wuthier, P. Chandramouli, Y. Balytskyi, X. Zhou, and S. Y. Chang, “QUIC Protocol with Post-Quantum Authentication,” Proc. Information Security Conference (ISC), 2022. [Verified: ISC 2022, cited in Kempf et al. 2024]
4. J. Zheng, H. Xiao, and S. Xu, “Faster Post-Quantum TLS 1.3 Based on ML-KEM,” Proc. ESORICS, 2024. [Verified: ESORICS 2024 proceedings]
5. D. Gómez-Cambronero, D. Munteanu, and A. I. González-Tablas, “Layered Performance Analysis of TLS 1.3 Handshakes: Classical, Hybrid, and Pure Post-Quantum Key Exchange,” arXiv preprint, 2025. [Verified: arXiv preprint; venue unconfirmed as of March 2026]
6. IETF, “QUIC: A UDP-Based Multiplexed and Secure Transport,” RFC 9000, May 2021.
7. IETF, “Using TLS to Secure QUIC,” RFC 9001, May 2021.
8. IETF, “HTTP/3,” RFC 9114, June 2022.
9. IETF, “An Unreliable Datagram Extension to QUIC,” RFC 9221, March 2022.
10. W3C, “WebTransport Working Draft,” February 2026.
11. NIST, “FIPS 203: Module-Lattice-Based Key-Encapsulation Mechanism Standard (ML-KEM),” 2024.
12. NIST, “FIPS 204: Module-Lattice-Based Digital Signature Standard (ML-DSA),” 2024.
13. NIST, “FIPS 205: Stateless Hash-Based Digital Signature Standard (SLH-DSA),” 2024.

All external references verified March 2026. The Späth et al. ‘Kernel Bypass Surgery’ (IEEE/IFIP NOMS 2026) paper cited in a prior draft has been removed: it could not be located in any authoritative index, including IEEE Xplore, arXiv, TUM mediaTUM, or author CVs.