
PQ Crypta Post-Quantum Network

🌐 Live Speed + Latency Explorer

QUIC · WebTransport · Real UDP Datagrams · 2 Server Locations

Transport badges: QUIC · WebTransport · HTTP/3 · UDP Native

[Live metrics panel; values populate during the test: 📡 Latency (RTT, ms) · 〰️ Jitter (ms σ) · ⚠️ Packet Loss (%) · ⬇️ Download, Steady-State (Mbps) · ⬆️ Upload, Server-Measured (Mbps)]

How It Works

📡 True QUIC Datagrams

RTT and jitter are measured via UDP datagram echo — not TCP round-trips or HTTP polling. Each ping embeds a high-resolution timestamp and sequence number for sub-millisecond accuracy.
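The echo probe described above can be sketched as a pair of pure encode/decode helpers. The 12-byte layout (a float64 timestamp followed by a uint32 sequence number) is an assumption for illustration, not the site's published wire format:

```typescript
// Hypothetical probe layout: 8-byte float64 timestamp + 4-byte uint32 sequence.
function encodeProbe(seq: number, nowMs: number): Uint8Array {
  const buf = new ArrayBuffer(12);
  const view = new DataView(buf);
  view.setFloat64(0, nowMs); // high-resolution send timestamp
  view.setUint32(8, seq);    // sequence number, used to match echoes to probes
  return new Uint8Array(buf);
}

function decodeProbe(datagram: Uint8Array): { seq: number; sentMs: number } {
  const view = new DataView(datagram.buffer, datagram.byteOffset, datagram.byteLength);
  return { sentMs: view.getFloat64(0), seq: view.getUint32(8) };
}

// RTT for a probe is the receive time of its echo minus the embedded timestamp;
// jitter is the standard deviation across those RTT samples.
const probe = encodeProbe(7, 1234.5);
const { seq, sentMs } = decodeProbe(probe);
```

Embedding the timestamp in the datagram itself means the client needs no per-probe bookkeeping beyond matching sequence numbers.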

📦 Packet Loss Probing

200 datagrams are fired concurrently; any that fail to echo within 2 seconds count as lost. Rolling 20-probe windows reveal per-burst loss patterns across your connection path.
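The windowed loss computation can be sketched as follows, interpreting "rolling 20-probe windows" as consecutive non-overlapping blocks (an assumption; a sliding window would also fit the description):

```typescript
// `echoed[i]` is true if probe i came back within the 2-second deadline.
// Returns per-window loss percentages over consecutive 20-probe blocks.
function rollingLoss(echoed: boolean[], windowSize = 20): number[] {
  const lossPct: number[] = [];
  for (let start = 0; start + windowSize <= echoed.length; start += windowSize) {
    const window = echoed.slice(start, start + windowSize);
    const lost = window.filter((ok) => !ok).length;
    lossPct.push((lost / windowSize) * 100);
  }
  return lossPct;
}

// 40 probes: first window clean, second window loses 2 of 20.
const echoed = Array(40).fill(true);
echoed[25] = echoed[33] = false;
const perWindow = rollingLoss(echoed); // → [0, 10]
```

Per-window percentages expose bursty loss that a single aggregate figure over all 200 probes would average away.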

⬇️ Download — Trimmed Steady-State

QUIC: 12 concurrent WebTransport streams download data over a time-bounded window. The first 1.5 seconds are discarded to exclude congestion-control ramp-up; only the steady-state window is averaged.

TCP: 6 parallel HTTP/1.1 streams, each on an independent TCP connection. HTTP/1.1 is negotiated (not HTTP/2) so the browser cannot coalesce all streams onto one TCP pipe — a single stalled HTTP/2 connection would drag all 6 streams to zero simultaneously. The server sends pseudo-random bytes to defeat ISP compression inflation.
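The trimmed steady-state average for download can be sketched like this; the 100 ms sampling interval and the `Sample` shape are illustrative assumptions, while the 1.5 s trim comes from the text:

```typescript
// One sample per measurement interval: bytes received during that interval.
interface Sample { tMs: number; bytes: number }

// Discard the congestion-control ramp-up, then average the remainder.
function steadyStateMbps(samples: Sample[], trimMs = 1500, intervalMs = 100): number {
  const steady = samples.filter((s) => s.tMs >= trimMs);
  const totalBytes = steady.reduce((sum, s) => sum + s.bytes, 0);
  const durationMs = steady.length * intervalMs;
  return (totalBytes * 8) / durationMs / 1000; // bits per ms ÷ 1000 = Mbps
}

// 3 s of samples at a constant 1.25 MB per 100 ms (i.e. 100 Mbps):
const samples: Sample[] = Array.from({ length: 30 }, (_, i) => ({
  tMs: i * 100,
  bytes: 1_250_000,
}));
const mbps = steadyStateMbps(samples); // → 100
```

Trimming the first window matters because slow-start can easily halve the average over a short test, understating the link's sustained capacity.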

⬆️ Upload — Steady-State

QUIC: 16 WebTransport streams each send data to the server. The server independently times each stream from first byte to FIN, discards the first 2 seconds (slow-start ramp-up), and reports steady-state Mbps. The 16 per-stream values are summed for the total. Chrome serialises writes within a single QUIC stream, so parallel streams are required to fill the shared CWND.

TCP: 6 parallel XHR streams each POST a 50 MB body over independent HTTP/1.1 connections. xhr.upload.onprogress fires as bytes leave the TCP socket; bytes are summed across all streams and divided by the fixed test-duration window: Up_Mbps = (Σᵢ B_u,i × 8) / (T_ms × 1000), where T = 8 s standard · 12 s deep. XHR is used (not fetch()) because Chrome throws a TypeError when a streaming ReadableStream body with duplex: 'half' is sent over an HTTP/1.1 connection.
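The upload formula above reduces to a one-liner; this sketch just restates Up_Mbps = Σ B_u,i × 8 / T_ms / 1000 as code:

```typescript
// Sum uploaded bytes across all streams, divide by the fixed test window.
// (bytes × 8 = bits; bits / T_ms = bits per ms; ÷ 1000 converts to Mbps.)
function uploadMbps(bytesPerStream: number[], testMs: number): number {
  const totalBits = bytesPerStream.reduce((sum, b) => sum + b, 0) * 8;
  return totalBits / testMs / 1000;
}

// Six streams, each moving 10 MB within the 8 s standard window:
// 6 × 10e6 B × 8 = 4.8e8 bits over 8000 ms → 60 Mbps.
const up = uploadMbps(Array(6).fill(10_000_000), 8000);
```

Note the denominator is the fixed window T, not each stream's own duration — a stream that finishes early simply contributes fewer bytes to the sum.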

🗺 Live Path Traceroute

A background traceroute streams hop-by-hop data to the UI in real time as the test runs, using ICMP probes from the server. Each visible hop is geolocated via GeoLite2 to show city and ISP. ICMP-filtered hops show as gaps in the chain.

🔬 Network Fingerprint

After the test, a composite stability score is computed from RTT, jitter, loss, and throughput — classifying your link as suitable for gaming, video calls, or general browsing, with ranked issue diagnosis and percentile comparison against real users in the last 30 days.
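A classifier over those four inputs might look like the sketch below. The thresholds are entirely invented for illustration — the site does not publish its actual weights or cutoffs:

```typescript
// Hypothetical cutoffs only; the real scoring model is not published.
interface Metrics { rttMs: number; jitterMs: number; lossPct: number; downMbps: number }

type LinkClass = "gaming" | "video calls" | "general browsing";

function classify(m: Metrics): LinkClass {
  // Gaming is the strictest tier: low latency, low jitter, near-zero loss.
  if (m.rttMs < 30 && m.jitterMs < 5 && m.lossPct < 0.5 && m.downMbps > 25) {
    return "gaming";
  }
  // Video calls tolerate moderate latency but still need stable loss/jitter.
  if (m.rttMs < 80 && m.jitterMs < 15 && m.lossPct < 2 && m.downMbps > 5) {
    return "video calls";
  }
  return "general browsing";
}

const tier = classify({ rttMs: 12, jitterMs: 2, lossPct: 0, downMbps: 100 }); // → "gaming"
```

The real fingerprint additionally ranks diagnosed issues and compares against a 30-day percentile distribution, which a threshold classifier like this does not capture.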

Why TCP can match or beat QUIC on raw throughput — and why that's expected

⚠️ QUIC is not raw UDP — it's TCP rebuilt in userspace

QUIC uses UDP as its carrier, but it reimplements everything TCP does on top of it: reliability, retransmission, congestion control, flow control, packet ordering. The "UDP is fast" reputation applies to unreliable protocols (DNS, game state updates) that skip all that work. QUIC skips none of it — and then adds TLS 1.3 encryption per-packet entirely in the browser process, not the kernel. For reliable bulk transfer, QUIC and TCP carry the same logical overhead.

🧠 TCP runs in kernel space — QUIC runs in your browser

TCP is implemented inside the OS kernel and benefits from hardware offload (TSO, GSO, GRO) that processes thousands of bytes per CPU instruction. QUIC runs inside the browser's process — every packet cycles through userspace code before touching the kernel's UDP socket. This CPU overhead is why TCP wins on raw upload throughput. Your RTT result tells the real story: QUIC's observed latency is typically lower than TCP's, because QUIC measures a bare UDP datagram echo while TCP measures an HTTP fetch over a keep-alive connection, which adds framing overhead and delayed-ACK effects. The latency advantage is real, but partly reflects measurement methodology.

📡 Where QUIC actually wins — and why it exists

Google, YouTube, and Cloudflare use QUIC not for raw speed but because it eliminates head-of-line blocking (a slow stream can't stall all others), survives IP changes without reconnecting (mobile handoffs), and establishes connections faster with 0-RTT resumption. On lossy or congested paths, QUIC's per-stream recovery dramatically outperforms TCP. These are the use cases QUIC was designed for — not beating TCP on a clean link throughput test.

Note on 0-RTT: This server has 0-RTT (early data) disabled on the WebTransport endpoint. Every QUIC session shown here is a full 1-RTT handshake — no session ticket shortcut. The RTT you see is the true baseline cost of a QUIC connection to this server, not a resumed session.

🔎 How to read your comparison table

Download and upload figures are comparable across protocols. Download measures bytes received at the browser (steady-state, post-1.5 s trim); QUIC upload measures bytes received at the server per stream (steady-state, post-2 s trim); TCP upload measures bytes leaving the client socket (via xhr.upload.onprogress) divided by the fixed test-duration window (8 s standard · 12 s deep). All three methods report the connection's real sustained capacity, not a burst artifact. A QUIC value more than 30–35% below TCP in either direction suggests ISP protocol differentiation (UDP deprioritisation).
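The 30–35% rule of thumb can be sketched as a simple check; the 0.30 default is the lower bound of the range quoted above:

```typescript
// Flag possible UDP deprioritisation when QUIC falls more than `threshold`
// (fractional, default 30%) below TCP in the same direction.
function suspectsUdpDeprioritisation(
  quicMbps: number,
  tcpMbps: number,
  threshold = 0.30,
): boolean {
  if (tcpMbps <= 0) return false; // no TCP baseline, nothing to compare
  return (tcpMbps - quicMbps) / tcpMbps > threshold;
}

const flagged = suspectsUdpDeprioritisation(60, 100); // 40% gap → true
const clean = suspectsUdpDeprioritisation(80, 100);   // 20% gap → false
```

A single run crossing the threshold is only suggestive; the structural kernel-vs-userspace gap described in the next section can produce sizable differences on its own.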

Upload gap: QUIC vs TCP. QUIC upload uses server-measured steady-state throughput with a 2-second warmup exclusion. TCP upload uses client-side bytes reported by xhr.upload.onprogress (bytes leaving the socket) divided by the fixed test-duration window. A gap between the two reflects real protocol efficiency differences — QUIC upload runs over userspace QUIC in the browser process; TCP upload runs over kernel TCP with hardware offload (TSO/GSO), giving TCP a structural advantage on raw upload throughput.

QUIC upload faster than TCP upload: Unusual — would suggest the ISP is actively prioritising UDP, or the TCP measurement is running over a congested path.

The RTT gap — QUIC typically lower than TCP — is a reliable signal that both tests ran cleanly. QUIC measures a raw datagram echo; TCP measures an HTTP round-trip including framing overhead. If TCP RTT were substantially lower than QUIC RTT, it would suggest unusual network conditions or a measurement anomaly.