Project Information
- Project: PQCrypta Proxy
- Version: 0.2.2
- Language: Rust
- License: MIT / Apache-2.0
- Repository: GitHub
Key Highlights
- ✓ HTTP/1.1, HTTP/2, HTTP/3 (QUIC)
- ✓ WebTransport (bidirectional) with per-origin rate limiting
- ✓ Post-Quantum TLS (X25519MLKEM768) + PQC downgrade detection
- ✓ JA3/JA4 Fingerprinting + replay & drift detection
- ✓ WAF: injection & traversal patterns (SQLi, XSS, path traversal, SSRF, CMDi, XXE) + scanner probe blocking (.git, .env, .aws, wp-login, terraform state, SSH keys, CI/CD files, 40+ recon targets); X-Forwarded-For exempt from SSRF patterns
- ✓ Three TLS Modes (terminate/re-encrypt/passthrough)
- ✓ 6 Load Balancing Algorithms + per-backend retry & circuit-breaker overrides
- ✓ Multi-Dimensional Rate Limiting (IP, JA3/JA4, JWT-verified) on all paths (TCP + QUIC/HTTP3); optional Redis backend for distributed cross-instance coordination
- ✓ ACME + OCSP Automation + Certificate Transparency
- ✓ 0-RTT Replay Protection (nonce store, strict/session/none)
- ✓ Structured Audit Logger (JSON: admin, WAF, auth, PQC, rate limit)
- ✓ Hot Reload + environment config overlay (--env)
- ✓ Prometheus Metrics + WAF block tracking
- ✓ GeoIP + Circuit Breaker + per-route security policy
- ✓ Zero-Trust Mode (HMAC proof-of-possession, nonce replay prevention, admin HMAC enforcement)
- ✓ Per-Route HMAC Signing (path+query signed, optional nonce for full replay prevention)
- ✓ OpenTelemetry Distributed Tracing: W3C TraceContext + B3 propagation, OTLP export, access-log trace-ID correlation
Application Summary
PQCrypta Proxy is a Rust reverse proxy spanning 24 modules,
providing HTTP/3, QUIC, and WebTransport capabilities with native Post-Quantum
Cryptography. The codebase leverages the tokio async runtime,
quinn/h3 for QUIC and HTTP/3,
and OpenSSL 3.5+ with native ML-KEM for PQC key exchange.
There is zero unsafe code in application logic; all memory safety is enforced at compile time.
The PQC implementation supports 9 key exchange algorithms: X25519MLKEM768 (IETF hybrid, default), SecP256r1MLKEM768, SecP384r1MLKEM1024, X448MLKEM1024, plus pure ML-KEM-512/768/1024 and legacy Kyber variants. Default configuration enforces NIST Level 3 (192-bit equivalent) minimum security. Algorithm negotiation is automatic with graceful fallback: clients without PQC support connect via classical X25519 without service interruption.
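The negotiation behavior above maps naturally onto configuration. A hedged sketch follows; the `[pqc]` table and key names here are illustrative assumptions, not the shipped schema (see docs/CONFIG.md for the actual reference):

```toml
# Illustrative sketch only — key names below are assumptions.
[pqc]
enabled = true
# Prefer the IETF hybrid default, with a stronger hybrid as backup.
key_exchange = ["X25519MLKEM768", "SecP384r1MLKEM1024"]
# Clients without PQC support still connect via classical X25519.
allow_classical_fallback = true
```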
JA3/JA4 TLS fingerprinting extracts ClientHello signatures during handshake, enabling client identification behind NAT/corporate proxies. Includes pre-populated database of known fingerprints: Chrome 100-120+, Firefox 90-120+, Safari 17+, Edge 120+, plus legitimate bots (Googlebot, Bingbot) and malicious patterns. Fingerprints feed into the multi-dimensional rate limiter, a hybrid sliding-window/token-bucket implementation inspired by Cloudflare, Envoy, and AWS API Gateway patterns. Supports composite keys (IP + JA3 + JWT claims), IPv6 /64 subnet grouping, and ML-inspired adaptive baseline learning for anomaly detection. Runs on all paths (TCP HTTP/1.1, HTTP/2, and QUIC/HTTP3) with an optional Redis backend for distributed quota sharing across proxy instances using atomic Lua scripts.
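The sliding-window half of that hybrid scheme can be sketched in a few lines of Rust. This is an illustrative toy (types, field names, and weighting are assumptions), not the proxy's actual rate_limiter.rs internals:

```rust
use std::time::Duration;

/// Illustrative sliding-window estimator: smooths the boundary-burst
/// problem of fixed windows by weighting the previous window's count
/// by how much of it is still inside the sliding window.
struct HybridLimiter {
    window: Duration, // window length
    limit: f64,       // allowed requests per window
    prev_count: f64,  // requests counted in the previous window
    curr_count: f64,  // requests counted so far in the current window
}

impl HybridLimiter {
    fn new(window: Duration, limit: f64) -> Self {
        Self { window, limit, prev_count: 0.0, curr_count: 0.0 }
    }

    /// `elapsed` is how far we are into the current window.
    /// Returns true if the request is admitted.
    fn try_admit(&mut self, elapsed: Duration) -> bool {
        let frac = elapsed.as_secs_f64() / self.window.as_secs_f64();
        // Cloudflare-style estimate of requests in the sliding window.
        let estimated = self.prev_count * (1.0 - frac) + self.curr_count;
        if estimated < self.limit {
            self.curr_count += 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut l = HybridLimiter::new(Duration::from_secs(60), 10.0);
    // Previous window was saturated; halfway into the new window
    // only about half the budget is free.
    l.prev_count = 10.0;
    let admitted = (0..10)
        .filter(|_| l.try_admit(Duration::from_secs(30)))
        .count();
    assert_eq!(admitted, 5);
    println!("admitted {} of 10", admitted);
}
```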
Load balancing implements 6 algorithms: least_connections (default),
round_robin, weighted_round_robin (nginx-style smooth), random, ip_hash (consistent),
and least_response_time (EMA tracking). Backend pools support slow-start ramp-up for
recovering servers, connection draining for graceful removal, priority-based failover,
and health-aware routing that automatically bypasses failed backends. Session affinity
via cookie, IP hash, or custom header ensures stateful application compatibility.
Canary deployments are first-class: mark a server with
canary = true and a percentage weight to route a
fraction of new traffic there while sticky cookie assignment keeps each user on the
same server; auto-rollback suspends the canary if its error rate exceeds a sliding-window
threshold, and the admin API provides live inspect / suspend / resume / re-weight control.
Canary routing is enforced on all transport paths: HTTP/1.1, HTTP/2, and HTTP/3/QUIC.
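A minimal canary pool sketch: the `canary` and `canary_weight_percent` keys are the names this document uses, while the `[[backends.web.servers]]` table layout and addresses are assumptions for illustration:

```toml
[[backends.web.servers]]
address = "10.0.0.10:8080"

[[backends.web.servers]]
address = "10.0.0.11:8080"
canary = true
canary_weight_percent = 5   # ~5% of new traffic; sticky cookie pins users
```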
Traffic shadowing / mirroring is available per-route via
[routes.shadow]: a fire-and-forget async copy of
each request is sent to a secondary backend without affecting the client response;
configurable percentage, timeout, and marker header, zero overhead on non-shadow routes.
Shadow mirroring is active on HTTP/1.1, HTTP/2, and HTTP/3/QUIC.
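A hedged example of a shadow block; only the `[routes.shadow]` table name comes from this document, and the individual key names and values are illustrative assumptions:

```toml
[routes.shadow]
backend = "staging"
percentage = 10            # mirror 10% of requests
timeout_secs = 2           # independent of the primary request timeout
marker_header = "x-shadow" # custom marker header name
```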
RFC 9111 response caching is now built into all transport paths.
Cache-Control directives (max-age, s-maxage, no-cache, no-store, private, public) are
parsed from every backend response. Conditional requests using
If-None-Match / ETag
and If-Modified-Since / Last-Modified
return 304 Not Modified without a backend round-trip.
Vary: * responses bypass the cache.
The cache is disabled by default; enable it with
[cache] enabled = true in config.toml.
Active on HTTP/1.1, HTTP/2, and HTTP/3/QUIC.
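For example (only the `enabled` switch is documented here; the TTL/size keys are illustrative assumptions):

```toml
[cache]
enabled = true        # documented switch; the cache is off by default
# The keys below are illustrative assumptions for TTL/size bounds.
default_ttl_secs = 60
max_entries = 10000
```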
Hop-by-hop header stripping removes HTTP/1.1 connection-specific
headers (Transfer-Encoding,
Connection,
Keep-Alive,
Upgrade, and related headers) from every backend
response at the proxy layer before caching or forwarding. These headers are forbidden
in HTTP/2 (RFC 9113 §8.2.2) and HTTP/3 (RFC 9114 §4.2); forwarding them over QUIC
streams causes ERR_QUIC_PROTOCOL_ERROR in browsers.
Stripping is applied uniformly across HTTP/1.1, HTTP/2, HTTP/3/QUIC, and WebTransport.
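The stripping rule is small enough to sketch. This is an illustrative standalone function; the proxy's real code operates on typed header maps rather than string pairs:

```rust
/// Hop-by-hop headers defined for HTTP/1.1; they describe a single
/// connection and must not be forwarded over HTTP/2 or HTTP/3.
const HOP_BY_HOP: [&str; 9] = [
    "transfer-encoding", "connection", "keep-alive", "proxy-connection",
    "upgrade", "te", "trailer", "proxy-authenticate", "proxy-authorization",
];

/// Drop hop-by-hop headers (plus anything the Connection header names)
/// from a simple (name, value) list before caching or forwarding.
fn strip_hop_by_hop(headers: Vec<(String, String)>) -> Vec<(String, String)> {
    // RFC 9110 §7.6.1: Connection can nominate extra headers to drop.
    let nominated: Vec<String> = headers
        .iter()
        .filter(|(n, _)| n.eq_ignore_ascii_case("connection"))
        .flat_map(|(_, v)| v.split(',').map(|s| s.trim().to_ascii_lowercase()))
        .collect();
    headers
        .into_iter()
        .filter(|(n, _)| {
            let n = n.to_ascii_lowercase();
            !HOP_BY_HOP.contains(&n.as_str()) && !nominated.contains(&n)
        })
        .collect()
}

fn main() {
    let resp = vec![
        ("Connection".to_string(), "keep-alive, x-custom".to_string()),
        ("X-Custom".to_string(), "1".to_string()),
        ("Content-Type".to_string(), "text/html".to_string()),
    ];
    let out = strip_hop_by_hop(resp);
    // Only the end-to-end header survives.
    assert_eq!(out, vec![("Content-Type".to_string(), "text/html".to_string())]);
}
```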
OpenTelemetry distributed tracing provides end-to-end trace
propagation across all HTTP transports. The composite propagator extracts W3C
traceparent / tracestate
(W3C Trace Context) and B3 headers (x-b3-traceid /
x-b3-spanid / b3)
from every inbound request, then injects both formats into upstream requests so
Jaeger, Tempo, and any W3C-compatible backend can correlate the full call path.
Spans are exported via OTLP HTTP/JSON to any compatible collector.
The sampling ratio, OTLP endpoint, and service resource attributes are all
configurable; tracing is disabled by default and activated with
[otel] enabled = true.
Trace IDs are stamped into every access-log line for log-to-trace correlation.
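A minimal tracing config sketch: `enabled` and `otlp_endpoint` are named in this document, while the sampling and service keys (and all values) are illustrative assumptions:

```toml
[otel]
enabled = true                                     # documented switch
otlp_endpoint = "http://collector:4318/v1/traces"  # documented key; value illustrative
# Sampling ratio is configurable per the text above; this key name is assumed.
sampling_ratio = 0.1
service_name = "pqcrypta-proxy"                    # resource attribute; key name assumed
```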
Protocol Support
| Protocol | Status | Implementation |
|---|---|---|
| HTTP/1.1 | Active | hyper |
| HTTP/2 | Active | hyper + h2 |
| HTTP/3 | Active | h3 + quinn |
| QUIC | Active | quinn |
| WebTransport | Active | Native |
Codebase Statistics
| Metric | Value |
|---|---|
| Source Files | 24 Rust modules |
| Dependencies | 90+ crates |
| Feature Flags | 8 compile-time features |
| Tests | 142 passing |
| Documentation | 1,300+ lines README, 620+ lines SECURITY.md |
Complete Feature Set
- ✓ Load Balancing (6 algorithms)
- ✓ Circuit Breaker + per-backend overrides
- ✓ Per-Backend Retry (exponential backoff)
- ✓ Advanced Rate Limiting (IP/JA3/JWT/composite) on TCP + QUIC/HTTP3; optional Redis distributed backend
- ✓ WAF: injection & traversal pattern detection + scanner probe blocking (detect/block modes)
- ✓ Audit Logger (async structured JSON)
- ✓ DoS Protection + body size limit
- ✓ GeoIP Blocking
- ✓ JA3/JA4 Fingerprinting
- ✓ JA3/JA4 Replay Detection
- ✓ JA3/JA4 Drift Detection
- ✓ PQC Downgrade Detection (block/log/allow)
- ✓ PQC + Fingerprinting Combined
- ✓ 0-RTT Replay Protection (nonce store)
- ✓ Priority Hints (RFC 9218)
- ✓ Request Coalescing
- ✓ Early Hints (103)
- ✓ Compression (Brotli/Zstd/Gzip)
- ✓ Security Headers
- ✓ Per-Route Security Policy
- ✓ Zero-Trust Mode (mTLS + no-CIDR + admin HMAC enforcement)
- ✓ Per-Route HMAC Signing (path+query, nonce replay prevention)
- ✓ Admin HMAC Proof-of-Possession (path+query, nonce replay prevention)
- ✓ PQC TLS (X25519MLKEM768)
- ✓ Background Cleanup
- ✓ ACME Automation
- ✓ Certificate Transparency log submission
- ✓ OCSP Stapling
- ✓ Prometheus Metrics + WAF block tracking
- ✓ PROXY Protocol v2
- ✓ Access Logging (log-injection sanitized)
- ✓ Custom Header Injection (per-route)
- ✓ Per-Route Timeout Overrides
- ✓ Multiple Listener Ports (additional_ports)
- ✓ QUIC Connection Migration (configurable)
- ✓ IP Blocklists (DB-synced)
- ✓ WebTransport Origin Validation (SR-02)
- ✓ WebTransport Per-Origin Rate Limiting
- ✓ WebTransport JSON Operations
- ✓ Config Schema Versioning + Conflict Validation
- ✓ Environment Config Overlay (--env)
Directory Structure
```
pqcrypta-proxy/
├── src/                        # Main source code (24 modules)
│   ├── http_listener.rs        # HTTP/1.1/2 TCP listener + PQC TLS
│   ├── rate_limiter.rs         # Multi-dimensional rate limiting; JWT HMAC verification
│   ├── config.rs               # Configuration, hot-reload, schema versioning, conflict validation, env overlay
│   ├── fingerprint.rs          # JA3/JA4 fingerprinting, replay cache, drift detector
│   ├── metrics.rs              # Prometheus metrics registry
│   ├── security.rs             # Security middleware; WAF hook; body size limit
│   ├── acme.rs                 # ACME automation; Certificate Transparency; domain path validation
│   ├── load_balancer.rs        # 6 LB algorithms; canary routing; per-backend circuit breaker overrides
│   ├── main.rs                 # Entry point, CLI, --env overlay
│   ├── http3_features.rs       # Early Hints, Priority, Request Coalescing
│   ├── quic_listener.rs        # QUIC/HTTP3 listener; configurable connection migration
│   ├── pqc_extended.rs         # Extended PQC configuration and capabilities
│   ├── ocsp.rs                 # OCSP stapling automation
│   ├── pqc_tls.rs              # Post-Quantum TLS provider; downgrade detection
│   ├── webtransport_server.rs  # WebTransport sessions; per-origin rate limiting
│   ├── admin.rs                # Admin API; audit logging; /health/quic; /health/webtransport
│   ├── proxy.rs                # Backend pool; per-backend retry with exponential backoff
│   ├── tls.rs                  # TLS configuration; PQC session tickets
│   ├── compression.rs          # Brotli/Zstd/Gzip/Deflate
│   ├── waf.rs                  # WAF engine: injection/traversal pattern detection (A01/A03/A08/A10), detect/block modes
│   ├── audit_logger.rs         # Async structured JSON audit logger
│   ├── tls_acceptor.rs         # Custom TLS acceptor; 0-RTT nonce store; HMAC nonce store
│   ├── access_logger.rs        # HTTP access logging; log-injection sanitization
│   ├── lib.rs                  # Library exports
│   └── handlers/               # WebTransport stream handlers
├── config/
│   └── proxy-config.toml       # Production config template
├── docs/
│   ├── SECURITY.md             # Security hardening guide
│   └── CONFIG.md               # Configuration reference
├── packaging/
│   ├── systemd/                # Linux systemd service
│   ├── macos/                  # macOS launchd plist
│   └── windows/                # Windows service docs
├── data/geoip/                 # MaxMind GeoIP databases
│   ├── GeoLite2-ASN.mmdb
│   ├── GeoLite2-City.mmdb
│   └── GeoLite2-Country.mmdb
├── vendor/
│   ├── wtransport/             # WebTransport protocol impl
│   └── wtransport-proto/       # Protocol definitions
├── Cargo.toml                  # 90+ dependencies, 8 features
├── Cargo.lock                  # Locked dependency versions
└── README.md                   # Main documentation
```
Quick Start
Prerequisites
- Rust 1.75+ (install via rustup)
- TLS certificates (Let's Encrypt recommended)
Build & Run
```shell
# Clone repository
git clone https://github.com/PQCrypta/pqcrypta-proxy.git
cd pqcrypta-proxy

# Build release binary
cargo build --release

# Validate configuration
./target/release/pqcrypta-proxy --config /etc/pqcrypta/proxy-config.toml --validate

# Run
./target/release/pqcrypta-proxy --config /etc/pqcrypta/proxy-config.toml
```
Docker
```shell
# Build Docker image
docker build -t pqcrypta-proxy .

# Run container
docker run -p 80:80 -p 443:443/tcp -p 443:443/udp \
  -v /etc/letsencrypt:/etc/letsencrypt:ro \
  -v ./config:/etc/pqcrypta:ro \
  pqcrypta-proxy
```
Complete Feature Reference
PQCrypta Proxy includes 175+ distinct features across protocol support, security, load balancing, TLS/PQC, and operational management. This comprehensive reference documents every capability.
Protocol Support
| Protocol | Implementation | Features |
|---|---|---|
| HTTP/1.1 | hyper | Keep-alive, chunked encoding, pipelining |
| HTTP/2 | hyper + h2 | Multiplexing, server push, header compression |
| HTTP/3 | h3 + quinn | QUIC transport, 0-RTT, connection migration |
| QUIC | quinn (RFC 9000) | UDP transport, stream multiplexing, flow control |
| WebTransport | Native | Bidirectional streams, datagrams, session handling |
| PROXY Protocol v2 | Native | Client IP preservation, TLV extensions |
Transport Layer Features
- UDP Multiport: Primary port + additional_ports for QUIC
- TCP Support: HTTP/1.1 and HTTP/2 over TCP
- Connection Pooling: Idle timeout management, per-host limits
- Keep-Alive: Configurable interval (default 15s)
- Max Connections: Global and per-IP limits (default 10,000)
- Stream Limits: Max streams per connection (default 1,000)
- Idle Timeout: Configurable (default 120s)
- IPv6 Dual-Stack: Optional IPv6 binding
Reverse Proxy Capabilities
Domain-Based Routing
Route requests based on hostname with wildcard support (*.example.com).
Multiple domains per route, :authority pseudo-header handling.
Path-Based Routing
Prefix matching (/api/*), exact path matching,
regex patterns. Path normalization and query string preservation.
TLS Termination
Decrypt TLS at proxy, forward plain HTTP to backend. ACME automation, OCSP stapling, certificate hot-reload without restart.
TLS Re-encryption
Decrypt at proxy, re-encrypt to backend via HTTPS. mTLS support, custom CA verification, client certificate forwarding.
TLS Passthrough
SNI-based routing without decryption. Wildcard SNI support, PROXY Protocol v2, configurable timeout per route.
HTTP Redirect Server
Automatic HTTP to HTTPS redirect on port 80. 301 (permanent) or 308 (preserve method) responses.
Header Manipulation
- X-Forwarded Headers: X-Real-IP, X-Forwarded-For, X-Forwarded-Proto automatic injection across all transports (HTTP/1.1, HTTP/2, HTTP/3/QUIC, WebTransport); WAF scans XFF for injection attacks but skips SSRF rules, since loopback/RFC1918 IPs added by proxy hops are not SSRF indicators
- Custom Header Injection: Add headers per route via `[routes.add_headers]`
- Header Removal: Strip headers via `[routes.remove_headers]`
- Request ID: Auto-generate X-Request-ID with the `${request_id}` variable
- Server Branding: Replace Server header with PQCProxy branding
- Backend Identity Hiding: Strip backend-specific headers from responses
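An illustrative per-route example using the `[routes.add_headers]` and `[routes.remove_headers]` table names from this section; the exact value shapes and the domain are assumptions:

```toml
[[routes]]
domains = ["example.com"]

[routes.add_headers]
"X-Request-ID" = "${request_id}"   # auto-generated request ID variable
"X-Environment" = "production"

[routes.remove_headers]
headers = ["X-Powered-By", "Server"]
```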
Load Balancing
Six load balancing algorithms with enterprise-grade features:
| Algorithm | Description | Use Case |
|---|---|---|
| least_connections | Routes to server with fewest active connections | Default - optimal for varied request durations |
| round_robin | Simple rotation through servers | Uniform requests, equal server capacity |
| weighted_round_robin | nginx-style smooth weighted distribution | Heterogeneous server capacities |
| random | Random server selection | Simple distribution, stateless |
| ip_hash | Consistent hashing by client IP | Session persistence without cookies |
| least_response_time | Routes to fastest responding server (EMA tracking) | Latency-sensitive applications |
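For reference, the nginx-style smooth weighting used by weighted_round_robin can be sketched as follows. This is an illustrative standalone version, not the load_balancer.rs implementation:

```rust
/// nginx-style smooth weighted round-robin: each pick raises every
/// server's running score by its weight, selects the highest score,
/// then charges the winner the total weight. This spreads a heavy
/// server's turns out instead of bursting them.
struct SmoothWrr {
    weights: Vec<i64>,
    current: Vec<i64>,
}

impl SmoothWrr {
    fn new(weights: Vec<i64>) -> Self {
        let n = weights.len();
        Self { weights, current: vec![0; n] }
    }

    /// Returns the index of the next server to use.
    fn next(&mut self) -> usize {
        let total: i64 = self.weights.iter().sum();
        for (c, w) in self.current.iter_mut().zip(&self.weights) {
            *c += w;
        }
        // Pick the first maximum (matches nginx's tie-breaking).
        let mut best = 0;
        for i in 1..self.current.len() {
            if self.current[i] > self.current[best] {
                best = i;
            }
        }
        self.current[best] -= total;
        best
    }
}

fn main() {
    // Weights 5:1:1 yield the classic smooth sequence a a b a c a a.
    let mut wrr = SmoothWrr::new(vec![5, 1, 1]);
    let order: Vec<usize> = (0..7).map(|_| wrr.next()).collect();
    assert_eq!(order, vec![0, 0, 1, 0, 2, 0, 0]);
}
```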
Backend Types
| Type | Configuration | Use Case |
|---|---|---|
| http1 | `address = "127.0.0.1:8080"` | Standard HTTP/1.1 backend |
| http2 | `address = "127.0.0.1:8443"` | Multiplexed HTTP/2 backend |
| http3 | `address = "127.0.0.1:4443"` | QUIC to backend (experimental) |
| unix | `socket = "/run/php/php-fpm.sock"` | PHP-FPM, local services (Unix only) |
| tcp | `address = "127.0.0.1:3306"` | Raw TCP proxy (non-HTTP) |
Unix Socket Example (PHP-FPM)
```toml
# Backend: PHP-FPM via Unix socket
[backends.php]
name = "php"
type = "unix"
socket = "/run/php/php8.4-fpm.sock"
tls_mode = "terminate"
```
Backend Pool Features
- Multiple Servers: Multiple servers per pool with individual configuration
- Pool-Level Algorithm: Override default algorithm per pool
- Health-Aware Routing: Automatically skip unhealthy backends
- Health Check Path: Configurable per pool (default `/health`)
- Health Check Interval: Configurable (default 10s)
- Server Weighting: Weight per server for weighted algorithms
- Priority Failover: Primary (1), secondary (2), failover (3+) tiers
- Per-Server Limits: Max connections, timeout per server
- Per-Server TLS Mode: terminate, re-encrypt, or passthrough
- Unix Socket Support: PHP-FPM, local services (Linux/macOS)
- Canary Flag: `canary = true` marks a server as the canary target; `canary_weight_percent` (0-100) controls the share of new traffic it receives
Session Affinity (Sticky Sessions)
| Mode | Description | Configuration |
|---|---|---|
| cookie | Cookie-based affinity with configurable name/TTL; dedicated map with 1-hour entry eviction | Secure, HttpOnly, SameSite attributes |
| ip_hash | Client IP hash for session persistence; dedicated map with 1-hour entry eviction | No cookie required |
| header | Custom header-based affinity; own dedicated map prevents routing confusion when multiple modes active | X-Session-ID or custom header |
| none | No session affinity | Pure load balancing |
Advanced Load Balancer Features
- Proactive Health Checks: Background TCP-connect task checks each backend on a configurable interval; marks servers unreachable before traffic hits them, not only on real-request failures
- Slow Start: Gradually increase traffic to recovering servers (30s default, configurable initial weight %)
- Connection Draining: Graceful server removal without dropping connections (30s timeout)
- Request Queuing: Queue requests when all backends busy (max size, timeout configurable)
- Response Time Tracking: Exponential moving average (EMA) for least_response_time
- Pool Statistics: Real-time metrics per pool and server
- Canary / Traffic Splitting: Percentage-based canary routing with sticky `PQCPROXY_CANARY` cookie assignment (configurable TTL), optional sticky header pre-assignment, auto-rollback on sliding-window error rate threshold, and live admin control via `GET /canary`, `POST /canary/suspend/:id`, `POST /canary/resume/:id`, and `POST /canary/weight/:id`
- Traffic Shadowing / Mirroring: Per-route `[routes.shadow]` block sends a fire-and-forget async copy of each request to a secondary backend; the client only ever sees the primary response; configurable percentage (0-100), independent timeout, custom marker header name and value, and per-route response logging; zero allocation overhead on routes without shadow configured
- RFC 9111 Response Cache: Full Cache-Control parsing (max-age, s-maxage, no-cache, no-store, private, public); ETag/If-None-Match and Last-Modified/If-Modified-Since conditional requests answered with 304 Not Modified; Vary: * bypass; TTL-based expiry; size-bounded DashMap store; `x-cache: HIT/MISS` and `Age` response headers; disabled by default, enable with `[cache] enabled = true`; active on HTTP/1.1, HTTP/2, and HTTP/3/QUIC
- Hop-by-Hop Header Stripping: HTTP/1.1 connection-specific headers (`Transfer-Encoding`, `Connection`, `Keep-Alive`, `Proxy-Connection`, `Upgrade`, `TE`, `Trailer`, `Proxy-Authenticate`, `Proxy-Authorization`) stripped from every backend response before caching or forwarding; forbidden in HTTP/2 (RFC 9113 §8.2.2) and HTTP/3 (RFC 9114 §4.2), and forwarding them over QUIC causes `ERR_QUIC_PROTOCOL_ERROR`; applied across HTTP/1.1, HTTP/2, HTTP/3/QUIC, and WebTransport
TLS & Certificate Management
TLS 1.3 Only
Modern TLS 1.3 enforced by QUIC. Strong cipher suites, forward secrecy, reduced handshake latency.
Certificate Hot-Reload
Reload certificates without restart via file watching
or /reload endpoint. Zero downtime.
ACME Automation
RFC 8555 ACME client supporting any CA (Let's Encrypt, ZeroSSL, Buypass, etc.) with HTTP-01 and DNS-01 challenges. Auto-renewal 30 days before expiry.
OCSP Stapling
Automated OCSP response fetching with caching. Background refresh 5 minutes before expiry.
Certificate Features
- PEM Format: Standard PEM certificate loading
- Key Types: ECDSA, Ed25519, RSA private keys
- Certificate Chain: Full chain validation
- Expiry Monitoring: Track and alert on expiring certs
- mTLS: Client certificate validation, optional CA path
- Key Permissions: Enforce 0600/0400 file permissions
- Self-Signed: Generate self-signed certs for testing
HTTP/3 Advanced Features
Early Hints (103)
Preload CSS/JS/fonts via Link headers before final response. Preconnect and DNS prefetch hints. Path-based hint rules.
Priority Hints (RFC 9218)
Extensible Priorities with urgency levels (urgent, high, medium, low). Incremental flag, weight 0-256, content-type based priority.
Request Coalescing
Deduplicate identical GET/HEAD requests in flight. Shared response broadcast, automatic cleanup on completion.
Alt-Svc Advertisement
Dynamic HTTP/3 upgrade headers on all ports, built from udp_port and additional_ports at startup so every listener advertises its actual configured address.
Performance Monitoring
Server-Timing header for DevTools metrics. Client Hints (Accept-CH) for adaptive content. NEL (Network Error Logging) for error reporting.
Early Hints Configuration
```toml
[early_hints]
enabled = true
auto_detect = true # Auto-detect CSS/JS from HTML

# Preconnect to external origins
preconnect_origins = [
  "https://fonts.googleapis.com",
  "https://fonts.gstatic.com",
  "https://cdn.example.com"
]

# Path-based preload rules
[[early_hints.preload_rules]]
path_pattern = "/" # Match homepage
hints = [
  { type = "preload", href = "/css/main.css", as = "style" },
  { type = "preload", href = "/js/app.js", as = "script" },
  { type = "modulepreload", href = "/js/module.mjs" }
]

[[early_hints.preload_rules]]
path_pattern = "/blog/*" # Match blog pages
hints = [
  { type = "preload", href = "/css/blog.css", as = "style" },
  { type = "dns-prefetch", href = "https://analytics.example.com" }
]
```
Link Hint Types
| Type | Format | Use Case |
|---|---|---|
| preload | `<href>; rel=preload; as=style` | Critical CSS, JS, fonts |
| preconnect | `<href>; rel=preconnect` | External API origins |
| dns-prefetch | `<href>; rel=dns-prefetch` | Third-party domains |
| modulepreload | `<href>; rel=modulepreload` | ES modules |
| prerender | `<href>; rel=prerender` | Speculative page prerender |
Compression Support
| Algorithm | Priority | Best For |
|---|---|---|
| Brotli | 1 (Highest) | Text, HTML, CSS, JS - best compression ratio |
| Zstandard (zstd) | 2 | General purpose - fast compression/decompression |
| gzip | 3 | Universal compatibility |
| deflate | 4 (Lowest) | Legacy support |
- Content Negotiation: Accept-Encoding parsing, client preference matching
- Content-Type Detection: Skip already-compressed content (images, videos)
- Async Compression: Non-blocking compression for high throughput
- Per-Content-Type Rules: Configure compression by MIME type
WebTransport Support
Bidirectional streaming over HTTP/3 with lower latency than WebSockets:
Bidirectional Streams
Full-duplex communication with independent send/receive. Multiple concurrent streams without head-of-line blocking.
Unidirectional Streams
One-way data flow for push notifications, telemetry, or asymmetric communication patterns.
Unreliable Datagrams
Optional unreliable delivery for latency-sensitive data. Game state, real-time sensors where old data is stale.
Session Affinity
Route WebTransport sessions to consistent backends. Connection migration support across network changes.
Origin Validation (SR-02)
Per-session Origin header enforcement against the
`webtransport_allowed_origins` allowlist. Rejects
browser sessions from unlisted origins with 403 before stream setup.
JSON Operation Routing
WebTransport stream frames parsed as JSON operation objects
({"op":"encrypt",...}) and dispatched to the
appropriate backend handler, enabling structured crypto operations over HTTP/3.
Configurable Port
`webtransport_port` in `[server]` controls the dedicated WebTransport server bind port (default 4433), replacing the previously hardcoded value.
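Putting the two WebTransport settings together: `webtransport_port` and `webtransport_allowed_origins` are the key names this document uses, but whether the allowlist lives under `[server]` is an assumption, and the values are illustrative:

```toml
[server]
webtransport_port = 4433   # documented default
webtransport_allowed_origins = ["https://app.example.com"]  # SR-02 allowlist
```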
CORS Support
Per-route Cross-Origin Resource Sharing configuration:
```toml
[routes.cors]
allow_origin = "https://example.com" # Or "*" for any
allow_methods = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
allow_headers = ["Content-Type", "Authorization", "X-Request-ID"]
expose_headers = ["X-Request-ID", "X-RateLimit-Remaining"]
allow_credentials = true
max_age = 86400 # Preflight cache (24 hours)
```
- Preflight Handling: Automatic OPTIONS response for preflight requests
- Simple Requests: Direct CORS header injection
- Credentials: Cookie and Authorization header support
- Wildcard Origin: Support for `*` or specific origins
PROXY Protocol v2
Preserve original client IP when behind load balancers or CDNs:
```toml
[proxy_protocol]
enabled = true
trusted_proxies = ["10.0.0.0/8", "172.16.0.0/12"]
require_proxy_protocol = false # Allow direct connections too
timeout_secs = 5
```
- Client IP Preservation: Original client IP through proxy chain
- TLV Extensions: AWS VPC, GCP, Azure metadata support
- Protocol Agnostic: Works with TCP and UDP (QUIC)
- Security: Only accept from trusted sources
Metrics & Monitoring
Prometheus Metrics
| Category | Metrics |
|---|---|
| Requests | Count, duration/latency, size, status codes, compression ratio |
| Connections | Active count, per-protocol breakdown, duration, rate, TLS handshake time |
| Routes | Per-route request count, latency, error rates, status codes |
| Backend Pools | Server health, request count, response time EMA, active connections |
| Rate Limiting | Hits per key, blocks, anomaly detections, blocked IPs/fingerprints; redis_connected status in admin stats snapshot |
| TLS | Version distribution, cipher usage, cert expiry, PQC adoption, JA3/JA4 |
| WAF | WAF-blocked requests tracked separately from failed requests (waf_blocked_requests) |
Latency Percentiles (p50 / p95 / p99)
Latency percentiles are computed from a double-buffered 5-minute sliding window rather than a cumulative histogram. The active buffer accumulates request durations; every 2.5 minutes the buffers rotate, so reported percentiles always reflect the last 2.5–5 minutes of live traffic. Historical outliers from startup or past load spikes do not pollute current readings.
Percentiles are interpolated using Prometheus-style linear interpolation within each bucket. The histogram uses 18 fine-grained buckets with boundaries chosen to match SLO thresholds: 5, 10, 25, 50, 75, 100, 150, 200, 300, 500, 750, 1000, 1500, 2000, 3000, 5000, 10000 ms, and +Inf. This eliminates the step-function snapping seen with coarse bucket boundaries.
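The bucket interpolation step can be sketched as follows (an illustrative standalone function, not the metrics module itself; bucket contents in the example are made up):

```rust
/// Prometheus-style percentile estimate: find the bucket where the
/// target rank falls, then interpolate linearly inside that bucket.
/// `bounds` are bucket upper bounds, `counts` are per-bucket counts.
fn percentile(bounds: &[f64], counts: &[u64], q: f64) -> f64 {
    let total: u64 = counts.iter().sum();
    let rank = q * total as f64;
    let mut cumulative = 0u64;
    for (i, &c) in counts.iter().enumerate() {
        let prev_cum = cumulative;
        cumulative += c;
        if cumulative as f64 >= rank && c > 0 {
            let lower = if i == 0 { 0.0 } else { bounds[i - 1] };
            let upper = bounds[i];
            // Fraction of this bucket's population below the target rank.
            let frac = (rank - prev_cum as f64) / c as f64;
            return lower + (upper - lower) * frac;
        }
    }
    *bounds.last().unwrap()
}

fn main() {
    // First few SLO-aligned bucket upper bounds from the text, in ms.
    let bounds = [5.0, 10.0, 25.0, 50.0, 75.0, 100.0];
    // 100 samples: 60 in (5, 10], 30 in (10, 25], 10 in (25, 50].
    let counts = [0, 60, 30, 10, 0, 0];
    let p50 = percentile(&bounds, &counts, 0.50);
    // Rank 50 falls 50/60 of the way through the (5, 10] bucket.
    assert!((p50 - 9.1667).abs() < 0.01);
}
```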
Health Check Traffic Exclusion
Requests that carry the x-health-check-bypass: 1 request header are excluded from all metrics counters and the latency histogram:
- Not counted in `total_requests`, `successful_requests`, `failed_requests`
- Not added to the latency histogram (no impact on p50/p95/p99)
- Not tracked in `in_progress` connections
- Not recorded as endpoint errors
This prevents the health check cron’s synthetic cryptographic workflows from appearing as real errors or skewing production latency percentiles. The API server’s tower_http::TraceLayer is configured with .on_failure(()) on all three router layers, suppressing default ERROR-level log entries for bypass 500s. Genuine non-bypass 5xx responses are still logged as ERROR by the metrics middleware.
WAF Blocked Requests
Requests rejected by the security IP-blocklist or bot-blocklist receive an x-waf-block: 1 response header. The collector tracks these separately in waf_blocked_requests (distinct from failed_requests) so that bot attack traffic cannot inflate error-rate SLOs or depress domain health scores.
Admin API Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /health | GET | Public health check (no auth required; safe for LB probes) |
| /metrics | GET | Prometheus metrics (comprehensive); auth required |
| /metrics/json | GET | JSON metrics snapshot |
| /metrics/errors | GET | Per-endpoint error counts; filter with ?type=client or ?type=server |
| /reload | POST | Reload configuration (zero-downtime); audit logged |
| /reload?tls_only=true | POST | Reload TLS certificates only |
| /shutdown | POST | Graceful shutdown with connection draining; audit logged |
| /config | GET | Read-only configuration view |
| /backends | GET | Backend health status and stats |
| /tls | GET | TLS certificate information |
| /ocsp | GET | OCSP stapling status |
| /ocsp/refresh | POST | Force OCSP response refresh (5-min cooldown) |
| /acme | GET | ACME certificate status |
| /acme/renew | POST | Force certificate renewal (1-hour cooldown) |
| /ratelimit | GET | Rate limiter status and statistics |
| /health/quic | GET | QUIC listener health: port, migration status, active connections |
| /health/webtransport | GET | WebTransport health: active sessions, allowed origins, limits |
Configuration & Operations
Hot Reload
File watching for automatic config reload. SIGHUP signal support.
Zero-downtime updates via /reload endpoint.
Configuration Validation
--validate flag for syntax checking.
Schema validation, sensible defaults, clear error messages.
Environment Variables
Override config via PQCRYPTA_* environment variables, e.g. PQCRYPTA_CONFIG, PQCRYPTA_UDP_PORT, PQCRYPTA_LOG_LEVEL.
Graceful Shutdown
SIGTERM handling, connection draining, request completion. Configurable shutdown timeout.
Logging Features
- Structured Logging: JSON or text format for SIEM integration
- Log Levels: trace, debug, info, warn, error
- Access Logging: Separate access log stream with configurable fields
- Access Log Fields: Method, path, status, size, timing, client IP, User-Agent, Referrer, protocol, JA3/JA4, backend
- File Output: Log to file with path configuration
- OpenTelemetry Distributed Tracing: W3C TraceContext (`traceparent`/`tracestate`) + B3 multi-header + B3 single-header extraction and injection on all transports (HTTP/1.1, HTTP/2, HTTP/3/QUIC, WebTransport); composite propagator tries W3C first, then B3; OTLP HTTP/JSON export to any compatible collector (Jaeger, Tempo, etc.); `ParentBased(TraceIdRatioBased)` sampler with configurable ratio; trace IDs stamped into access-log entries; disabled by default, enable with `[otel] enabled = true` and set `otlp_endpoint`
Performance & Optimization
- Async/Await: Tokio runtime for non-blocking I/O
- Worker Threads: Auto-detect or manual configuration
- Connection Pooling: HTTP keep-alive with idle timeout
- Zero-Copy: Efficient memory operations where possible
- Memory-Efficient Streaming: Large file handling without buffering
- Buffer Configuration: Receive/send buffer sizes (1MB default)
- QPACK: HTTP/3 header compression with configurable table capacity
Safety & Hardening
Memory Safety
Rust guarantees, no unsafe in critical paths, bounds checking
Bounded Collections
DashMap size limits, LRU eviction, memory exhaustion prevention
Input Validation
Header validation, size limits, domain validation (RFC 1035)
ReDoS Prevention
Regex pattern size limits, validated patterns
Panic Prevention
No unwrap() in production, safe error handling throughout
Command Injection
ACME domain validation, safe path handling
Cross-Platform Support
| Platform | Architectures | Deployment |
|---|---|---|
| Linux | x86_64, ARM64 | Standalone, Docker, systemd |
| macOS | x86_64, Apple Silicon | Standalone, Docker, launchd |
| Windows | x86_64 | Standalone, Docker, Windows Service |
System Architecture
Architecture Diagram
Module Structure
The codebase is organized into focused, single-responsibility modules, totaling roughly 15,000 lines of Rust code.
| Module | File | Lines | Responsibility |
|---|---|---|---|
| HTTP Listener | http_listener.rs | 3,061 | HTTP/1.1 + HTTP/2 over TCP with TLS termination, PQC, performance headers, traffic shadowing |
| Rate Limiter | rate_limiter.rs | 1,943 | Multi-dimensional rate limiting with JA3/JA4, JWT, API key, composite keys; optional Redis distributed backend |
| Configuration | config.rs | 1,573 | TOML parsing, validation, hot-reload, HTTP/3 headers config |
| Fingerprinting | fingerprint.rs | 1,525 | JA3/JA4 TLS ClientHello fingerprint extraction |
| Metrics | metrics.rs | 1,342 | Prometheus metrics: TLS, connections, requests, backends, errors |
| Security | security.rs | 1,204 | DoS protection, GeoIP blocking, circuit breaker, auto-blocking |
| ACME | acme.rs | 1,137 | ACME CA (RFC 8555) HTTP-01/DNS-01 automation, renewal |
| Load Balancer | load_balancer.rs | 1,821 | 6 algorithms, canary routing, session affinity, health checks, slow start |
| Entry Point | main.rs | 1,012 | CLI parsing, server startup, signal handling |
| HTTP/3 Features | http3_features.rs | 879 | Early Hints (103), Priority (RFC 9218), Request Coalescing, Server-Timing |
| QUIC Listener | quic_listener.rs | 818 | QUIC/HTTP/3 UDP listener, stream multiplexing, flow control |
| PQC Extended | pqc_extended.rs | 792 | Algorithm detection, key security checks, provider verification |
| OCSP | ocsp.rs | 678 | OCSP stapling, response caching, background refresh |
| PQC TLS | pqc_tls.rs | 671 | Post-Quantum TLS provider (rustls + aws-lc-rs) |
| WebTransport | webtransport_server.rs | 617 | WebTransport sessions, bidirectional streams, datagrams |
| Admin | admin.rs | 598 | Admin HTTP API (health, metrics, reload, shutdown) |
| Proxy | proxy.rs | 576 | Backend connection pooling, request routing |
| TLS | tls.rs | 478 | TLS provider initialization, certificate loading |
| Compression | compression.rs | 451 | Brotli, Zstd, Gzip, Deflate with content negotiation |
| WT Handlers | handlers/webtransport.rs | 283 | WebTransport stream and datagram handlers |
| TLS Acceptor | tls_acceptor.rs | 258 | Custom TLS acceptor with fingerprint capture |
| OpenTelemetry | otel.rs | 321 | W3C TraceContext + B3 propagation, OTLP export, carrier implementations |
| Access Logger | access_logger.rs | 207 | HTTP access logging, W3C format, log-injection sanitization, trace-ID correlation |
| Library | lib.rs | 150 | Public API exports, module declarations |
Compile-Time Feature Flags
Build with specific features enabled via cargo build --features "...":
| Feature | Default | Description |
|---|---|---|
| pqc | Yes | PQC hybrid key exchange via OpenSSL 3.5+ with native ML-KEM |
| pqc-signatures | No | ML-DSA signatures (unstable aws-lc-rs API) |
| metrics | Yes | Prometheus metrics endpoint |
| admin-api | Yes | Admin HTTP API for health, reload, shutdown |
| geoip | Yes | GeoIP blocking via MaxMind DB |
| acme | Yes | ACME certificate automation (any ACME CA: Let's Encrypt, ZeroSSL, Buypass, etc.) |
| fips | No | FIPS 140-3 mode via aws-lc-rs (requires FIPS-validated build) |
| windows-service | No | Windows service support |
FIPS Mode
For environments requiring FIPS 140-3 compliance:
# Build with FIPS-validated cryptography
cargo build --release --features "fips"
# Requires FIPS-validated aws-lc-rs build
export AWS_LC_FIPS_SYS_STATIC=1
Request Flow
- Connection Established: Client connects via TCP (HTTP/1.1, HTTP/2) or UDP (QUIC/HTTP/3)
- TLS Handshake: PQC hybrid key exchange (X25519MLKEM768) negotiated
- JA3/JA4 Extraction: TLS fingerprint captured from ClientHello
- Security Check: Rate limiting, GeoIP, circuit breaker evaluation
- Route Matching: Domain and path matching against configured routes
- Load Balancing: Backend server selected based on algorithm
- Request Forwarding: Headers modified, request proxied to backend
- Response Processing: Compression applied, security headers added
- Metrics Recording: Prometheus metrics updated
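The flow above is a short-circuiting pipeline: any security or routing step can reject the request before a backend is touched. A toy sketch of that control flow (all types and function names here are illustrative, not the proxy's internal API):

```rust
// Hypothetical sketch of the request pipeline; names are illustrative only.
#[derive(Debug, PartialEq)]
enum Reject { RateLimited, GeoBlocked, NoRoute }

struct Request {
    client_ip: &'static str,
    host: &'static str,
    path: &'static str,
}

// Step 4: rate limiting, GeoIP, circuit breaker would be consulted here.
fn security_check(req: &Request) -> Result<(), Reject> {
    if req.client_ip == "203.0.113.66" { Err(Reject::GeoBlocked) } else { Ok(()) }
}

// Step 5: domain and path matching against configured routes.
fn match_route(req: &Request) -> Result<&'static str, Reject> {
    match (req.host, req.path) {
        ("api.example.com", p) if p.starts_with('/') => Ok("api"),
        _ => Err(Reject::NoRoute),
    }
}

fn handle(req: &Request) -> Result<&'static str, Reject> {
    security_check(req)?;            // reject early, before any backend work
    let backend = match_route(req)?; // steps 6-9 (LB, proxy, metrics) would follow
    Ok(backend)
}

fn main() {
    let ok = Request { client_ip: "198.51.100.1", host: "api.example.com", path: "/v1" };
    assert_eq!(handle(&ok), Ok("api"));
    let blocked = Request { client_ip: "203.0.113.66", host: "api.example.com", path: "/v1" };
    assert_eq!(handle(&blocked), Err(Reject::GeoBlocked));
    println!("pipeline ok");
}
```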
Dependencies (90+ crates)
Complete dependency list from Cargo.toml - all versions pinned for reproducible builds:
Async Runtime
tokio 1.43, tokio-util 0.7, tokio-stream 0.1, futures 0.3, futures-util 0.3, pin-project-lite 0.2
QUIC & HTTP/3
quinn 0.11, quinn-proto 0.11, h3 0.0.8, h3-quinn 0.0.10, wtransport (vendored)
HTTP
hyper 1.5, hyper-util 0.1, hyper-rustls 0.27, http 1.2, http-body-util 0.1, bytes 1.9, axum 0.7, axum-server 0.7, tower 0.5, tower-http 0.6, reqwest 0.12
TLS (rustls)
rustls 0.23, rustls-post-quantum 0.2, rustls-pemfile 2.2, rustls-native-certs 0.8, rustls-pki-types 1.13, tokio-rustls 0.26, webpki-roots 0.26, rcgen 0.13, x509-parser 0.16, pem 3.0, instant-acme 0.7 (ACME)
PQC & Crypto
aws-lc-rs 1.15, openssl 0.10 (optional), openssl-sys 0.9, openssl-probe 0.1, tokio-openssl 0.6, sha2 0.10, sha1 0.10, md-5 0.10
Rate Limiting
governor 0.7, nonzero_ext 0.3, failsafe 1.3, redis 0.27 (distributed backend), maxminddb 0.27, dashmap 6.1, ipnetwork 0.20, ipnet 2.10
Compression
async-compression 0.4, brotli 7.0, zstd 0.13, flate2 1.0
Config & Parsing
serde 1.0, serde_json 1.0, toml 0.8, clap 4.5, regex 1.11, url 2.5
Logging, Metrics & Tracing
tracing 0.1, tracing-subscriber 0.3, tracing-appender 0.2, tracing-opentelemetry 0.28, opentelemetry 0.27, opentelemetry_sdk 0.27, opentelemetry-otlp 0.27, prometheus 0.14
Utilities
anyhow 1.0, thiserror 2.0, uuid 1.11, chrono 0.4, time 0.3, base64 0.22, hex 0.4, rand 0.8, tempfile 3.14, octets 0.3
Concurrency
arc-swap 1.7, parking_lot 0.12, notify 7.0
Platform-Specific
nix 0.29 (Unix), hyperlocal 0.9 (Unix), signal-hook 0.3 (Unix), signal-hook-tokio 0.3, windows-service 0.7, socket2 0.5
WebTransport
wtransport 0.6 (vendored), wtransport-proto 0.6 (vendored), httlib-huffman 0.3
Dev & Testing
criterion 0.5, tokio-test 0.4, assert_matches 1.5
Prometheus Metric Names
Complete list of exported Prometheus metrics for monitoring integration:
| Metric | Type | Description |
|---|---|---|
| pqcrypta_uptime_seconds | gauge | Server uptime in seconds |
| pqcrypta_requests_total | counter | Total requests received |
| pqcrypta_requests_success_total | counter | Successful requests (2xx) |
| pqcrypta_requests_client_errors_total | counter | Client errors (4xx) |
| pqcrypta_requests_server_errors_total | counter | Server errors (5xx) |
| pqcrypta_requests_in_progress | gauge | Current requests in progress |
| pqcrypta_bytes_received_total | counter | Total bytes received |
| pqcrypta_bytes_sent_total | counter | Total bytes sent |
| pqcrypta_request_latency_seconds | histogram | Request latency distribution |
| pqcrypta_connections_active | gauge | Active connections |
| pqcrypta_connections_total | counter | Total connections established |
| pqcrypta_tls_handshake_duration_seconds | histogram | TLS handshake duration |
| pqcrypta_tls_version{version="1.3"} | counter | TLS version distribution |
| pqcrypta_pqc_connections_total | counter | PQC hybrid key exchange connections |
| pqcrypta_backend_requests_total{backend="..."} | counter | Requests per backend |
| pqcrypta_backend_latency_seconds{backend="..."} | histogram | Backend response time |
| pqcrypta_backend_health{backend="..."} | gauge | Backend health status (0/1) |
| pqcrypta_route_requests_total{route="..."} | counter | Requests per route |
| pqcrypta_rate_limit_hits_total | counter | Rate limit hits |
| pqcrypta_rate_limit_blocks_total | counter | Requests blocked by rate limiting |
| pqcrypta_circuit_breaker_state{backend="..."} | gauge | Circuit breaker state (0=closed, 1=open, 2=half-open) |
| pqcrypta_geoip_blocks_total{country="..."} | counter | GeoIP blocks by country |
| pqcrypta_compression_ratio | histogram | Compression ratio distribution |
Build Profile
Release build optimizations in Cargo.toml:
[profile.release]
lto = "thin" # Link-time optimization (thin for faster builds)
codegen-units = 1 # Single codegen unit for better optimization
panic = "abort" # Abort on panic (smaller binary)
strip = true # Strip symbols from binary
opt-level = 3 # Maximum optimization level
[lints.rust]
unsafe_code = "warn" # Warn on unsafe code usage
[lints.clippy]
all = "warn" # All clippy lints as warnings
pedantic = "warn" # Pedantic lints enabled
nursery = "warn" # Experimental lints enabled
Configuration Reference
Server Configuration
[server]
bind_address = "0.0.0.0" # IP address to bind
udp_port = 443 # Primary UDP port for QUIC
additional_ports = [4433, 4434] # Additional ports
max_connections = 10000 # Max concurrent connections
max_streams_per_connection = 1000 # Max streams per connection
keepalive_interval_secs = 15 # Keep-alive interval
max_idle_timeout_secs = 120 # Idle timeout
enable_ipv6 = true # Enable IPv6 dual-stack
worker_threads = 0 # Worker threads (0 = auto)
Minimal Configuration
# /etc/pqcrypta/proxy-config.toml
[server]
bind_address = "0.0.0.0"
udp_port = 443
additional_ports = [4433, 4434]
[tls]
cert_path = "/etc/letsencrypt/live/example.com/fullchain.pem"
key_path = "/etc/letsencrypt/live/example.com/privkey.pem"
# ca_cert_path = "/path/to/ca.pem" # CA cert for mTLS
# require_client_cert = true # Require client certs
alpn_protocols = ["h3", "h2", "http/1.1"]
min_version = "1.3" # TLS 1.3 only
ocsp_stapling = true
cert_reload_interval_secs = 3600 # Hot-reload certs
enable_0rtt = false # Disabled by default (security)
[http_redirect]
enabled = true
port = 80
# Backend: Apache on port 8080
[backends.apache]
name = "apache"
type = "http1"
address = "127.0.0.1:8080"
tls_mode = "terminate"
# Backend: Rust API on port 3003
[backends.api]
name = "api"
type = "http1"
address = "127.0.0.1:3003"
tls_mode = "terminate"
# Route: api.example.com -> API backend
[[routes]]
name = "api-route"
host = "api.example.com"
path_prefix = "/"
backend = "api"
forward_client_identity = true
priority = 100
# Route: example.com -> Apache backend
[[routes]]
name = "main-site"
host = "example.com"
path_prefix = "/"
backend = "apache"
forward_client_identity = true
priority = 100
Security Configuration
[security]
max_request_size = 10485760 # Max request size (10MB)
max_header_size = 65536 # Max header size (64KB)
connection_timeout_secs = 30 # Connection timeout
dos_protection = true # Enable DoS protection
blocked_ips = [] # Manually blocked IPs
allowed_ips = [] # IP allowlist (empty = all)
max_connections_per_ip = 100 # Max connections per IP
# GeoIP blocking
geoip_db_path = "/var/lib/pqcrypta/geoip/GeoLite2-City.mmdb"
blocked_countries = ["CN", "RU", "KP"] # ISO country codes
# Auto-blocking behavior
auto_block_threshold = 10 # Patterns before auto-block
auto_block_duration_secs = 300 # Block duration
# Error rate monitoring
error_4xx_threshold = 100 # 4xx errors before check
min_requests_for_error_check = 200 # Min requests for check
error_rate_threshold = 0.7 # Error rate threshold (70%)
error_window_secs = 60 # Error tracking window
# Zero-trust mode: startup aborts if any constraint is violated:
# all backends must use TLS, trusted_internal_cidrs must be empty,
# require_client_cert must be true, admin.hmac_secret must be set
zero_trust_mode = false # Enable zero-trust startup validation
[security.rate_limit]
requests_per_second = 100
burst_size = 200
auto_block_threshold = 1000
block_duration_secs = 3600
max_connections_per_ip = 100
Advanced Rate Limiting
Multi-dimensional rate limiting inspired by Cloudflare, Envoy, and HAProxy. It solves the corporate NAT problem, where many users share one gateway IP, and runs on all listener paths: HTTP/1.1, HTTP/2 (TCP), and HTTP/3 (QUIC). An optional Redis backend distributes per-key counters across all proxy instances using atomic Lua scripts, so quotas are shared cluster-wide rather than per-node.
Request arrives (HTTP/1.1, HTTP/2 via TCP OR HTTP/3 via QUIC)
  │
  ├─ [QUIC/HTTP3 only] Simple per-IP SecurityState governor + auto-block
  │
  └─ Advanced rate limiter (ALL paths: TCP HTTP/1.1 + HTTP/2 + QUIC HTTP/3)
       │
       ├─ Global token-bucket (in-memory: per-node DDoS protection)
       ├─ Resolve key (IP / API key / JA3 / JWT / composite)
       ├─ Redis available?
       │    YES → Lua token-bucket (per-second, distributed, atomic EVAL)
       │          Lua fixed-window (per-minute, distributed, INCR+EXPIRE)
       │          Lua fixed-window (per-hour, distributed, INCR+EXPIRE)
       │          └─ any command timeout → local DashMap fallback
       │    NO  → local DashMap bucket (in-process, original behaviour)
       └─ Adaptive anomaly detection (always local, per-node)
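The in-memory token bucket at the top of this chain can be sketched as a minimal standalone model (illustrative only; the proxy actually uses the governor crate, whose internals differ):

```rust
use std::time::{Duration, Instant};

/// Minimal token bucket: refills `rate` tokens per second, capped at `burst`.
struct TokenBucket {
    rate: f64,
    burst: f64,
    tokens: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        Self { rate, burst, tokens: burst, last: Instant::now() }
    }

    /// Returns true if the request is admitted, false if rate-limited.
    fn try_acquire(&mut self, now: Instant) -> bool {
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        // Refill proportionally to elapsed time, never exceeding the burst cap.
        self.tokens = (self.tokens + elapsed * self.rate).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Tiny numbers for the demo: 2 requests/sec, burst of 2.
    let mut b = TokenBucket::new(2.0, 2.0);
    let t0 = Instant::now();
    assert!(b.try_acquire(t0));
    assert!(b.try_acquire(t0));
    assert!(!b.try_acquire(t0)); // burst exhausted
    assert!(b.try_acquire(t0 + Duration::from_secs(1))); // refilled
    println!("token bucket ok");
}
```

The same shape applies at every tier (global, per-IP, per-fingerprint); only the rate and burst parameters change.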
[advanced_rate_limiting]
enabled = true
# Key resolution strategy
# Options: source_ip, xff_trusted, ja3_fingerprint, jwt_subject, composite
key_strategy = "composite"
# X-Forwarded-For trust configuration
xff_trust_depth = 1
trusted_proxies = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
# IPv6 subnet grouping (prevents per-host evasion)
ipv6_prefix_length = 64
# Global rate limits
[advanced_rate_limiting.global_limits]
requests_per_second = 10000
burst_size = 2000
# Per-IP limits
[advanced_rate_limiting.global_limits.per_ip]
requests_per_second = 100
burst_size = 200
requests_per_minute = 1000
requests_per_hour = 10000
# Per-JA3 fingerprint limits (NAT-friendly client identification)
[advanced_rate_limiting.global_limits.per_fingerprint]
requests_per_second = 100
burst_size = 50
requests_per_minute = 3000
requests_per_hour = 50000
# Per-API-key limits
[advanced_rate_limiting.global_limits.per_api_key]
requests_per_second = 500
burst_size = 250
requests_per_minute = 15000
requests_per_hour = 250000
# Per-composite-key limits (IP + JA3 + path)
[advanced_rate_limiting.global_limits.per_composite]
requests_per_second = 200
burst_size = 100
requests_per_minute = 6000
requests_per_hour = 100000
# Adaptive baseline learning (ML-inspired anomaly detection)
[advanced_rate_limiting.adaptive]
enabled = false
baseline_window_secs = 3600
sensitivity = 0.7
std_dev_multiplier = 3.0
min_samples = 1000
# Distributed rate limiting via Redis (optional: omit this section to use in-process limits only)
[advanced_rate_limiting.redis]
url = "redis://127.0.0.1:6379"
key_prefix = "pqcp" # Namespace prefix for all Redis keys
connect_timeout_ms = 2000
command_timeout_ms = 50 # Per-command; on timeout, silent local fallback
distribute_per_second = true # Include per-second token bucket in Redis
# Route-specific overrides
[[advanced_rate_limiting.route_overrides]]
route_name = "api-route"
per_ip_rps = 200
per_ip_burst = 400
[[advanced_rate_limiting.route_overrides]]
route_name = "login-route"
per_ip_rps = 10
per_ip_burst = 20
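The adaptive section describes baseline learning with a standard-deviation multiplier. A minimal model of that idea, under the assumption that a request rate is flagged when it exceeds mean + std_dev_multiplier x stddev of the learned baseline (the proxy's actual scoring is internal):

```rust
/// Flag a sample as anomalous relative to a learned baseline, using the
/// mean + std_dev_multiplier * stddev rule. Returns false while fewer than
/// `min_samples` baseline observations exist (still learning).
fn is_anomalous(baseline: &[f64], current: f64, std_dev_multiplier: f64, min_samples: usize) -> bool {
    if baseline.len() < min_samples {
        return false;
    }
    let n = baseline.len() as f64;
    let mean = baseline.iter().sum::<f64>() / n;
    let var = baseline.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    current > mean + std_dev_multiplier * var.sqrt()
}

fn main() {
    // Baseline: ~100 req/s with small jitter (min_samples lowered for the demo).
    let baseline: Vec<f64> = (0..100).map(|i| 100.0 + (i % 5) as f64).collect();
    assert!(!is_anomalous(&baseline, 105.0, 3.0, 50)); // within normal variation
    assert!(is_anomalous(&baseline, 200.0, 3.0, 50));  // clear spike
    assert!(!is_anomalous(&baseline[..10].to_vec(), 200.0, 3.0, 50)); // too few samples
    println!("anomaly detection ok");
}
```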
Load Balancer Configuration
[load_balancer]
enabled = true
default_algorithm = "least_connections"
[load_balancer.session_affinity]
cookie_name = "PQCPROXY_BACKEND"
cookie_ttl_secs = 3600
cookie_secure = true
cookie_httponly = true
cookie_samesite = "lax"
[load_balancer.queue]
enabled = true
max_size = 1000
timeout_ms = 5000
[load_balancer.slow_start]
enabled = true
duration_secs = 30
initial_weight_percent = 10
[load_balancer.connection_draining]
enabled = true
timeout_secs = 30
# Backend pool with multiple servers
[backend_pools.api]
name = "api"
algorithm = "least_connections"
health_aware = true
affinity = "cookie"
health_check_path = "/health"
health_check_interval_secs = 10
# Primary server
[[backend_pools.api.servers]]
address = "127.0.0.1:3003"
weight = 100
priority = 1
max_connections = 100
timeout_ms = 30000
tls_mode = "terminate"
# Secondary server
[[backend_pools.api.servers]]
address = "127.0.0.1:3004"
weight = 100
priority = 1
max_connections = 100
# Failover server
[[backend_pools.api.servers]]
address = "10.0.0.5:3003"
weight = 50
priority = 2
max_connections = 50
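The slow-start settings above ramp a newly healthy server's effective weight from initial_weight_percent up to its full weight over duration_secs. A sketch of that ramp, assuming linear interpolation (the exact curve is an implementation detail):

```rust
/// Effective weight during slow start, assuming a linear ramp from
/// `initial_percent` of the configured weight up to the full weight
/// over `duration_secs`.
fn slow_start_weight(weight: u32, initial_percent: u32, duration_secs: u64, elapsed_secs: u64) -> u32 {
    if elapsed_secs >= duration_secs {
        return weight; // fully warmed up
    }
    let ramp = initial_percent as f64
        + (100.0 - initial_percent as f64) * elapsed_secs as f64 / duration_secs as f64;
    (weight as f64 * ramp / 100.0).round() as u32
}

fn main() {
    // duration_secs = 30, initial_weight_percent = 10, weight = 100 (as configured above)
    assert_eq!(slow_start_weight(100, 10, 30, 0), 10);   // just came up
    assert_eq!(slow_start_weight(100, 10, 30, 15), 55);  // halfway through the ramp
    assert_eq!(slow_start_weight(100, 10, 30, 30), 100); // ramp complete
    println!("slow start ok");
}
```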
Canary / Percentage Traffic Splitting
Route a configurable percentage of new traffic to a canary server. The
[backend_pools.NAME.canary] section must appear before the first
[[backend_pools.NAME.servers]] entry. Mark the canary server with
canary = true and set canary_weight_percent.
# Pool-level canary settings (place before [[servers]] entries)
[backend_pools.api-pool.canary]
enabled = true
sticky = true # keep each client on the same canary
sticky_cookie_name = "PQCPROXY_CANARY" # cookie set on first assignment
sticky_cookie_ttl_secs = 3600 # lifetime in seconds
sticky_header = "X-Canary-Group" # optional: pre-assign via request header
auto_rollback = true # suspend on high error rate
rollback_error_rate = 0.05 # suspend at 5% errors
rollback_window_secs = 60 # sliding window length (seconds)
rollback_min_requests = 10 # minimum requests before rollback triggers
# Canary server: receives 5% of new traffic
[[backend_pools.api-pool.servers]]
address = "127.0.0.1:3005"
canary = true
canary_weight_percent = 5
weight = 100
priority = 1
max_connections = 100
timeout_ms = 30000
tls_mode = "terminate"
# Stable servers: receive the remaining 95%
[[backend_pools.api-pool.servers]]
address = "127.0.0.1:3003"
weight = 100
priority = 1
max_connections = 100
timeout_ms = 30000
tls_mode = "terminate"
Admin API endpoints for live canary control (all require Bearer token auth):
| Endpoint | Description |
|---|---|
| GET /canary | Status of all canary servers across all pools: weight, suspended flag, error rate, and window counters |
| POST /canary/suspend/:id | Suspend a canary server immediately; all traffic falls back to stable servers |
| POST /canary/resume/:id | Re-enable a suspended canary and reset its error window |
| POST /canary/weight/:id | Adjust canary_weight_percent at runtime without restarting; body: {"percent": 10} |
Traffic Shadowing / Mirroring
Add a [routes.shadow] block to any route to mirror traffic to a secondary
backend. The client sees only the primary response; the shadow response is
logged and discarded. All parameters are configurable; nothing is hardcoded.
[[routes]]
name = "api-route"
host = "api.example.com"
backend = "api-stable"
[routes.shadow]
backend = "api-canary" # Must be a key in [backends.*]
percent = 10 # 0-100% of requests to mirror (default 100)
timeout_ms = 5000 # Abandon shadow after this many ms (default 5000)
shadow_header = "X-Shadow-Request" # Header injected on shadow requests
shadow_header_value = "1" # Value for that header (default "1")
log_responses = true # Log shadow status + latency at INFO (default true)
Shadow requests carry the configurable marker header so the shadow backend can identify mirror traffic. Shadow errors and timeouts are logged as warnings and never affect the client. Body buffering is performed only on routes with a shadow configured, so standard routes incur no overhead.
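The percent knob can be honoured deterministically by hashing a per-request identifier, so the same request id always gets the same mirror decision. A sketch of that approach (hash-based sampling is an assumption here; the proxy may sample differently):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Decide whether to mirror a request: hash the request id into 0..100
/// and mirror when it falls below `percent`.
fn should_shadow(request_id: &str, percent: u64) -> bool {
    let mut h = DefaultHasher::new();
    request_id.hash(&mut h);
    h.finish() % 100 < percent
}

fn main() {
    // percent = 100 mirrors everything, percent = 0 mirrors nothing.
    assert!(should_shadow("req-abc", 100));
    assert!(!should_shadow("req-abc", 0));
    // Deterministic: the same id always yields the same decision.
    assert_eq!(should_shadow("req-42", 10), should_shadow("req-42", 10));
    // Roughly `percent`% of distinct ids get mirrored.
    let hits = (0..10_000).filter(|i| should_shadow(&format!("req-{i}"), 10)).count();
    assert!(hits > 500 && hits < 1500);
    println!("mirrored {hits} of 10000 at 10%");
}
```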
Connection Pool & Circuit Breaker
# HTTP Connection Pool
[connection_pool]
idle_timeout_secs = 90 # Idle connection timeout
max_idle_per_host = 10 # Max idle connections per host
max_connections_per_host = 100 # Max total connections per host
acquire_timeout_ms = 5000 # Connection acquire timeout
# Circuit Breaker (backend protection)
[circuit_breaker]
enabled = true
half_open_delay_secs = 30 # Open to Half-Open delay
half_open_max_requests = 3 # Max test requests in Half-Open
failure_threshold = 5 # Failures to trigger Open state
success_threshold = 2 # Successes to close breaker
stale_counter_cleanup_secs = 300 # Counter cleanup interval
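The breaker's three states map to a small state machine driven by the thresholds above. A minimal sketch with simplified failure counting (the failsafe crate the proxy depends on implements this differently internally):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum State { Closed, Open, HalfOpen }

/// Minimal circuit breaker: Closed -> Open after `failure_threshold`
/// consecutive failures; HalfOpen -> Closed after `success_threshold`
/// consecutive successes. The timed Open -> HalfOpen transition
/// (half_open_delay_secs) is modeled as an explicit call.
struct Breaker {
    state: State,
    failures: u32,
    successes: u32,
    failure_threshold: u32,
    success_threshold: u32,
}

impl Breaker {
    fn new(failure_threshold: u32, success_threshold: u32) -> Self {
        Self { state: State::Closed, failures: 0, successes: 0, failure_threshold, success_threshold }
    }

    fn record(&mut self, ok: bool) {
        match (self.state, ok) {
            (State::Closed, false) => {
                self.failures += 1;
                if self.failures >= self.failure_threshold { self.state = State::Open; }
            }
            (State::Closed, true) => self.failures = 0,
            (State::HalfOpen, true) => {
                self.successes += 1;
                if self.successes >= self.success_threshold {
                    self.state = State::Closed;
                    self.failures = 0;
                }
            }
            (State::HalfOpen, false) => { self.state = State::Open; self.successes = 0; }
            (State::Open, _) => {} // requests are rejected while Open
        }
    }

    /// Called when half_open_delay_secs elapses.
    fn half_open(&mut self) {
        if self.state == State::Open { self.state = State::HalfOpen; self.successes = 0; }
    }
}

fn main() {
    let mut b = Breaker::new(5, 2); // failure_threshold = 5, success_threshold = 2
    for _ in 0..5 { b.record(false); }
    assert_eq!(b.state, State::Open);   // opened after 5 failures
    b.half_open();                      // 30 s delay elapsed
    b.record(true);
    b.record(true);
    assert_eq!(b.state, State::Closed); // closed after 2 test successes
    println!("circuit breaker ok");
}
```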
TLS Configuration Modes
TLS Terminate (Default)
[backends.apache]
name = "apache"
type = "http1"
address = "127.0.0.1:8080"
tls_mode = "terminate" # Decrypt at proxy, plain HTTP to backend
TLS Re-encrypt
[backends.internal-api]
name = "internal-api"
type = "http1"
address = "internal.example.com:443"
tls_mode = "reencrypt"
tls_cert = "/path/to/ca.pem" # Optional: custom CA
tls_client_cert = "/path/to/client.pem" # Optional: mTLS client cert
tls_client_key = "/path/to/client.key" # Optional: mTLS client key
tls_skip_verify = false # DANGEROUS if true
tls_sni = "internal.example.com" # Optional: custom SNI
TLS Passthrough (SNI Routing)
[[passthrough_routes]]
name = "external-service"
sni = "external.example.com" # Supports wildcards: *.example.com
backend = "10.0.0.5:443"
proxy_protocol = false
timeout_ms = 30000
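Wildcard SNI patterns like *.example.com conventionally match exactly one label. A sketch of that matching rule (single-label semantics are assumed here; verify against the proxy's actual behaviour):

```rust
/// Match an SNI hostname against a pattern. `*.example.com` matches one
/// leading label (`app.example.com`) but not the bare apex or deeper names.
fn sni_matches(pattern: &str, host: &str) -> bool {
    match pattern.strip_prefix("*.") {
        Some(suffix) => match host.split_once('.') {
            // Exactly one label is consumed by the wildcard.
            Some((label, rest)) => !label.is_empty() && rest == suffix,
            None => false,
        },
        None => pattern.eq_ignore_ascii_case(host),
    }
}

fn main() {
    assert!(sni_matches("external.example.com", "external.example.com"));
    assert!(sni_matches("*.example.com", "app.example.com"));
    assert!(!sni_matches("*.example.com", "example.com"));     // apex not covered
    assert!(!sni_matches("*.example.com", "a.b.example.com")); // only one label
    println!("sni matching ok");
}
```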
Route Configuration
[[routes]]
name = "api-route"
host = "api.example.com" # Host pattern (supports wildcards)
path_prefix = "/api" # Path prefix matching
# path_exact = "/health" # Exact path matching
# path_regex = "^/v[0-9]+/.*" # Regex matching
backend = "api" # Backend name or pool
webtransport = false # Enable WebTransport
forward_client_identity = true # Forward client IP
client_identity_header = "X-Client-IP" # Custom header name
priority = 100 # Route priority (higher = first)
# redirect = "https://example.com" # Redirect URL
# redirect_permanent = true # 301 vs 302
allow_http11 = false # Allow HTTP/1.1 upgrade
skip_bot_blocking = false # Skip bot blocking
timeout_override_ms = 60000 # Route-specific timeout
# Add/remove headers
[routes.add_headers]
X-Forwarded-Proto = "https"
X-Request-ID = "${request_id}"
[routes.remove_headers]
X-Internal-Token = ""
# CORS configuration
[routes.cors]
allow_origin = "https://example.com" # Or "*" for any
allow_methods = ["GET", "POST", "PUT", "DELETE", "OPTIONS"]
allow_headers = ["Content-Type", "Authorization", "X-Request-ID"]
expose_headers = ["X-Request-ID"]
allow_credentials = true
max_age = 86400 # Preflight cache (seconds)
# Override global headers per-route
[routes.headers_override]
x_frame_options = "SAMEORIGIN"
ACME Certificate Automation
[acme]
enabled = true
domains = ["example.com", "api.example.com"]
email = "admin@example.com"
directory_url = "https://acme-v02.api.letsencrypt.org/directory"
challenge_type = "http-01" # http-01 or dns-01
certs_path = "/etc/pqcrypta/certs"
renewal_days = 30 # Renew 30 days before expiry
check_interval_hours = 12 # Check interval
http_port = 80 # HTTP-01 challenge port
Admin API Configuration
[admin]
enabled = true
bind_address = "127.0.0.1" # Bind to localhost only
port = 8082
require_mtls = false # Require mTLS for admin
auth_token = "your-secret-token" # Bearer token auth
allowed_ips = ["127.0.0.1", "::1"] # IP allowlist
OCSP Stapling
[ocsp]
enabled = true
cache_duration_secs = 3600
refresh_before_expiry_secs = 300
timeout_secs = 10
max_retries = 3
retry_delay_ms = 1000
JA3/JA4 Fingerprinting
[fingerprinting]
enabled = true
ja3_enabled = true
ja4_enabled = true
# Classification for rate limiting
[fingerprinting.classification]
browsers = ["Chrome", "Firefox", "Safari", "Edge"]
bots = ["Googlebot", "Bingbot", "curl", "wget"]
malware = ["known-c2-fingerprint"]
# Actions per category
[fingerprinting.actions]
browsers = "allow"
bots = "rate_limit"
malware = "block"
unknown = "rate_limit"
Logging Configuration
[logging]
level = "info" # trace, debug, info, warn, error
format = "json" # json or text (for SIEM integration)
output = "stdout" # stdout, stderr, or file path
access_log = true
access_log_path = "/var/log/pqcrypta/access.log"
security_log_path = "/var/log/pqcrypta/security.log"
# What to include in access logs (W3C Extended format)
[logging.access]
include_headers = false # Request headers (privacy consideration)
include_body = false # Request body (size/privacy)
include_timing = true # Request/response timing
include_ja3 = true # JA3 fingerprint
include_ja4 = true # JA4 fingerprint
include_backend = true # Backend server selected
include_geo = true # Country/city from GeoIP
include_error_details = true # Error information
# Access log fields (W3C format)
# - timestamp, client_ip, method, path, status, size
# - timing (request_time_ms, backend_time_ms)
# - user_agent, referrer, protocol
# - ja3, ja4, backend, country, city
# Security-specific logging
[logging.security]
log_blocked_requests = true # Log all blocked requests
log_rate_limited = true # Log rate limit hits
log_fingerprints = true # Log JA3/JA4 fingerprints
log_geoip = true # Log geographic data
log_circuit_breaker = true # Log breaker state changes
Access Log Example (JSON format)
{
"timestamp": "2026-01-30T12:34:56.789Z",
"client_ip": "203.0.113.42",
"method": "GET",
"path": "/api/v1/users",
"status": 200,
"size": 1542,
"request_time_ms": 23,
"backend_time_ms": 18,
"user_agent": "Mozilla/5.0 ...",
"protocol": "HTTP/3",
"ja3": "771,4865-4866-4867...",
"ja4": "t13d1516h2_002...",
"backend": "api-server-1",
"country": "US",
"city": "San Francisco"
}
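The access logger's module description mentions log-injection sanitization: the usual defence is stripping CR/LF and other control characters from attacker-controlled fields before they reach the log line. A sketch of that idea (the replacement policy shown is illustrative, not the proxy's exact escaping):

```rust
/// Replace control characters in attacker-controlled log fields so a crafted
/// User-Agent or path cannot forge extra log lines or mangle JSON output.
fn sanitize_log_field(input: &str) -> String {
    input.chars()
        .map(|c| if c.is_control() { '_' } else { c })
        .collect()
}

fn main() {
    // A newline in the path could otherwise inject a fake second log entry.
    let attack = "/api/v1/users\n{\"status\":200,\"path\":\"/admin\"}";
    let clean = sanitize_log_field(attack);
    assert!(!clean.contains('\n'));
    // Benign fields pass through unchanged.
    assert_eq!(sanitize_log_field("Mozilla/5.0 (X11)"), "Mozilla/5.0 (X11)");
    println!("{clean}");
}
```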
OpenTelemetry Configuration
[otel]
# Enable distributed tracing export (disabled by default)
enabled = false
# Service name reported to the collector
service_name = "pqcrypta-proxy"
# OTLP HTTP endpoint (Jaeger, Tempo, OpenTelemetry Collector, etc.)
otlp_endpoint = "http://localhost:4318"
# Sampling: 1.0 = always sample, 0.0 = never, 0.1 = 10% of root spans
# Uses ParentBased(TraceIdRatioBased): child spans follow the parent's sampling decision
sample_ratio = 1.0
# Optional resource attributes added to every span
# [otel.resource_attributes]
# deployment.environment = "production"
# host.name = "proxy-1"
Performance Tuning
[performance]
max_concurrent_connections = 10000
max_streams_per_connection = 1000
keep_alive_interval_secs = 15
max_idle_timeout_secs = 120
# Buffer sizes
receive_buffer_size = 1048576
send_buffer_size = 1048576
# HTTP/3 specific
[performance.http3]
max_field_section_size = 65536
qpack_max_table_capacity = 4096
enable_0rtt = false # Disabled by default for security
Linux Kernel Tuning (Recommended)
# /etc/sysctl.conf
# UDP Buffer sizes for QUIC performance
net.core.rmem_max = 26214400 # 25MB receive buffer
net.core.wmem_max = 26214400 # 25MB send buffer
net.core.rmem_default = 1048576 # 1MB default receive
net.core.wmem_default = 1048576 # 1MB default send
# UDP memory allocation
net.ipv4.udp_mem = 65536 131072 262144
# Network backlog for high connection rates
net.core.netdev_max_backlog = 65536
net.core.somaxconn = 65536
# TCP tuning (for HTTP/1.1 and HTTP/2)
net.ipv4.tcp_max_syn_backlog = 65536
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_rmem = 4096 87380 26214400
net.ipv4.tcp_wmem = 4096 65536 26214400
# File descriptor limits
fs.file-max = 1048576
# Apply changes
sudo sysctl -p
File Descriptor Limits
# /etc/security/limits.conf
pqcrypta soft nofile 65535
pqcrypta hard nofile 65535
pqcrypta soft nproc 4096
pqcrypta hard nproc 4096
Build Optimizations
# Build with native CPU optimizations
RUSTFLAGS="-C target-cpu=native" cargo build --release
# Build with maximum optimization (slower compile)
RUSTFLAGS="-C target-cpu=native -C opt-level=3" cargo build --release
# Profile-guided optimization (PGO)
# 1. Build instrumented binary
RUSTFLAGS="-Cprofile-generate=/tmp/pgo-data" cargo build --release
# 2. Run with typical workload
./target/release/pqcrypta-proxy --config config.toml &
# ... run benchmark traffic ...
# 3. Merge profile data
llvm-profdata merge -o /tmp/pgo-data/merged.profdata /tmp/pgo-data
# 4. Build optimized binary
RUSTFLAGS="-Cprofile-use=/tmp/pgo-data/merged.profdata" cargo build --release
Headers Configuration
[headers]
# HSTS (2-year preload)
hsts = "max-age=63072000; includeSubDomains; preload"
# Security headers
x_frame_options = "DENY"
x_content_type_options = "nosniff"
referrer_policy = "strict-origin-when-cross-origin"
permissions_policy = "camera=(), microphone=(), geolocation=(), interest-cohort=()"
# Cross-origin isolation
cross_origin_opener_policy = "same-origin"
cross_origin_embedder_policy = "require-corp"
cross_origin_resource_policy = "same-origin"
# Additional security
x_permitted_cross_domain_policies = "none"
x_download_options = "noopen"
x_dns_prefetch_control = "off"
# PQC branding (advertise capabilities)
x_quantum_resistant = "ML-KEM-1024, ML-DSA-87, X25519MLKEM768"
x_security_level = "Post-Quantum Ready"
# ============================================
# HTTP/3 Performance & Monitoring Headers
# ============================================
# Server-Timing header (W3C Server Timing spec) - Performance metrics
# Shows proxy processing time in browser DevTools Network tab
server_timing_enabled = true
# Accept-CH header - Client Hints for adaptive content delivery
# Enables browsers to send device/network info for responsive content
accept_ch = "DPR, Viewport-Width, Width, ECT, RTT, Downlink, Sec-CH-UA-Platform, Sec-CH-UA-Mobile"
# NEL (Network Error Logging) - Client-side error reporting
# JSON policy defining how browsers should report network errors
nel = '{"report_to":"default","max_age":86400,"include_subdomains":true}'
# Report-To header - Defines reporting endpoint for NEL
# JSON configuration for the Reporting API endpoint
report_to = '{"group":"default","max_age":86400,"endpoints":[{"url":"https://api.pqcrypta.com/reports"}]}'
# Priority header (RFC 9218) - HTTP/3 response prioritization
# u=0-7 (urgency, 0=highest), i (incremental delivery flag)
priority = "u=3"
CLI Arguments
pqcrypta-proxy [OPTIONS]
Options:
-c, --config <PATH> Configuration file [default: /etc/pqcrypta/config.toml]
--udp-port <PORT> Override UDP port for QUIC
--admin-port <PORT> Override admin API port
--log-level <LEVEL> Log level [default: info]
--json-logs Enable JSON log format
--no-pqc Disable PQC hybrid key exchange
--watch-config Watch config file for changes [default: true]
--validate Validate configuration only
-h, --help Print help
-V, --version Print version
Environment variables: PQCRYPTA_CONFIG, PQCRYPTA_UDP_PORT, PQCRYPTA_ADMIN_PORT, PQCRYPTA_LOG_LEVEL, PQCRYPTA_JSON_LOGS
Deployment Examples
Linux - Systemd Service
[Unit]
Description=PQCrypta Proxy - QUIC/HTTP3/WebTransport Proxy with PQC TLS
Documentation=https://github.com/PQCrypta/pqcrypta-proxy
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
User=pqcrypta
Group=pqcrypta
# Configuration
Environment=PQCRYPTA_CONFIG=/etc/pqcrypta/config.toml
Environment=PQCRYPTA_LOG_LEVEL=info
Environment=PQCRYPTA_JSON_LOGS=true
ExecStart=/usr/local/bin/pqcrypta-proxy --config ${PQCRYPTA_CONFIG}
ExecReload=/bin/kill -HUP $MAINPID
# Graceful shutdown
TimeoutStopSec=30
KillSignal=SIGTERM
KillMode=mixed
# Restart policy
Restart=on-failure
RestartSec=5
# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectKernelModules=true
MemoryDenyWriteExecute=true
# Read-only paths
ReadOnlyPaths=/etc/letsencrypt
ReadOnlyPaths=/etc/pqcrypta
ReadWritePaths=/var/log/pqcrypta-proxy
# Network capability
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
# Resource limits
LimitNOFILE=65535
LimitNPROC=4096
StandardOutput=journal
StandardError=journal
SyslogIdentifier=pqcrypta-proxy
[Install]
WantedBy=multi-user.target
# Installation
sudo cp pqcrypta-proxy.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable pqcrypta-proxy
sudo systemctl start pqcrypta-proxy
journalctl -u pqcrypta-proxy -f
macOS - launchd Service
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.pqcrypta.proxy</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/pqcrypta-proxy</string>
<string>--config</string>
<string>/usr/local/etc/pqcrypta/config.toml</string>
</array>
<key>EnvironmentVariables</key>
<dict>
<key>PQCRYPTA_LOG_LEVEL</key>
<string>info</string>
<key>PQCRYPTA_JSON_LOGS</key>
<string>true</string>
</dict>
<key>WorkingDirectory</key>
<string>/usr/local/var/pqcrypta-proxy</string>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<dict>
<key>SuccessfulExit</key>
<false/>
<key>NetworkState</key>
<true/>
</dict>
<key>ThrottleInterval</key>
<integer>5</integer>
<key>StandardOutPath</key>
<string>/usr/local/var/log/pqcrypta-proxy/stdout.log</string>
<key>StandardErrorPath</key>
<string>/usr/local/var/log/pqcrypta-proxy/stderr.log</string>
<key>SoftResourceLimits</key>
<dict>
<key>NumberOfFiles</key>
<integer>65535</integer>
</dict>
</dict>
</plist>
# Installation
cp com.pqcrypta.proxy.plist ~/Library/LaunchAgents/
launchctl load ~/Library/LaunchAgents/com.pqcrypta.proxy.plist
launchctl start com.pqcrypta.proxy
Windows - NSSM Service
# Run as Administrator
# Install NSSM from https://nssm.cc/
# Create service
nssm install pqcrypta-proxy "C:\Program Files\pqcrypta-proxy\pqcrypta-proxy.exe"
nssm set pqcrypta-proxy AppParameters "--config C:\ProgramData\pqcrypta\config.toml"
nssm set pqcrypta-proxy AppDirectory "C:\Program Files\pqcrypta-proxy"
# Environment variables
nssm set pqcrypta-proxy AppEnvironmentExtra "PQCRYPTA_LOG_LEVEL=info" "PQCRYPTA_JSON_LOGS=true"
# Logging with rotation
nssm set pqcrypta-proxy AppStdout "C:\ProgramData\pqcrypta\logs\stdout.log"
nssm set pqcrypta-proxy AppStderr "C:\ProgramData\pqcrypta\logs\stderr.log"
nssm set pqcrypta-proxy AppRotateFiles 1
nssm set pqcrypta-proxy AppRotateOnline 1
nssm set pqcrypta-proxy AppRotateBytes 10485760
# Restart policy
nssm set pqcrypta-proxy AppExit Default Restart
nssm set pqcrypta-proxy AppRestartDelay 5000
# Auto-start
nssm set pqcrypta-proxy Start SERVICE_AUTO_START
nssm set pqcrypta-proxy Description "PQCrypta Proxy - QUIC/HTTP3 with PQC TLS"
# Start service
nssm start pqcrypta-proxy
Windows - Native sc.exe (Alternative)
# Create service
sc.exe create pqcrypta-proxy binPath= "\"C:\Program Files\pqcrypta-proxy\pqcrypta-proxy.exe\" --config \"C:\ProgramData\pqcrypta\config.toml\"" DisplayName= "PQCrypta Proxy" start= auto
# Set description
sc.exe description pqcrypta-proxy "PQCrypta Proxy - QUIC/HTTP3/WebTransport with PQC TLS"
# Configure recovery (restart on failure)
sc.exe failure pqcrypta-proxy reset= 86400 actions= restart/5000/restart/10000/restart/30000
# Start service
sc.exe start pqcrypta-proxy
Docker
# Build image
docker build -t pqcrypta-proxy .
# Run container
docker run -d --name pqcrypta-proxy \
-p 80:80 \
-p 443:443/tcp \
-p 443:443/udp \
-p 8082:8082 \
-v /etc/letsencrypt:/etc/letsencrypt:ro \
-v "$(pwd)/config":/etc/pqcrypta:ro \
-v "$(pwd)/logs":/var/log/pqcrypta \
--ulimit nofile=65535:65535 \
pqcrypta-proxy
Docker Compose
version: '3.8'
services:
pqcrypta-proxy:
image: pqcrypta/proxy:latest
container_name: pqcrypta-proxy
ports:
- "80:80"
- "443:443/tcp"
- "443:443/udp"
- "8082:8082"
volumes:
- ./config:/etc/pqcrypta:ro
- /etc/letsencrypt:/etc/letsencrypt:ro
- ./logs:/var/log/pqcrypta
- ./data/geoip:/var/lib/pqcrypta/geoip:ro
environment:
- PQCRYPTA_LOG_LEVEL=info
- PQCRYPTA_JSON_LOGS=true
restart: unless-stopped
ulimits:
nofile:
soft: 65536
hard: 65536
sysctls:
- net.core.rmem_max=26214400
- net.core.wmem_max=26214400
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8082/health"]
interval: 30s
timeout: 10s
retries: 3
Security Features
Complete Security Checklist
- GeoIP blocks expire per geoip_block_duration_secs (default 24 h)
- CIDR ranges (e.g. 192.0.2.0/24) and individual IPs both supported
- Admin tokens use OsRng (cryptographically secure); constant-time comparison; per-IP + global brute-force lockout
- require_loopback = true (default) aborts startup when bind address is non-loopback to prevent plain-HTTP token exposure
- Shared SecurityState; blocked IPs and rate limiters are consistent across every port
- All requests pass through waf.inspect() for complete payload inspection
- openssl subprocesses call env_clear() to prevent PATH/LD_PRELOAD injection from a malicious environment
- Route HMAC signatures cover METHOD+PATH_AND_QUERY+TIMESTAMP (prevents query-parameter mutation attacks); optional X-Request-Nonce bound cryptographically into the signature for full replay prevention within a 300 s window; nonce store write guarded by prior signature validation
- X-Admin-Signature covers METHOD+PATH_AND_QUERY+TIMESTAMP alongside the bearer token; optional X-Admin-Nonce bound into the signature for full replay prevention within a 300 s window
- zero_trust_mode startup enforcement — aborts if any backend is plaintext, trusted_internal_cidrs is non-empty, mTLS (require_client_cert) is not enabled, or admin.hmac_secret is not set (bearer-only admin auth is insufficient for zero-trust)
JA3/JA4 TLS Fingerprinting
JA3/JA4 fingerprinting identifies clients by their TLS ClientHello characteristics, enabling detection of:
- Legitimate browsers: Chrome, Firefox, Safari, Edge - unique signatures per browser/version
- Bots and scrapers: curl, wget, Python requests, Scrapy, Selenium
- Security scanners: Nmap, Nessus, Qualys, OWASP ZAP
- Malware signatures: Known C2 communication patterns, cryptocurrency miners
- Corporate NAT users: Distinguish individual users behind shared gateway IPs
JA3 vs JA4 Comparison
| Feature | JA3 | JA4 |
|---|---|---|
| Format | MD5 hash | Human-readable string |
| QUIC Support | No | Yes (JA4+) |
| Readability | Hash only | Protocol_Cipher_Extension format |
| Stability | High | Higher (version-aware) |
NAT Problem Solution
Traditional rate limiting fails when thousands of users share a single corporate gateway IP. JA3/JA4 fingerprints create unique client identifiers based on TLS handshake patterns, allowing per-client rate limiting even behind NAT. Combined with composite keys (IP + JA3 + Path), this provides fine-grained control without blocking legitimate users.
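The composite strategy can be pictured as hashing the three dimensions into a single limiter key, so two clients behind the same NAT gateway still get separate buckets. A minimal stdlib-only Rust sketch (the function name and hash choice are illustrative, not the proxy's internals):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive one rate-limit bucket key from IP + JA3 fingerprint + request path.
// Clients sharing an IP but presenting different TLS fingerprints hash to
// different buckets, so one noisy client cannot exhaust the whole gateway's quota.
fn composite_key(client_ip: &str, ja3: &str, path: &str) -> u64 {
    let mut h = DefaultHasher::new();
    client_ip.hash(&mut h);
    ja3.hash(&mut h);
    path.hash(&mut h);
    h.finish()
}
```

Same IP, different fingerprint yields a different key, which is exactly what makes per-client limiting work behind shared gateways.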
JA3/JA4 Configuration
[fingerprinting]
enabled = true
ja3_enabled = true
ja4_enabled = true
log_fingerprints = true
# Blocking rules
[[fingerprinting.block_rules]]
name = "known-malware"
ja3 = "abc123def456..."
action = "block"
log = true
[[fingerprinting.block_rules]]
name = "scraper-pattern"
ja4_prefix = "t13d" # TLS 1.3, specific cipher pattern
action = "rate_limit"
rate_limit_rps = 10
Security Headers
[headers]
hsts = "max-age=63072000; includeSubDomains; preload"
x_frame_options = "DENY"
x_content_type_options = "nosniff"
referrer_policy = "strict-origin-when-cross-origin"
permissions_policy = "camera=(), microphone=(), geolocation=()"
cross_origin_opener_policy = "same-origin"
cross_origin_embedder_policy = "require-corp"
cross_origin_resource_policy = "same-origin"
# Custom branding headers
x_quantum_resistant = "ML-KEM-1024, ML-DSA-87, X25519MLKEM768"
x_security_level = "Post-Quantum Ready"
mTLS Configuration
[tls]
ca_cert_path = "/etc/pqcrypta/client-ca.pem"
require_client_cert = true
[admin]
require_mtls = true
DoS Protection
Comprehensive multi-layered DoS protection prevents service disruption through connection management, request validation, and automatic threat response.
Connection Limits
- Global max connections: Configurable limit (default 10,000)
- Per-IP connection limits: Prevent single-source flooding
- Per-backend limits: Protect backend servers
- Stream limits: Max streams per connection (default 1,000)
Request Validation
- Max request size: 10MB default (413 response)
- Max header size: 64KB default (431 response)
- Connection timeout: 30 seconds default
- Idle timeout: 120 seconds default
Memory Exhaustion Prevention
- Bounded collections: All DashMaps have size limits
- Eviction policies: LRU eviction for caches
- Request body limits: Streaming with size checks
- Header count limits: Maximum headers per request
Auto-Blocking
- Pattern detection: Identify attack patterns
- Auto-block threshold: 10 patterns trigger block
- Block duration: Configurable (default 5 minutes)
- Auto-expiration: Background cleanup of expired blocks
DoS Configuration
[security]
dos_protection = true
max_request_size = 10485760 # 10MB max body
max_header_size = 65536 # 64KB max headers
connection_timeout_secs = 30 # Connection timeout
max_connections_per_ip = 100 # Per-IP limit
# Auto-blocking behavior
auto_block_threshold = 10 # Patterns before block
auto_block_duration_secs = 300 # Block duration (5 min)
# Error rate monitoring (detects broken clients/attacks)
error_4xx_threshold = 100 # 4xx errors before check
min_requests_for_error_check = 200 # Min requests to evaluate
error_rate_threshold = 0.7 # 70% error rate triggers action
error_window_secs = 60 # Error tracking window
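The error-rate trigger above reduces to a simple predicate over a per-client window. A hedged sketch with the config defaults wired in (struct and function names are illustrative, not the proxy's actual types):

```rust
// Per-client counters accumulated over error_window_secs.
struct ErrorWindow {
    requests: u32,
    errors_4xx: u32,
}

// Flag a client only once min_requests_for_error_check traffic has been seen
// AND its 4xx ratio meets error_rate_threshold (0.7 = 70% by default).
fn should_act(w: &ErrorWindow, min_requests: u32, rate_threshold: f64) -> bool {
    w.requests >= min_requests
        && (w.errors_4xx as f64 / w.requests as f64) >= rate_threshold
}
```

The minimum-request gate prevents a handful of early 404s from tripping the check before a meaningful sample exists.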
Multi-Dimensional Rate Limiting
Advanced rate limiting solves the corporate NAT problem where thousands of users share a single gateway IP. Multiple key strategies can be combined for precise control.
Rate Limiting Key Strategies
| Strategy | Key | Use Case |
|---|---|---|
| source_ip | Client IP (or X-Forwarded-For) | Basic per-IP limiting |
| xff_trusted | Trusted X-Forwarded-For IP | Behind load balancer/CDN |
| ja3_fingerprint | JA3 TLS fingerprint hash | Per-client behind NAT |
| jwt_subject | JWT sub claim | Per-user rate limits |
| composite | IP + JA3 + Path | Fine-grained control |
Multi-Window Rate Limits
[advanced_rate_limiting]
enabled = true
key_strategy = "composite"
# Trust configuration for reverse proxy chains
xff_trust_depth = 1
trusted_proxies = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
# IPv6 subnet grouping (prevents /128 evasion)
ipv6_prefix_length = 64
# Global limits
[advanced_rate_limiting.global_limits]
requests_per_second = 10000
burst_size = 2000
# Per-IP multi-window limits
[advanced_rate_limiting.global_limits.per_ip]
requests_per_second = 100
burst_size = 200
requests_per_minute = 1000
requests_per_hour = 10000
# Per-JA3 limits (NAT-friendly)
[advanced_rate_limiting.global_limits.per_ja3]
requests_per_second = 500
burst_size = 100
# Per-JWT user limits
[advanced_rate_limiting.global_limits.per_jwt_subject]
requests_per_second = 50
burst_size = 100
# Composite key limits
[advanced_rate_limiting.global_limits.composite]
requests_per_second = 200
burst_size = 50
# Adaptive baseline learning
[advanced_rate_limiting.adaptive]
enabled = true
learning_window_secs = 3600
anomaly_threshold = 3.0 # Standard deviations
block_anomalies = false # Log-only mode
min_samples = 100 # Min samples for baseline
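An anomaly_threshold expressed in standard deviations is effectively a z-score test against the learned baseline. A hedged sketch, assuming the baseline tracks the mean and standard deviation of each client's request rate (names are illustrative):

```rust
// Flag a request rate that sits more than `threshold` standard deviations
// above the learned baseline mean (threshold = 3.0 in the config above).
fn is_anomalous(rate: f64, baseline_mean: f64, baseline_stddev: f64, threshold: f64) -> bool {
    if baseline_stddev == 0.0 {
        // No variance learned yet (fewer than min_samples observations):
        // never flag, matching the log-only conservative default.
        return false;
    }
    (rate - baseline_mean) / baseline_stddev > threshold
}
```

With block_anomalies = false the result would only be logged, letting operators tune the threshold before enforcing it.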
Bot Detection & Classification
Automatic bot detection using TLS fingerprinting, user-agent analysis, and behavioral patterns.
Known Fingerprint Database
| Category | Examples | Default Action |
|---|---|---|
| Browsers | Chrome, Firefox, Safari, Edge, Opera | Allow |
| Good Bots | Googlebot, Bingbot, DuckDuckBot, Slurp | Allow (with verification) |
| Tools | curl, wget, HTTPie, Postman | Rate Limit |
| Scrapers | Scrapy, Selenium, Puppeteer, PhantomJS | Rate Limit / Challenge |
| Scanners | Nmap, Nessus, Qualys, OWASP ZAP, Nikto | Log / Block |
| Malware | Known C2 patterns, cryptominers | Block |
Bot Detection Configuration
[fingerprinting]
enabled = true
ja3_enabled = true
ja4_enabled = true
log_fingerprints = true
# Client classification
[fingerprinting.classification]
browsers = ["Chrome", "Firefox", "Safari", "Edge", "Opera"]
good_bots = ["Googlebot", "Bingbot", "DuckDuckBot"]
tools = ["curl", "wget", "HTTPie"]
scrapers = ["Scrapy", "Selenium", "Puppeteer"]
scanners = ["Nmap", "Nessus", "ZAP", "Nikto"]
malware = ["known-c2-fingerprint", "cryptominer-sig"]
# Actions per category
[fingerprinting.actions]
browsers = "allow"
good_bots = "verify_then_allow" # Verify via reverse DNS
tools = "rate_limit"
scrapers = "rate_limit"
scanners = "log"
malware = "block"
unknown = "rate_limit"
# Malware signature matching
[[fingerprinting.block_rules]]
name = "known-c2-pattern"
ja3 = "abc123def456..."
action = "block"
log = true
[[fingerprinting.block_rules]]
name = "scraper-pattern"
ja4_prefix = "t13d"
action = "rate_limit"
rate_limit_rps = 10
Threat Detection
Real-time threat detection identifies and responds to attack patterns before they impact services.
Pattern Detection
- Rapid request bursts
- High error rate clients
- Suspicious user-agents
- Path traversal attempts
- SQL injection signatures
Anomaly Detection
- Baseline traffic learning
- Statistical deviation alerts
- Time-of-day patterns
- Geographic anomalies
- Request size outliers
Response Actions
- Log: Record for analysis
- Rate Limit: Reduce allowed rate
- Challenge: Proof-of-work/CAPTCHA
- Block: Immediate rejection
- Alert: Notify administrators
GeoIP Blocking
- MaxMind GeoLite2 integration
- Country-level blocking
- Region/city blocking
- ASN blocking
- Allow/deny lists
GeoIP Configuration
[security]
geoip_db_path = "/var/lib/pqcrypta/geoip/GeoLite2-City.mmdb"
blocked_countries = ["CN", "RU", "KP", "IR"] # ISO codes
allowed_countries = [] # Empty = all allowed
block_anonymous_proxies = true
block_tor_exit_nodes = true
# ASN blocking (hosting providers, botnets)
blocked_asns = [12345, 67890]
Circuit Breaker
Protects backend servers from cascade failures and enables graceful degradation.
Circuit Breaker States
| State | Behavior |
|---|---|
| Closed | Normal operation - requests flow to backend |
| Open | Backend unhealthy - requests fail fast (503) |
| Half-Open | Testing recovery - limited requests allowed |
[circuit_breaker]
enabled = true
failure_threshold = 5 # Failures to trip breaker
success_threshold = 2 # Successes to close breaker
half_open_delay_secs = 30 # Open to Half-Open delay
half_open_max_requests = 3 # Test requests in Half-Open
timeout_ms = 30000 # Request timeout
stale_counter_cleanup_secs = 300 # Counter cleanup interval
# Per-backend overrides
[circuit_breaker.overrides.api]
failure_threshold = 3 # Stricter for critical API
half_open_delay_secs = 15
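The three states map to a small state machine. A minimal sketch using the default thresholds (failure_threshold = 5, success_threshold = 2); the types and method names are illustrative, not the proxy's implementation:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    Closed,
    Open,
    HalfOpen,
}

struct Breaker {
    state: State,
    failures: u32,
    successes: u32,
    failure_threshold: u32,
    success_threshold: u32,
}

impl Breaker {
    fn on_failure(&mut self) {
        self.failures += 1;
        self.successes = 0;
        if self.failures >= self.failure_threshold {
            self.state = State::Open; // fail fast with 503 from here on
        }
    }

    fn on_success(&mut self) {
        if self.state == State::HalfOpen {
            self.successes += 1;
            if self.successes >= self.success_threshold {
                self.state = State::Closed; // backend considered recovered
                self.failures = 0;
            }
        } else {
            self.failures = 0; // any success in Closed resets the failure count
        }
    }

    // Called after half_open_delay_secs has elapsed in the Open state.
    fn on_half_open_delay_elapsed(&mut self) {
        if self.state == State::Open {
            self.state = State::HalfOpen; // allow half_open_max_requests probes
        }
    }
}
```

Five consecutive failures trip the breaker; after the delay it admits a few probe requests, and two probe successes close it again.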
Admin API Security
The admin API provides operational control and monitoring. Secure it appropriately for your environment.
Security Endpoints
| Endpoint | Method | Description |
|---|---|---|
| /security/status | GET | Security subsystem status |
| /security/blocked | GET | List blocked IPs |
| /security/blocked/{ip} | DELETE | Unblock an IP |
| /security/block/{ip} | POST | Manually block an IP |
| /security/ratelimit/status | GET | Rate limiter statistics |
| /security/ratelimit/reset/{key} | POST | Reset rate limit for key |
| /security/fingerprints | GET | Recent JA3/JA4 fingerprints |
| /security/threats | GET | Detected threat summary |
| /security/geoip/{ip} | GET | Lookup GeoIP for IP |
Admin API Security Configuration
[admin]
enabled = true
bind_address = "127.0.0.1" # Localhost only
port = 8082
# Authentication options
auth_token = "your-secret-token" # Bearer token
require_mtls = false # Require client certificate
allowed_ips = ["127.0.0.1", "::1"] # IP allowlist
# Sensitive endpoint protection
[admin.protected_endpoints]
shutdown = true # Require auth for /shutdown
reload = true # Require auth for /reload
block_ip = true # Require auth for blocking
Per-Route HMAC Signing
Routes can require clients to prove possession of a shared secret by including an HMAC-SHA256 signature on every request. The signature covers the full path and query string, preventing query-parameter mutation attacks that pass with path-only signing.
Signature Format
The signed message is:
METHOD\nPATH_AND_QUERY\nTIMESTAMP (without nonce)
METHOD\nPATH_AND_QUERY\nTIMESTAMP\nNONCE (with X-Request-Nonce)
The client sends three headers:
- X-Request-Signature — hex-encoded HMAC-SHA256 of the message above
- X-Request-Timestamp — Unix timestamp (seconds); rejected if outside ±300 s of server time
- X-Request-Nonce — optional unique value; when present it is bound into the signature and checked for uniqueness within the 300 s window (full replay prevention)
Configuration
[[routes]]
name = "api"
host = "api.example.com"
path_prefix = "/"
backend = "api"
[routes.security]
# Shared HMAC secret (min 32 chars, generate with: openssl rand -base64 32)
hmac_secret = "your-route-hmac-secret-at-least-32-chars"
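For illustration, a client can produce the three headers with standard openssl tooling; the secret here is the placeholder from the config above, and the path is a made-up example:

```shell
SECRET="your-route-hmac-secret-at-least-32-chars"   # must match routes.security.hmac_secret
TS=$(date +%s)
NONCE=$(openssl rand -hex 16)

# Signed message: METHOD\nPATH_AND_QUERY\nTIMESTAMP\nNONCE
MSG=$(printf 'GET\n/transfer?amount=100\n%s\n%s' "$TS" "$NONCE")
SIG=$(printf '%s' "$MSG" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')

echo "X-Request-Signature: $SIG"
echo "X-Request-Timestamp: $TS"
echo "X-Request-Nonce: $NONCE"
# Send with, e.g.:
# curl -H "X-Request-Signature: $SIG" -H "X-Request-Timestamp: $TS" \
#      -H "X-Request-Nonce: $NONCE" "https://api.example.com/transfer?amount=100"
```

Because the query string is inside the signed message, changing ?amount=100 after signing invalidates the signature.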
Nonce Replay Prevention
- Nonces stored as SHA-256 digests (32 bytes each) — arbitrarily long nonces cannot exhaust memory
- Nonce store write occurs after signature validation — prevents poisoning the store with bad-signature requests
- Lazy eviction on each check — entries expire after 300 s
- Backwards-compatible: absent nonce falls back to timestamp-only validation
Query-String Protection
- Signs path_and_query() (e.g. /transfer?amount=100), not just the path
- Prevents parameter-injection attacks: altering ?amount=100 to ?amount=100000 invalidates the signature
- Constant-time comparison via the subtle crate (no timing side-channel)
Zero-Trust Mode
Setting zero_trust_mode = true in [security] enables strict startup validation that enforces a zero-trust posture. The proxy aborts at startup if any of the following constraints are violated:
- All backends must use tls_mode = "reencrypt" or "passthrough" — no plaintext backend connections
- trusted_internal_cidrs must be empty — no CIDR-based implicit trust
- tls.require_client_cert = true must be set — mTLS required on all client connections
- admin.hmac_secret must be set — bearer-token-only admin authentication is insufficient for zero-trust
[security]
zero_trust_mode = true # Startup aborts if any constraint above is violated
[tls]
require_client_cert = true
ca_cert_path = "/etc/pqcrypta/client-ca.pem"
[admin]
hmac_secret = "your-admin-hmac-secret-at-least-32-chars"
auth_token = "your-admin-bearer-token-at-least-32-chars"
Request Validation
Defense-in-depth request validation prevents malformed requests from reaching backends.
HTTP Validation
Method validation, URI length limits, header name/value validation
Path Validation
Path traversal prevention, null byte rejection, encoding validation
Header Validation
Host header validation, header injection prevention, size limits
Body Validation
Content-Length validation, chunked encoding limits, body size limits
[security.validation]
# Request limits
max_uri_length = 8192
max_header_name_length = 256
max_header_value_length = 8192
max_headers_count = 100
# Path security
reject_path_traversal = true # Block ../
reject_null_bytes = true # Block %00
normalize_paths = true # Normalize //
# Method restrictions
allowed_methods = ["GET", "POST", "PUT", "DELETE", "PATCH", "HEAD", "OPTIONS"]
# Host header validation
require_host_header = true
validate_host_format = true
Compliance & Standards
PQCrypta Proxy supports compliance with major security frameworks:
NIST Post-Quantum
- FIPS 203: ML-KEM (Key Encapsulation)
- FIPS 204: ML-DSA (Digital Signatures)
- FIPS 205: SLH-DSA (Stateless Hash)
- X25519MLKEM768 hybrid (IETF draft)
FIPS 140-3
- Optional FIPS mode via aws-lc-rs
- FIPS-validated cryptographic modules
- Build with --features fips
NSA CNSA 2.0
- Hybrid classical + PQC approach
- NIST Level 3+ security
- Transition timeline compliance
Industry Standards
- PCI-DSS: TLS 1.3, strong ciphers
- HIPAA: Encryption in transit
- SOC 2: Access logging, audit trails
Logging & Audit Trail
Comprehensive logging for security monitoring, incident response, and compliance.
[logging]
level = "info" # trace, debug, info, warn, error
format = "json" # json or text for SIEM
output = "stdout" # stdout, stderr, or file
access_log = true
access_log_path = "/var/log/pqcrypta/access.log"
security_log_path = "/var/log/pqcrypta/security.log"
# Security-specific logging
[logging.security]
log_blocked_requests = true # Log all blocked requests
log_rate_limited = true # Log rate limit hits
log_fingerprints = true # Log JA3/JA4
log_geoip = true # Log geographic data
log_circuit_breaker = true # Log breaker state changes
# Access log fields
[logging.access]
include_headers = false # Request headers (privacy)
include_body = false # Request body (size/privacy)
include_timing = true # Request/response timing
include_ja3 = true # JA3 fingerprint
include_ja4 = true # JA4 fingerprint
include_backend = true # Backend server selected
include_geo = true # Country/city
include_error_details = true # Error information
Admin API Authentication
The admin API requires at least one of the following to be configured, or the proxy refuses to start:
- auth_token set in [admin] — Bearer token required on every admin request, or
- allowed_ips restricted to loopback addresses (127.x.x.x, ::1)
[admin]
enabled = true
bind_address = "127.0.0.1"
port = 8082
auth_token = "your-strong-secret-token-at-least-32-chars" # required unless allowed_ips is loopback-only
allowed_ips = ["127.0.0.1", "::1"]
Token Requirements
- Minimum 32 characters — the proxy rejects shorter tokens at startup
- Generate a strong token with openssl rand -base64 48
- Token comparison uses constant-time equality to prevent timing side-channel attacks
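Constant-time equality means the comparison takes the same time no matter where the first mismatching byte occurs, so an attacker cannot learn a token prefix by measuring response latency. A stdlib-only sketch of the idea (the proxy's route signing uses the subtle crate for this):

```rust
// Compare two byte strings without early exit: XOR every byte pair and
// OR the differences together, so the loop always runs to completion.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // length is not secret here; tokens have a fixed size
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}
```

A naive `a == b` short-circuits at the first mismatch, which is exactly the timing signal this construction removes.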
Brute-Force Protection
- Per-IP: 10 failures per 60-second window triggers a 429 lockout
- Distributed: 50 total failures across all IPs triggers a global 5-minute cooldown (base). Each successive trigger doubles the cooldown (5→10→20→30 min max)
- Endpoint cooldowns: /acme/renew limited to once per hour; /ocsp/refresh to once per 5 minutes
HMAC Proof-of-Possession
- Optional X-Admin-Signature header — HMAC-SHA256 over METHOD+PATH_AND_QUERY+TIMESTAMP, alongside the bearer token
- Optional X-Admin-Nonce header — bound into the signature for full replay prevention within the 300 s window
- Configure with admin.hmac_secret (min 32 chars); required when zero_trust_mode = true
- Constant-time comparison (no timing side-channel)
[admin]
enabled = true
bind_address = "127.0.0.1"
port = 8082
auth_token = "your-strong-bearer-token-at-least-32-chars"
# Optional HMAC proof-of-possession (required if zero_trust_mode = true)
hmac_secret = "your-admin-hmac-secret-at-least-32-chars"
JWT Rate Limiting
Per-subject JWT rate limiting verifies the token’s HMAC-SHA256 signature before trusting the sub claim. Configure a shared signing secret that matches the upstream token issuer:
[advanced_rate_limiting]
key_strategy = "jwt_subject"
jwt_secret = "your-hmac-sha256-secret-at-least-32-bytes"
Without jwt_secret, the jwt_subject strategy is disabled and falls back to the next configured key strategy.
Algorithm restriction: By default only HS256 is accepted. To allow additional HMAC variants:
[advanced_rate_limiting]
jwt_secret = "your-hmac-sha256-secret-at-least-32-bytes"
jwt_algorithms = ["HS256"] # Only HS256/HS384/HS512 are valid; non-HMAC strings are rejected
Insecure Backend TLS
tls_skip_verify = true on a backend completely disables certificate and signature verification for that upstream connection, enabling man-in-the-middle attacks on the proxy↔backend leg. The proxy logs a loud warning for every such backend at startup.
Production deployments reject tls_skip_verify at config load. Production is detected automatically when:
- ACME is enabled ([acme] enabled = true), or
- PQCRYPTA_ENV=production is set in the environment
To use tls_skip_verify in a development environment where neither condition applies, set PQCRYPTA_ENV=development:
PQCRYPTA_ENV=development pqcrypta-proxy --config config.toml
# Only valid when PQCRYPTA_ENV=development and acme.enabled = false
[backends.dev-backend]
name = "dev-backend"
tls_mode = "reencrypt"
address = "localhost:8443"
tls_skip_verify = true
Replace self-signed backend certificates with CA-signed ones before enabling ACME or moving to production.
0-RTT Early Data
0-RTT (TLS 1.3 early data) is disabled by default. When enabled, the proxy detects early-data connections at the TLS accept layer by inspecting the ClientHello and enforces per-route replay protection at the HTTP dispatch layer.
[tls]
enable_0rtt = true
# Methods safe for 0-RTT forwarding (idempotent, no side effects)
zero_rtt_safe_methods = ["GET", "HEAD"]
Per-Route Enforcement (RFC 8470)
Every route has an allow_0rtt flag that defaults to false. When a request arrives as TLS 1.3 early data on a route where allow_0rtt = false, the proxy responds with 425 Too Early and does not forward the request to the backend. This prevents replay attacks on non-idempotent operations (POST, PUT, DELETE, PATCH, etc.).
[[routes]]
name = "static-assets"
host = "cdn.example.com"
path_prefix = "/static/"
backend = "cdn"
allow_0rtt = true # safe: static files, no side effects
[[routes]]
name = "api"
host = "api.example.com"
path_prefix = "/"
backend = "api"
# allow_0rtt = false (default); early-data requests receive 425 Too Early
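The per-route decision reduces to a single predicate combining the route flag and the safe-method list. A hedged sketch with illustrative names (not the proxy's actual dispatch types):

```rust
// Returns Some(425) when an early-data request must be rejected per RFC 8470,
// or None when it may be forwarded to the backend.
fn early_data_verdict(
    is_early_data: bool,
    allow_0rtt: bool,              // the route's allow_0rtt flag
    method: &str,
    safe_methods: &[&str],         // tls.zero_rtt_safe_methods
) -> Option<u16> {
    if is_early_data && (!allow_0rtt || !safe_methods.contains(&method)) {
        Some(425) // Too Early: the client retries after the full handshake
    } else {
        None
    }
}
```

So a POST arriving as early data is refused even on a route with allow_0rtt = true, while a GET to a 0-RTT-enabled route goes through.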
The x-tls-early-data header used internally to propagate the early-data flag is stripped from all incoming requests before being set by the accept loop, and is removed from every outgoing backend request, so it cannot be forged by clients or leaked to backends.
Post-Quantum Cryptography
Why Post-Quantum?
Quantum computers pose an existential threat to current public-key cryptography. Shor's algorithm can factor large integers and compute discrete logarithms in polynomial time, breaking RSA, DSA, ECDSA, and ECDH. Even data encrypted today can be harvested and decrypted later when quantum computers become available ("harvest now, decrypt later" attacks).
PQCrypta Proxy Solution
PQCrypta Proxy implements hybrid key exchange combining classical X25519 with NIST-standardized ML-KEM-768. This provides security against both classical and quantum attacks - if either algorithm is broken, the other still protects the connection.
Supported KEMs (Key Encapsulation Mechanisms)
| Algorithm | Security Level | Type | Status |
|---|---|---|---|
| X25519MLKEM768 | NIST Level 3 | Hybrid | Recommended |
| SecP256r1MLKEM768 | NIST Level 3 | Hybrid | Available |
| SecP384r1MLKEM1024 | NIST Level 5 | Hybrid | Available |
| X448MLKEM1024 | NIST Level 5 | Hybrid | Available |
| ML-KEM-512 | NIST Level 1 | Pure PQC | Available |
| ML-KEM-768 | NIST Level 3 | Pure PQC | Available |
| ML-KEM-1024 | NIST Level 5 | Pure PQC | Available |
PQC Providers
| Provider | Backend | Features |
|---|---|---|
| rustls | aws-lc-rs | Pure Rust, QUIC-native, memory-safe (Default) |
| openssl3.5 | OpenSSL 3.5+ | Native ML-KEM, broader algorithm compatibility |
| auto | Auto-detect | Selects best available provider at runtime |
PQC Configuration
[pqc]
enabled = true
provider = "auto" # auto, rustls, or openssl3.5
preferred_kem = "X25519MLKEM768" # IETF standard (recommended)
fallback_to_classical = true # Fallback for old clients
min_security_level = 3 # NIST Level 3 minimum (1-5)
additional_kems = ["SecP256r1MLKEM768", "SecP384r1MLKEM1024"]
require_hybrid = false # Allow pure classical fallback
verify_provider = true # Verify PQC at startup
enable_signatures = false # ML-DSA (requires pqc-signatures feature)
# Security hardening
check_key_permissions = true # Check TLS key file permissions
strict_key_permissions = false # Fail startup on insecure permissions
# OpenSSL 3.5+ paths (when provider = "openssl3.5")
openssl_path = "/usr/local/openssl-3.5/bin/openssl"
openssl_lib_path = "/usr/local/openssl-3.5/lib64"
PQC Signatures (ML-DSA)
Enable ML-DSA digital signatures with the pqc-signatures feature:
# Build with ML-DSA signature support
cargo build --release --features "pqc-signatures"
| Signature | Security Level | Status |
|---|---|---|
| ML-DSA-44 | NIST Level 2 | Available (unstable) |
| ML-DSA-65 | NIST Level 3 | Available (unstable) |
| ML-DSA-87 | NIST Level 5 | Available (unstable) |
Security Levels
| NIST Level | Classical Equivalent | Quantum Resistance |
|---|---|---|
| Level 1 | AES-128 | Grover: 2^64 operations |
| Level 3 | AES-192 | Grover: 2^96 operations |
| Level 5 | AES-256 | Grover: 2^128 operations |
NIST Standards Reference
- FIPS 203: Module-Lattice-Based Key-Encapsulation Mechanism (ML-KEM)
- FIPS 204: Module-Lattice-Based Digital Signature (ML-DSA)
- FIPS 205: Stateless Hash-Based Digital Signature (SLH-DSA)
Hybrid Mode Benefits
Defense in Depth
If either classical or post-quantum algorithm is compromised, the other still protects the connection. Both must be broken simultaneously for an attack to succeed.
Backward Compatibility
Clients that don't support PQC can still connect using classical cryptography. The proxy negotiates the best available security.
Future-Proof
As post-quantum standards mature and new algorithms are standardized, the proxy can be updated without breaking existing deployments.
Compliance Ready
Meets emerging regulatory requirements for quantum-resistant cryptography. NSA CNSA 2.0 recommends hybrid approaches during transition.
Test Suite (130 Tests)
PQCrypta Proxy maintains comprehensive test coverage with 130 passing tests across unit tests, integration tests, and configuration validation. All tests are automated via CI/CD pipeline.
ACME Tests (5)
| Test Name | Description |
|---|---|
| test_acme_config_defaults | Validates default ACME configuration values |
| test_acme_service_creation | Tests ACME service initialization |
| test_challenge_type_serialization | HTTP-01/DNS-01 challenge serialization |
| test_get_cert_paths | Certificate path resolution |
| test_stored_account_serialization | Account credential persistence |
Compression Tests (3)
| Test Name | Description |
|---|---|
| test_compress_bytes | Brotli/Zstd/Gzip compression output |
| test_parse_accept_encoding | Accept-Encoding header parsing |
| test_should_compress_content_type | Content-type compression eligibility |
Config Tests (3)
| Test Name | Description |
|---|---|
| test_config_parsing | TOML configuration parsing |
| test_default_config | Default configuration values |
| test_host_matching | Domain/host matching logic |
Fingerprint Tests (36)
| Test Name | Description |
|---|---|
| test_classify_api_client_fingerprints | API client detection |
| test_classify_bot_fingerprints | Bot fingerprint classification |
| test_classify_browser_fingerprints | Browser fingerprint detection |
| test_classify_malicious_fingerprints | Malware/malicious client detection |
| test_classify_scanner_fingerprints | Security scanner detection |
| test_extract_ja3_grease_filtering | GREASE value filtering |
| test_extract_ja3_invalid_record_type | Invalid TLS record handling |
| test_extract_ja3_minimal_client_hello | Minimal ClientHello parsing |
| test_extract_ja3_not_client_hello | Non-ClientHello message handling |
| test_extract_ja3_too_short | Truncated message handling |
| test_extract_ja3_with_sni | SNI extraction in JA3 |
| test_fingerprint_extractor_creation | Extractor initialization |
| test_grease_detection | GREASE extension detection |
| test_ja3_hash_determinism | JA3 hash consistency |
| test_ja4_hash_generation | JA4 fingerprint generation |
| test_ja4_no_sni | JA4 without SNI extension |
| test_known_fingerprints | Known fingerprint database lookup |
| test_parse_alpn_empty | Empty ALPN handling |
| test_parse_alpn_h3 | HTTP/3 ALPN parsing |
| test_parse_alpn_single_protocol | Single protocol ALPN |
| test_parse_alpn_valid | Valid ALPN extension |
| test_parse_ec_point_formats_empty | Empty EC point formats |
| test_parse_ec_point_formats_single | Single EC point format |
| test_parse_ec_point_formats_valid | Valid EC point formats |
| test_parse_signature_algorithms_empty | Empty signature algorithms |
| test_parse_signature_algorithms_too_short | Truncated signature algorithms |
| test_parse_signature_algorithms_valid | Valid signature algorithms |
| test_parse_sni_empty | Empty SNI handling |
| test_parse_sni_too_short | Truncated SNI |
| test_parse_sni_truncated_name | Truncated hostname in SNI |
| test_parse_sni_valid | Valid SNI parsing |
| test_parse_sni_wrong_name_type | Invalid SNI name type |
| test_parse_supported_groups_empty | Empty supported groups |
| test_parse_supported_groups_too_short | Truncated supported groups |
| test_parse_supported_groups_valid | Valid supported groups |
| test_parse_supported_groups_with_grease | Supported groups with GREASE |
HTTP/3 Features Tests (4)
| Test Name | Description |
|---|---|
| test_coalesce_key | Request coalescing key generation |
| test_link_hint_to_header | Early Hints Link header |
| test_parse_client_priority | RFC 9218 priority parsing |
| test_priority_header | Priority header generation |
HTTP Listener Tests (19)
| Test Name | Description |
|---|---|
| test_proxy_v2_constants | PROXY protocol v2 constants |
| test_proxy_v2_header_ipv4 | IPv4 PROXY header generation |
| test_proxy_v2_header_ipv6 | IPv6 PROXY header generation |
| test_proxy_v2_header_mixed_ipv4_to_ipv6 | Mixed IP version handling |
| test_proxy_v2_header_mixed_ipv6_to_ipv4 | Mixed IP version handling |
| test_proxy_v2_signature_constant | PROXY v2 signature validation |
| test_sni_extraction_empty_data | Empty ClientHello handling |
| test_sni_extraction_long_hostname | Long hostname extraction |
| test_sni_extraction_long_session_id | Long session ID handling |
| test_sni_extraction_no_extensions | ClientHello without extensions |
| test_sni_extraction_not_client_hello | Non-ClientHello handling |
| test_sni_extraction_not_handshake | Non-handshake message |
| test_sni_extraction_punycode_hostname | IDN/Punycode hostname |
| test_sni_extraction_simple | Simple SNI extraction |
| test_sni_extraction_subdomain | Subdomain SNI extraction |
| test_sni_extraction_too_short | Truncated message handling |
| test_sni_extraction_truncated_handshake | Truncated handshake |
| test_sni_extraction_with_grease_extensions | GREASE extension handling |
| test_sni_extraction_with_port_like_name | Hostname with port-like pattern |
Load Balancer Tests (5)
| Test Name | Description |
|---|---|
| test_backend_health_tracking | Backend health state management |
| test_effective_weight_with_slow_start | Slow start weight calculation |
| test_extract_session_cookie | Session affinity cookie extraction |
| test_least_connections_selection | Least connections algorithm |
| test_session_cookie_generation | Session cookie creation |
Metrics Tests (7)
| Test Name | Description |
|---|---|
| test_latency_histogram | Latency histogram recording |
| test_metrics_registry | Prometheus registry initialization |
| test_pool_metrics | Connection pool metrics |
| test_rate_limiter_metrics | Rate limiter metrics |
| test_request_metrics | Request counter metrics |
| test_route_metrics | Per-route metrics |
| test_tls_metrics | TLS handshake metrics |
OCSP Tests (4)
| Test Name | Description |
|---|---|
| test_ocsp_config_defaults | Default OCSP configuration |
| test_ocsp_status_info_serialization | OCSP status JSON serialization |
| test_wrap_sequence_long | Long ASN.1 sequence wrapping |
| test_wrap_sequence_short | Short ASN.1 sequence wrapping |
PQC Extended Tests (5)
| Test Name | Description |
|---|---|
| test_default_config | Default PQC configuration |
| test_kem_hybrid_detection | Hybrid KEM detection |
| test_kem_parsing | KEM algorithm parsing |
| test_kem_security_levels | NIST security level mapping |
| test_security_level_filter | Security level filtering |
PQC TLS Tests (4)
| Test Name | Description |
|---|---|
| test_hybrid_detection | Hybrid key exchange detection |
| test_kem_algorithm_names | KEM algorithm name mapping |
| test_kem_parsing | KEM string parsing |
| test_security_levels | Security level validation |
Rate Limiter Tests (8)
| Test Name | Description |
|---|---|
| test_adaptive_baseline | ML-inspired baseline learning |
| test_composite_key | Multi-dimensional key composition |
| test_ipv6_subnet_normalization | IPv6 /64 subnet grouping |
| test_jwt_subject_extraction | JWT subject claim extraction |
| test_rate_limit_key | Rate limit key generation |
| test_rate_limiter_allows_normal_traffic | Normal traffic allowed |
| test_rate_limiter_blocks_excess_traffic | Excess traffic blocked |
| test_sliding_window_counter | Sliding window algorithm |
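The sliding-window counter exercised by `test_sliding_window_counter` can be sketched as follows. The algorithm weights the previous fixed window's count by how much of it still overlaps the sliding window; the struct and method names here are assumptions, not the proxy's real types:

```rust
// Illustrative sliding-window-counter rate limiter.

struct SlidingWindow {
    window_secs: u64,
    limit: f64,
    prev_count: u64,
    curr_count: u64,
    curr_window_start: u64, // unix seconds, aligned to window_secs
}

impl SlidingWindow {
    fn new(window_secs: u64, limit: f64) -> Self {
        Self { window_secs, limit, prev_count: 0, curr_count: 0, curr_window_start: 0 }
    }

    /// Record one request at time `now` (unix seconds); return whether it is allowed.
    fn allow(&mut self, now: u64) -> bool {
        let window_start = now - now % self.window_secs;
        if window_start != self.curr_window_start {
            // Roll the window: current becomes previous, or both reset if
            // more than one full window has passed.
            self.prev_count = if window_start == self.curr_window_start + self.window_secs {
                self.curr_count
            } else {
                0
            };
            self.curr_count = 0;
            self.curr_window_start = window_start;
        }
        // Weight the previous window by its remaining overlap with the sliding window.
        let elapsed = (now - window_start) as f64 / self.window_secs as f64;
        let estimate = self.prev_count as f64 * (1.0 - elapsed) + self.curr_count as f64;
        if estimate < self.limit {
            self.curr_count += 1;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut rl = SlidingWindow::new(10, 5.0);
    // Five requests at the same instant fit under the limit; the sixth is rejected.
    let allowed = (0..6).filter(|_| rl.allow(1000)).count();
    assert_eq!(allowed, 5);
}
```

Compared to a plain fixed window, the weighted estimate smooths out the burst that a fixed-window limiter would admit at each window boundary.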
Security Tests (3)
| Test Name | Description |
|---|---|
| test_blocked_ip_expiration | Blocked IP TTL expiration |
| test_circuit_breaker | Circuit breaker state transitions |
| test_ja3_classification | JA3 fingerprint classification |
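The state transitions that `test_circuit_breaker` verifies follow the standard pattern: Closed opens after consecutive failures, Open moves to HalfOpen after a cool-down, and HalfOpen closes on a successful probe. A minimal sketch, with illustrative names rather than the proxy's actual types:

```rust
// Hypothetical circuit-breaker state machine.

#[derive(Debug, PartialEq, Clone, Copy)]
enum State { Closed, Open, HalfOpen }

struct CircuitBreaker {
    state: State,
    failure_threshold: u32,
    consecutive_failures: u32,
    cooldown_secs: u64,
    opened_at: u64,
}

impl CircuitBreaker {
    fn new(failure_threshold: u32, cooldown_secs: u64) -> Self {
        Self {
            state: State::Closed,
            failure_threshold,
            consecutive_failures: 0,
            cooldown_secs,
            opened_at: 0,
        }
    }

    /// May a request be attempted at time `now` (unix seconds)?
    fn allow_request(&mut self, now: u64) -> bool {
        if self.state == State::Open && now >= self.opened_at + self.cooldown_secs {
            self.state = State::HalfOpen; // let one probe request through
        }
        self.state != State::Open
    }

    fn record_success(&mut self) {
        self.consecutive_failures = 0;
        self.state = State::Closed;
    }

    fn record_failure(&mut self, now: u64) {
        self.consecutive_failures += 1;
        if self.consecutive_failures >= self.failure_threshold || self.state == State::HalfOpen {
            self.state = State::Open;
            self.opened_at = now;
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3, 30);
    for t in 0..3 { cb.record_failure(t); }
    assert_eq!(cb.state, State::Open);
    assert!(!cb.allow_request(10)); // still cooling down
    assert!(cb.allow_request(40));  // cool-down elapsed -> HalfOpen probe allowed
    cb.record_success();
    assert_eq!(cb.state, State::Closed);
}
```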
TLS Tests (2)
| Test Name | Description |
|---|---|
| test_pqc_kem_names | PQC KEM algorithm names |
| test_pqc_kem_parsing | PQC KEM parsing |
TLS Acceptor Tests (1)
| Test Name | Description |
|---|---|
| test_fingerprinted_connection | Connection with fingerprint capture |
Integration Tests (10)
| Test Name | Description |
|---|---|
| test_backend_config_tls_options | Backend TLS configuration options |
| test_backend_type_variants | Backend type variant handling |
| test_config_loading | Full configuration loading |
| test_passthrough_route_config | TLS passthrough route configuration |
| test_pqc_capabilities | PQC capability detection |
| test_pqc_config_loading | PQC configuration loading |
| test_pqc_default_values | PQC default value validation |
| test_pqc_extended_types | Extended PQC type handling |
| test_route_config_cors_headers | CORS header configuration |
| test_socket_addr_parsing | Socket address parsing |
Unit Config Tests (11)
| Test Name | Description |
|---|---|
| test_admin_socket_addr | Admin API socket address |
| test_backend_types | Backend type enumeration |
| test_config_parsing_minimal | Minimal config parsing |
| test_default_config | Default config values |
| test_passthrough_route_parsing | Passthrough route parsing |
| test_pqc_config_defaults | PQC config defaults |
| test_pqc_provider_options | PQC provider options |
| test_rate_limit_parsing | Rate limit config parsing |
| test_server_socket_addr | Server socket address |
| test_tls_mode_defaults | TLS mode defaults |
| test_tls_mode_parsing | TLS mode parsing |
Running Tests
```bash
# Run all tests
cargo test

# Run tests with output
cargo test -- --nocapture

# Run specific module tests
cargo test fingerprint::
cargo test rate_limiter::
cargo test pqc_tls::

# Run integration tests only
cargo test --test integration

# Run with coverage (requires cargo-tarpaulin)
cargo tarpaulin --out Html
```
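The module-scoped commands above work because `cargo test` filters by each test's full path. A test defined inside a module's `tests` submodule matches a `module_name::` filter; the module and test names below are illustrative, not the proxy's actual code:

```rust
// Sketch of the conventional in-module test layout that makes
// `cargo test rate_limiter::` match only this module's tests.

mod rate_limiter {
    /// Example helper: compose a multi-dimensional rate-limit key.
    pub fn composite_key(ip: &str, ja3: &str) -> String {
        format!("{ip}|{ja3}")
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        // Full test path: `rate_limiter::tests::test_composite_key`,
        // so `cargo test rate_limiter::` selects it.
        #[test]
        fn test_composite_key() {
            assert_eq!(composite_key("203.0.113.7", "abc123"), "203.0.113.7|abc123");
        }
    }
}

fn main() {
    // Outside the test harness, the helper is still usable directly.
    assert_eq!(rate_limiter::composite_key("203.0.113.7", "abc123"), "203.0.113.7|abc123");
}
```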
CI/CD Pipeline
All tests run automatically on every push via GitHub Actions:
- Clippy - Rust linter for code quality
- Rustfmt - Code formatting validation
- Tests (Ubuntu) - Full test suite on Linux
- Tests (macOS) - Cross-platform validation
- Build (All platforms) - Linux, macOS, Windows builds
- Security Audit - Dependency vulnerability scanning
- Documentation - API docs generation