Site Infrastructure
Lesson 3: Web server hardware requirements
Objective: Identify the hardware and software factors that drive real-world web performance, then choose safe, modern defaults.

Web Server Hardware Requirements (2025 Guidance)

Key idea: For most web workloads, I/O, networking, and memory constrain throughput and tail latency more than raw CPU frequency. Prioritize fast storage (NVMe), sufficient RAM (to avoid swapping), and high-throughput NICs with the right offloads. Then align the software stack (HTTP/2/3, TLS, caching) and OS tuning.

This page modernizes an older checklist that referenced legacy buses (e.g., “ultra-wide SCSI”), 10/100 Mbps NICs, and very small RAM sizes. Below you’ll find updated guidance that fits contemporary ecommerce stacks, containers, and CDNs.

What Actually Constrains a Web Server

Web Server Hardware Components (Modernized)

I/O subsystem priority for web servers
1) I/O subsystem first. Web activity is I/O intensive. Use NVMe SSDs on PCIe 4.0/5.0, separate OS/logs/data volumes, and a modern filesystem (XFS/ext4 or ZFS where appropriate). Keep queue depth healthy, enable TRIM/discard, and monitor latency (p99/p999), not just throughput.
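
To sanity-check the storage path, a short fio latency run plus a periodic TRIM timer covers most of it. This is a sketch: the directory and job parameters below are placeholders, so adjust them for your hardware.

# Measure random-read latency percentiles against a test file on the data volume
# (/srv/www, size, and runtime are illustrative values)
fio --name=latcheck --directory=/srv/www --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=32 --direct=1 --size=1g \
    --runtime=30 --time_based --lat_percentiles=1

# Prefer scheduled TRIM over mount-time discard on most distros
systemctl enable --now fstrim.timer
fstrim -av   # one-off trim of all mounted filesystems that support it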
Network card considerations for web servers
2) Network subsystem. Choose 10/25/40/100 GbE NICs with multi-queue, RSS, and sensible offloads (TSO/LRO/GRO). Consider SR-IOV or DPUs/SmartNICs for noisy multi-tenant hosts. Tune MTU consistently end-to-end if using jumbo frames. Aim for high throughput with minimal CPU.
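
A quick way to inspect queues and offloads is ethtool; eth0, the queue count, and the MTU below are placeholders to validate against your own NIC and network path.

# eth0 is a placeholder interface name
ethtool -l eth0                 # show current/max RX/TX queue counts
ethtool -L eth0 combined 8      # spread load across 8 queues (roughly match serving cores)
ethtool -k eth0 | egrep 'tcp-segmentation|generic-receive|large-receive'
ethtool -K eth0 tso on gro on   # enable offloads; test LRO carefully if the host routes/bridges
ip link set dev eth0 mtu 9000   # jumbo frames only if every hop supports them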
Memory sizing for web servers
3) Memory. Size RAM so the OS rarely swaps. Use ECC DDR5 (or DDR4 ECC on older platforms), populate channels evenly, and be NUMA-aware on multi-socket systems. Typical starting points: 16–32 GB for small nodes, 64–128 GB for medium, 256 GB+ for heavy cache or DB co-location.
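
A few commands to confirm the sizing assumptions on a Linux host; the swappiness value is an illustrative default, not a mandate.

free -h                                         # confirm headroom and that swap stays idle
numactl --hardware                              # NUMA node count and per-node memory
dmidecode -t memory | grep -iE 'locator|size'   # check that channels are populated evenly (needs root)
sysctl -w vm.swappiness=10                      # bias the kernel toward reclaiming cache over swapping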
CPU role in web servers
4) CPU. Don’t undersell it: TLS, compression, SSR, and WAF rules are CPU-heavy. Favor modern server CPUs (e.g., AMD EPYC, Intel Xeon) with plentiful cores, strong per-core performance, and AES-NI/VAES for crypto. Match worker counts to core topology; pin interrupts and hot threads to reduce cross-NUMA chatter.
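
To size worker counts and confirm crypto acceleration on Linux, a minimal check looks like this:

lscpu | egrep 'Model name|Socket|Core|Thread'   # core/thread topology for worker sizing
grep -m1 -oE 'vaes|aes' /proc/cpuinfo           # confirm AES-NI/VAES is exposed for TLS
nproc                                           # feed into worker_processes / thread pools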
Single vs multi-socket and scaling
5) Scale up vs. scale out. A single-socket, high-core-count CPU often beats a dual-socket system on both latency and operational simplicity. If you do run multi-socket, tune for NUMA (IRQ affinity, memory locality); a minimal sketch follows below. Prefer horizontal scaling behind a load balancer/CDN rather than chasing ever-larger machines.
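
A minimal NUMA-pinning sketch, assuming the NIC sits on node 0; the interface name, node number, and IRQ number are placeholders.

cat /sys/class/net/eth0/device/numa_node        # which NUMA node the NIC is attached to
numactl --cpunodebind=0 --membind=0 nginx -g 'daemon off;'   # run the proxy on that node
echo 0-7 > /proc/irq/120/smp_affinity_list      # pin a NIC queue IRQ to local cores (120 is a placeholder)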
Server software impact on performance
6) Server software matters most. Use a modern reverse proxy (NGINX/Envoy/Apache/Caddy), enable HTTP/2 and HTTP/3/QUIC, terminate TLS with strong ciphers, and add a caching layer (e.g., CDN + Varnish or NGINX micro-caching). Keep the kernel current and instrument everything.
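
Once the stack is up, verify what clients actually negotiate; www.example.com is a placeholder, and the second check requires a curl build with HTTP/3 support.

curl -sI -o /dev/null -w 'negotiated: HTTP/%{http_version}\n' https://www.example.com/
curl -sI -o /dev/null -w 'negotiated: HTTP/%{http_version}\n' --http3 https://www.example.com/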

Baseline Sizing Recipes

OS & Stack Tuning Quick Wins

# Linux networking (apply cautiously; validate for your distro/version)
sysctl -w net.core.somaxconn=4096
sysctl -w net.core.netdev_max_backlog=32768
sysctl -w net.ipv4.tcp_fastopen=3
sysctl -w net.ipv4.tcp_fin_timeout=15
sysctl -w net.ipv4.tcp_mtu_probing=1
# Keepalive/timeouts should align with proxy and CDN settings
# NGINX sketch (enable HTTP/2, plus HTTP/3 if your build includes QUIC support)
worker_processes auto;
events { worker_connections 4096; }

http {
  sendfile on; tcp_nopush on; tcp_nodelay on;
  keepalive_timeout  20s;
  types_hash_max_size 4096;

  # TLS (example only; maintain your own policy and cert automation)
  ssl_protocols TLSv1.2 TLSv1.3;

  # Micro-caching for dynamic-but-cacheable responses
  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=micro:100m max_size=5g inactive=60m use_temp_path=off;

  server {
    listen 443 ssl http2;  # for HTTP/3, also add 'listen 443 quic reuseport;' on QUIC-capable builds
    location / {
      proxy_pass http://app;
      proxy_cache micro;
      proxy_cache_valid 200 1m;
    }
  }
}
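
Whatever values you settle on, test the configuration before reloading so a typo never takes the proxy down:

nginx -t && systemctl reload nginx    # check syntax, then reload without dropping connections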

Checklist Before Going Live

Where the Original Content Fell Short (Self-Critique)


Site Connectivity

Right-size connectivity for today’s traffic. Instead of legacy T-carriers (T-1/T-3), plan capacity in Gbps with headroom for peaks and TLS/HTTP overhead. Most ecommerce sites terminate user traffic at a CDN/edge and send a much smaller, cache-miss stream to the origin. Provision links/NICs for the origin based on uncached request rates and average response size, not total page weight.

  • Rule of thumb: Size for the 95th/99th percentile and keep 2× headroom for promos and incidents.
  • Origin vs edge: Push static assets and long-TTL content to CDN; keep origin focused on dynamic requests and APIs.
  • NICs & paths: Prefer 10/25/40/100 GbE with multi-queue/RSS. If on-prem, use redundant uplinks, diverse paths/ISPs, and LACP where appropriate.

Quick Planning Math

Estimate origin egress to validate link/NIC sizing:

# Example: 200 requests/sec, 120 KB average uncached response
rps=200
avg_kb=120
egress_mbps=$(awk "BEGIN { print $rps * $avg_kb * 8 / 1024 }")
echo "${egress_mbps} Mbps"   # ≈ 187.5 Mbps → provision ≥ 400 Mbps of effective headroom

Edge & Load Balancing

DNS, TLS & Certificates

Choosing an HTTP Server (Concurrency Reality)

“How many concurrent connections can it handle?” depends on kernel/network tuning, memory, and workload more than the brand. Today’s leading servers—NGINX, Envoy, Apache httpd (event MPM), and Caddy—can each sustain tens of thousands of keep-alive connections with proper tuning.
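
In practice, connection counts hit file-descriptor and ephemeral-port ceilings before the server software itself. The values below are illustrative starting points, not recommendations.

ulimit -n                                      # per-process fd limit for the current shell
sysctl fs.file-max fs.nr_open                  # system-wide and per-process ceilings
sysctl -w net.ipv4.ip_local_port_range='1024 65535'   # more ephemeral ports for upstream connections
# For systemd-managed services, raise LimitNOFILE in an override
# (systemctl edit nginx -> [Service] LimitNOFILE=262144)
# and set worker_rlimit_nofile to match in nginx.conf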

Practical guidance

Observability & Resilience

Web Server Hardware - Quiz

Click the Quiz link below to take a multiple-choice quiz about Web server requirements.
Web Server Hardware - Quiz
