| Lesson 1 | Planning for Network Requirements |
| Objective | How to Plan for Network Requirements in a Modern Web Deployment |
In the previous module, you learned about the components needed for network communication. This module shifts focus from components to planning — how a web design and technology team evaluates, specifies, and implements the network requirements that support a successful web deployment. At this stage in the development process, the technology team moves to the foreground. They are responsible for translating business goals and traffic expectations into concrete network architecture decisions that determine whether the site performs reliably under real-world conditions.
When you have completed this module, you will be able to:
- Identify the traffic classes a deployment must support and select the appropriate transport protocol for each
- Plan IP addressing, subnetting, and routing for a segmented network
- Estimate bandwidth requirements and model cloud egress costs
- Specify performance thresholds for latency, jitter, and packet loss and enforce them with QoS
- Incorporate redundancy, container networking, and SD-WAN into a modern network architecture
- Integrate security requirements and identify network risks at each layer of the TCP/IP stack
Every network requirement decision — bandwidth, latency, protocol support, quality of service, redundancy, security policy, and scalability — is grounded in the behavior of the TCP/IP stack. The TCP/IP model is not a theoretical abstraction reserved for textbooks. It is the actual protocol architecture implemented in virtually every modern network: enterprise LANs and WANs, data centers, cloud environments, IoT deployments, and the public internet. When a technology team writes network requirements, they are directly or indirectly specifying behavior that emerges from the TCP/IP stack.
Understanding how the TCP/IP stack influences planning decisions is therefore the foundation of this module. The following sections examine each major planning dimension and how TCP/IP behavior shapes the requirements in that area.
The first planning decision concerns what kinds of traffic the network must support and what delivery guarantees each traffic type requires. This determination maps directly onto the Transport layer of the TCP/IP stack, where the choice between TCP and UDP governs how data is delivered.
TCP (Transmission Control Protocol) provides connection-oriented, reliable, ordered delivery with retransmission, flow control, and congestion avoidance. It is the correct choice for bulk data transfer, file sharing, web page delivery, database transactions, and any application where data integrity matters more than delivery speed.
UDP (User Datagram Protocol) is connectionless and does not guarantee delivery or ordering. It is the correct choice for real-time applications — VoIP, video conferencing, live streaming, and online gaming — where a retransmitted packet arriving late is worse than a dropped packet. UDP combined with RTP (Real-time Transport Protocol) is the standard pattern for real-time media delivery.
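The transport-layer distinction shows up directly in the socket API: TCP requires a handshake before any data moves, while UDP simply sends datagrams. A minimal loopback sketch in Python (ports are chosen by the OS; this is illustrative, not a production pattern):

```python
import socket

# TCP: connection-oriented, reliable, ordered delivery (SOCK_STREAM).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())  # three-way handshake happens here
conn, _ = server.accept()
client.sendall(b"order matters")
tcp_data = conn.recv(1024)

# UDP: connectionless datagrams (SOCK_DGRAM), no handshake, no delivery guarantee.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"best effort", receiver.getsockname())
udp_data, _ = receiver.recvfrom(1024)

for s in (server, client, conn, receiver, sender):
    s.close()
```

On a real network the UDP datagram could be lost or reordered with no notification; the TCP stream could not.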
QUIC represents a newer pattern that runs over UDP but provides TCP-like reliability, multiplexing, and built-in TLS encryption. HTTP/3 uses QUIC as its transport, and modern CDN and cloud providers increasingly support it for reduced connection latency compared to TCP with TLS.
A practical example illustrates why traffic type analysis matters at planning time. If a business needs a network for video conferencing, file servers, and IoT sensors simultaneously, each traffic class has different requirements: video conferencing needs low latency and low jitter but can tolerate occasional packet loss; file servers need reliable ordered delivery and sustained throughput; IoT sensors may need low bandwidth but high device density with minimal protocol overhead. Planning the network without distinguishing these traffic classes produces a one-size-fits-all architecture that underserves each use case.
The Internet layer of the TCP/IP stack governs addressing and routing — two decisions that must be made early in network planning because they affect every other layer of the infrastructure.
IPv4 vs IPv6: IPv4 address exhaustion has made IPv6 adoption a planning requirement rather than an option for new deployments. Modern network planning should include dual-stack support — running IPv4 and IPv6 simultaneously — and transition mechanisms for environments where legacy systems still depend on IPv4. Cloud providers including AWS, Google Cloud, and Azure all support IPv6 natively, and content delivery networks increasingly prefer IPv6 for its larger address space and improved routing efficiency.
Subnetting and hierarchical addressing: Effective IP address planning uses hierarchical subnetting to organize network segments by function, security zone, or geographic location. A web deployment might separate public-facing web servers, application servers, database servers, and management infrastructure into distinct subnets with controlled routing between them. This segmentation supports both security policy enforcement and traffic management.
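The hierarchical subnetting described above can be sketched with Python's standard ipaddress module. The address block and tier names below are hypothetical placeholders for a real addressing plan:

```python
import ipaddress

# Hypothetical /16 block for the deployment, carved into /24 segments by function.
block = ipaddress.ip_network("10.20.0.0/16")
subnets = block.subnets(new_prefix=24)

plan = {
    "public-web":  next(subnets),
    "app-servers": next(subnets),
    "databases":   next(subnets),
    "management":  next(subnets),
}

for tier, net in plan.items():
    # num_addresses includes the network and broadcast addresses.
    print(f"{tier:12s} {net}  ({net.num_addresses - 2} usable hosts)")
```

Keeping each security zone in its own prefix is what makes "controlled routing between them" expressible as a small set of firewall and ACL rules.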
Routing protocol selection: Internal routing between network segments typically uses OSPF for its fast convergence and link-state visibility. BGP governs routing between autonomous systems and is the protocol used for internet peering and multi-homed connectivity. MTU planning — ensuring consistent Maximum Transmission Unit sizes across the path — prevents fragmentation that degrades performance on TCP connections.
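The MTU point can be made concrete with header arithmetic: the usable TCP payload (MSS) is the MTU minus the IP and TCP headers. A quick sketch assuming fixed-size headers with no options:

```python
def tcp_mss(mtu: int, ipv6: bool = False) -> int:
    """Maximum TCP segment payload for a given link MTU (no header options)."""
    ip_header = 40 if ipv6 else 20   # IPv6 has a fixed 40-byte header
    tcp_header = 20
    return mtu - ip_header - tcp_header

# Standard Ethernet MTU of 1500 bytes:
assert tcp_mss(1500) == 1460             # IPv4
assert tcp_mss(1500, ipv6=True) == 1440  # IPv6
# Jumbo frames inside a data center:
assert tcp_mss(9000) == 8960
```

If any hop in the path has a smaller MTU than the endpoints assume, packets are fragmented (or dropped, if fragmentation is blocked), which is why consistent MTU sizing belongs in the plan.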
Bandwidth planning begins with measuring current traffic, projecting growth, and identifying peak demand periods. Bandwidth is measured in bits per second — kilobits per second (Kbps), megabits per second (Mbps), or gigabits per second (Gbps) — and must be evaluated separately for inbound and outbound traffic since many deployments have asymmetric requirements.
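A first-pass bandwidth estimate can be sketched from peak concurrency figures per traffic class. Every number below is a hypothetical placeholder for measurements from a real traffic study:

```python
# Hypothetical peak-hour profile: traffic class -> (concurrent users, Mbps per user).
profile = {
    "video_conferencing": (40, 2.5),
    "web_browsing":       (200, 0.5),
    "file_transfer":      (10, 20.0),
}

peak_mbps = sum(users * rate for users, rate in profile.values())
headroom = 1.30                      # 30% allowance for growth and bursts
required_mbps = peak_mbps * headroom

print(f"peak demand:  {peak_mbps:.0f} Mbps")
print(f"provision at: {required_mbps:.0f} Mbps")
```

Note that the inbound and outbound sums should be computed separately in practice, since a media-heavy site's outbound demand can be many times its inbound demand.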
Legacy bandwidth planning relied on dedicated circuits — T1 lines at 1.5 Mbps and T3 lines at 45 Mbps — that were expensive and inflexible. These have been replaced by modern alternatives that deliver far greater capacity at lower cost:
- Fiber-based dedicated internet access, commonly sold in tiers from 100 Mbps to 10 Gbps and beyond
- Business broadband (cable and fiber-to-the-premises) for smaller sites and backup links
- Direct cloud interconnects such as AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect for private, high-capacity connectivity into cloud providers
- SD-WAN overlays that aggregate multiple commodity links into a single managed, policy-driven transport
For cloud-hosted web deployments, bandwidth planning shifts from physical circuit sizing to cloud egress cost modeling. Cloud providers charge for data transferred out of their networks. A high-traffic site with large media assets must account for egress costs as a significant operational expense and plan content delivery network (CDN) usage to minimize origin egress.
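The egress cost model can be sketched as a simple function of transfer volume and CDN offload. The prices here are illustrative assumptions only; check your provider's current rate card:

```python
# Hypothetical pricing (USD per GB) -- illustrative, not a real rate card.
ORIGIN_PRICE_PER_GB = 0.09   # data transferred out of the cloud origin
CDN_PRICE_PER_GB = 0.03      # data served from the CDN edge

def monthly_egress_cost(total_gb: float, cdn_hit_ratio: float) -> float:
    """Estimate monthly transfer cost when a CDN absorbs part of the traffic."""
    origin_gb = total_gb * (1 - cdn_hit_ratio)
    cdn_gb = total_gb * cdn_hit_ratio
    return origin_gb * ORIGIN_PRICE_PER_GB + cdn_gb * CDN_PRICE_PER_GB

no_cdn = monthly_egress_cost(50_000, cdn_hit_ratio=0.0)   # 50 TB/month, no CDN
with_cdn = monthly_egress_cost(50_000, cdn_hit_ratio=0.9) # 90% served from edge
print(f"without CDN: ${no_cdn:,.0f}   with 90% CDN offload: ${with_cdn:,.0f}")
```

Even with these made-up prices, the structure of the model is the planning point: raising the CDN hit ratio is usually the cheapest lever on egress spend.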
Performance planning requires specifying acceptable thresholds for latency, jitter, and packet loss for each traffic class. These thresholds are enforced through Quality of Service (QoS) mechanisms that prioritize traffic at the network layer.
TCP performance is affected by several factors that network planners must account for. TCP window scaling governs how much unacknowledged data can be in flight — insufficient window sizes throttle throughput on high-latency links. Congestion control algorithms — Cubic, BBR (Bottleneck Bandwidth and RTT), and others — determine how TCP responds to network congestion. BBR in particular is increasingly deployed in cloud environments because it achieves higher throughput on paths with moderate packet loss compared to Cubic. Bufferbloat — excessive latency caused by oversized network buffers — degrades interactive traffic and can be addressed through Active Queue Management techniques such as WRED (Weighted Random Early Detection), often paired with ECN (Explicit Congestion Notification) so that endpoints are signaled about congestion before packets are dropped.
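The window-scaling point can be made concrete with the bandwidth-delay product (BDP): the amount of data that must be in flight to keep a path full. A worked sketch with a hypothetical 1 Gbps, 50 ms path:

```python
def bandwidth_delay_product(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that must be in flight to fill the path: bandwidth x RTT."""
    return bandwidth_bps * rtt_s / 8

# 1 Gbps path with 50 ms round-trip time:
bdp = bandwidth_delay_product(1e9, 0.050)   # 6,250,000 bytes (~6 MB)

# Without window scaling, TCP's 16-bit window field caps at 65,535 bytes,
# which caps throughput at window / RTT regardless of link speed:
max_unscaled_bps = 65_535 * 8 / 0.050       # roughly 10.5 Mbps on this path
print(f"BDP: {bdp:,.0f} bytes; unscaled ceiling: {max_unscaled_bps / 1e6:.1f} Mbps")
```

A 1 Gbps circuit that delivers 10 Mbps per connection is almost always a window or buffer problem, not a bandwidth problem, which is why this calculation belongs in performance planning.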
For real-time applications, jitter budgets must be specified during planning. Video conferencing typically requires end-to-end latency below 150ms and jitter below 30ms for acceptable call quality. Exceeding these thresholds produces audible and visible degradation that cannot be corrected in post-processing.
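Jitter can be measured from packet arrival timestamps. A simplified sketch using the mean deviation of interarrival gaps (RTP stacks use the smoothed estimator from RFC 3550, but the idea is the same); the timestamps here are synthetic:

```python
def interarrival_jitter_ms(arrivals_ms: list[float]) -> float:
    """Mean absolute deviation of interarrival gaps from their average gap."""
    gaps = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

# Synthetic arrivals: packets sent every 20 ms, received with small variation.
arrivals = [0.0, 21.0, 39.5, 60.5, 80.0, 101.0]
jitter = interarrival_jitter_ms(arrivals)
print(f"jitter: {jitter:.2f} ms")   # well inside a 30 ms budget
```

Receivers hide jitter with a playout buffer, but the buffer adds latency, so the jitter budget and the latency budget trade off against each other.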
Modern network architecture for web deployment is defined by several structural features that did not exist in the dedicated-circuit era:
Redundancy and failover: Production networks require redundant paths at every layer — redundant internet connections from multiple providers, redundant switches and routers, and redundant links between network segments. TCP connection persistence during failover depends on how quickly routing converges after a failure — OSPF typically converges in under a second with modern tuning, while BGP failover may take longer without aggressive timer configuration.
Container and Kubernetes networking: Modern application deployments use containerized workloads orchestrated by Kubernetes. Kubernetes networking introduces pod-to-pod communication across nodes, service discovery through internal DNS, and CNI (Container Network Interface) plugins that implement network policies. Planning a Kubernetes deployment requires understanding overlay networks, pod CIDR ranges, service IP ranges, and how network policies enforce traffic isolation between namespaces.
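One planning check that catches real outages: the pod CIDR, service CIDR, and node network must not overlap. The ranges below are hypothetical (they mirror common defaults), and Python's ipaddress module can validate such a plan:

```python
import ipaddress
from itertools import combinations

# Hypothetical cluster address plan.
ranges = {
    "node_network": ipaddress.ip_network("10.0.0.0/20"),
    "pod_cidr":     ipaddress.ip_network("10.244.0.0/16"),
    "service_cidr": ipaddress.ip_network("10.96.0.0/12"),
}

# Any pairwise overlap means pods, services, or nodes will collide.
conflicts = [
    (a, b)
    for (a, net_a), (b, net_b) in combinations(ranges.items(), 2)
    if net_a.overlaps(net_b)
]
print("conflicts:", conflicts)   # empty list means the plan is consistent
```

Running this kind of check before cluster creation matters because pod and service ranges are painful to change after workloads are deployed.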
Multiple TCP/IP stacks per host: Virtualized environments such as VMware vSphere use separate TCP/IP stacks for different traffic classes — management traffic, vMotion live migration, vSAN storage traffic, and virtual machine data traffic each run on isolated network paths to prevent contention. Planning these environments requires dedicated network segments and bandwidth reservations for each stack.
SD-WAN and overlay architectures: SD-WAN creates a software-defined overlay network on top of commodity underlay connections. Traffic is steered dynamically based on application type, link quality metrics, and policy — a video conference call may be steered to the lowest-latency path while bulk file transfers use a higher-bandwidth but higher-latency path. The underlay IP routing provides connectivity while the SD-WAN controller provides intelligence.
Security requirements are specified at every layer of the TCP/IP stack and must be integrated into network planning from the outset rather than added after the architecture is defined.
Stateful firewalls and ACLs operate at the Transport and Network layers, inspecting TCP state — SYN, established, FIN — to permit legitimate traffic and block anomalous connection patterns. SYN cookie protection defends against SYN flood denial of service attacks by deferring connection state allocation until the three-way handshake completes.
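The core idea of a stateful firewall, tracking connections rather than judging each packet in isolation, can be illustrated with a toy state table. This is a conceptual sketch with a simplified TCP lifecycle, not a working firewall:

```python
# Toy connection tracker: permit packets only if they fit an expected transition.
VALID_TRANSITIONS = {
    (None, "SYN"): "SYN_RECEIVED",           # new inbound connection attempt
    ("SYN_RECEIVED", "ACK"): "ESTABLISHED",  # handshake completes
    ("ESTABLISHED", "DATA"): "ESTABLISHED",
    ("ESTABLISHED", "FIN"): "CLOSED",
}

def filter_packet(table: dict, flow: tuple, flag: str) -> bool:
    """Permit the packet only if it matches a valid state transition."""
    new_state = VALID_TRANSITIONS.get((table.get(flow), flag))
    if new_state is None:
        return False                 # drop: packet does not fit any known state
    table[flow] = new_state
    return True

table = {}
# Flow key: (src addr, src port, dst addr, dst port); RFC 5737 example addresses.
flow = ("203.0.113.5", 51000, "198.51.100.10", 443)
assert filter_packet(table, flow, "SYN")    # handshake start: allowed
assert filter_packet(table, flow, "ACK")    # now ESTABLISHED
# A bare DATA packet from a flow with no prior handshake is dropped:
assert not filter_packet(table, ("203.0.113.9", 1, "198.51.100.10", 443), "DATA")
```

Real connection trackers also handle timeouts, sequence-number validation, and the full TCP state machine, but the permit-by-state principle is the same.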
NAT (Network Address Translation) conserves IPv4 addresses and provides implicit security by hiding internal addressing from the public internet. NAT traversal complications affect peer-to-peer applications and VoIP — planning must account for STUN, TURN, and ICE mechanisms that enable connections through NAT boundaries.
IPsec and TLS provide encryption at the Network and Application layers respectively. IPsec secures site-to-site VPN tunnels and cloud connectivity. TLS secures application-layer communication — HTTPS, API calls, database connections. Planning must account for TLS termination points, certificate management, and the computational overhead of encryption at scale.
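On the TLS side, the planning concern of where termination happens and under what policy shows up in code as context configuration. A minimal client-side sketch using Python's ssl module, with a TLS 1.2 floor as an assumed policy choice:

```python
import ssl

# Client context: loads the system CA bundle, requires certificates,
# and verifies that the certificate matches the hostname.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

print(ctx.verify_mode, ctx.check_hostname)
```

The same decisions (trusted CAs, minimum protocol version, certificate verification) must be specified for every termination point in the architecture: load balancers, CDN edges, and service-to-service connections alike.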
Network risk planning identifies threats and failure modes that must be mitigated before production deployment. Key risk categories include:
- Single points of failure: any link, device, or provider whose loss takes the site offline
- Denial of service: volumetric floods and protocol attacks, such as SYN floods, that exhaust bandwidth or connection state
- Capacity exhaustion: growth or burst events that saturate links sized only for average demand
- Misconfiguration: routing, firewall, or DNS changes that cause outages or open security gaps
- Dependency failures: expired TLS certificates, DNS provider outages, and upstream cloud or CDN incidents
Planning for network requirements means translating business goals, traffic characteristics, and performance expectations into concrete TCP/IP-layer specifications. The TCP/IP stack governs every dimension of that planning: transport protocol selection drives bandwidth and latency requirements; IP addressing and routing determine network segmentation and path design; QoS mechanisms enforce performance guarantees across traffic classes; and security policy is enforced at multiple protocol layers simultaneously. Legacy bandwidth solutions — T1 and T3 dedicated circuits — have been replaced by fiber, cloud connectivity services, and SD-WAN. Modern deployments add containerized workloads, Kubernetes networking, and multi-stack virtualized environments to the planning scope. In the next lesson, you will learn who is responsible for determining the network architecture and how IT staff roles map to network planning decisions.