Computer Network Application Layer Explained: HTTP/1.1, HTTPS, HTTP/2, HTTP/3, RPC, and WebSocket

This article focuses on the computer network application layer and systematically explains the role boundaries of HTTP/1.1, HTTPS, HTTP/2, HTTP/3, RPC, and WebSocket. It addresses a common pain point: protocol evolution often feels fragmented, making interview topics hard to connect. Keywords: HTTP, HTTPS, HTTP/3.

Technical Specification Snapshot

Domain: Computer network application layer
Core Language: Protocol specifications; no specific implementation language
Transport Protocols: TCP, UDP, TLS, QUIC
Focus Areas: HTTP, HTTPS, RPC, WebSocket, caching
Core Dependencies: Browsers, web servers, TLS certificate infrastructure

HTTP is the most fundamental general-purpose application-layer protocol

HTTP stands for Hypertext Transfer Protocol. It handles request-response communication between clients and servers. Here, “hypertext” refers not only to plain text, but also to images, audio, links, and structured data.

HTTP became the de facto standard of the web mainly because it is simple enough: both request and response messages consist of a start line, headers, and an optional message body, which keeps the cost of understanding and debugging low.

A minimal HTTP request message looks like this

GET /index.html HTTP/1.1
Host: example.com
Connection: keep-alive
Accept: text/html

This message shows the basic structure of HTTP/1.1: the request line defines the action and resource, while the headers describe communication constraints.
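The structure described above can be taken apart mechanically. A minimal Python sketch that parses this exact message into its request line and headers (the variable names are illustrative, not part of any spec):

```python
# The minimal HTTP/1.1 request from above, with explicit CRLF line endings.
raw = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: keep-alive\r\n"
    "Accept: text/html\r\n"
    "\r\n"
)

# An empty line (CRLF CRLF) separates the head from the (empty) body.
head, _, body = raw.partition("\r\n\r\n")
request_line, *header_lines = head.split("\r\n")

# Request line: action, resource, protocol version.
method, path, version = request_line.split(" ")
# Headers: one "Name: value" pair per line.
headers = dict(line.split(": ", 1) for line in header_lines)

print(method, path, version)   # → GET /index.html HTTP/1.1
print(headers["Host"])         # → example.com
```

The same two-part split (start line + headers, blank line, body) applies to response messages, with a status line in place of the request line.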

Status codes and headers jointly define communication semantics

Status codes provide a standardized expression of the server’s processing result. 1xx indicates an intermediate state, 2xx indicates success, 3xx indicates redirection, 4xx indicates a client error, and 5xx indicates a server error.

In day-to-day development, the most common codes are 200, 301, 304, 400, 403, 404, 500, 502, and 503. In interviews, if you only memorize the numbers without understanding their semantics, you usually cannot answer questions about caching, authentication, or gateway failures.

Common status codes can be summarized quickly as follows

2xx  Success: for example, 200 OK
3xx  Redirection: for example, 301 Moved Permanently, 304 Not Modified
4xx  Client errors: for example, 400, 403, 404
5xx  Server errors: for example, 500, 502, 503

This quick reference is useful for building a first-layer understanding of how to locate the source of an error.
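Since the first digit alone locates the error source, the quick reference above can be sketched as a tiny lookup (the function name is illustrative):

```python
def status_class(code: int) -> str:
    """Map an HTTP status code to its category from the quick reference."""
    classes = {
        1: "informational",  # 1xx intermediate state
        2: "success",        # e.g. 200 OK
        3: "redirection",    # e.g. 301 Moved Permanently, 304 Not Modified
        4: "client error",   # e.g. 400, 403, 404
        5: "server error",   # e.g. 500, 502, 503
    }
    return classes.get(code // 100, "unknown")

print(status_class(304))  # → redirection
print(status_class(502))  # → server error
```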

HTTP/1.1 has both clear strengths and clear limitations

HTTP/1.1 has three major advantages: simplicity, extensibility, and broad adoption. It allows business capabilities to be extended through custom headers, which makes it well suited to fast-evolving internet scenarios.

Its problems are equally typical: statelessness means continuous business flows require additional session mechanisms, plaintext transmission creates security risks, repeatedly sending headers introduces redundancy, and serialized responses can cause application-layer head-of-line blocking.

These header fields are common entry points for troubleshooting

Content-Length: 348             # Indicates the message body length and helps the receiver identify boundaries
Content-Type: application/json  # Declares the data type and guides parsing behavior
Connection: keep-alive          # Reuses the TCP connection and reduces repeated handshake overhead

These three fields cover three common categories of issues: boundary detection, content parsing, and connection reuse.
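Boundary detection in particular is easy to demonstrate: on a reused keep-alive connection, Content-Length is what tells the receiver where one message body ends and the next begins. A sketch with a hypothetical `read_body` helper, using an in-memory stream in place of a real socket:

```python
import io

def read_body(stream, headers):
    """Read exactly Content-Length bytes: the body boundary on a reused connection."""
    length = int(headers["Content-Length"])
    return stream.read(length)

# Two JSON bodies arriving back-to-back over one keep-alive connection.
stream = io.BytesIO(b'{"a":1}{"b":2}')

first = read_body(stream, {"Content-Length": "7"})
second = read_body(stream, {"Content-Length": "7"})
print(first)   # → b'{"a":1}'
print(second)  # → b'{"b":2}'
```

Without the length header (or an equivalent mechanism such as chunked transfer encoding), the receiver has no way to tell where the first body stops.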

HTTP caching significantly reduces unnecessary requests

HTTP caching mainly consists of strong caching and conditional caching. When strong caching hits, the browser uses the local copy directly and does not contact the server, so it delivers the highest performance gain.

Conditional caching works differently. When a resource may have expired, the browser asks the server to confirm whether the content has changed. If it has not changed, the server returns 304. If it has changed, the server returns 200 along with the updated resource. Common validators include Last-Modified and ETag.

The conditional caching flow can be abstracted as follows

Browser requests a resource
→ Server returns ETag / Last-Modified
→ On the next request, the browser sends If-None-Match / If-Modified-Since
→ Server determines whether the resource has changed
→ If unchanged, return 304; if changed, return 200 + updated content

Its essence is to replace full resource transfer with metadata comparison.
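The server side of this flow reduces to a single comparison. A sketch with a hypothetical handler (names are illustrative):

```python
def handle_conditional_get(request_headers, current_etag, body):
    """Server-side sketch of the conditional-cache decision."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, b""    # unchanged: metadata comparison replaces full transfer
    return 200, body       # changed: return the updated resource

# Validator matches: the client's cached copy is still fresh.
status, payload = handle_conditional_get({"If-None-Match": '"v1"'}, '"v1"', b"<html>new</html>")
print(status)  # → 304

# Validator is stale: the full updated resource is sent.
status2, payload2 = handle_conditional_get({"If-None-Match": '"v0"'}, '"v1"', b"<html>new</html>")
print(status2)  # → 200
```

Real servers also consult If-Modified-Since when no ETag is available, but the decision shape is the same.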

HTTPS uses TLS to address the core risks of plaintext HTTP

The essential difference between HTTP and HTTPS is not “an extra S,” but whether a TLS security layer is introduced. HTTPS addresses three critical risks: eavesdropping, tampering, and impersonation.

The costs are also clear: the handshake is more complex, a certificate infrastructure is required, and encryption introduces additional compute overhead. For public internet services, however, this is almost always a necessary cost.

A typical TLS 1.2 handshake works as follows

1. The client sends a random value, TLS version, and supported cipher suites   # Initiate capability negotiation
2. The server returns a random value, certificate, and selected algorithm       # Confirm the encryption scheme
3. The client verifies the certificate, generates a pre-master secret, and encrypts it with the server's public key  # Securely transmit key material
4. Both parties derive session keys from the random values and pre-master secret # Generate symmetric encryption keys
5. Subsequent HTTP data is transmitted with symmetric encryption                 # Improve transmission efficiency

The handshake phase relies on asymmetric encryption for identity verification and key exchange, while the data transfer phase relies on symmetric encryption for better efficiency.
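From application code, all of these steps are usually hidden behind a TLS library. A sketch with Python's standard `ssl` module, where the handshake above happens inside `wrap_socket()` (the commented-out connection is an assumption; no network access is required for the setup itself):

```python
import ssl

# Client-side TLS configuration; the handshake runs inside wrap_socket().
ctx = ssl.create_default_context()            # loads the trusted CA certificates
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # negotiate at least TLS 1.2

# Certificate verification (step 3 above) is enabled by default.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True

# To actually connect (sketch only, assumes network access):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```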

HTTP/2 and HTTP/3 both evolved to improve transmission efficiency

HTTP/2 focuses its optimizations on four areas: header compression, binary framing, multiplexing, and server push. HPACK compresses repeated headers through indexed tables, significantly reducing header redundancy.

Multiplexing allows multiple streams to share one TCP connection, which solves the serialized blocking problem of HTTP/1.1 at the application layer. However, as long as the underlying transport is TCP, transport-layer head-of-line blocking still exists when packet loss occurs.

The evolution of HTTP versions can be memorized with this quick table

HTTP/1.0  Primarily short-lived connections
HTTP/1.1  Persistent connections + pipelining
HTTP/2    Binary framing + multiplexing + HPACK
HTTP/3    Built on QUIC, with UDP as the underlying transport

This evolution chain reflects not a simple replacement relationship, but a step-by-step repair of performance bottlenecks.

HTTP/3 redefines the low-latency transport path through QUIC

HTTP/3 no longer runs on top of TCP. Instead, it runs directly on top of UDP and uses QUIC to provide reliable transport, congestion control, and connection migration.

Its key benefits are faster connection establishment and reduced transport-layer head-of-line blocking, especially in weak-network conditions, mobile networks, and high-concurrency scenarios. This is why modern browsers and CDNs are aggressively adopting HTTP/3.

RPC, WebSocket, and TCP solve problems at different layers

The core idea of RPC is not “a specific protocol,” but making remote calls feel as natural as local function calls. It is commonly used for internal communication in microservices and emphasizes interface semantics, serialization efficiency, and governance capabilities.

WebSocket solves the problem that HTTP’s one-way request-response model is not suitable for real-time bidirectional communication. It usually starts with an HTTP handshake and then upgrades to a persistent full-duplex channel, making it well suited to chat, push notifications, collaborative editing, and similar scenarios.
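The "starts with an HTTP handshake" part is concrete and standardized: the client sends a random Sec-WebSocket-Key, and the server must echo back a Sec-WebSocket-Accept derived from it with a fixed GUID (RFC 6455). A sketch of that derivation, using the key/accept pair from the RFC's own example:

```python
import base64
import hashlib

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value for the HTTP 101 upgrade response."""
    GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455
    digest = hashlib.sha1((sec_websocket_key + GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# Example handshake values from RFC 6455:
accept = websocket_accept("dGhlIHNhbXBsZSBub25jZQ==")
print(accept)  # → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Once the client verifies this value in the 101 Switching Protocols response, both sides stop speaking HTTP and exchange WebSocket frames over the same TCP connection.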

You can distinguish their boundaries as follows

HTTP       General resource access, suitable for browsers and public APIs
RPC        Service invocation, suitable for efficient internal communication in microservices
WebSocket  Bidirectional real-time communication, suitable for long-lived online connection scenarios
TCP        Reliable byte-stream transport, the foundation of many upper-layer protocols

Understanding layered responsibilities matters more than memorizing definitions.

TCP remains irreplaceable because it provides the foundation of reliable transport

Even with HTTP, RPC, and WebSocket in place, these protocols still commonly run over TCP, because TCP provides reliable, ordered byte-stream transport.

But TCP only provides a “stream.” It does not define clear business message boundaries. Upper-layer protocols must define their own request lines, headers, frame structures, or message lengths, which is the fundamental reason application-layer protocols exist.
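This is exactly why every protocol above defines its own framing. The simplest possible scheme, length-prefixed messages, can be sketched in a few lines (an in-memory stream stands in for a TCP socket; function names are illustrative):

```python
import io
import struct

def frame(payload: bytes) -> bytes:
    """Prefix each message with a 4-byte big-endian length so boundaries survive the stream."""
    return struct.pack("!I", len(payload)) + payload

def read_frame(stream) -> bytes:
    """Recover one message from the byte stream using its length prefix."""
    (length,) = struct.unpack("!I", stream.read(4))
    return stream.read(length)

# Two messages concatenated into one undifferentiated byte stream, as TCP delivers them.
stream = io.BytesIO(frame(b"hello") + frame(b"world"))
print(read_frame(stream))  # → b'hello'
print(read_frame(stream))  # → b'world'
```

HTTP/1.1's Content-Length, HTTP/2's frame headers, and WebSocket's frame format are all more elaborate answers to this same boundary problem.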


This knowledge is best understood through the lens of protocol evolution

If you only memorize the conclusions about HTTP, HTTPS, HTTP/2, and HTTP/3, it is easy to confuse the specific problem each version solves. A more effective method is to remember them through a problem chain: plaintext insecurity, insufficient connection reuse, header redundancy, head-of-line blocking, and high latency in weak-network environments.

In interviews or system design, what truly matters is not repeating definitions, but being able to answer: Why is it needed? What problem does it fix? What problems does it still leave unresolved?

FAQ

1. HTTP/2 already has multiplexing, so why is HTTP/3 still needed?

HTTP/2 only solves serialized blocking at the application layer, but the underlying transport is still TCP. Once TCP experiences packet loss, subsequent data on the same connection may still wait for retransmission. HTTP/3 is built on QUIC, which avoids this bottleneck.

2. HTTPS is slower than HTTP, so why is it still mandatory?

HTTPS does add handshake and encryption overhead, but it provides authentication, confidentiality, and integrity protection. It prevents eavesdropping, tampering, and impersonation. For public-facing systems, this is not an optimization option but a baseline requirement.

3. Are RPC and HTTP competing technologies?

No. HTTP is better suited to public APIs and the browser ecosystem, while RPC is better suited to efficient service-to-service calls. Many RPC frameworks are even built directly on top of HTTP/2, such as gRPC.

Core Summary

This article systematically reviews the core concepts of the computer network application layer. It covers HTTP status codes, request headers, caching mechanisms, HTTPS handshakes, the optimization paths of HTTP/2 and HTTP/3, and the relationships among RPC, WebSocket, and TCP. It is especially useful for backend engineers and interview preparation.