RPC is best for high-performance internal service calls, HTTP is best for general-purpose APIs and resource access, and WebSocket is best for real-time bidirectional communication. This article addresses the engineering pain point of protocol selection, with a focus on microservice communication and real-time interaction.
The technical snapshot highlights the essential differences
| Parameter | RPC | HTTP | WebSocket |
|---|---|---|---|
| Protocol role | Remote invocation paradigm / framework protocol | Application-layer request-response protocol | Persistent bidirectional communication protocol |
| Common language support | Java, Go, Python, Node.js | Universal across languages | Universal across languages |
| Typical transport | TCP, HTTP/2 | TCP | TCP |
| Common serialization | Protobuf, Thrift, JSON | JSON, XML, HTML | JSON or custom binary (carried in text/binary frames) |
| Connection characteristics | Short-lived or long-lived | Request-response by default, with Keep-Alive support | Long-lived, full-duplex |
| Core ecosystem dependencies | gRPC, Dubbo, Thrift | Nginx, browsers, REST frameworks | ws, Socket.IO, Netty |
| Public documentation popularity | High | Very high | High |
The first major difference lies in the communication goal
RPC, HTTP, and WebSocket all run on top of TCP, but they operate at different abstraction layers. HTTP is a standard protocol. WebSocket extends HTTP to support real-time bidirectional communication. RPC, by contrast, is more like an engineering paradigm that makes remote calls look like local function calls.
That means they are not simple substitutes for one another. Each optimizes for a different problem: HTTP prioritizes universality, RPC prioritizes invocation efficiency, and WebSocket prioritizes real-time interaction.
A minimal decision rule for protocol selection
High-frequency internal service calls -> RPC
Browser-facing or public platform APIs -> HTTP
Server push and real-time collaboration -> WebSocket
You can use this rule as an initial screening standard in system design.
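The screening rule above can be sketched as a small decision function. This is only an illustrative sketch; the parameter names are assumptions, not part of any real API:

```python
def choose_protocol(internal_call: bool, needs_push: bool) -> str:
    """Apply the minimal screening rule from the text (illustrative)."""
    if needs_push:
        return "WebSocket"  # server push / real-time collaboration
    if internal_call:
        return "RPC"        # high-frequency internal service calls
    return "HTTP"           # browser-facing or public platform APIs
```

In practice these conditions overlap (e.g., an internal service that also needs push), so treat the function as a first-pass filter, not a final verdict.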
The communication model determines who can initiate messages
HTTP and most RPC systems follow a request-response model by default. The client sends a request first, and the server returns a result. Control primarily stays with the caller. Without a request, the server usually does not push messages proactively to the client.
WebSocket works differently. Once the connection is established, both the client and the server can send messages at any time, forming a true full-duplex channel. This capability is the foundation for chat systems, market data streaming, and collaborative editing.
Pseudocode makes the three interaction patterns easy to understand
```python
# HTTP: the client sends one request, and the server returns one response
response = http_get("/news/list")    # Control stays with the client

# RPC: call a remote method as if it were a local function
user = user_service.get_user(1001)   # The framework hides network details

# WebSocket: once connected, both sides can send messages proactively
ws.send("subscribe:btc")             # The client subscribes to a topic
# The server can then continue pushing the latest market data
```
This code shows the fundamental difference in interaction control.
Message format and protocol overhead directly affect performance
Common RPC implementations such as gRPC and Thrift usually rely on binary serialization formats like Protobuf. Their payloads are more compact, field encoding is more stable, and both CPU and network overhead are lower. That makes RPC well suited for low-latency, high-throughput scenarios.
HTTP headers and text-based formats are heavier, but HTTP provides excellent readability, compatibility, and debuggability. Browsers, gateways, proxies, firewalls, and CDNs all support HTTP natively, which is why it continues to dominate external interfaces.
WebSocket starts with an HTTP Upgrade handshake and then switches to a lightweight frame-based transport. After the connection is established, the per-message header overhead is much lower than that of a normal HTTP request.
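The payload-size gap is easy to demonstrate with Python's standard library. The fixed field layout below (`!q5s`, an 8-byte integer plus a 5-byte name) is purely illustrative of compact binary encoding, not an actual Protobuf wire format:

```python
import json
import struct

# A tiny "get user" response encoded two ways (illustrative fields).
user = {"id": 1001, "name": "alice"}

# Text encoding, as an HTTP/JSON API would send it.
json_bytes = json.dumps(user).encode("utf-8")

# Fixed binary encoding, in the spirit of Protobuf/Thrift.
binary_bytes = struct.pack("!q5s", user["id"], user["name"].encode("utf-8"))

print(len(json_bytes), len(binary_bytes))  # the binary payload is smaller
```

The gap widens further once HTTP headers are added on top of the JSON body, which is why binary RPC formats win in high-throughput internal paths.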
A gRPC-style interface definition shows strong contracts
```protobuf
syntax = "proto3";

service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse); // Define a remote procedure
}

message GetUserRequest {
  int64 id = 1; // User ID
}

message GetUserResponse {
  string name = 1; // Username
}
```
This definition shows how RPC uses a strongly typed IDL to enforce interface contracts.
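As a rough Python analogue of that contract, a strongly typed call might look like the sketch below. Real gRPC stubs are generated from the `.proto` file; these dataclasses and the stand-in `get_user` function are only illustrative:

```python
from dataclasses import dataclass

# Python analogue of the proto contract above: field types are explicit,
# so a mismatched payload fails fast instead of silently drifting.
@dataclass
class GetUserRequest:
    id: int

@dataclass
class GetUserResponse:
    name: str

def get_user(req: GetUserRequest) -> GetUserResponse:
    # Stand-in for a generated RPC stub; a real stub would serialize the
    # request, send it over the channel, and deserialize the response.
    return GetUserResponse(name=f"user-{req.id}")
```

The point is that the caller and callee share one typed definition, so contract changes surface at build time rather than at runtime.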
Connection management and state semantics shape system complexity
HTTP is typically stateless. Even though HTTP/1.1 supports Keep-Alive, the protocol still treats requests as logically independent interactions that reuse the same TCP connection. Business sessions usually rely on separate mechanisms such as cookies, sessions, or tokens.
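Because each request is logically independent, a stateless HTTP client typically attaches its credential to every call. The `build_request` helper below is hypothetical, shown only to make the pattern concrete:

```python
def build_request(path: str, token: str) -> dict:
    """Every stateless HTTP request carries its own credential (illustrative)."""
    return {
        "path": path,
        "headers": {"Authorization": f"Bearer {token}"},  # token repeated per request
    }
```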
RPC connections can be short-lived or long-lived. For example, gRPC runs on HTTP/2 and can multiplex multiple requests over a single connection, reducing handshake overhead and improving service-to-service call efficiency. Many service governance capabilities are built around this type of long-lived connection.
WebSocket is stateful by design. A single connection often represents an active online session, which makes it ideal for persistent-context scenarios such as online users, room broadcasts, and subscription relationships. At the same time, it introduces pressure around connection scaling and heartbeat maintenance.
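A minimal sketch of the heartbeat bookkeeping such a stateful server needs. The timeout value, data structure, and function names are all illustrative assumptions:

```python
HEARTBEAT_TIMEOUT = 30.0  # seconds; illustrative threshold

last_seen: dict[str, float] = {}  # connection id -> last heartbeat timestamp

def on_heartbeat(conn_id: str, now: float) -> None:
    """Record that a connection is still alive."""
    last_seen[conn_id] = now

def reap_stale(now: float) -> list[str]:
    """Drop and return connections whose heartbeat has expired."""
    stale = [cid for cid, ts in last_seen.items() if now - ts > HEARTBEAT_TIMEOUT]
    for cid in stale:
        del last_seen[cid]
    return stale
```

At scale this bookkeeping itself becomes a design concern: millions of entries must be sharded across server instances, which is part of the connection-scaling pressure mentioned above.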
A WebSocket server example illustrates stateful message handling
```javascript
socket.on("message", (msg) => {
  const data = JSON.parse(msg);        // Parse the client message
  if (data.action === "subscribe") {
    roomMap.join(socket, data.topic);  // Maintain the subscription relationship
  }
});
```
This code reflects how WebSocket connections usually require persistent state management.
Performance differences make sense only in business context
In terms of per-message overhead, RPC and WebSocket usually outperform HTTP. RPC benefits from efficient serialization and strong interface contracts. WebSocket benefits from extremely low interaction cost once a long-lived connection is established. HTTP benefits from a mature ecosystem and cross-platform compatibility.
If your scenario involves high-frequency communication inside microservices, RPC is often more stable than REST. If your scenario involves public APIs, browser requests, or third-party integrations, HTTP remains the default choice. If you need proactive server push, HTTP polling is usually less efficient than WebSocket.
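A back-of-the-envelope comparison with illustrative numbers shows why polling loses to push when updates are sparse:

```python
# Messages exchanged for one hour of market updates (illustrative numbers).
updates_per_hour = 60        # one real update per minute
poll_interval_s = 5          # client polls every 5 seconds

http_polls = 3600 // poll_interval_s  # request/response pairs, most return nothing new
ws_messages = updates_per_hour        # one push per actual update

print(http_polls, ws_messages)
```

With these assumptions the polling client makes 720 round trips to deliver 60 updates, and each round trip pays full HTTP header cost; the push channel sends only what changed.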
Engineering decisions should follow clear boundary separation
A mature system often uses all three together: an external gateway exposes HTTP APIs, internal core paths use RPC, and real-time notification channels rely on WebSocket. This approach provides universal access while preserving internal performance and real-time user experience.
A practical protocol selection matrix helps with architecture reviews
Standardized external APIs -> HTTP/REST
Strongly typed service calls -> RPC/gRPC
Online push and interaction -> WebSocket
This matrix is suitable for direct use in architecture review documents.
A one-sentence conclusion works well in interviews and architecture reviews
RPC solves the problem of calling remote services efficiently, HTTP solves the problem of standardized and universal resource access, and WebSocket solves the problem of long-lived real-time bidirectional communication. The best choice depends on your system boundaries, latency targets, and interaction model.
FAQ provides structured answers to common design questions
FAQ 1: If gRPC can run on HTTP/2, why should we still distinguish RPC from HTTP?
Because HTTP is a protocol-layer standard, while RPC is a call abstraction. gRPC uses HTTP/2 as its transport channel, but its interface definition, serialization model, flow control model, and service governance goals all belong to the RPC ecosystem.
FAQ 2: Can WebSocket completely replace HTTP APIs?
No. WebSocket is suitable for real-time messaging, but it is not naturally suited for caching, idempotent resource access, SEO crawling, or standardized public APIs. In most systems, HTTP handles resource interfaces while WebSocket handles event streams.
FAQ 3: Must internal microservice communication always use RPC?
Not necessarily. If your team is small, your interface count is limited, and ease of debugging is the priority, HTTP/REST can be a good starting point. RPC shows its full advantage only when you have clear requirements for high concurrency, low latency, strong contracts, and cross-language governance.
Core Summary: This article compares RPC, HTTP, and WebSocket across five dimensions: communication model, protocol format, connection management, state semantics, and performance scenarios. The goal is to help developers make accurate architecture decisions across microservice communication, public APIs, and real-time bidirectional systems.