# Build a Linux C++ UDP Chat Room with Multi-Client Broadcasting and a Thread Pool

This article breaks down a Linux/C++ UDP chat server whose core capabilities include multiple clients online at the same time, server-side message broadcasting, and asynchronous processing through a thread pool. It solves two common problems in single-threaded UDP services: the inability to handle concurrent send/receive workloads and tight coupling between business logic and the network layer. Keywords: UDP chat room, thread pool, socket.

## The technical specification snapshot outlines the project at a glance

| Parameter | Description |
| --- | --- |
| Language | C++17 |
| Platform | Linux / POSIX Socket |
| Protocol | UDP |
| Concurrency Model | Main thread receives packets + thread pool handles broadcasts |
| Core Dependencies | pthread, sys/socket.h, arpa/inet.h, functional |
| Code Structure | UdpServer / Route / UserManager / InetAddr / ThreadPool |
| Stars | Original data not provided |

## This chat room architecture upgrades a serial UDP server into a concurrent broadcast service

This project is not just a padded version of a basic echo server. It represents a clear architectural upgrade. The goal is straightforward: allow the server to continuously receive messages from multiple clients and broadcast any message to all online users.

The original design had two major pain points. First, the network layer and business logic were tightly coupled, so every feature change required modifications to the core socket flow. Second, the server processed everything serially in a single thread. Once broadcasting became expensive, recvfrom could be blocked, causing message latency or even packet loss.

### Module boundaries map directly to real business responsibilities

The project splits responsibilities into five layers: InetAddr describes the client, UserManager manages online users, Route handles broadcast logic, ThreadPool executes tasks asynchronously, and UdpServer focuses on network I/O.

The value of this design is clear: the network layer knows nothing about chat room details, the business layer does not care about low-level socket operations, and the thread pool does not participate in business decisions. The result is a UDP service skeleton that is maintainable, extensible, and concurrency-friendly.

```cpp
// Core decoupling: the network layer only registers business callbacks
// and does not directly implement chat room logic
usvr.RegisterService(
    [&r](const InetAddr &addr) {
        r.CheckUser(addr); // Register the online user synchronously
    },
    [&r, thread_pool](int sockfd, std::string msg) {
        auto t = std::bind(&Route::Broadcast, &r, sockfd, msg); // Package the broadcast task
        thread_pool->Enqueue(t); // Submit it to the thread pool for asynchronous execution
    }
);
```

This code decouples user registration and message broadcasting from the network receive path.

## InetAddr provides a stable abstraction for UDP client identity

UDP has no connection state, so the server can identify a user only by IP + port. The purpose of InetAddr is to wrap the native sockaddr_in structure into a business-friendly object while also handling byte-order conversion, logging, and address comparison.

It performs three key tasks: converting between network and host byte order during construction, overloading == to support user deduplication, and exposing GetNetAddress() and Len() for compatibility with sendto, bind, and recvfrom.

### Object-oriented address handling makes management and broadcasting more reliable

If the business layer used raw sockaddr_in structures everywhere, the code would be filled with conversion, comparison, and formatting logic. After encapsulation, upper layers operate only on InetAddr, and the interface semantics become much more stable.

```cpp
class InetAddr {
public:
    InetAddr(const struct sockaddr_in &address) : _address(address), _len(sizeof(address)) {
        _ip = inet_ntoa(_address.sin_addr);   // Convert the network address to a readable IP string
        _port = ntohs(_address.sin_port);     // Convert the port to host byte order
    }

    bool operator==(const InetAddr &addr) const {
        return _ip == addr._ip && _port == addr._port; // IP + port uniquely identifies a client
    }

private:
    struct sockaddr_in _address; // Raw address kept for sendto/bind compatibility
    socklen_t _len;
    std::string _ip;             // Human-readable identity fields in host byte order
    uint16_t _port;
};
```

This code converts the low-level address structure into a client identity object that is comparable, printable, and storable.

## UserManager handles CRUD operations for the online user collection

The prerequisite for chat room broadcasting is not the ability to send messages, but knowing who should receive them. UserManager uses a `vector<InetAddr>` to maintain the online user list and provides basic operations such as `AddUser`, `DelUser`, `SearchUser`, and `Users`.

The most critical detail here is deduplication. Every client message triggers `recvfrom`, so without `SearchUser` and `operator==`, the same user could be inserted repeatedly, eventually causing duplicate broadcasts.

### This design stays simple while remaining sufficient for the current chat room scale

Using a `vector` keeps the implementation direct and the traversal cost acceptable, which makes it suitable for teaching and for small to medium online user counts. If higher performance becomes necessary later, you can replace it with `unordered_set` or maintain a hash-based index.

## Route centralizes user registration and full-room broadcasting as the business hub

`Route` is the real business core of the chat room. It holds a `UserManager` internally and exposes two primary actions: `CheckUser` for user registration and `Broadcast` for iterating over online users and calling `sendto` in a loop.

More importantly, `Route` is already prepared for multithreaded execution with a mutex. After introducing the thread pool, multiple worker threads may broadcast at the same time while the main thread continues registering new users. Without locking, the system would face concurrent modification during iteration.

```cpp
void Broadcast(int sockfd, std::string message) {
    LockGuard lockguard(_lock); // Protect the online user list
    auto &users = _uma->Users();
    for (auto &user : users) {
        sendto(sockfd, message.c_str(), message.size(), 0,
               (struct sockaddr *)user.GetNetAddress(), user.Len()); // Broadcast to every online user
    }
}
```

This code forwards one client message to all currently online users.

## ThreadPool lets the main thread focus on receiving packets instead of heavy business execution

The key architectural upgrade is not just having broadcast functionality, but ensuring that broadcasting does not block packet reception. That is exactly where the thread pool adds value: after the main thread receives a message through `recvfrom`, it performs only lightweight dispatch and submits the actual broadcast work to background worker threads.

This cleanly separates network I/O from business execution. Even as the number of online users grows, the main thread can return to the receive loop quickly, which significantly improves throughput and responsiveness.

### UdpServer remains clean and only manages the socket lifecycle

`UdpServer` is responsible for `socket()`, `bind()`, `recvfrom()`, and callback invocation. It does not implement chat room logic. Instead, it defines callback signatures and invokes them after receiving a message.

This design is highly reusable. As long as the business function signatures remain consistent, `UdpServer` can also support other UDP-based scenarios such as log collection, command dispatching, or lightweight notification services.

```cpp
ssize_t n = recvfrom(_sockfd, inbuffer, sizeof(inbuffer) - 1, 0,
                     (struct sockaddr *)&peer, &len);
if (n > 0) {
    inbuffer[n] = 0;
    InetAddr clientaddress(peer);                              // Wrap the client address
    std::string message = clientaddress.ToString() + inbuffer;
    _handler_addr(clientaddress);                              // Register the user
    _handler_msg(_sockfd, message);                            // Submit the broadcast task
}
```

This code transforms an incoming UDP message into a business event and passes it through the callback chain.

## The client uses a two-thread model to send and receive at the same time

The client runs with two threads: one blocks while receiving server broadcasts, and the other continuously reads user input and sends messages. Because UDP send and receive paths are naturally independent, this model is simple and effective, and it prevents the input thread from being stalled by the receive flow.

When the client comes online, it first sends an `online` message so the server can detect its address and register it in the online list.
This step is simple, but it is essential in a connectionless UDP model.

## The runtime behavior confirms that the concurrent broadcast pipeline is complete

Based on the original runtime results, multiple clients can come online one after another, and any later message sent by one user can be received by all connected clients. The server logs also show that different broadcast tasks are executed by different worker threads, which confirms that thread pool scheduling is working as intended.

That said, this implementation is still a lightweight chat room rather than a production-grade IM system. It does not handle offline detection, message reliability, reordering, duplicate packets, or nickname conflicts. However, as a case study that combines Linux network programming with a thread pool, the overall structure is already complete and instructive.

## The most reusable value of this project is the layering strategy, not the chat room surface itself

Three lessons transfer especially well to other systems: start with address abstraction, then build user collection management; use callbacks to isolate the network layer from the business layer; and offload expensive work to a thread pool so the I/O thread stays lightweight.

If you continue evolving this design, the next recommended steps are heartbeat-based removal of inactive users, excluding the sender from broadcasts, nickname mapping, and upgrading from `vector` to a more efficient data structure. Beyond that, you can move to `epoll + task queue` or switch to TCP for stronger message delivery semantics.

## FAQ provides structured answers to the most practical design questions

### Why does a UDP chat room still need a thread pool?

If the main thread both calls `recvfrom` and performs full-room broadcasting, the receive path will be blocked as the number of online users grows. A thread pool moves broadcasting into the background, so the main thread can focus on receiving and dispatching packets quickly.
### Why wrap InetAddr instead of using sockaddr_in directly?

Because a chat room needs to manage client identity over time. `InetAddr` unifies byte-order handling, equality comparison, log output, and system call adaptation, which significantly reduces complexity in the business layer.

### What is still missing before this implementation is production-ready?

It still lacks user timeout handling, message reliability, duplicate packet handling, nickname management, fault recovery, and performance monitoring. It is better suited as a practical teaching template for Linux sockets, callback-based decoupling, and thread-pool coordination.

**AI Readability Summary:** This article refactors a Linux/C++ UDP chat room project and focuses on the decoupled collaboration among `InetAddr`, `UserManager`, `Route`, `UdpServer`, and `ThreadPool`. It explains how to support multiple online clients, message broadcasting, and thread safety in a modular server design.