Linux Pipe IPC Explained: Anonymous Pipes, Named Pipes, and a Hand-Built Process Pool

This article focuses on pipe-based inter-process communication (IPC) in Linux. It explains the implementation model, blocking behavior, and engineering boundaries of anonymous pipes and named pipes, and demonstrates the practical value of pipe in task dispatch through a hand-built process pool. It addresses three common pain points: the abstraction barrier for IPC beginners, error-prone APIs, and process-exit confusion.

Keywords: Linux IPC, Anonymous Pipe, Named Pipe

Technical Specifications at a Glance

Parameter Details
Topic Linux Pipe Communication and Process Pool
Language C, C++
Runtime Environment Linux / POSIX
Core Protocols / Standards POSIX, with System V as comparative background
Core APIs pipe, fork, read, write, mkfifo, open, unlink, stat, waitpid
Communication Model Unidirectional byte-stream communication
Typical Scenarios Parent-child process communication, local cross-process message passing, task dispatch
Core Dependencies unistd.h, sys/wait.h, fcntl.h, sys/stat.h, functional, vector

Pipe Communication Relies on a Shared Kernel Resource

The root reason IPC exists is that different processes have independent virtual address spaces and cannot directly access each other’s user-space memory. To exchange data, they must rely on a shared medium provided by the kernel.

A pipe is one of the most classic forms of IPC. You can think of it as a memory-buffered channel maintained by the kernel. Both the reader and the writer access the same buffer through file descriptors, allowing data to flow between processes.

Pipes Are Closely Tied to the File System Abstraction

Linux models much of I/O as files. Pipes reuse that abstraction: processes access them through read and write, while the kernel manages the buffer, reference counts, and block-and-wakeup behavior internally.

An anonymous pipe has no filesystem path because it does not exist on disk. A named pipe has a pathname, so unrelated processes can open it. That is the most fundamental difference between the two.

Anonymous Pipes Depend on fork and File Descriptor Inheritance

Anonymous pipes are typically used between a parent and child process. The reason is not an API limitation. The real reason is that an anonymous pipe has no global name, so the child process must inherit the already-open file descriptors after fork.

After fork, the parent and child have separate file descriptor tables, but the descriptors in both tables refer to the same underlying file object and kernel buffer. That is what makes writing on one side and reading on the other possible.

[Figure: files] This diagram shows the mapping between a process-level file descriptor table and the kernel’s file-management structures. It highlights how a process can hold multiple file descriptor entries through the files structure, which is essential for understanding descriptor inheritance after fork and the sharing of pipe endpoints.

[Figure: file] This diagram emphasizes the relationship between struct file and the underlying buffer, inode, and operation set. It shows that although the parent and child copy descriptor references, what they actually share is the underlying file object and its buffer—not independent copies of the data.

The pipe Call Returns a Read End and a Write End

#include <unistd.h>
#include <stdlib.h>

int main() {
    int pipefd[2] = {0};
    if (pipe(pipefd) == -1) return 1; // Create an anonymous pipe

    // pipefd[0] is the read end, pipefd[1] is the write end
    close(pipefd[0]); // In a real program, each process keeps only the end it needs
    close(pipefd[1]); // Here we simply release both descriptors
    return 0;
}

This code shows the essence of pipe: it creates a pair of connected file descriptors in one step.

Closing Unused Ends Correctly Is Required for Anonymous Pipes to Work

In real-world engineering, the most common mistake is not failing to create the pipe, but failing to close unused ends promptly. The parent process should close the unused side, and the child process must do the same.

If both processes keep all descriptors open, EOF detection becomes unreliable, blocking behavior becomes harder to reason about, and you may even see cases where “the pipe was closed, but the child process still does not exit” during process-pool shutdown.

#include <unistd.h>
#include <sys/wait.h>
#include <stdio.h>
#include <string.h>

int main() {
    int pipefd[2];
    if (pipe(pipefd) == -1) return 1; // Create a pipe, bailing out on failure
    pid_t id = fork(); // Create a child process

    if (id == 0) {
        close(pipefd[0]); // Child closes the read end and only writes
        const char *msg = "hello pipe\n";
        write(pipefd[1], msg, strlen(msg)); // Write data into the pipe
        close(pipefd[1]); // Close immediately after writing to signal EOF
        return 0;
    }

    close(pipefd[1]); // Parent closes the write end and only reads
    char buf[64] = {0};
    ssize_t n = read(pipefd[0], buf, sizeof(buf) - 1); // Read the child's message
    if (n > 0) printf("%s", buf);
    close(pipefd[0]);
    waitpid(id, NULL, 0); // Reap the child process
    return 0;
}

This example demonstrates the standard unidirectional model for anonymous pipes: child writes, parent reads.

There Are Four Anonymous Pipe Behaviors You Must Remember

  1. If the reader is slow, the writer may block when the buffer becomes full.
  2. If the writer is slow, the reader blocks while waiting for data.
  3. After all write ends are closed, read returns 0, which means EOF.
  4. After all read ends are closed, continuing to write may trigger SIGPIPE.

These semantics mean pipes naturally provide synchronization effects. However, a pipe is not a message queue. It is byte-stream oriented and does not preserve application-level message boundaries.

Anonymous Pipes Make It Easy to Build a Lightweight Process Pool

The most valuable practice in the source material is creating one anonymous pipe between the parent process and each child process. The parent writes a task ID into a specific pipe, and the child reads that ID and executes the corresponding function.

This design works especially well for teaching because it cleanly separates the control plane from the execution plane: the parent is responsible for scheduling, while child processes consume tasks.

Child Processes Can Execute a Function Map by Task ID

#include <unistd.h>
#include <vector>
#include <functional>
#include <iostream>

void Download() { std::cout << "download\n"; }
void SyncDisk() { std::cout << "sync disk\n"; }
void UpdateUserData() { std::cout << "update user\n"; }

std::vector<std::function<void()>> tasks{Download, SyncDisk, UpdateUserData};

void DoTask(int fd) {
    while (true) {
        int task_code = 0;
        ssize_t n = read(fd, &task_code, sizeof(task_code)); // Read the task ID sent by the parent
        if (n == sizeof(task_code)) {
            if (task_code >= 0 && task_code < (int)tasks.size())
                tasks[task_code](); // Execute the task by ID; ignore out-of-range codes
        } else if (n == 0) {
            break; // The write end is closed, so the child exits
        } else {
            break; // Read failed, terminate the loop
        }
    }
}

This code demonstrates a low-cost scheduling model: send task IDs through the pipe, and let each child process map the ID to a local function.

An Incorrect Shutdown Order Can Leave Process Pool Bugs Behind

If the parent creates multiple child processes but does not close inherited, unused write ends inside each child, later child processes may keep extra references to earlier pipes.

In that case, even if the parent closes one write end, the corresponding read end may still not observe EOF, because another child still holds a reference to that same write end. The reverse-order shutdown strategy in the source can mitigate the symptom, but the real fix is to clean up unused descriptors immediately after creation.

[Figure: execution result] This output screenshot shows the console results after the parent polls and dispatches tasks to different child processes. You can observe the mapping among task IDs, channel IDs, and child PIDs, which confirms that the “one pipe per child” scheduling path works as intended.

[Figure: version2 bug] This animation clearly shows the shutdown anomaly caused by file descriptor inheritance: some write ends appear closed on the parent side but are still held by child processes created later, so the read end does not receive EOF in time. It highlights why process-pool cleanup order and file descriptor hygiene are critical.

Named Pipes Turn an Anonymous Shared Channel into a Discoverable File Entry

The core value of a named pipe, or FIFO, is that it allows two unrelated processes to communicate. Because it has a pathname in the filesystem, any process that knows the path can connect to the same pipe through open.

One important detail is that a FIFO file itself reports a size of 0. The actual data lives in a kernel buffer and is never written to disk through that “file.”

mkfifo Works Well for a Simple Local Client-Server Model

#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <iostream>

int main() {
    const char *path = "./named_pipe";
    if (mkfifo(path, 0666) == -1) return 1; // Create the named pipe file

    int fd = open(path, O_WRONLY); // Blocks here until a reader opens the pipe
    if (fd == -1) return 1;
    const char *msg = "hello fifo";
    write(fd, msg, 10); // Send the 10-byte message through the named pipe
    close(fd);
    unlink(path); // Remove the pathname
    return 0;
}

This code shows that the key property of a FIFO is not disk storage, but a kernel communication endpoint that can be discovered by path.

Named Pipes Naturally Fit Two-Program Collaboration Demos

The original example splits the programs into Server and Client. The server calls Open('r') and keeps reading, while the client calls Open('w'), collects strings from standard input, and writes them into the pipe.

This structure is ideal for verifying three facts: first, named pipes support communication between unrelated processes; second, open may block if the peer is not ready; third, a read that returns 0 means all write ends have been closed.

[Figure: demo] This screenshot demonstrates the named-pipe workflow across two separate processes. The left and right terminals play the writer and receiver roles, showing how a FIFO creates a practical local data channel between independent executables.

Choosing Between Anonymous Pipes and Named Pipes Depends on Process Relationships and Discovery

If you are working with parent-child processes, task control, or one-off local scheduling, anonymous pipes are more direct. They have lower overhead and a clearer lifecycle. If two independent programs need a shared entry point, a named pipe is the better choice.

However, neither option is ideal for large-scale structured message distribution or cross-host communication. In those cases, you should consider message queues, shared memory, or sockets instead.

FAQ: The Three Questions Developers Ask Most Often

Q1: Why can anonymous pipes usually be used only between parent and child processes?
A: Because an anonymous pipe has no pathname and cannot be discovered proactively by other processes. In practice, it usually relies on fork to inherit already-open file descriptors.

Q2: Why might a child process still fail to read EOF after the parent closes its write end?
A: Because references to that same write end may still exist in other child processes. As long as any process still holds the write end, the reader will not treat the pipe as fully closed.

Q3: What is the biggest difference between a named pipe and a regular file?
A: A FIFO is only an entry point in the filesystem. The data does not persist on disk. Reads and writes go through a kernel buffer and follow synchronous blocking semantics.

Core Summary: This article systematically reconstructs the core concepts of Linux pipe-based IPC. It covers the underlying principles, blocking semantics, typical APIs, and practical boundaries of anonymous pipes and named pipes, and uses C/C++ examples to build a lightweight process pool with pipe. It also explains common bugs involving file descriptor inheritance and shutdown behavior.