How to Rebuild FastAPI Logging with Loguru: Async-Safe Writes, File Rotation, and Unified Uvicorn Logs

Loguru can dramatically simplify FastAPI logging configuration and address three pain points of standard logging in asynchronous applications: complex setup, fragmented output, and blocking file writes. This article focuses on async-safe writes, unified Uvicorn log handling, and production configuration boundaries.

Technical Specifications Snapshot

Language: Python
Framework: FastAPI
Logging Stack: Loguru + standard logging interception
Runtime Protocol: ASGI / HTTP
Server: Uvicorn
Core Dependencies: loguru, fastapi, uvicorn, logging
Source Article Popularity: Approximately 142 views
Best Fit Scenarios: Async web services, API troubleshooting, production log persistence

Loguru works better with FastAPI because it reduces async logging complexity

FastAPI’s core strength is asynchronous concurrency, but standard logging in its default form comes with many configuration points and scattered output targets. In coroutine-based applications, it can also lead to out-of-order logs, empty log files, and more difficult troubleshooting.

The value of Loguru is not just that it looks cleaner. It consolidates log output, formatting, file rotation, and retention into a single entry point: logger.add(). For small and medium-sized FastAPI projects, that lower cognitive overhead delivers real practical value.

Start by installing Loguru and writing your first usable log

uv add loguru        # or: pip install loguru

After this step, you can use the global logger directly in your project without first defining a large set of handlers and formatters.

from fastapi import FastAPI
from loguru import logger

app = FastAPI()

@app.get("/")
async def hello():
    logger.info("Someone accessed the home page")  # Record one request access
    return {"msg": "Hello"}

With this code in place, FastAPI emits a formatted log line as soon as it receives a request.
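To see the log line, run the app with Uvicorn (assuming the snippet above is saved as main.py; the port is the Uvicorn default):

```shell
uvicorn main:app --reload
# In another terminal, trigger the endpoint:
curl http://127.0.0.1:8000/
```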

In production, you should write logs to files and enable the async queue

Printing only to the console works in development, but not in production. A stable production setup should persist logs to files with rotation and retention enabled. Otherwise, logs will keep growing and may eventually affect disk usage and incident analysis.

Loguru makes file output configuration straightforward, but in asynchronous services such as FastAPI, the most important option is not rotation. It is enqueue=True, which determines whether file writes are handled asynchronously through a thread-safe queue.

from loguru import logger

logger.add(
    "app.log",
    rotation="10 MB",      # Split the log automatically when it reaches 10 MB
    retention="7 days",    # Keep only 7 days of history
    level="INFO",          # Record INFO and above only
    enqueue=True,          # Write asynchronously to avoid blocking the event loop
)

This configuration establishes a practical baseline logging strategy for FastAPI in production.

enqueue=True is the most commonly overlooked switch in async services

If you omit enqueue=True, file writes can block the event loop under high concurrency. The symptoms may include slower endpoints, out-of-order logs, and in severe cases, log loss. For ASGI services, you should treat it as a near-default requirement.

Also, use absolute paths for log files whenever possible. This prevents logs from being written to unexpected locations when the service runs as a daemon, inside a container, or from different working directories.

You need to unify Uvicorn logs to avoid splitting application and server logs

Many projects adopt Loguru for application code through logger.info(), while Uvicorn continues to emit its own uvicorn.access and uvicorn.error logs. The result is two logging styles in both the console and log files, which makes troubleshooting much harder.

The standard solution is to use an InterceptHandler to forward records from the standard library logging module into Loguru. This lets application logs, access logs, and error logs flow through the same output pipeline.

import logging
from loguru import logger

class InterceptHandler(logging.Handler):
    def emit(self, record):
        # depth=6 skips the stdlib logging frames so Loguru reports the real caller
        logger.opt(depth=6, exception=record.exc_info).log(
            record.levelname,
            record.getMessage()
        )

logging.basicConfig(
    handlers=[InterceptHandler()],
    level=0  # NOTSET: pass every record through to Loguru
)

This code forwards logs produced by standard logging to Loguru for unified processing.

In reload mode, you need more complete control over the root and Uvicorn loggers

If you enable --reload in development, or start the app with fastapi dev, some logs may bypass your custom handler. A more reliable approach is to clear old handlers before creating the application instance and allow uvicorn-related loggers to propagate to the root logger.

import logging
from loguru import logger

class InterceptHandler(logging.Handler):
    def emit(self, record):
        try:
            level = logger.level(record.levelname).name  # Map the stdlib level name to the matching Loguru level
        except ValueError:
            level = record.levelno  # Fall back to the original level for unknown values

        frame, depth = logging.currentframe(), 2
        while frame and frame.f_code.co_filename == logging.__file__:
            frame = frame.f_back
            depth += 1

        logger.opt(depth=depth, exception=record.exc_info).log(
            level,
            record.getMessage()
        )

def setup_logging():
    logging.root.handlers = [InterceptHandler()]   # Replace root handlers
    logging.root.setLevel(logging.INFO)

    for name in ("uvicorn", "uvicorn.access", "uvicorn.error"):
        logging.getLogger(name).handlers = []      # Clear existing handlers
        logging.getLogger(name).propagate = True   # Let the root logger process them uniformly

This code makes Uvicorn, the root logger, and application logs converge into the same output pipeline.

Loguru is not always the final answer and works best as a high-productivity entry point

Loguru is excellent at solving fast setup and readable configuration, but it does not automatically replace every infrastructure-level requirement. In large systems, microservice architectures, or multi-team environments, standard logging + dictConfig still provides stronger consistency and control.

In practice, the best strategy is often not either-or, but a combination. Let application code use Loguru for a better developer experience, while the framework and infrastructure layers remain compatible with standard logging. That approach reduces development cost without sacrificing ecosystem compatibility.

In production, you also need to consider multiprocessing, exceptions, and data masking

In multiprocessing deployments, several workers writing to the same file can create contention. A safer approach is to split log file names by PID or instance identity.

For exception logging, prefer logger.exception() so you preserve the full traceback. If logs contain passwords, tokens, phone numbers, or similar sensitive fields, mask them before writing to disk so that the logging system itself does not become a source of data leakage.

import os
from loguru import logger

pid = os.getpid()
logger.add(
    f"/var/log/myapp/app_{pid}.log",
    rotation="10 MB",      # Control the size of each file
    retention="7 days",    # Clean up historical files regularly
    enqueue=True,          # Write asynchronously through the queue
    backtrace=True,        # Preserve a more complete exception call chain
)

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("Request handling failed")  # Output the full exception stack

This example shows a standard pattern for isolating log files in multiprocessing deployments and preserving exception tracebacks.

The conclusion is that Loguru is an excellent starting point for FastAPI if you use it with clear boundaries

If your goal is to move FastAPI logging from merely printing messages to being operable, searchable, and debuggable, Loguru is a highly efficient starting point. It quickly addresses common pain points such as configuration complexity, asynchronous blocking, and fragmented Uvicorn output.

But once you move into production, the quality of your logging system depends less on the library name and more on whether you implement async writes, unified interception, file rotation, exception preservation, multiprocessing isolation, and sensitive data governance. When you get those fundamentals right, logs become the eyes of your production system.

FAQ: The three questions developers ask most often

1. Why is the default standard logging configuration not recommended in FastAPI?

Because the default configuration is not very friendly to asynchronous workloads. Common issues include verbose setup, scattered output, blocking file writes, and fragmented troubleshooting data. It is not unusable, but the default integration cost is higher.

2. What is the most important Loguru setting in FastAPI?

Usually enqueue=True. It turns log writes into asynchronous queue-based consumption, reduces pressure on the event loop, and is a critical setting for high-concurrency API services.

3. Can Loguru completely replace logging?

For small and medium-sized projects, it can often serve as the main entry point. For larger projects, a full replacement is not the safest approach. A more reliable pattern is to let Loguru improve the application logging experience while standard logging preserves ecosystem compatibility and infrastructure consistency.

AI Readability Summary

This article walks through a practical Loguru-based logging architecture for FastAPI, covering installation, file output, enqueue async writes, Uvicorn log interception, multiprocessing precautions in production, and a combined strategy with standard logging. The goal is to help you build a more stable and maintainable Python logging system.