In Spring Boot 3.x, TraceId propagation in message queues can break because of thread switches, missing message headers, and non-inherited MDC context. This article focuses on RabbitMQ and Kafka scenarios and explains how to use Micrometer Tracing auto-propagation, async thread-pool compensation, and trace verification. Keywords: Micrometer Tracing, TraceId, RabbitMQ.
Technical specifications at a glance
| Parameter | Description |
|---|---|
| Core language | Java 17+ |
| Framework version | Spring Boot 3.x |
| Tracing protocol | B3 / OpenTelemetry |
| Message middleware | RabbitMQ, Kafka |
| Tracing implementation | Micrometer Tracing + Brave / OTel |
| Observability components | Actuator, Zipkin, Jaeger |
| Core dependencies | spring-boot-starter-actuator, micrometer-tracing-bridge-brave, spring-boot-starter-amqp, spring-kafka |
Broken message trace continuity in Spring Boot 3.x is a classic observability problem
In microservices, HTTP requests usually propagate a TraceId naturally, but message queues do not run in the same thread or the same request context. After the producer sends a message, the consumer often executes in another thread and another process, so the trace naturally contains a break point.
In the Spring Boot 2.x era, many projects relied on Sleuth for tracing that worked “out of the box.” After upgrading to 3.x, the underlying implementation moved to Micrometer Tracing. If you keep using the old configuration, the most common result is that the TraceId in logs becomes empty or turns into a new random value.
These symptoms usually appear when the tracing context is lost
- Producer logs and consumer logs cannot be correlated through the same traceId.
- @RabbitListener or @KafkaListener methods cannot read MDC.
- If the consumer switches again to @Async or an internal thread pool, the trace breaks a second time.
- In Zipkin or Jaeger, you only see isolated spans instead of a complete parent-child relationship.
log.info("producer traceId={}", MDC.get("traceId"));
// Key symptom: the consumer-side output is empty or becomes a new value
log.info("consumer traceId={}", MDC.get("traceId"));
This log snippet helps you quickly determine whether the producer and consumer are still on the same trace before and after message delivery.
The root cause of tracing context failure comes from propagation media and thread model changes
The essence of distributed tracing is to propagate metadata such as traceId, spanId, and sampling flags from one execution unit to the next. In HTTP scenarios, this depends on request headers. In messaging scenarios, it depends on message headers.
If your project only passes traceId manually, but does not pass spanId and sampling information, the tracing system usually cannot reconstruct the correct parent-child span relationship. On top of that, message consumption often involves thread switching, and MDC, as a ThreadLocal-based container, does not automatically flow into child threads.
The correct solution in Spring Boot 3.x is to use Micrometer Tracing auto-propagation
Micrometer Tracing already provides Observation integrations for RabbitMQ and Kafka. As long as the dependencies are complete and the propagation type is correct, the framework can automatically inject and extract message headers. In most cases, your business code does not need to build tracing headers manually.
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
</dependencies>
These dependencies provide the foundation for message tracing, observability, and RabbitMQ integration in Spring Boot 3.x; for the Kafka path, add spring-kafka as well.
Using the auto-propagation mechanism is usually more reliable than manual propagation
When RabbitTemplate or KafkaTemplate sends a message, Micrometer writes the current context into the message headers. After the consumer receives the message, it restores the header values into the current tracing context and synchronizes them to MDC, so the logging framework can output them directly.
You should first verify whether B3 fields exist in the message headers. With the multi-header format these are X-B3-TraceId, X-B3-SpanId, and X-B3-Sampled; with the single-header format they collapse into one b3 header. If neither is present, the problem most likely comes from missing dependencies, an incorrect propagation type, or an integration that did not activate correctly.
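To make that check concrete, here is a minimal, framework-free sketch (the helper name B3HeaderCheck is hypothetical) that inspects a consumer-side header map, such as the one returned by message.getMessageProperties().getHeaders(), for any recognizable B3 field:

```java
import java.util.Map;
import java.util.Set;

public class B3HeaderCheck {
    // Header names to look for; which ones actually appear depends on the
    // configured propagation type (multi-header X-B3-* vs the single "b3" header).
    private static final Set<String> B3_KEYS =
            Set.of("b3", "X-B3-TraceId", "X-B3-SpanId", "X-B3-Sampled");

    /** Returns true if the header map carries at least one B3 field. */
    public static boolean hasB3Headers(Map<String, Object> headers) {
        return headers.keySet().stream().anyMatch(B3_KEYS::contains);
    }
}
```

Logging the result of such a check inside a temporary listener is usually the fastest way to tell whether the producer side injected anything at all.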
@Service
public class OrderProducer {
@Autowired
private RabbitTemplate rabbitTemplate;
public void sendOrder(Order order) {
// Key logic: no need to set trace headers manually; the framework injects them automatically
rabbitTemplate.convertAndSend("order.exchange", "order.routing", order);
}
}
This code shows that once auto-propagation is enabled, the producer does not need to manage tracing headers explicitly.
As long as the consumer restores the context successfully, logs connect naturally
Inside the consumer listener method, you usually do not need to parse traceId explicitly. If the configuration is correct, the current thread already carries the restored context when the listener runs, and the log output automatically includes the same TraceId and SpanId for that trace.
@Component
@Slf4j
public class OrderConsumer {
@RabbitListener(queues = "order.queue")
public void receive(Order order, Message message) {
// Key logic: MDC should already contain traceId/spanId at this point
log.info("Received order: {}", order);
}
}
This code highlights that the key on the consumer side is not “reading message headers,” but rather “letting the framework restore the context first.”
Async thread pools are still the most easily overlooked break point in message tracing
Even if the context is intact when the message reaches the consumer, MDC can still be lost again as soon as you use @Async, a custom thread pool, or CompletableFuture internally. The reason is simple: MDC is based on ThreadLocal and does not automatically propagate across threads.
So in messaging scenarios, you should reason about propagation in two layers: the first layer is MQ header propagation, and the second is in-process thread propagation. Micrometer solves the first, while the second requires you to add a context decorator to your thread pool.
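Because MDC is just a ThreadLocal-backed map, the second-layer break can be reproduced without any Spring or MQ machinery. The sketch below (class name ContextCopyDemo is made up for illustration) uses a plain ThreadLocal as a stand-in for MDC and shows both the loss across an executor and the copy-and-restore fix that a TaskDecorator applies:

```java
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ContextCopyDemo {
    // Stand-in for MDC: a plain ThreadLocal map (MDC is ThreadLocal-backed too)
    static final ThreadLocal<Map<String, String>> CTX = new ThreadLocal<>();

    /** Submit work without copying the context: the worker sees nothing. */
    public static String runWithoutCopy() throws Exception {
        CTX.set(Map.of("traceId", "abc123"));
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(() -> {
                Map<String, String> ctx = CTX.get(); // worker thread's own slot: empty
                return ctx == null ? "missing" : ctx.get("traceId");
            }).get();
        } finally {
            pool.shutdown();
        }
    }

    /** Capture on the caller thread, restore on the worker thread. */
    public static String runWithCopy() throws Exception {
        CTX.set(Map.of("traceId", "abc123"));
        Map<String, String> copy = CTX.get(); // capture before crossing threads
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(() -> {
                CTX.set(copy); // restore in the worker thread
                try {
                    return CTX.get().get("traceId");
                } finally {
                    CTX.remove(); // avoid leaking into reused pool threads
                }
            }).get();
        } finally {
            pool.shutdown();
        }
    }
}
```

The capture-restore-clear sequence here is exactly the shape of the TaskDecorator compensation described next, just stripped down to standard-library code.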
Adding a TaskDecorator to the thread pool is the safest compensation strategy
@Configuration
@EnableAsync
public class AsyncConfig {
@Bean
public ThreadPoolTaskExecutor taskExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(5);
executor.setMaxPoolSize(10);
executor.setQueueCapacity(100);
executor.setTaskDecorator(runnable -> {
Map<String, String> contextMap = MDC.getCopyOfContextMap(); // Copy MDC from the parent thread
return () -> {
if (contextMap != null) {
MDC.setContextMap(contextMap); // Restore the context in the child thread
}
try {
runnable.run(); // Execute business logic
} finally {
MDC.clear(); // Prevent contamination caused by thread reuse
}
};
});
executor.initialize();
return executor;
}
}
This configuration safely copies MDC from the consumer thread into async child threads.
You can manually inject B3 headers in custom low-level send logic
If the project does not directly use higher-level abstractions, or if you need to support a special messaging protocol, you can manually obtain the current span from Tracer and write B3 fields into the message headers. However, this should be treated as a fallback rather than the default approach.
@Autowired
private Tracer tracer;
public void sendWithTrace(String exchange, String routingKey, Object payload) {
    Span currentSpan = tracer.currentSpan(); // Get the current span
    rabbitTemplate.convertAndSend(exchange, routingKey, payload, msg -> {
        if (currentSpan != null) {
            TraceContext ctx = currentSpan.context();
            msg.getMessageProperties().setHeader("X-B3-TraceId", ctx.traceId());
            msg.getMessageProperties().setHeader("X-B3-SpanId", ctx.spanId());
            // sampled() may return null; treat null as not sampled
            msg.getMessageProperties().setHeader("X-B3-Sampled",
                    Boolean.TRUE.equals(ctx.sampled()) ? "1" : "0");
        }
        return msg;
    });
}
This code manually propagates tracing headers when the framework does not handle them automatically.
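If you target the single-header B3 format instead of the three X-B3-* headers, the b3 header value is simply the fields joined with dashes. A tiny helper (the name B3SingleHeader is hypothetical) sketches the common three-field form:

```java
public class B3SingleHeader {
    /**
     * Builds the single-header B3 value "{traceId}-{spanId}-{samplingState}",
     * where the sampling state is "1" (sampled) or "0" (not sampled).
     */
    public static String format(String traceId, String spanId, boolean sampled) {
        return traceId + "-" + spanId + "-" + (sampled ? "1" : "0");
    }
}
```

The B3 specification also allows an optional fourth dash-separated field (the parent span id), omitted here for brevity.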
A practical configuration example helps you validate the trace quickly
management:
  tracing:
    sampling:
      probability: 1.0
    propagation:
      type: b3
spring:
  rabbitmq:
    host: localhost
    port: 5672
This configuration enables full sampling and sets B3 as the propagation format, which is useful for local integration testing.
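One caveat worth checking: depending on your Spring Boot 3.x minor version, messaging observations may be disabled by default, in which case no headers are injected even with correct dependencies. Recent Boot versions expose the properties sketched below; treat them as an assumption to verify against your version's reference documentation (older versions instead require calling setObservationEnabled(true) on the template and the listener container factory):

```yaml
spring:
  rabbitmq:
    template:
      observation-enabled: true   # inject trace headers on RabbitTemplate sends
    listener:
      observation-enabled: true   # restore the context in @RabbitListener
  kafka:
    bootstrap-servers: localhost:9092
    template:
      observation-enabled: true   # same for KafkaTemplate sends
    listener:
      observation-enabled: true   # same for @KafkaListener
```

If headers still do not appear after enabling these, fall back to checking the dependency list and the propagation type described earlier.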
<Pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] traceId=%X{traceId} spanId=%X{spanId} %-5level %logger{36} - %msg%n</Pattern>
This log pattern lets you directly verify whether the tracing fields in MDC were restored correctly.
Best practices should prioritize auto-propagation and a closed verification loop
Prefer Micrometer Tracing over manually constructing headers. Standardize your log format so that traceId and spanId are searchable. Add thread-context propagation to every async entry point to avoid a false sense of completeness where MQ propagation works but the thread pool drops the context again.
In production, do not default to 100% sampling. Depending on throughput, reduce it to 0.1 or lower. After every dependency upgrade, run an end-to-end verification: send an HTTP request, publish a message, consume it, and confirm that the Zipkin trace and log output are consistent.
FAQ: The three questions developers ask most often
1. Why does TraceId stop propagating automatically after upgrading from Spring Boot 2.x to 3.x?
Because Sleuth is no longer the primary tracing solution, and Spring Boot 3.x uses Micrometer Tracing instead. If you still use old dependencies or legacy configuration, automatic message-header injection and context restoration will not work as expected.
2. If the message header contains TraceId, does that guarantee a complete trace?
Not necessarily. A complete trace usually also requires spanId, sampling flags, and correct restoration of the parent-child relationship. Passing only traceId may make events look related, but it cannot form a standard tracing topology.
3. If the consumer already has TraceId, why does it disappear again after entering a thread pool?
Because MDC is based on ThreadLocal. The context in the listener thread does not automatically propagate to worker threads in the thread pool, so you must copy it manually through TaskDecorator or an equivalent mechanism.
Core summary: This article systematically explains trace context propagation in RabbitMQ and Kafka message flows under Spring Boot 3.x. It focuses on common issues such as TraceId loss, MDC breaks, and failed Sleuth-to-Micrometer upgrades, and provides practical solutions based on Micrometer Tracing auto-propagation, async thread-pool compensation, manual B3 header injection, and Zipkin-based verification.