LangGraph Append-Only Message State in Practice: Preserve Conversation History with Annotated and add

LangGraph append-only message state lets conversation history accumulate automatically as execution moves across nodes, avoiding the context loss caused by traditional overwrite-based state updates. This article focuses on how Annotated, operator.add, and StateGraph work together to preserve history in chatbots and multi-turn assistants. Keywords: LangGraph, append-only state, conversation history.

Technical specification snapshot

Language: Python
Core framework: LangGraph
State mechanism: Annotated[List[Message], add]
Graph model: StateGraph
Message types: HumanMessage, AIMessage
Merge rule: operator.add
Pattern: multi-node state transitions, append-only merge
Core dependencies: langgraph, langchain-core, typing, operator

Append-only message state is a core capability for conversational systems

In LangGraph, a state field without a declared merge rule is simply overwritten by each update (last write wins). That works well for form-style workflows, but it is not ideal for chat systems, because every user and AI message must be preserved in full.

The goal of append-only message state is straightforward: each node returns only the new messages, and the framework appends them to the end of the history. This preserves context and reduces node complexity at the same time.

Append-only state solves three practical problems

It primarily addresses three pain points: conversation history is easy to lose, nodes must manually concatenate lists, and state transitions become harder to maintain. In multi-turn Q&A, customer support bots, and memory-enabled assistants, these issues directly affect response quality.

# A typical overwrite-based approach replaces old messages with new ones
state = {
    "messages": ["old message"]
}

new_state = {
    "messages": ["new message"]  # This usually overwrites the original list
}

This example shows that without an explicit merge rule, message history usually cannot accumulate naturally.

LangGraph declares append behavior through type annotations

LangGraph emphasizes declarative state rules rather than handwritten merge logic. Developers only need to describe how a field should merge, and the runtime applies that rule automatically.

Annotated is the key mechanism here. It does more than label a type. It binds the field type and the merge rule together so the framework can interpret and execute them.
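
The binding is visible with nothing but the standard library. The sketch below (plain strings stand in for message objects) shows how a runtime can read the merge rule back out of the annotation via typing.get_type_hints:

```python
from operator import add
from typing import Annotated, List, TypedDict, get_type_hints

class ChatState(TypedDict):
    messages: Annotated[List[str], add]  # str stands in for message objects
    current_topic: str

# A runtime like LangGraph can recover the merge rule from the annotation metadata
hints = get_type_hints(ChatState, include_extras=True)
metadata = getattr(hints["messages"], "__metadata__", ())
print(metadata)  # → (<built-in function add>,)
```

The merge rule travels with the type, so no node function ever needs to know it exists.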

The add operator defines how lists are merged

For a message list, the most natural merge strategy is append. Here, operator.add provides the semantics of list concatenation: old list + new list = full history.
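
A two-line demonstration of those semantics, using plain strings as stand-ins for message objects:

```python
from operator import add

history = ["User: hello"]            # existing conversation history
new_messages = ["AI: hi there"]      # messages produced by the current node

merged = add(history, new_messages)  # identical to history + new_messages
print(merged)   # → ['User: hello', 'AI: hi there']
print(history)  # → ['User: hello']  (the original list is untouched)
```

Note that add returns a new list rather than mutating the old one, which is exactly the behavior a state-merging runtime wants.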

from typing import TypedDict, List, Annotated
from operator import add
from langchain_core.messages import HumanMessage, AIMessage

class ChatState(TypedDict):
    # Use Annotated + add to declare append-based merging for the messages field
    messages: Annotated[List[HumanMessage | AIMessage], add]
    # Regular fields still update through overwrite semantics
    current_topic: str

This definition splits state into two categories: the message history is appended to, while auxiliary fields are overwritten. That pattern fits most conversation workflows.
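
To make the two categories concrete, here is a minimal stdlib simulation of rule-based merging. The apply_update helper is hypothetical (it is not LangGraph API) and uses plain strings for messages:

```python
from operator import add

# Per-field merge rules: append for history, overwrite for everything else
MERGE_RULES = {"messages": add}

def apply_update(state: dict, update: dict) -> dict:
    """Hypothetical helper mimicking rule-based state merging."""
    merged = dict(state)
    for key, value in update.items():
        rule = MERGE_RULES.get(key)
        merged[key] = rule(merged[key], value) if rule else value
    return merged

state = {"messages": ["User: hi"], "current_topic": "greeting"}
state = apply_update(state, {"messages": ["AI: hello!"], "current_topic": "reply"})
print(state["messages"])       # → ['User: hi', 'AI: hello!']  (appended)
print(state["current_topic"])  # → reply  (overwritten)
```

One update dict, two different merge behaviors, chosen per field.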

Node functions should return only incremental data

Once a state field has an append rule, node functions should no longer read the old list and concatenate it manually. A node should return only the new messages produced at that step.

This leads to an important benefit: each node behaves more like a pure function. It accepts the current state and returns only the new output for that step, which makes the logic easier to reason about and test.
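
That testability can be shown directly. The node below is illustrative (plain strings stand in for HumanMessage/AIMessage), and it can be exercised without building a graph at all:

```python
def ai_response_node(state: dict) -> dict:
    # Return only this turn's new message, never the whole history
    return {"messages": ["AI: Beijing is sunny today"]}

# Because the node behaves like a pure function, test it in isolation
before = {"messages": ["User: how is the weather?"], "current_topic": "weather"}
update = ai_response_node(before)

print(update["messages"])  # → ['AI: Beijing is sunny today']  (only the delta)
print(before["messages"])  # → ['User: how is the weather?']  (input untouched)
```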

A minimal runnable conversation graph example

import asyncio
from typing import TypedDict, List, Annotated
from operator import add
from langgraph.graph import StateGraph, START, END
from langchain_core.messages import HumanMessage, AIMessage

class ChatState(TypedDict):
    messages: Annotated[List[HumanMessage | AIMessage], add]  # Messages are appended automatically
    current_topic: str

def user_input_node(state: ChatState) -> dict:
    # Return only the new user message for this turn
    return {
        "messages": [HumanMessage(content="User: How is the weather today?")],
        "current_topic": "weather query"
    }

def ai_response_node(state: ChatState) -> dict:
    # Return only the new AI reply for this turn
    return {
        "messages": [AIMessage(content="AI: Beijing is sunny today, 25°C, and great for going outside")],
        "current_topic": "weather details"
    }

def summary_node(state: ChatState) -> dict:
    # You can read the full history directly here because prior messages have already accumulated
    for msg in state["messages"]:
        print(msg.content)
    return {"current_topic": "conversation complete"}

def build_chat_graph():
    builder = StateGraph(ChatState)
    builder.add_node("user_input", user_input_node)
    builder.add_node("ai_response", ai_response_node)
    builder.add_node("summary", summary_node)
    builder.add_edge(START, "user_input")
    builder.add_edge("user_input", "ai_response")
    builder.add_edge("ai_response", "summary")
    builder.add_edge("summary", END)
    return builder.compile()

async def main():
    graph = build_chat_graph()
    initial_state = {
        "messages": [],  # Critical: initialize with an empty list
        "current_topic": "initial"
    }
    final_state = await graph.ainvoke(initial_state)
    print(final_state)

if __name__ == "__main__":
    asyncio.run(main())

This example shows a linear conversation graph in which the user message and AI reply are automatically accumulated into messages in execution order.

The runtime output confirms that messages are not overwritten

Running the example shows that after user_input executes, the message count grows from 0 to 1. After ai_response executes, it grows to 2. The final summary node can then read the full conversation history.

This confirms that LangGraph applies the append rule defined in the state annotation instead of replacing messages from the previous node with the latest node output.
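
The 0 → 1 → 2 progression can be reproduced with the merge rule alone, using plain strings as message stand-ins:

```python
from operator import add

history = []  # initialized empty, as in the example's initial_state

history = add(history, ["User: How is the weather today?"])   # after user_input
count_after_user = len(history)

history = add(history, ["AI: Beijing is sunny today, 25°C"])  # after ai_response
count_after_ai = len(history)

print(count_after_user, count_after_ai)  # → 1 2
```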

The engineering significance behind the result

For application developers, the real value is not just writing fewer lines of code. It is shifting state consistency from manual maintenance to framework guarantees. That significantly reduces the risk of ordering mistakes, history loss, and duplicate concatenation.

initial_state = {
    "messages": [],      # Must be initialized as an empty list
    "current_topic": "initial"
}

# If messages is not initialized, the list merge step may fail

This initialization looks simple, but it is a prerequisite for append-only state to work correctly.
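
The naive "old + new" model makes the reason visible. (This sketch models the merge with plain operator.add; LangGraph's exact handling of a missing field may differ.)

```python
from operator import add

# With plain "old + new" concatenation, a missing (None) history breaks the first merge
try:
    add(None, ["User: hi"])
except TypeError as exc:
    print("merge failed:", exc)

# Starting from an empty list keeps the first merge well-defined
history = add([], ["User: hi"])
print(history)  # → ['User: hi']
```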

This pattern is ideal for multi-turn conversations and memory-enabled agents

Append-only message state is useful not only for weather Q&A, but also for intelligent customer support, tool-calling assistants, long-running agents, and enterprise applications that need an auditable message trail.

As the graph grows, the benefits of append rules become even more obvious. You can extend the flow into “user input → intent recognition → tool call → AI response → summary archive” without rewriting message merge logic.
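
A stdlib sketch of that extended flow: the node bodies and message strings below are placeholders (in LangGraph each function would be registered with add_node and wired with add_edge), but the merge logic is untouched no matter how many stages are added:

```python
from operator import add

# Hypothetical stand-in nodes; each returns only its own new messages
def user_input(state):         return {"messages": ["User: what's the weather?"]}
def intent_recognition(state): return {"messages": ["System: intent=weather_query"]}
def tool_call(state):          return {"messages": ["Tool: 25°C, sunny"]}
def ai_response(state):        return {"messages": ["AI: It's sunny and 25°C today."]}

state = {"messages": []}
for node in (user_input, intent_recognition, tool_call, ai_response):
    state["messages"] = add(state["messages"], node(state)["messages"])

print(len(state["messages"]))  # → 4
```

Adding a fifth stage means adding one more node function, not touching any merge code.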

The boundary between append and overwrite should remain clear

Not every field should be append-only. Fields such as current_topic, intent, and session_status represent the current state and should usually be overwritten. Append semantics should be reserved for data with historical value, such as messages, tool-call logs, and reasoning trace fragments.
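
A mixed schema along those lines might look like the hypothetical sketch below (str placeholders instead of real message and log types), with the append-only fields recoverable from the annotations:

```python
from operator import add
from typing import Annotated, List, TypedDict, get_type_hints

class AgentState(TypedDict):
    # Historical data: declared append-only
    messages: Annotated[List[str], add]
    tool_calls: Annotated[List[str], add]
    # Current-state data: plain fields, overwritten on each update
    current_topic: str
    intent: str
    session_status: str

hints = get_type_hints(AgentState, include_extras=True)
append_fields = [k for k, v in hints.items() if getattr(v, "__metadata__", None)]
print(sorted(append_fields))  # → ['messages', 'tool_calls']
```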

FAQ

Q1: Why must messages be initialized as an empty list?

A: Append-based merging depends on list semantics. If the initial state does not provide messages: [], the framework may not find the target field during add merging, which can cause runtime failures or undefined results.

Q2: Why do nodes return a single-element list instead of a single message object?

A: Because the merge rule operates on list concatenation. Returning [new_msg] allows the framework to merge it with historical messages via old_list + new_list. Returning a single object would create a type mismatch.
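
The mismatch is easy to reproduce with plain strings standing in for message objects:

```python
from operator import add

history = ["User: hi"]

# Wrapping the new message in a list makes plain concatenation work
merged = add(history, ["AI: hello"])
print(merged)  # → ['User: hi', 'AI: hello']

# Returning a bare message is a type mismatch for list concatenation
try:
    add(history, "AI: hello")
except TypeError:
    print("cannot concatenate a list with a non-list")
```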

Q3: Which fields should use append semantics, and which should use overwrite semantics?

A: Data with historical value and ordering requirements should use append semantics, such as conversation messages and tool logs. Data that only represents the latest state should use overwrite semantics, such as the current topic, stage marker, or intent classification result.

Summary

LangGraph append-only message state lets you preserve conversation history across nodes by combining Annotated with operator.add. Instead of manually rebuilding messages in every node, you define a merge rule once and let the framework append new messages automatically. This pattern is simple, testable, and especially effective for chatbots, multi-turn assistants, and memory-enabled agent workflows.