Essential Python Syntax for Agent Development: A Practical Guide for Coze, LangChain, and Custom AI Agents

[AI Readability Summary] This practical Python reference for agent development focuses on the syntax you will use most often: type hints, message structures, prompt composition, tool dispatch, JSON parsing, exception fallback, and async concurrency. It helps solve common issues in AI agent systems such as hard-to-maintain code, unstable data structures, and error-prone tool invocation.

Keywords: Python, Agent, LangChain

The technical specification snapshot outlines the stack at a glance.

Parameter Details
Language: Python
Protocols: JSON, Function Calling, Async Coroutines
Core Dependencies: typing, json, asyncio
Applicable Frameworks: Coze, LangChain, Custom Agents
Typical Scenarios: Tool Calling, Memory Management, Task Orchestration, Structured Output

This guide focuses on the Python capabilities that truly matter in agent development.

The value of the original material does not come from listing syntax exhaustively. Instead, it filters out low-frequency knowledge and keeps only the parts you will use daily in agent engineering. Its core goal is to make tool functions easier to describe, prompts more controllable, context easier to maintain, and services more reliable.

For agent systems, Python is not just a scripting language. It is the glue layer that connects models, tools, memory, and workflows. Basic syntax alone is not enough. You need to understand how that syntax maps to real agent architectures.

Type hints determine whether tool interfaces can be consumed reliably.

Type hints are a first-order requirement in agent development. They directly affect function argument constraints, structured output parsing, and code readability. In tool registration, schema inference, and LLM function calling, clearer annotations lead to more stable systems.

from typing import List, Dict, Optional

# Tool function: return weather text, or None when the city is empty
def get_weather(city: str) -> Optional[str]:
    if not city:  # Fallback immediately for empty input
        return None
    return f"{city} is sunny today"

# Build conversation history
def build_msg_history(history: List[Dict[str, str]]) -> str:
    return "\n".join([m["content"] for m in history])  # Extract and join message content

This code establishes explicit constraints for tool inputs and outputs, making them easier for agent frameworks to recognize and reuse.
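To make that concrete, here is a minimal sketch of how a registration layer might read those annotations at runtime using the standard library's `typing.get_type_hints`. The inspection step is an illustration of the general pattern, not any specific framework's API.

```python
import typing
from typing import Optional

def get_weather(city: str) -> Optional[str]:
    if not city:
        return None
    return f"{city} is sunny today"

# A registration layer can read the annotations without calling the tool,
# then use them to validate arguments or describe the tool to a model
hints = typing.get_type_hints(get_weather)
```

Here `hints` maps each parameter name (plus `"return"`) to its resolved type, which is exactly the information a tool registry needs for validation and schema generation.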

Lists and dictionaries form the runtime data foundation of AI agents.

Multi-turn messages, task queues, tool lists, and execution results are all fundamentally represented as lists and dictionaries. Once you understand these two structures, you understand the primary carriers of internal agent state.

messages = [
    {"role": "system", "content": "You are a task decomposition agent"},
    {"role": "user", "content": "Help me write a weekly report"}
]

recent_msg = [m for m in messages if m["role"] != "system"][-5:]  # Trim recent messages to reduce token usage

tool_result = {
    "tool_name": "search",
    "success": True,
    "data": "Found 3 news articles"
}

This snippet shows the minimum implementation for message management, context trimming, and tool result packaging.

Prompt engineering is fundamentally string engineering.

In agent workflows, multiline strings define system roles and rules, while f-strings inject user queries, conversation memory, and tool outputs. Prompt stability depends heavily on how well you organize strings.

user_query = "Help me plan a weekend itinerary"
history_context = "Earlier we talked about going hiking"

prompt = f"""
You are a professional task decomposition agent.
Conversation history: {history_context}
Current user request: {user_query}
Return the output strictly in JSON.
"""  # Inject dynamic context to generate the final prompt

This code combines static rules and dynamic context into an executable prompt.

Function parameter design directly affects the extensibility of the tool orchestration layer.

General-purpose agents often need to dispatch different tools through a unified interface. That makes *args, **kwargs, and default parameters especially important. They allow a framework to support more tools without changing the function signature.

def tool_dispatch(tool_name: str, *args, **kwargs):
    if tool_name == "weather":
        return get_weather(*args, **kwargs)  # Forward uniformly to the weather tool
    elif tool_name == "search":
        return web_search(*args, **kwargs)   # Forward uniformly to the search tool
    raise ValueError(f"Unknown tool: {tool_name}")  # Fail loudly on unregistered tools

def run_agent(prompt: str, model: str = "doubao-pro", temperature: float = 0.7, timeout: int = 30):
    return {"model": model, "prompt": prompt, "timeout": timeout}

This example shows how to provide flexible interfaces for tool dispatch and default runtime configuration.

JSON handling determines whether model output can enter your engineering pipeline.

Once an LLM returns structured content, JSON becomes the transport layer between the model layer and the business layer. json.dumps and json.loads are foundational infrastructure for structured agents, and ensure_ascii=False is especially important in Chinese-language scenarios.

import json

data = {"task": "Check the weather", "city": "Beijing"}
json_str = json.dumps(data, ensure_ascii=False)  # Preserve Chinese characters for logs and API transport
obj = json.loads(json_str)  # Parse the JSON string back into a Python dictionary
print(obj["task"])

This code performs two-way conversion between Python objects and JSON strings.
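In practice, model output is rarely this clean: it may arrive wrapped in markdown code fences or fail to parse at all. The helper below is a best-effort sketch (the name `parse_llm_json` and the fence-stripping heuristic are assumptions, not a standard API) that turns raw model text into a dictionary or returns None.

```python
import json
from typing import Optional

def parse_llm_json(raw: str) -> Optional[dict]:
    """Best-effort parse of model output into a dict; returns None on failure."""
    text = raw.strip()
    # Models often wrap JSON in markdown code fences; strip them first
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    try:
        obj = json.loads(text)
        return obj if isinstance(obj, dict) else None  # Only accept dict payloads
    except json.JSONDecodeError:
        return None  # Signal failure instead of crashing the pipeline
```

Returning None instead of raising lets the caller decide whether to retry the model call or fall back to a default, which keeps the parsing layer from becoming a crash point.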

Stable agents must wrap all external calls in exception handling.

Network requests, database access, model APIs, and third-party tools can all fail. Without try-except, one error can interrupt the entire workflow. Exception handling is not an optimization. It is a requirement.

def call_llm(prompt: str) -> str:
    try:
        return "The model responded successfully"  # Replace this with a real model API call in production
    except Exception as e:
        return f"Model call failed: {str(e)}"  # Return a fallback response to avoid service crashes

This snippet adds a consistent error boundary around model invocation.
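A single try-except can be extended into a retry wrapper for transient failures such as timeouts. The sketch below is a generic pattern (the helper name and the linear backoff are assumptions, not from the original material) that retries any callable a few times before giving up.

```python
import time

def call_with_retry(fn, *args, retries: int = 3, delay: float = 0.5, **kwargs):
    """Retry a flaky external call, backing off between attempts."""
    last_error = None
    for attempt in range(retries):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            last_error = e
            time.sleep(delay * (attempt + 1))  # Linear backoff between attempts
    return f"Call failed after {retries} attempts: {last_error}"  # Fallback response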

Decorators, class encapsulation, and async capabilities form the advanced agent framework stack.

Decorators work well for tool registration, logging, auditing, and access control. Classes are ideal for encapsulating memory, model configuration, and tool registries. Async programming solves concurrent tool execution and streaming response problems. Together, these three capabilities support production-grade agents.

import asyncio
import functools

# Use a decorator to register agent tools
def agent_tool(func):
    @functools.wraps(func)  # Preserve the tool's name and docstring for introspection
    def wrapper(*args, **kwargs):
        print(f"Calling tool: {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@agent_tool
def calculator(a: float, b: float) -> float:
    return a + b

class BaseAgent:
    def __init__(self, model_name: str):
        self.model_name = model_name  # Store model configuration
        self.memory = []              # Store conversation memory
        self.tools = {}               # Store the tool registry

    def chat(self, query: str) -> str:
        self.memory.append({"role": "user", "content": query})
        return f"I have received your question: {query}"

async def async_search(city: str):
    await asyncio.sleep(1)  # Simulate an asynchronous network call
    return f"Weather lookup completed for {city}"

This code demonstrates the basic patterns for tool registration, state encapsulation, and asynchronous tasks.
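To show the concurrency benefit, a coroutine like the one above can be fanned out with `asyncio.gather`, so several lookups run in roughly the time of one. This is a minimal sketch; the city names are placeholders.

```python
import asyncio

async def async_search(city: str):
    await asyncio.sleep(0.1)  # Simulate network latency
    return f"Weather lookup completed for {city}"

async def run_concurrent():
    # Launch all lookups at once; total wall time is about one call, not three
    return await asyncio.gather(
        async_search("Beijing"),
        async_search("Shanghai"),
        async_search("Shenzhen"),
    )

results = asyncio.run(run_concurrent())
```

`asyncio.gather` preserves the order of its arguments, so `results` lines up with the cities even though the coroutines complete concurrently.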

The best learning path in practice should follow engineering value, not syntax completeness.

Start with lists, dictionaries, and list comprehensions because they directly support context management. Next, learn multiline strings and f-strings because they determine the quality of prompt construction. After that, focus on type hints, JSON, function parameters, and exception handling.

Then move on to decorators, classes, and async/await. These are most valuable once you begin designing extensible frameworks or high-concurrency tool layers. This sequence better matches the real evolution path of agent engineering.

The FAQ section answers common implementation questions.

FAQ 1: Why does agent development rely on type hints more than ordinary Python scripting?

Because agent tools are often consumed automatically by frameworks, models, or function-calling mechanisms. Clear type information reduces argument ambiguity and improves tool description, validation, and maintainability.
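As an illustration of that schema-generation step, the sketch below derives a minimal function-calling description from a tool's annotations. The `tool_schema` helper and the type-to-JSON mapping are assumptions for this example, not the API of any particular framework.

```python
from typing import get_type_hints

# Assumed mapping from Python types to JSON Schema type names
PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def tool_schema(fn) -> dict:
    """Derive a minimal function-calling schema from a tool's annotations."""
    hints = get_type_hints(fn)
    hints.pop("return", None)  # Only parameters belong in the schema
    params = {name: {"type": PY_TO_JSON.get(tp, "string")} for name, tp in hints.items()}
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": params},
    }

def get_weather(city: str, days: int = 1) -> str:
    """Look up the weather forecast for a city."""
    return f"{city}: sunny for {days} day(s)"

schema = tool_schema(get_weather)
```

This is why unannotated tools degrade function calling: without type hints, the schema has nothing to constrain the model's arguments with.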

FAQ 2: Why are message structures usually represented with a list of dictionaries?

Because multi-turn conversations are naturally ordered, which makes lists a good fit. Each message contains key-value pairs such as role, content, and name, which makes dictionaries the right structure. This design also aligns with mainstream model APIs.

FAQ 3: When should you introduce async/await?

When your agent needs to query weather, search, and calendar services concurrently, or when it needs streaming output and non-blocking calls, you should adopt asynchronous execution. For single-tool serial workflows, you can keep things simpler.

Core Summary: This article refactors the original content into a practical Python syntax checklist for agent development. It focuses on type hints, lists and dictionaries, prompt strings, JSON, exception handling, decorators, class encapsulation, and asynchronous calls to help developers quickly build maintainable, extensible, and production-ready agent engineering skills.