This article focuses on Nanobot’s Tools mechanism and explains how an Agent decouples model decision-making from application-side execution. Through Tool abstractions, ExecTool, security guards, and sandboxing, it enables extensible tool invocation and addresses the core limitation that LLMs cannot directly operate on the external world. Keywords: Agent, Function Calling, Sandbox.
| Technical Item | Details |
|---|---|
| Project | Nanobot / OpenClaw-inspired lightweight learning implementation |
| Language | Python |
| License | Open-source project; license not explicitly stated in the original article |
| GitHub Stars | Not provided in the original article |
| Core Dependencies | asyncio, JSON Schema-inspired validation, bubblewrap, shell |
| Core Capabilities | Tool abstraction, parameter validation, shell execution, security isolation |
The Tools mechanism is the standard interface that connects an Agent to external capabilities
LLMs are good at reasoning, but by default they cannot read files, execute commands, or inspect system state. Without tools, a model can only suggest the next step rather than complete it. The value of Tools lies in exposing external capabilities as structured functions that the model can choose from.
In this architecture, the model is responsible only for decisions: whether to call a tool, which tool to call, and what arguments to provide. The actual execution happens on the application side. As a result, tool use is not a single request, but a closed loop of decision, execution, feedback, and continued reasoning.
A minimal tool-calling loop looks like this
```python
async def run_agent(llm, messages, tools, handlers):
    while True:
        # Expose the available tool list to the model
        resp = await llm.chat(messages=messages, tools=tools)
        # If the model no longer requests tools, finish directly
        if resp.stop_reason != "tool_use":
            return resp.content
        # Record the assistant turn that contains the tool calls,
        # so the model can later match results to its own requests
        messages.append({"role": "assistant", "content": resp.content})
        for call in resp.tool_calls:
            name = call["name"]
            args = call["input"]
            # Dispatch to the application-side handler by tool name
            result = await handlers[name](**args)
            # Feed the tool result back to the model for continued reasoning
            messages.append({"role": "user", "content": result})
```
This code shows the minimal closed loop of Agent tool invocation: the model plans, the application executes, and the result is fed back to the model.
The Tool base class defines the unified protocol for the tool system
Nanobot’s key design is not the number of tools, but the decision to abstract a common skeleton first. The Tool base class requires every tool to declare name, description, parameters, and execute(), which lets new tools integrate into the runtime in a consistent way.
This design provides two immediate benefits. First, tools become pluggable, so you do not need to modify the main loop. Second, the model can understand tool boundaries through a unified schema, which reduces ambiguity during tool selection.
Parameter validation is the first reliability layer provided by the Tool base class
validate_params() checks inputs against a JSON Schema-style definition. It supports primitive types, enums, numeric ranges, string length constraints, required object fields, and recursive array validation. This step moves the risk of the model producing invalid arguments to a point before execution begins.
```python
class Tool:
    def validate_params(self, params: dict) -> list[str]:
        schema = self.parameters or {}
        # Top-level parameters must be an object
        if schema.get("type", "object") != "object":
            raise ValueError("Schema must be object type")
        return self._validate(params, {**schema, "type": "object"}, "")

    def to_schema(self) -> dict:
        # Convert into a function-calling protocol the model can recognize
        return {
            "type": "function",
            "function": {
                "name": self.name,
                "description": self.description,
                "parameters": self.parameters,
            },
        }
```
This code shows that the Tool base class handles both argument validation and exporting tools as function descriptions that the model can consume.
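The recursive checker behind `validate_params()` is not shown in the excerpt above. A minimal sketch of what such a validator might look like, covering the features the article lists (type checks, enums, numeric ranges, string length, required fields, and array recursion) — the class name and error-message format here are assumptions, not Nanobot's actual code:

```python
class SchemaValidator:
    """Minimal JSON Schema-style validator (illustrative sketch)."""

    TYPE_MAP = {"string": str, "integer": int, "number": (int, float),
                "boolean": bool, "object": dict, "array": list}

    def validate(self, value, schema: dict, path: str) -> list[str]:
        errors = []
        expected = schema.get("type")
        if expected and not isinstance(value, self.TYPE_MAP[expected]):
            return [f"{path or '$'}: expected {expected}"]
        if "enum" in schema and value not in schema["enum"]:
            errors.append(f"{path or '$'}: must be one of {schema['enum']}")
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            if "minimum" in schema and value < schema["minimum"]:
                errors.append(f"{path or '$'}: below minimum {schema['minimum']}")
            if "maximum" in schema and value > schema["maximum"]:
                errors.append(f"{path or '$'}: above maximum {schema['maximum']}")
        if isinstance(value, str) and "minLength" in schema:
            if len(value) < schema["minLength"]:
                errors.append(f"{path or '$'}: shorter than {schema['minLength']}")
        if isinstance(value, dict):
            # Required fields, then recurse into declared properties
            for field in schema.get("required", []):
                if field not in value:
                    errors.append(f"{path}.{field}: required field missing")
            for key, sub in schema.get("properties", {}).items():
                if key in value:
                    errors += self.validate(value[key], sub, f"{path}.{key}")
        if isinstance(value, list) and "items" in schema:
            # Recursive array validation: every element checked against "items"
            for i, item in enumerate(value):
                errors += self.validate(item, schema["items"], f"{path}[{i}]")
        return errors
```

Returning a list of error strings rather than raising on the first failure lets the Agent hand the model every problem at once, so a retried call can fix all arguments in one round.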
ExecTool turns a general abstraction into a safe and controllable shell capability
In an Agent system, shell execution is usually both the highest-value and the highest-risk capability. Nanobot wraps it in ExecTool as a standard tool, allowing the model to indirectly perform tasks such as grep, ls, and cat through exec.
Its parameter surface is intentionally narrow: the core inputs are only command and the optional working_dir. That means the model has capability, but the application still defines the capability boundary instead of letting it expand without control.
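A parameter schema for such a narrow surface might look like the following sketch (the exact descriptions are illustrative; only `command` and `working_dir` come from the article):

```python
# Illustrative exec-tool schema: one required field, one optional field.
exec_schema = {
    "type": "object",
    "properties": {
        "command": {
            "type": "string",
            "description": "Shell command to run",
        },
        "working_dir": {
            "type": "string",
            "description": "Optional working directory inside the workspace",
        },
    },
    "required": ["command"],  # working_dir deliberately stays optional
}
```

Keeping the schema this small is itself a security decision: every field the model cannot set is a degree of freedom the application does not have to defend.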
The core of ExecTool is not just execution, but security review before execution
ExecTool.execute() first resolves the working directory, then calls _guard_command() for protection. Only after the command passes checks such as blacklists, allowlists, and path restrictions does it proceed to asynchronous subprocess execution.
```python
async def execute(self, command: str, working_dir: str | None = None) -> str:
    cwd = working_dir or self.working_dir or os.getcwd()
    # Perform security checks first; reject execution on failure
    guard_error = self._guard_command(command, cwd)
    if guard_error:
        return guard_error
    # Execute the shell command asynchronously
    process = await asyncio.create_subprocess_shell(
        command,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
        cwd=cwd,
    )
    try:
        stdout, stderr = await asyncio.wait_for(
            process.communicate(), timeout=self.timeout
        )
    except asyncio.TimeoutError:
        process.kill()
        return f"Error: command timed out after {self.timeout}s"
    output = stdout.decode("utf-8", errors="replace")
    # Surface stderr as well, so the model can observe failures
    if stderr:
        output += "\n" + stderr.decode("utf-8", errors="replace")
    return output
```
This code reflects the main execution path of ExecTool: guard first, execute second, collect output third.
The security model determines whether ExecTool is suitable for production-grade Agents
Nanobot’s security strategy has two layers: soft protection and hard isolation. Soft protection is implemented by _guard_command(), which focuses on blocking high-risk command patterns such as rm -rf, dd, shutdown, and fork bombs, while also supporting allowlists and workspace path restrictions.
When restrict_to_workspace is enabled, the tool must defend not only against dangerous commands, but also against path escape. Access patterns such as ../ and absolute paths outside the allowed boundary are rejected immediately. This is essential when you want the model to read a project directory without touching the host machine.
bubblewrap provides a second layer of filesystem isolation
On Linux, Nanobot can also use bubblewrap to construct an unprivileged sandbox. The idea is straightforward: mount system directories as read-only, mount workspace as read-write, hide parent directories behind tmpfs, and make sensitive configuration files invisible inside the sandbox.
```bash
bwrap \
  --new-session \
  --die-with-parent \
  --ro-bind /usr /usr \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --tmpfs /workspace_parent \
  --bind /real_workspace /real_workspace \
  --chdir /real_workspace \
  -- sh -c "grep -R keyword ."
```
This command shows the essence of the sandbox: give the command a trimmed-down minimum runtime world.
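From the application side, sandboxing reduces to prefixing the argv. A sketch of a helper that builds such a command line — the function name and the exact mount layout are assumptions based on the bwrap invocation above:

```python
def sandboxed_command(command: str, workspace: str) -> list[str]:
    """Wrap a shell command in a bubblewrap sandbox (illustrative sketch)."""
    return [
        "bwrap",
        "--new-session",                 # detach from the controlling terminal
        "--die-with-parent",             # kill the sandbox if the agent exits
        "--ro-bind", "/usr", "/usr",     # system binaries stay read-only
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",               # fresh, empty /tmp per invocation
        "--bind", workspace, workspace,  # only the workspace is read-write
        "--chdir", workspace,
        "--", "sh", "-c", command,
    ]
```

Because the result is a plain argv list, it can be handed directly to `asyncio.create_subprocess_exec`, keeping the sandboxed path and the unsandboxed path structurally identical in the executor.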
TOOLS.md explicitly exposes capability boundaries to the model
In addition to the code-level interface, Nanobot uses TOOLS.md as a tool manual. It supplements function signatures with constraints such as timeout behavior, output truncation, workspace limitations, and how planning tasks should use the tools. This keeps the model from guessing tool behavior and helps it produce more stable calling strategies.
From an AI operations perspective, this kind of documentation matters because it turns implicit implementation constraints into explicit context. It is a low-cost way to improve Agent controllability and interpretability.
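A hypothetical excerpt of what such a manual entry might look like — the wording below is illustrative, not quoted from Nanobot's actual TOOLS.md:

```markdown
## exec

Runs a shell command inside the workspace sandbox.

- Arguments: `command` (required), `working_dir` (optional).
- Output beyond the truncation limit is cut off; prefer narrower queries
  (e.g. `grep -m 20`) over paging through full output.
- Commands are killed after the configured timeout; split long-running
  jobs into smaller steps.
- Paths outside the workspace are rejected when workspace restriction
  is enabled.
```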
Parallel execution does not mean every tool can run at the same time
Nanobot executes tools serially by default, and parallelism is optional. More precisely, it uses a strategy of parallel execution within a batch and serial execution across batches. Read-only and thread-safe tools such as read_file, grep, and glob are better candidates for parallel execution. Tools with side effects or without thread safety must remain serial.
Marking exec as an exclusive tool is a sensible choice because it may modify files, spawn processes, or depend on mutable environment state. Any concurrent execution could introduce race conditions and non-reproducible results.
The engineering criteria behind the parallelism strategy
| Tool Type | read_only | exclusive | concurrency_safe | Execution Guidance |
|---|---|---|---|---|
| read_file / grep / glob | Yes | No | Yes | Safe to run in parallel |
| Some web_search implementations | Yes | Depends on the library | Not always | Parallelize cautiously |
| write_file / edit_file | No | No | No | Run serially |
| exec | No | Yes | No | Run serially and exclusively |
This table shows that tool concurrency is not just a configuration issue. It is jointly determined by the side-effect model and the thread-safety model.
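The batch strategy can be sketched with asyncio: a batch whose calls are all concurrency-safe runs under `gather()`, while any exclusive or unsafe call forces the whole batch onto the serial path. The metadata set and function shape below are assumptions for illustration:

```python
import asyncio

# Illustrative concurrency metadata; Nanobot's actual flags may differ.
CONCURRENCY_SAFE = {"read_file", "grep", "glob"}

async def run_batch(calls, handlers):
    """Run one batch of (name, args) tool calls.

    Parallel within the batch when every call is concurrency-safe;
    strictly serial as soon as any call has side effects.
    """
    if all(name in CONCURRENCY_SAFE for name, _ in calls):
        return await asyncio.gather(
            *(handlers[name](**args) for name, args in calls)
        )
    results = []
    for name, args in calls:  # serial path for side-effecting tools
        results.append(await handlers[name](**args))
    return results
```

Batches themselves still execute in order, which preserves the article's invariant: parallelism within a batch, serialism across batches.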
Nanobot reproduces the key capabilities of Function Calling through a lightweight design
From an engineering abstraction perspective, Nanobot’s Tools mechanism can be summarized in four layers: tool declaration, parameter validation, execution dispatch, and security isolation. It does not rely on a heavy framework, yet it fully implements the core Agent capability of connecting a language model to the real world.
For developers, the most valuable takeaway is not a specific tool class, but the interface and safety-boundary design behind the system: give the model the ability to act, while keeping it under an application-controlled surface that is verifiable, restrictable, and replayable.
FAQ
1. Why must the Tool base class output a schema instead of exposing only a Python function?
Because the model cannot directly understand runtime code objects, but it can understand structured function descriptions. The schema is the protocol layer the model uses to choose tools and generate arguments.
2. If ExecTool already has a blacklist, why is a sandbox still necessary?
A blacklist is only soft protection against known dangerous patterns. A sandbox provides hard isolation against unknown bypasses, path overreach, and sensitive file exposure. You need both layers together.
3. Why does Agent tool calling usually require multiple rounds instead of a single round?
Because tool use is fundamentally a process of plan first, execute second, summarize third. The model must first issue a tool call decision, then wait for external results, and only after observing those real results can it continue reasoning and produce a final answer.