This ESP32-based Claude Code CLI companion uses BLE to sync AI coding status in real time and moves high-risk actions such as Bash, Write, and Edit into a physical on-device approval flow. It addresses three common pain points: buried terminal prompts, missing feedback, and a lack of control. Keywords: ESP32, Claude Code, physical approval.
This project turns Claude Code’s black-box execution flow into a perceptible human-in-the-loop interface.
| Parameter | Description |
|---|---|
| Project Name | MicroPython_Claude_Assistant |
| Core Languages | MicroPython, Python |
| Communication Protocol | BLE (NUS-like fragmented transport) |
| Hardware | ESP32 + small display + touch input |
| Host Environment | Claude Code CLI hooks + PC daemon |
| Core Dependencies | asyncio, BLE communication, JSON messaging |
| Repository | https://github.com/FreakStudioCN/MicroPython_Claude_Assistant |
| Stars | Not provided in the source material |
This is not a typical “AI gadget.” It is a physical control terminal designed for developer workflows. It maps Claude Code execution state, approval requests, and task results from the terminal onto animations and interactive buttons on an ESP32 display.
When Claude Code runs tools automatically, the biggest source of anxiety is not speed. It is invisibility. Fast-scrolling logs, hidden approval prompts, and uncertainty about whether the system is still running or has stalled all create a meaningful sense of loss of control.
The project’s core value is upgrading log awareness into state awareness.
The author designed three ASCII characters for the device—a cat, a robot, and a duck—and defined seven animation states for each one, including idle, busy, waiting for approval, and completion celebration. This is not just a playful UI choice. It turns abstract system state into visual signals that users can recognize instantly.
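The seven states per character map naturally onto a lookup table from state names to animation frame cycles. The sketch below illustrates the idea only; the state names follow the article, but the frame strings and the `frames_for` helper are placeholders, not code from the repository.

```python
# Hypothetical mapping from logical states to ASCII animation frames.
# Frame strings are illustrative placeholders, not the project's art assets.
ANIMATIONS = {
    "idle":     ["(-_-)", "(o_o)"],
    "busy":     ["(>_<)", "(>o<)"],
    "approval": ["(?_?)", "(!_!)"],
    "success":  ["\\(^o^)/", "(^_^)"],
}

def frames_for(state):
    # Unknown states fall back to idle so the display never goes blank
    return ANIMATIONS.get(state, ANIMATIONS["idle"])
```

With a table like this, the render loop only needs the current state name to know which frames to cycle; adding a new character or state is a data change, not a code change.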
AI Visual Insight: The image shows the project’s physical terminal form: a compact ESP32-based screen device that presents Claude Code runtime feedback through a pixel-art interface. The UI includes character animations, status regions, and interaction prompts, indicating that this system is not a simple information mirror. Instead, it acts as a low-latency embedded frontend for the AI coding workflow.
Compared with terminal-only prompts, this external visualization device provides continuous feedback. Developers can glance at it and immediately know whether the AI is idle, executing, or waiting for authorization, which reduces cognitive load during long-running tasks.
```python
class BuddyState:
    def __init__(self):
        self.base_state = "idle"   # Persistent state: synced with Claude's main workflow
        self.active_state = None   # Transient state: short-lived animation overlay

    def current(self):
        # If a transient animation exists, display it first
        return self.active_state or self.base_state
```
This code captures the project’s state-modeling philosophy: manage persistent state and instant feedback separately.
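A short usage sketch makes the overlay-and-fallback behavior concrete (the class is repeated here so the snippet is self-contained):

```python
class BuddyState:
    def __init__(self):
        self.base_state = "idle"   # persistent, workflow-synced state
        self.active_state = None   # transient animation overlay

    def current(self):
        return self.active_state or self.base_state

buddy = BuddyState()
buddy.base_state = "busy"         # Claude is mid-task
buddy.active_state = "success"    # overlay: approval just went through
assert buddy.current() == "success"
buddy.active_state = None         # overlay expires
assert buddy.current() == "busy"  # falls back to the base state automatically
```

Because `current()` recomputes the answer on every call, clearing the overlay is the entire "recovery" step; no explicit restore logic is needed.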
The system reduces accidental approval risk through a physical approval mechanism
The most distinctive design choice in the project is moving sensitive operations such as Bash, Write, and Edit out of terminal-based approval popups and into a physical confirmation flow on the device screen. When an approval request arrives, the Buddy shows a high-visibility approval screen with a countdown. Claude Code cannot continue until the user confirms.
This approach solves two common problems. First, traditional approval prompts are easy to miss when terminal logs scroll rapidly. Second, when developers are distracted, they may accidentally approve risky operations through reflexive keyboard input. A physical display turns approval from “press Enter by habit” into a deliberate confirmation step.
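One way such a countdown could be structured is a polling coroutine that denies by default on timeout. This is a minimal sketch under assumptions: the `touch_pressed` callable and the default-deny policy are illustrative, not details confirmed by the repository.

```python
import asyncio

async def wait_for_approval(touch_pressed, timeout_s=10, poll_s=0.1):
    """Poll the touch input until the user confirms or the countdown expires.

    touch_pressed: zero-arg callable returning True when the confirm
    button is tapped (hypothetical input abstraction).
    """
    remaining = timeout_s
    while remaining > 0:
        if touch_pressed():          # deliberate physical confirmation
            return True
        await asyncio.sleep(poll_s)  # yield so rendering keeps animating
        remaining -= poll_s
    return False                     # timeout: treat as a denial, not an approval
```

Defaulting to denial on timeout is the safety-relevant choice here: a missed prompt results in a blocked command rather than a silently executed one.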
AI Visual Insight: This image shows the approval-state UI. The screen uses high-contrast colors to display “APPROVE?” while rendering the pending command, the character expression, and the confirmation button. The key interaction pattern is that it extracts high-risk commands from the text stream and presents them as a focused, foregrounded single-task interface, aligning with common human-factors principles in safety confirmation systems.
The project uses a dual-layer state machine so animations do not interrupt the main workflow
The author designed a two-layer model consisting of base_state and active_state. The former stays synchronized with Claude Code’s actual workflow, while the latter displays short-lived animations such as success, celebration, or reminders. When the transient state ends, the UI automatically falls back to the base state without requiring additional recovery logic.
```python
import asyncio

async def show_approve_success(state):
    state.active_state = "success"  # Switch to a celebration animation after approval succeeds
    await asyncio.sleep(2)          # Keep it for 2 seconds to provide clear feedback
    state.active_state = None       # Automatically fall back to the base state
```
The value of this logic is clear: even while Claude continues running a busy task, the device can first express “approval completed” and then smoothly return to the busy state.
The project uses BLE fragmentation and async concurrency to keep embedded interactions smooth
BLE transport on the ESP32 comes with a typical limitation: each packet has a very small payload, so complex JSON messages can be truncated easily. The project implements transparent fragmentation and reassembly in the driver layer, which means the application layer does not need to care about 20-byte boundaries and can send and receive data as normal strings.
This type of encapsulation is critical in embedded systems. If protocol complexity leaks into the application layer, animation rendering, approval handling, and state synchronization logic quickly become tightly coupled, and long-term maintenance costs rise sharply.
```python
def pack_message(raw: str, chunk_size=20):
    chunks = []
    for i in range(0, len(raw), chunk_size):
        part = raw[i:i + chunk_size]
        chunks.append(part)  # Slice by the BLE payload limit
    return chunks
```
This code demonstrates the core idea behind BLE small-packet fragmentation: solve the limitation in the transport layer instead of pushing the problem into business logic.
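The receiving side needs the inverse operation: accumulate chunks until a full message arrives. The sketch below assumes messages are newline-terminated; the repository's actual framing scheme (delimiter, length prefix, or otherwise) may differ.

```python
class Reassembler:
    """Accumulate BLE chunks and emit complete newline-terminated messages.

    The newline delimiter is an assumption for illustration; any
    unambiguous framing marker works the same way.
    """
    def __init__(self):
        self._buf = ""

    def feed(self, chunk: bytes):
        # Append the new fragment, then split off every complete message;
        # the trailing partial (possibly empty) stays buffered.
        self._buf += chunk.decode("utf-8")
        *complete, self._buf = self._buf.split("\n")
        return complete
```

Pairing `pack_message` with a reassembler like this is what lets the application layer send whole JSON strings and receive whole JSON strings, never seeing the 20-byte boundary.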
A three-task concurrent architecture keeps animation, communication, and touch input from blocking one another
The project uses asyncio to run three tasks at the same time: BLE communication, touch listening, and screen rendering. That means even when Claude Code sends status messages at a high rate, the animation frame rate stays responsive and touch-based approval remains immediate.
From an architectural perspective, this is a lightweight but effective embedded event loop model. It does not rely on a complex framework. Instead, it prioritizes continuity of feedback under constrained resources. For interactive devices, smoothness matters more than feature accumulation.
```python
import asyncio

async def main():
    # Start three task types concurrently: communication, input, and rendering
    await asyncio.gather(
        ble_task(),
        touch_task(),
        render_task(),
    )

asyncio.run(main())
```
This code reflects the runtime skeleton of the project: split device capabilities into independent coroutines and coordinate them through the event loop.
The project improves portability across development boards through configuration abstraction
The author centralizes pin mappings, display parameters, and device settings in config.py. As a result, if you later switch to a different ESP32 board, display module, or input method, you only need to update the hardware mapping file rather than rewrite the application logic.
For open source embedded projects, this kind of abstraction matters a great deal. The real bottleneck to adoption is often not functionality. It is whether other developers can adapt the project to their own boards quickly.
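A plausible shape for such a `config.py` is shown below. Every pin number, dimension, and name here is a placeholder for illustration, not the project's actual values.

```python
# config.py -- central hardware mapping (all values are illustrative placeholders)
DISPLAY = {
    "width": 240,
    "height": 240,
    "spi_sck": 18,   # SPI clock pin
    "spi_mosi": 23,  # SPI data pin
    "dc": 16,        # data/command select pin
    "rst": 17,       # display reset pin
}
TOUCH_PIN = 4        # capacitive touch input pin
BLE_NAME = "claude-buddy"  # advertised device name
```

Application code imports these names instead of hard-coding numbers, so porting to a different board means editing this one file.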
This project is worth watching
It does not try to replace Claude Code. Instead, it adds a trusted interaction layer to the AI coding process. Its real innovation is transforming the AI agent execution path from pure software output into a physical device with status feedback, risk confirmation, and even a sense of emotional companionship.
This design is especially relevant for developers who use AI coding assistants frequently. It also serves as a strong reference implementation for AI peripherals, embedded human-computer interaction, and low-cost edge control terminals.
FAQ
1. What problem does this project solve best?
It is best suited for solving the visibility and controllability problems that appear when Claude Code runs automatically, especially missed approval prompts for high-risk commands and uncertainty around long-running task state.
2. Why use a physical ESP32 device instead of desktop popups?
Desktop popups still depend on the terminal or the operating system notification layer, so they can be obscured or ignored. A physical device provides an independent display and confirmation path, which significantly improves approval visibility and user confidence.
3. What can embedded developers learn from this design?
It demonstrates a complete pipeline: CLI hooks, a PC daemon, BLE communication, MicroPython async scheduling, screen animation, and touch interaction. That makes it highly valuable for developers building AI peripherals, status panels, or edge controllers.
[AI Readability Summary]
This article deconstructs an ESP32 and MicroPython-based Claude Code CLI companion and explains how it uses BLE, async tasks, and a dual-layer state machine to convert black-box approvals and execution state into a visible, controllable, and physically confirmable human-computer interaction experience.