This article uses a calculator example to explain how ontologies help AI evolve from “memorizing button sequences” to “understanding system rules.” The core value lies in making entities, attributes, relationships, priorities, and state memory explicit, so AI can plan, explain, and transfer operations. Keywords: ontology, rule-based reasoning, AI planning.
The technical specification snapshot outlines the problem space
| Parameter | Details |
|---|---|
| Domain | Artificial Intelligence, Knowledge Representation, Task Planning |
| Expression Language | Natural language + rule modeling |
| Interaction Target | Four-function calculator |
| Core Protocol | Input -> state transition -> result feedback |
| Core Dependencies | Ontology modeling, operator precedence, temporary memory, feedback mechanism |
This article shows that AI must understand the system before executing instructions
The original problem is simple: AI understands numbers and arithmetic operators, but it does not know what internal rules a specific calculator follows. It does not know whether operator precedence exists, nor whether pressing the equals key automatically preserves the result for subsequent operations.
This is exactly the real situation many agent systems face. A model may have broad general knowledge, but it lacks structured knowledge of how a specific tool, interface, or device behaves. As a result, it can complete tasks only through trial and error, with limited stability and explainability.
An ontology gives AI a “system map” first
The value of an ontology is not that it directly teaches AI which keys to press. Its value is that it first defines what the system contains and how its components interact. For a calculator, the minimal knowledge framework can be divided into four layers: entities, attributes, actions, and relationships.
- Entities: number keys, operator keys, the equals key, the display
- Attributes: key labels, the display value, operator precedence
- Actions: press a key, update the display, write to temporary memory
- Relationships: key presses trigger state changes, and state changes affect the next computation
This structure defines the minimal cognitive boundary of the calculator, so AI at least knows what exists in the system and what can happen.
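The four layers can be written down as explicit data rather than prose. A minimal sketch in Python (all names here are illustrative, not taken from the original article):

```python
# Hypothetical minimal ontology for the calculator; every name is illustrative.
calculator_ontology = {
    "entities": ["digit_key", "operator_key", "equals_key", "display"],
    "attributes": {
        "digit_key": ["label"],
        "operator_key": ["label", "precedence"],
        "display": ["value"],
    },
    "actions": ["press_key", "update_display", "write_temp_memory"],
    "relationships": [
        ("press_key", "triggers", "state_change"),
        ("state_change", "affects", "next_computation"),
    ],
}
```

Even this small structure already answers the two questions an agent needs before acting: what exists in the system, and what can happen to it.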
AI first finds workable recipes through trial and error
Without explicit rules, AI can only behave like someone using an unfamiliar device for the first time: try several inputs and observe the feedback. The goal is to solve 3 + 2 × 5, whose correct result should be 13.
If it directly enters 3 + 2 × 5 =, some devices that calculate strictly from left to right may return 25. This shows that entering an expression based on mathematical intuition alone does not necessarily fit the current system. The error exposes the core issue: the system rules are unknown.
Trial and error accumulates empirical patterns
```python
# Goal: find a workable operation sequence through trial and error
attempts = [
    ["3", "+", "2", "*", "5", "="],      # Intuitive input
    ["2", "*", "5", "=", "+", "3", "="]  # Compute multiplication first
]
for seq in attempts:
    result = execute(seq)  # Core logic: execute the key sequence and read the result
    if result == 13:       # Core logic: use feedback to determine success
        remember(seq)      # Core logic: record the valid sequence
```
This code shows how AI uses feedback to preserve a successful recipe, but at this stage it still only knows how to use the system. It does not truly understand why the sequence works.
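The `execute` helper is left abstract above. A minimal sketch of what it might look like for a strictly left-to-right calculator (a hypothetical simulator, not the article's code; it assumes non-negative integer keys):

```python
def execute(keys):
    # Hypothetical simulator of a calculator with NO operator precedence:
    # every pending operation is applied as soon as the next operator
    # (or "=") arrives, and "=" keeps the result as temporary memory.
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    acc, pending, buf = 0.0, "+", ""
    for k in keys:
        if k.isdigit():
            buf += k                      # accumulate multi-digit numbers
        else:
            if buf:                       # apply the pending operation
                acc = ops[pending](acc, float(buf))
                buf = ""
            if k != "=":                  # "=" leaves acc in memory
                pending = k
    return acc
```

On this simulator the intuitive sequence `3 + 2 * 5 =` really does return 25, while `2 * 5 = + 3 =` returns 13, reproducing the feedback loop described above.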
Explicit rules upgrade patterns into knowledge that AI can reason over
Once a human adds two rules, the AI’s capability changes qualitatively. The first rule is operator precedence: multiplication and division take precedence over addition and subtraction. The second rule defines the semantics of the equals key: it not only outputs the result, but also writes that result into temporary memory so the next operation can continue from it.
With these two rules, AI no longer depends on memorized sequences. Instead, it can start from the expression, analyze its structure, and then generate an action plan. This is the critical shift from pattern matching to rule-based reasoning.
Explicit rules can be translated directly into planning logic
```python
def plan_expression(a, op1, b, op2, c):
    priority = {"+": 1, "-": 1, "*": 2, "/": 2}  # Define precedence
    # Core logic: if the later operator binds tighter, evaluate the second
    # half first, then continue from temporary memory after "=".
    # Note: the second step computes (b op2 c) op1 a, so the reordering is
    # only valid when op1 is commutative, i.e. "+" or "*".
    if priority[op2] > priority[op1] and op1 in ("+", "*"):
        first = [str(b), op2, str(c), "="]
        second = [op1, str(a), "="]  # Continue from the result kept by "="
        return first + second
    return [str(a), op1, str(b), op2, str(c), "="]
```
This code combines precedence and temporary memory into a single planner, showing how ontology rules can directly drive action generation.
This method gives AI both explainability and transferability
When AI generates a sequence such as 2 × 5 = + 3 =, it is no longer just replaying a historical answer. It can explain the plan: because multiplication has higher precedence, it computes 2 × 5 first; because the equals key preserves the result, the later + 3 = is effectively equivalent to 10 + 3.
This explanatory power matters. It means the knowledge inside the system has shifted from an implicit statistical pattern into structured rules that can be inspected, maintained, and extended. For engineering systems, that is more important than getting a single answer right.
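One way to make that explainability concrete is to have the planner emit its reasoning alongside the key sequence. A hypothetical sketch (`explain_plan` is not from the original article):

```python
def explain_plan(a, op1, b, op2, c):
    # Hypothetical helper: justify the reordering using the two explicit rules.
    priority = {"+": 1, "-": 1, "*": 2, "/": 2}
    if priority[op2] > priority[op1]:
        return [
            f"Rule 1 (precedence): '{op2}' binds tighter than '{op1}', "
            f"so compute {b} {op2} {c} first.",
            f"Rule 2 (temporary memory): '=' keeps the result, "
            f"so '{op1} {a} =' continues from it.",
        ]
    return [f"No reordering needed: '{op1}' binds at least as tightly as '{op2}'."]
```

Because the justification is derived from the same rule table the planner uses, the explanation can never drift out of sync with the behavior, which is exactly the maintainability property the article argues for.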
The same rules transfer to many more expressions
4 + 3 × 2 -> 3 × 2 = + 4 = -> 10
10 + 2 × 3 -> 2 × 3 = + 10 = -> 16
8 ÷ 2 + 3 -> 8 ÷ 2 = + 3 = -> 7
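The transfer claim can be checked mechanically by replaying each planned sequence on a left-to-right simulator. The sketch below is self-contained; both helpers are illustrative restatements under the article's assumptions, not its original code:

```python
def plan(a, op1, b, op2, c):
    # Reorder only when the later operator binds tighter AND op1 is
    # commutative, since the device computes (b op2 c) op1 a.
    priority = {"+": 1, "-": 1, "*": 2, "/": 2}
    if priority[op2] > priority[op1] and op1 in ("+", "*"):
        return [str(b), op2, str(c), "=", op1, str(a), "="]
    return [str(a), op1, str(b), op2, str(c), "="]

def run(keys):
    # Strictly left-to-right calculator; "=" keeps the result in memory.
    ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
           "*": lambda x, y: x * y, "/": lambda x, y: x / y}
    acc, pending, buf = 0.0, "+", ""
    for k in keys:
        if k.isdigit():
            buf += k
        else:
            if buf:
                acc, buf = ops[pending](acc, float(buf)), ""
            if k != "=":
                pending = k
    return acc

print(run(plan(4, "+", 3, "*", 2)))  # reordered: 3 * 2 = + 4 =  -> prints 10.0
print(run(plan(8, "/", 2, "+", 3)))  # direct left-to-right entry -> prints 7.0
```

Each planned sequence lands on the mathematically correct value even though the device itself has no notion of precedence, which is the generalization the article is pointing at.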
This shows that AI has not learned one problem. It has learned a mechanism for solving a class of tasks, which gives it the ability to generalize to new scenarios.
The ontology approach fundamentally compresses the cognitive cost of complex systems
Traditional automation usually requires developers to handwrite the steps for every process. As soon as device behavior changes, the script must be rewritten. The ontology approach works differently: it first defines objects, states, relationships, and rules, and then lets AI generate the steps autonomously under those constraints.
This approach is especially useful for tool calling, GUI automation, agent orchestration, and business process reasoning. Real-world systems change often, but the underlying world model is usually more stable. Once you update the ontology description, the planner can adapt with it.
This case reveals a general principle for AI agent design
Truly scalable intelligence does not come from memorizing countless operational answers. It comes from understanding the relationships among objects, constraints, states, and rules within a system. Although the calculator example is simple, it accurately maps the core path of tool learning, environment modeling, and task planning.
Once AI has an ontology-level description, it can evolve from “seeing a button and trying it” to “understanding object behavior and then choosing the optimal steps.” That is the foundation for moving modern agents from merely usable to genuinely reliable.
FAQ
Q: What is the difference between an ontology and a regular rule table?
A: A rule table only describes “if this happens, do that.” An ontology first describes which entities, attributes, and relationships exist in the system, and then layers rules on top of that structure. That makes it better suited for reasoning, explanation, and transfer.
Q: Why is the equals key’s “temporary memory” so important?
A: Because it defines how state persists across steps. Without that state semantics, AI cannot understand why entering 2×5= followed directly by +3= still produces the correct result.
Q: Can this method be used in real AI agents?
A: Yes. Tool calling, process automation, medical decision support, and in-vehicle rule systems can all benefit from ontology modeling, because it structures environmental knowledge and turns it into executable planning logic.
Core Summary: Using the example of “AI learning to use a calculator,” this article breaks down how ontology modeling transforms fragmented operational experience into structured knowledge that is explainable and transferable. It shows why AI needs entities, attributes, relationships, and rule modeling to move from memorizing recipes to autonomous planning based on precedence.