robotframework-aitester is an AI testing library that connects large language models to Robot Framework across Web, API, and Mobile scenarios. It addresses the heavy scripting burden of traditional automation, the maintenance cost of locators, and the high effort required for cross-flow exploratory testing. Keywords: Robot Framework, AI test automation, LLM agents.
The technical specification snapshot is straightforward
| Parameter | Description |
|---|---|
| Primary language | Python 3.10+ |
| Test framework | Robot Framework 6.0+ |
| Supported protocols/interfaces | Web UI, REST API, Appium Mobile, OpenAI-compatible interfaces |
| Compatible libraries | SeleniumLibrary, RequestsLibrary, AppiumLibrary |
| AI providers | OpenAI, Gemini, Ollama, Anthropic, Bedrock, Docker Model, Manual |
| Core dependencies | strands-agents, Robot Framework native library ecosystem |
This library does not replace Robot Framework; it adds autonomous decision-making
The core value of robotframework-aitester is not to reinvent the automation framework, but to reuse your existing test assets. You still work with SeleniumLibrary, RequestsLibrary, and AppiumLibrary, but introduce an AI agent at critical flows so your tests can move from step-by-step hardcoding to goal-driven execution.
This design addresses two long-standing pain points. First, complex business-flow scripts are expensive to maintain. Second, unstable pages, dynamic elements, and exploratory validation are difficult to handle with fixed locators alone. AITester uses natural-language objectives, context, and step intent to drive execution, which reduces script brittleness.
Installation should be planned together with the test mode
The base installation already includes support for OpenAI, Gemini, and Ollama-compatible providers. If you only need a single mode, install the relevant extras on demand to avoid unnecessary dependencies.
```shell
# Base installation: includes support for common LLM providers
pip install robotframework-aitester

# Install extras by test mode (quoted so shells like zsh don't expand the brackets)
pip install "robotframework-aitester[web]"     # Web UI testing
pip install "robotframework-aitester[api]"     # API testing
pip install "robotframework-aitester[mobile]"  # Mobile testing
pip install "robotframework-aitester[all]"     # Full capabilities
```
These commands let you choose dependencies by test scenario and reduce redundant packages in production environments.
Runtime prerequisites determine whether AI can attach to an existing session
In Web scenarios, SeleniumLibrary must open the browser first. In Mobile scenarios, you must start an Appium session first. In API scenarios, RequestsLibrary must already have a configured base_url or session context. AITester does not create the underlying driver directly; it attaches to these existing sessions and executes on top of them.
If you imported SeleniumLibrary, RequestsLibrary, or AppiumLibrary with an alias, you must explicitly pass that alias to AITester through the selenium_library, requests_library, or appium_library parameter. Otherwise, AITester cannot bind to the current session.
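As a sketch of the alias binding described above (the alias name `Browser` is a placeholder; the `selenium_library` parameter name follows this section, so verify it against the library's own documentation):

```robotframework
*** Settings ***
# Import SeleniumLibrary under an alias, then tell AITester which
# import to bind to via the selenium_library parameter.
Library    SeleniumLibrary    AS    Browser
Library    AITester    platform=OpenAI    api_key=%{OPENAI_API_KEY}    selenium_library=Browser
```

Without the explicit `selenium_library=Browser` argument, AITester would look for a library named `SeleniumLibrary` and fail to find the aliased session.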
Web mode is better suited to intent-driven business-flow validation
When test_steps are provided as numbered steps, AITester treats them as checkpoints in the main flow rather than pixel-level scripts. It can automatically add supporting actions such as dismissing cookie banners, waiting for the page to stabilize, opening menus, and retrying actions that are temporarily blocked.
```robotframework
*** Settings ***
Library    SeleniumLibrary
Library    AITester    platform=OpenAI    api_key=%{OPENAI_API_KEY}    model=gpt-4o

*** Test Cases ***
AI Login Flow Test
    Open Browser    https://myapp.example.com    chrome
    ${status}=    Run AI Test
    ...    test_objective=Validate login, invalid credentials, empty fields, and the forgot password flow
    ...    app_context=E-commerce website using email/password sign-in
    ...    test_steps=1. Open the login page 2. Sign in with valid credentials 3. Verify the error with invalid credentials
    ...    max_iterations=50
    Log    ${status}
```
This example shows how to hand the test objective to AI instead of manually expanding every click and interaction detail.
API mode combines OpenAPI with natural-language steps
For API testing, AITester can combine base_url, session information, and api_spec_url to perform autonomous REST validation. Compared with writing assertions one by one in plain RequestsLibrary, this approach is better suited to CRUD workflows, authentication failures, and boundary-value exploration.
```robotframework
*** Settings ***
Library    RequestsLibrary
Library    AITester    platform=Ollama    model=llama3.3

*** Test Cases ***
AI REST API Test
    Create Session    api    https://api.example.com
    ${status}=    Run AI API Test
    ...    test_objective=Validate CRUD operations, authentication, and error handling for the user management API
    ...    base_url=https://api.example.com
    ...    api_spec_url=https://api.example.com/openapi.json
    ...    test_steps=1. POST to create a user 2. GET the user 3. PUT to update the user 4. DELETE the user
    ...    max_iterations=30
    Log    ${status}
```
This code shows that AITester can merge the API specification and the test objective into a single autonomous testing workflow.
Mobile mode depends on clear context and is especially effective for navigation and state-flow validation
Mobile testing requires an active AppiumLibrary session first. If app_context and test_steps clearly describe the target screen, account state, and expected path, AITester can more reliably handle loading indicators, common selectors, soft keyboard dismissal, Hybrid context switching, and back navigation.
```robotframework
*** Settings ***
Library    AppiumLibrary
Library    AITester    platform=Gemini    api_key=%{GEMINI_API_KEY}

*** Test Cases ***
AI Mobile App Test
    Open Application    http://localhost:4723/wd/hub    platformName=Android    app=com.example.app
    ${status}=    Run AI Mobile Test
    ...    test_objective=Validate the onboarding flow, main navigation, and key settings-page functionality
    ...    app_context=Android banking application
    ...    test_steps=1. Complete the onboarding flow 2. Enter the main dashboard 3. Open the settings page and verify key options
    ...    max_iterations=40
    Log    ${status}
```
This example highlights that mobile AI testing depends more on explicit business intent than on precise locators.
The supported platform design covers cloud models, local models, and compatible endpoints
The project supports OpenAI, Gemini, and Ollama by default, and extends to Anthropic, AWS Bedrock, Docker Model, and Manual mode. This matters because teams can switch between cloud APIs, local Ollama deployments, and OpenAI-compatible gateways based on cost, compliance, and inference latency.
With platform=DockerModel, the library automatically uses api_key=dummy, so no additional credential is required. By contrast, platform=Manual is better suited to enterprise private gateways or self-hosted compatible endpoints, where you typically need to provide model and base_url explicitly.
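A minimal sketch of the Manual-mode import described above, assuming a hypothetical internal gateway URL, model name, and environment variable (all three are placeholders, not values from the project):

```robotframework
*** Settings ***
# Manual mode: point AITester at a self-hosted, OpenAI-compatible gateway.
# base_url, model, and INTERNAL_LLM_KEY below are illustrative placeholders.
Library    AITester
...    platform=Manual
...    model=internal-llm
...    base_url=https://llm-gateway.internal.example/v1
...    api_key=%{INTERNAL_LLM_KEY}
```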
Common configuration parameters directly affect agent controllability
Key parameters include platform, model, max_iterations, test_mode, verbose, timeout_seconds, and max_cost_usd. Together, these control the model source, the number of reasoning iterations, the default scenario, logging granularity, and safety boundaries.
```python
config = {
    "platform": "OpenAI",    # Specify the AI platform
    "model": "gpt-4o",       # Specify the model ID
    "test_mode": "web",      # Set the default test mode
    "max_iterations": 50,    # Limit the maximum number of agent iterations
    "timeout_seconds": 600,  # Set timeout protection
    "verbose": False,        # Control whether detailed logs are emitted
}
```
This configuration illustrates the main control surface that AITester provides for stability, cost, and observability.
The keyword system shows that this is an enhancement layer for Robot Framework
The project provides core keywords such as Run AI Test, Run AI Exploration, Run AI API Test, and Run AI Mobile Test, which map to goal-driven testing and exploratory testing. It also includes Get AI Platform Info, AI Step, and AI High Level Step for platform introspection and log grouping.
This means AITester does not require you to abandon your existing keyword system. Instead, it adds an intelligent execution layer on top of Robot Framework that can plan autonomously while reusing existing sessions. For elements with stable locators, you can continue to use traditional automation. For flows that are difficult to model reliably, AI can fill the gap.
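The hybrid pattern can be sketched as follows; the URL, locator, and objective text are illustrative placeholders, and only the keywords named above are assumed to exist:

```robotframework
*** Test Cases ***
Hybrid Checkout Test
    # Deterministic setup with a stable locator stays traditional
    Open Browser    https://myapp.example.com    chrome
    Click Element    id=start-checkout
    # Hand the fragile, multi-page part of the flow to the AI layer
    ${status}=    Run AI Test
    ...    test_objective=Complete checkout with a saved address and verify the order confirmation
    ...    max_iterations=30
    Log    ${status}
```

The design choice here is that cheap, stable steps keep their exact-match locators, while only the portion of the flow that changes often is delegated to goal-driven execution.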
This project is best introduced incrementally rather than through a full rewrite
From an engineering perspective, the most valuable adoption pattern for AITester is partial enhancement. Keep your existing deterministic scripts, and delegate fragile, high-change, and exploratory parts to AI execution. This lets you preserve the repeatability of traditional automation while gaining the benefits of large language models in cross-page understanding and dynamic decision-making.
If your team has already built substantial test assets on Robot Framework, the value of robotframework-aitester is not replacement. It is an upgrade path that turns those assets into an AI testing system that is intent-driven, multimodal-ready, and able to switch across multiple model providers.
FAQ: The three questions developers care about most
1. Can AITester directly take over a browser I opened manually or a session created by another tool?
No. It can only drive sessions created by SeleniumLibrary or AppiumLibrary, and on the API side it must also attach to the RequestsLibrary context.
2. Will it completely replace traditional locators and assertions?
No. The best practice is to keep stable and deterministic scripts in the traditional style, and let AI handle complex interactions, dynamic paths, and exploratory validation.
3. Which model platform should I prioritize in production?
If you want the fastest path to adoption, prioritize OpenAI or Gemini. If you care more about cost control and local deployment, choose Ollama or Docker Model. If your organization already has a compatible gateway, Manual mode is the most flexible option.
The core summary defines the project’s value clearly
This article systematically reconstructs the capability boundaries, installation model, runtime prerequisites, and keyword design of robotframework-aitester, showing how it adds autonomous AI testing to Robot Framework while continuing to reuse SeleniumLibrary, RequestsLibrary, and AppiumLibrary.