robot_testersai
The easiest way to AI-upgrade your existing Robot Framework suite. You already have the tests — this adds AI Checks at the moments that matter, without replacing your framework, runner, or CI pipeline.
Load robot_testersai as a Library; it exposes keywords such as Analyze Screenshot, each backed by an analyze_* call, or define your own.
# Download the bundle from the Downloads page, then:
pip install ./testersai-python-robot-0.1.0.tar.gz
A complete, runnable Python + Robot Framework example.
*** Settings ***
Library    SeleniumLibrary
Library    robot_testersai

*** Test Cases ***
Home Page Looks Right
    Open Browser    https://example.com    chrome
    ${png}=    Capture Page Screenshot
    Analyze Screenshot    ${png}
    Analyze Console    @{EMPTY}
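Where "define your own" is mentioned above, note that any plain Python class can serve as a Robot Framework library. A minimal, hypothetical sketch — the class name, keyword, and return shape below are illustrative, not part of robot_testersai's API:

```python
# Hypothetical user-defined Robot Framework library.
# Robot maps the method name analyze_screenshot to the
# keyword "Analyze Screenshot" automatically.
class MyAIChecks:
    ROBOT_LIBRARY_SCOPE = "SUITE"  # one instance per test suite

    def analyze_screenshot(self, path, question="Does this page look right?"):
        # A real keyword would send the image to a model here;
        # this stub just echoes its inputs.
        return {"file": path, "question": question, "verdict": "skipped"}
```

Put the file on PYTHONPATH and load it with `Library    MyAIChecks`, just like the built-in keywords above.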
Keywords log via robot.api.logger, so their output appears in the standard Robot HTML log.
If the AI call fails — rate limit, hang, firewall, no network — the SDK gives up
fast and returns a skipped result. Your Robot Framework test is never blocked.
Rule of thumb: wherever a human reviewer would pause to look during a manual run. These recommendations are tuned for Robot Framework — pick the ones that fit your suite.
- The page just re-rendered. Ask "does this look right?" before any interaction.
- Login, add-to-cart, toggle, submit. The UI just reflected a new state — where regressions hide.
- You were about to check one thing. Ask the AI about everything else for free.
- API returned, spinner gone, toast shown. Catch broken empty states and stale data.
- A single console + network check at the end of every test catches issues your assertions ignored.
- Mobile vs. desktop, light vs. dark, locale change. One call per viewport.
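The end-of-test sweep above can be wired once for the whole suite with a standard Robot Framework Test Teardown; a sketch, where the keyword name Final AI Sweep is your own, not something robot_testersai ships:

```robotframework
*** Settings ***
Library           robot_testersai
Test Teardown     Final AI Sweep

*** Keywords ***
Final AI Sweep
    Analyze Console    @{EMPTY}
```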
Or grab the monolith ZIP (all languages, all adapters).