Three lines in your existing Playwright / Cypress / Selenium / pytest / JUnit tests let an AI ask "does anything look broken here?" at the points you choose. Findings land in your test report, with optional auto-logging to Jira, Xray, TestRail, or Cypress Cloud. No rewrites. No new runner.
Use a built-in analyze_* call, or define your own. The delta is three lines. Everything else is your existing test.
def test_checkout(page):
    page.goto("/cart")
    page.click("#checkout")
    page.fill("#card", "4242...")
    page.click("#pay")
    expect(page).to_have_url("/success")
def test_checkout(page, testersai):
    page.goto("/cart")
    testersai.screenshot()   # check cart UI
    page.click("#checkout")
    page.fill("#card", "4242...")
    page.click("#pay")
    testersai.screenshot()   # check success page
    testersai.console()      # check for JS errors
    expect(page).to_have_url("/success")
If the AI call fails, hangs, is rate-limited, or hits a firewall, the SDK gives up in under a second and your test continues. AI checks never block your real tests.
Rule of thumb: wherever a human reviewer would pause and look during a manual run is where to drop a check. Interesting states deserve eyes.
- The page just re-rendered from scratch. A perfect moment to ask "does this page look right?" before any interaction.
- Login, add-to-cart, toggle filter, submit form: the UI just reflected a new state, and that's exactly where regressions hide.
- You were about to check one thing. Ask the AI to check everything else on the page at the same time, for free.
- API just returned, spinner just vanished, toast just appeared. Catch broken empty states, half-rendered lists, stale data.
- One console + network analysis at the end of every test catches errors your assertions never looked for: CSP violations, 4xx background calls, unhandled rejections.
- Mobile vs. desktop, light vs. dark, locale change. One call per viewport or theme catches layout breaks cheaply.
- You triggered a 400 on purpose. Does the error UI look correct? Is the message readable? Is the user trapped?
- Every existing page.screenshot() call is a free upgrade. The screenshot already captures an interesting state, so hand the bytes to the SDK too.
- Checking every row of a 500-row table is noise. Pick representative states: first, a middle case, edges. Quality over quantity.
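The "representative states" idea is easy to mechanise. A minimal sketch in plain Python; the helper name below is illustrative, not part of the SDK:

```python
def representative_rows(n_rows):
    """Pick a small, representative sample of row indices from a large
    table: the first row, a middle case, and the last edge.
    Duplicates collapse automatically for tiny tables."""
    if n_rows == 0:
        return []
    picks = {0, n_rows // 2, n_rows - 1}
    return sorted(picks)

# Check three rows of a 500-row table instead of all 500:
rows_to_check = representative_rows(500)
```

Feed only those states to your screenshot or content checks instead of looping over every row.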
Principles the SDK is built on: encoded in tests, not slogans.
Short, capped retries with a hard deadline. If the API is slow or down, the SDK gives up fast and returns a skipped result. Your test continues.
Connection refused, DNS failure, hang, reset, firewall 403, slow read: all return a skipped result, never an exception. Verified against a mock server for every failure mode.
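The fail-fast contract can be sketched as a retry loop under a hard overall deadline. This is a stand-alone illustrative model of the behaviour described above, not the SDK's actual source; all names are hypothetical:

```python
import time

def with_deadline(call, deadline_s=1.0, max_retries=2, per_try_timeout_s=0.4):
    """Run `call` with capped retries under a hard deadline.

    Returns ("ok", result) on success, or ("skipped", reason) if the
    deadline expires or every retry fails. Never raises, so the
    surrounding test keeps running.
    """
    start = time.monotonic()
    last_error = None
    for _ in range(max_retries + 1):
        if time.monotonic() - start > deadline_s:
            break  # hard deadline: stop retrying, report skipped
        try:
            return ("ok", call(timeout=per_try_timeout_s))
        except Exception as e:  # connection refused, DNS, reset, 403...
            last_error = e
    return ("skipped", repr(last_error))

# An endpoint that is down produces a skipped result, not an exception:
def always_down(timeout):
    raise ConnectionError("connection refused")

status, detail = with_deadline(always_down)
```

The key design choice is that every failure path converges on the same `("skipped", reason)` value, so callers never need a try/except around an AI check.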
Findings appear where your team already looks: pytest report sections, JUnit XML, NUnit TestContext, RSpec metadata, Playwright annotations, Cypress command log, WebdriverIO service.
Optional: auto-create Jira issues, attach results to Xray executions, post to TestRail runs, or forward findings to Cypress Cloud. Fully opt-in via env vars.
Results go to the framework reporter, a disk folder (JSON / JUnit / TAP / text),
the return value, or any combination. Swap via TESTERSAI_SINK.
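Sink selection can be pictured as a small dispatcher keyed off that env var. The sink names below follow the destinations listed above, but the dispatch logic itself is an illustrative sketch, not the SDK's implementation:

```python
import json
import os

def route_result(result, sink_spec=None):
    """Dispatch one analysis result to the configured sinks.

    `sink_spec` mirrors a comma-separated env var such as
    TESTERSAI_SINK=reporter,disk. Returns the sinks that
    received the result.
    """
    spec = sink_spec or os.environ.get("TESTERSAI_SINK", "reporter")
    delivered = []
    for sink in (s.strip() for s in spec.split(",")):
        if sink == "reporter":
            delivered.append("reporter")  # e.g. a pytest report section
        elif sink == "disk":
            json.dumps(result)  # serialised form that would land in the artifacts folder
            delivered.append("disk")
        elif sink == "return":
            delivered.append("return")  # caller inspects the return value
    return delivered

sinks = route_result({"issues": []}, "reporter,disk")
```

Because the spec is comma-separated, "any combination" falls out of the same loop with no extra configuration surface.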
Pure client SDK. You bring the test. No long-lived daemon, no sidecar, no hidden network calls. Set one env var or pass a config object.
Five endpoints, consistent across every language.
PNG bytes or a path. The AI flags layout breaks, overlapping elements, missing alt text, contrast failures, broken images, visual regressions.
Array of {level, text}. The AI clusters errors, surfaces real bugs hiding among deprecation noise, flags unhandled rejections.
HAR or a list of {url, status, method}. The AI finds slow endpoints, 4xx/5xx clusters, third-party failures, CORS issues.
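To make the console payload concrete: a minimal sketch of the kind of clustering described, using the documented {level, text} shape. The normalisation rule (strip digits so repeats of one error collapse together) is an assumption for illustration:

```python
import re
from collections import Counter

def cluster_console(messages):
    """Group console messages so real errors stand out from noise.

    Deprecation warnings are counted separately; remaining errors and
    warnings are clustered by a digit-normalised message key, so the
    same error at different line numbers forms one cluster.
    """
    clusters = Counter()
    deprecations = 0
    for m in messages:
        if "deprecat" in m["text"].lower():
            deprecations += 1  # noise bucket
            continue
        if m["level"] in ("error", "warning"):
            key = (m["level"], re.sub(r"\d+", "N", m["text"]))
            clusters[key] += 1
    return clusters, deprecations

logs = [
    {"level": "warning", "text": "API X is deprecated"},
    {"level": "error", "text": "Uncaught TypeError at line 10"},
    {"level": "error", "text": "Uncaught TypeError at line 42"},
]
clusters, noise = cluster_console(logs)
```

Two occurrences of the same TypeError collapse into one cluster of size two, while the deprecation warning stays out of the way.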
Raw text or HTML. The AI checks copy quality, spelling, broken placeholders, missing translations, accessibility-labelling gaps.
Generates N diverse user personas (e.g. Maya, 28, mobile-first) who rate the page on 9 dimensions (visual, design, usability, content, features, competitive, emotional, accessibility, NPS) with star scores plus written comments. More →
One call generates diverse end-user personas, each rating the page on nine dimensions, from usability and accessibility to visual design and NPS (1-10 each), plus written comments. Catches the UX issues that don't show up in functional tests.
"Hero CTA is clear but mobile layout cuts off the right column. I wasn't sure what 'Vibe Testing' meant from the headline alone."
"Code samples look good and the install section is concise. Minor: the curl example would benefit from a copy button."
"Some images are missing alt text and the testers row scrolls horizontally without a visible scroll affordance. The hero gradient is hard to read."
# Python: same shape in JS / Java / C# / Ruby
from testersai import TestersAI

ta = TestersAI()
r = ta.analyze_personas(
    screenshot=png,
    page_text=html,
    personas=3,
    persona_traits=["mobile-first", "first-time-user"],
    fail_below=5,  # auto-promote ratings < 5 to issues
)
for p in r.raw["personas"]:
    print(p["name"], p["ratings"])
if r.failed:  # True if any rating < 5
    raise AssertionError(r.issues[0].message)
{
  "personas": [
    {
      "name": "Aisha Khan", "age": 28,
      "image": "https://testers.ai/img/profiles/aisha.jpg",
      "background": "Marketing manager, mobile-first user",
      "ratings": {
        "visual": 8, "design": 6,
        "usability": 7, "content": 7,
        "features": 6, "competitive": 7,
        "emotional": 6, "accessibility": 5,
        "nps": 7
      },
      "comments": "Hero CTA is clear but mobile layout..."
    },
    ...
  ]
}
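Given a response shaped like the sample above, the fail_below promotion can be modelled in a few lines. The helper name and issue-string format here are illustrative, not the SDK's actual internals:

```python
def promote_low_ratings(response, fail_below=5):
    """Turn any persona rating strictly below `fail_below` into an issue.

    `response` mirrors the documented personas JSON shape.
    Returns a list of human-readable issue strings.
    """
    issues = []
    for p in response["personas"]:
        for dimension, score in p["ratings"].items():
            if score < fail_below:
                issues.append(
                    f'{p["name"]} rated {dimension} {score}/10 '
                    f'(threshold {fail_below})'
                )
    return issues

# One persona with a failing accessibility score:
sample = {"personas": [{"name": "Aisha Khan",
                        "ratings": {"visual": 8, "accessibility": 4}}]}
issues = promote_low_ratings(sample)
```

A score exactly at the threshold does not fail, which matches the "True if any rating < 5" comment in the snippet above.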
Click a cell to jump to a self-contained page: install, snippet, recommendations. Shareable URL.
| | Python | JavaScript | Java | C# / .NET | Ruby |
|---|---|---|---|---|---|
| Playwright | playwright_testersai → | @testersai/playwright → | testersai-playwright → | TestersAI.Playwright → | – |
| Selenium / WebdriverIO | selenium_testersai → | @testersai/webdriverio → | testersai-selenium → | TestersAI.Selenium → | selenium_testersai → |
| Cypress | – | @testersai/cypress → | – | – | – |
| Puppeteer | – | @testersai/puppeteer → | – | – | – |
| Vibium · BiDi | vibium_testersai → | @testersai/vibium → | testersai-vibium → | – | – |
| pytest / unittest / Robot | pytest_testersai → unittest_testersai → robot_testersai → | – | – | – | – |
| Jest / Mocha | – | @testersai/jest → @testersai/mocha → | – | – | – |
| JUnit 5 / TestNG | – | – | junit5_testersai → testng_testersai → | – | – |
| NUnit / xUnit / MSTest | – | – | – | NUnit_TestersAI → xUnit_TestersAI → MSTest_TestersAI → | – |
| RSpec / Minitest | – | – | – | – | rspec_testersai → minitest_testersai → |
Python + pytest + Playwright. Same shape in every language.
# 1. install: download the bundle from the Downloads page, then:
pip install ./testersai-python-pytest-0.1.0.tar.gz
# (works identically on macOS, Linux, and Windows; Windows 10 1803+ has tar built in)

# 2. configure
# Get a key at https://testers.ai/sdk or click "Get API key" above.
export TESTERSAI_API_KEY=sk_...
# Windows PowerShell: $env:TESTERSAI_API_KEY = "sk_..."

# 3. use in your test
import pytest
from playwright_testersai import TestersAIPage

def test_home_page(page):
    ta = TestersAIPage(page)
    ta.capture_console()  # start buffering console messages
    page.goto("https://example.com")
    shot = ta.analyze_screenshot()
    console = ta.analyze_console()
    # Findings show up in the pytest report automatically.
    # Fail the test only where you want to:
    if shot.failed:
        pytest.fail(shot.issues[0].message)
Full SDK source, examples, and pre-packaged tarballs for every language. Drop it on any webserver and it works: zero build, zero runtime.