Testers.AI SDK (beta), powered by testers.ai
⚠️
Public beta. APIs may shift before 1.0. Pin the version you test with; the behaviour documented here is stable within 0.1.x. What beta means →

Add these AI testers to your existing tests

21 named specialists: pick by first name in any analyze_* call, or define your own.
See all + custom testers →

Upgrade your automation, don't rewrite it

The delta is three lines. Everything else is your existing test.

Your test today
def test_checkout(page):
    page.goto("/cart")
    page.click("#checkout")
    page.fill("#card", "4242...")
    page.click("#pay")
    expect(page).to_have_url("/success")
Your test, AI-upgraded
def test_checkout(page, testersai):
    page.goto("/cart")
    testersai.screenshot()         # check cart UI
    page.click("#checkout")
    page.fill("#card", "4242...")
    page.click("#pay")
    testersai.screenshot()         # check success page
    testersai.console()            # check for JS errors
    expect(page).to_have_url("/success")
Leverages what you already built: existing selectors, fixtures, page objects, helpers, and CI pipeline all stay untouched. The SDK just observes what your test already does, at the moments you mark as interesting.

How it works

Five stages, run in order on every call.

1
🧪
Your test runs
Playwright, Cypress, Selenium, pytest: whatever you use today.
2
📸
SDK captures
Screenshot, console, network, or page text, at a point you choose.
3
🤖
Testers.AI analyses
Returns issues with severity, category, location, evidence.
4
📝
Findings → test report
Native pytest / JUnit / NUnit / RSpec reporter. No new dashboard.
5
🔗
Optional: auto-log
Jira · Xray · TestRail · Cypress Cloud. Opt in by env var.

If the AI call fails, hangs, is rate-limited, or hits a firewall, the SDK gives up in under a second and your test continues. AI checks never block your real tests.
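This fail-open contract can be pictured in a few lines of plain Python. The sketch below is illustrative only, not the SDK's actual code; `SkippedResult` and `call_with_deadline` are hypothetical names:

```python
# Illustrative sketch of fail-open AI calls: any error or timeout yields a
# "skipped" result instead of an exception, so the test keeps running.
import concurrent.futures

class SkippedResult:
    failed = False
    skipped = True
    issues = ()

_pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def call_with_deadline(fn, deadline_s=1.0):
    """Run an AI call, but never block the test past the deadline."""
    future = _pool.submit(fn)
    try:
        return future.result(timeout=deadline_s)
    except Exception:  # timeout, connection refused, DNS failure, ...
        return SkippedResult()

def broken_api_call():
    raise ConnectionRefusedError("API unreachable")

result = call_with_deadline(broken_api_call)
print(result.skipped)  # True: the test keeps going
```

The real SDK does this internally; you never need to wrap its calls yourself.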

Where & when to put AI checks

Rule of thumb: wherever a human reviewer would pause and look during a manual run is where to drop a check. Interesting states deserve eyes.

1

After every meaningful navigation

The page just re-rendered from scratch. Perfect moment to ask "does this page look right?" before any interaction.

testersai.screenshot()
2

After a state-changing action

Login, add-to-cart, toggle filter, submit form. The UI just reflected a new state, and that's exactly where regressions hide.

click → testersai.screenshot()
3

Right before your main assertion

You were about to check one thing. Ask the AI to check everything else on the page at the same time, for free.

testersai.screenshot() → expect(...)
4

After async operations settle

API just returned, spinner just vanished, toast just appeared. Catch broken empty-states, half-rendered lists, stale data.

wait_for_selector → screenshot()
5

In the teardown / afterEach

One console + network analysis at the end of every test catches errors your assertions never looked for: CSP violations, 4xx background calls, unhandled rejections.

afterEach: testersai.console()
6

On responsive / theme switch

Mobile vs. desktop, light vs. dark, locale change. One call per viewport or theme catches layout breaks cheaply.

setViewport → screenshot()
7

After errors you expected

You triggered a 400 on purpose. Does the error UI look correct? Is the message readable? Is the user trapped?

submit_invalid → screenshot()
8

Wherever you already screenshot

Every existing page.screenshot() call is a free upgrade. The screenshot already captures an interesting state, so hand the bytes to the SDK too.

screenshot → also send to SDK
9

Don't bother inside tight loops

Checking every row of a 500-row table is noise. Pick representative states: the first row, a middle row, the edge cases. Quality over quantity.

anti-pattern ✗
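Placement 5 (teardown) can be wired up once instead of per test. A minimal sketch, using a hypothetical `FakeClient` in place of the real client; only the `console()` method name mirrors the examples above:

```python
# Sketch: guarantee one console analysis after every test body, even if
# the body raises. FakeClient stands in for the real testersai client.
import contextlib

class FakeClient:
    def __init__(self):
        self.calls = []
    def console(self):
        self.calls.append("console")

@contextlib.contextmanager
def ai_teardown_check(client):
    try:
        yield client
    finally:
        client.console()  # runs on success and on failure

client = FakeClient()
with ai_teardown_check(client):
    pass  # your test steps here
print(client.calls)  # ['console']
```

In pytest the same shape fits naturally into a yield fixture; in JS frameworks, into afterEach.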

Why this SDK

Principles the SDK is built on, encoded in tests, not slogans.

⚡

Never blocks your tests

Short, capped retries with a hard deadline. If the API is slow or down, the SDK gives up fast and returns a skipped result. Your test continues.

🛡

Fault-tolerant by design

Connection refused, DNS failure, hang, reset, firewall 403, slow read: all return a skipped result, never an exception. Verified against a mock server for every failure mode.

🧩

Framework-native reporting

Findings appear where your team already looks: pytest report sections, JUnit XML, NUnit TestContext, RSpec metadata, Playwright annotations, Cypress command log, WebdriverIO service.

🔗

Test-management integrations

Optional: auto-create Jira issues, attach results to Xray executions, post to TestRail runs, or forward findings to Cypress Cloud. Fully opt-in via env vars.

🎛

Pluggable sinks

Results go to the framework reporter, a disk folder (JSON / JUnit / TAP / text), the return value, or any combination. Swap via TESTERSAI_SINK.

🔒

No magic, no services

Pure client SDK. You bring the test. No long-lived daemon, no sidecar, no hidden network calls. Set one env var or pass a config object.

What you can send for analysis

Five endpoints, consistent across every language.

🖼

Screenshots

PNG bytes or a path. The AI flags layout breaks, overlapping elements, missing alt text, contrast failures, broken images, visual regressions.

📟

Console logs

Array of {level, text}. The AI clusters errors, surfaces real bugs hiding among deprecation noise, flags unhandled rejections.

🌐

Network activity

HAR or a list of {url, status, method}. The AI finds slow endpoints, 4xx/5xx clusters, third-party failures, CORS issues.

📄

Page text / HTML

Raw text or HTML. The AI checks copy quality, spelling, broken placeholders, missing translations, accessibility-labelling gaps.

👥

Persona feedback

Generates N diverse user personas (e.g. Maya, 28, mobile-first) who rate the page on 9 dimensions (visual, design, usability, content, features, competitive, emotional, accessibility, NPS) with star scores plus written comments. More →
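For the console and network endpoints, the payloads are plain lists of dicts in the shapes named above. Everything beyond the documented fields here is illustrative sample data:

```python
# Example payloads: console entries are {level, text},
# network entries are {url, status, method}.
console_logs = [
    {"level": "error", "text": "Uncaught TypeError: cart is undefined"},
    {"level": "warning", "text": "Deprecated API: webkitStorageInfo"},
]
network = [
    {"url": "https://shop.example/api/cart", "status": 500, "method": "POST"},
    {"url": "https://cdn.example/app.js", "status": 200, "method": "GET"},
]

# You can pre-filter locally before sending, e.g. keep only server errors:
server_errors = [r for r in network if r["status"] >= 500]
print(len(server_errors))  # 1
```

Both lists are what your driver already gives you: Playwright's console events and request/response hooks map onto these shapes directly.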

Get user feedback โ€” without recruiting

One call generates diverse end-user personas, each rating the page on usability, accessibility, design, content, and visual (1-10 each) plus written comments. Catches the UX issues that don't show up in functional tests.

Aisha Khan, 28 · Marketing manager · mobile-first
Visual 8 · Design 6 · Usability 7 · Content 7 · Features 6 · Compet. 7 · Emotion 6 · A11y 5 · NPS 7

"Hero CTA is clear but mobile layout cuts off the right column. I wasn't sure what 'Vibe Testing' meant from the headline alone."

Chen, 41 · Senior backend engineer · code-first
Visual 8 · Design 8 · Usability 9 · Content 8 · Features 9 · Compet. 8 · Emotion 7 · A11y 6 · NPS 9

"Code samples look good and the install section is concise. Minor: the curl example would benefit from a copy button."

Akira, 19 · CS student · screen-reader user
Visual 4 · Design 7 · Usability 7 · Content 6 · Features 7 · Compet. 6 · Emotion 5 · A11y 3 · NPS 6

"Some images are missing alt text and the testers row scrolls horizontally without a visible scroll affordance. The hero gradient is hard to read."

Three lines · auto-fail on bad UX scores
# Python; same shape in JS / Java / C# / Ruby
from testersai import TestersAI

ta = TestersAI()
r = ta.analyze_personas(
    screenshot=png,
    page_text=html,
    personas=3,
    persona_traits=["mobile-first", "first-time-user"],
    fail_below=5,           # auto-promote < 5 ratings to issues
)

for p in r.raw["personas"]:
    print(p["name"], p["ratings"])

if r.failed:                       # True if any rating < 5
    raise AssertionError(r.issues[0].message)
What you get back
{
  "personas": [
    {
      "name": "Aisha Khan", "age": 28,
      "image": "https://testers.ai/img/profiles/aisha.jpg",
      "background": "Marketing manager, mobile-first user",
      "ratings": {
        "visual": 8, "design": 6,
        "usability": 7, "content": 7,
        "features": 6, "competitive": 7,
        "emotional": 6, "accessibility": 5,
        "nps": 7
      },
      "comments": "Hero CTA is clear but mobile layout..."
    },
    ...
  ]
}
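Because the response is plain JSON, aggregating across personas is ordinary post-processing. `mean_ratings` below is a hypothetical helper, not part of the SDK:

```python
# Average each rating dimension across personas (pure post-processing
# of the JSON shape shown above; no SDK call involved).
def mean_ratings(personas):
    dims = personas[0]["ratings"]
    return {d: sum(p["ratings"][d] for p in personas) / len(personas)
            for d in dims}

personas = [
    {"name": "A", "ratings": {"usability": 7, "accessibility": 5}},
    {"name": "B", "ratings": {"usability": 9, "accessibility": 3}},
]
print(mean_ratings(personas))  # {'usability': 8.0, 'accessibility': 4.0}
```

A per-dimension mean is often a steadier gate than `fail_below` on any single persona's score.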

Every language × framework

Click a cell to jump to a self-contained page: install, snippet, recommendations. Shareable URL.

Playwright: playwright_testersai (Python) · @testersai/playwright (JavaScript) · testersai-playwright (Java) · TestersAI.Playwright (C#)
Selenium / WebdriverIO: selenium_testersai (Python) · @testersai/webdriverio (JavaScript) · testersai-selenium (Java) · TestersAI.Selenium (C#) · selenium_testersai (Ruby)
Cypress: @testersai/cypress (JavaScript)
Puppeteer: @testersai/puppeteer (JavaScript)
Vibium · BiDi: vibium_testersai (Python) · @testersai/vibium (JavaScript) · testersai-vibium (Java)
pytest / unittest / Robot: pytest_testersai · unittest_testersai · robot_testersai (Python)
Jest / Mocha: @testersai/jest · @testersai/mocha (JavaScript)
JUnit 5 / TestNG: junit5_testersai · testng_testersai (Java)
NUnit / xUnit / MSTest: NUnit_TestersAI · xUnit_TestersAI · MSTest_TestersAI (C#)
RSpec / Minitest: rspec_testersai · minitest_testersai (Ruby)

30-second quick-start

Python + pytest + Playwright. Same shape in every language.

# 1. install: download the bundle from the Downloads page, then:
pip install ./testersai-python-pytest-0.1.0.tar.gz
# (works on macOS, Linux, and Windows; Windows 10 1803+ has tar built in)

# 2. configure
# Get a key at https://testers.ai/sdk or click "Get API key" above.
export TESTERSAI_API_KEY=sk_...
# Windows PowerShell:  $env:TESTERSAI_API_KEY = "sk_..."

# 3. use in your test
import pytest
from playwright_testersai import TestersAIPage

def test_home_page(page):
    ta = TestersAIPage(page)
    ta.capture_console()        # start collecting console messages
    page.goto("https://example.com")

    shot = ta.analyze_screenshot()
    console = ta.analyze_console()

    # Findings show up in the pytest report automatically.
    # Fail the test only where you want to:
    if shot.failed:
        pytest.fail(shot.issues[0].message)

Ready to add AI checks to your suite?

Full SDK source, examples, and pre-packaged tarballs for every language. Drop it on any webserver and it works: zero build, zero runtime.