Testers.AI SDK (beta)
⚠️ Public beta. APIs may shift before 1.0. Pin the version you test with; the behaviour documented here is stable within 0.1.x.

Add these AI testers to your existing tests

21 named specialists — pick by first name in any analyze_* call, or define your own.
See the docs for the full roster and for defining custom testers.

About the beta

What does "beta" mean for this SDK?

Version 0.1.x is a public beta. The Python / JavaScript adapters are the most exercised (their SDKs + framework integrations are continuously tested against a mock API). Java / C# / Ruby adapters ship the same code shape but are flagged beta until more users run them in anger.

What this means for you:

  • Behaviour in this doc is stable within 0.1.x. Pin the exact version you test against.
  • Method names may be renamed before 1.0. We'll keep a 0.x-compatible shim for at least one minor version when that happens.
  • The API contract (JSON shape) is stable now. The SDK layer is where any churn will happen.
  • Your findings, reports, and CI integrations won't break on upgrade. Sink formats (JSON / JUnit / TAP / text) are frozen.

Found a rough edge? Include your language + framework when you reach out — we read everything.

Getting started

What does this SDK actually do?

It sends four kinds of evidence from a running test — screenshot, console logs, network traffic, or page text — to the Testers.AI API, gets back an AI-identified list of issues, and logs them through your test framework's native reporter.

You decide whether those issues should fail the test (assertClean-style helpers exist for every adapter) or whether they're purely informational.
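
A minimal Python sketch, assuming a core client with analyze_* methods as named above (the testersai import path, the tester parameter, and the .issues field are illustrative, not the published API):

  # Minimal sketch: send captured console logs and print what comes back.
  from testersai import TestersAI

  client = TestersAI()  # reads TESTERSAI_API_KEY from the environment

  captured_logs = ["TypeError: cart is undefined at checkout.js:88"]
  result = client.analyze_console(captured_logs, tester="sharon")

  if not result.skipped:
      for issue in result.issues:  # assumed shape; severities per the API
          print(issue)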

Do I need a Testers.AI account?

Yes — set TESTERSAI_API_KEY to a real key for production calls. During development you can point TESTERSAI_BASE_URL at a local mock server (example in sdk_test/mock_server.py) and exercise the whole flow offline.
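
For example, a Python sketch with placeholder values (the env var names are the real ones; the port and key are stand-ins):

  # Run the whole flow offline against a local mock.
  import os

  os.environ["TESTERSAI_BASE_URL"] = "http://127.0.0.1:8045"  # wherever your mock listens
  os.environ["TESTERSAI_API_KEY"] = "sk_dev_placeholder"      # stand-in key for local runs

  # ...now run your tests as usual; no traffic leaves the machine.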

How long does an analysis take?

Typically sub-second for console/network/text, a couple of seconds for screenshots. The SDK caps the whole call at TESTERSAI_TIMEOUT_MS (default 15s) and retries only within TESTERSAI_MAX_RETRY_WAIT_MS (default 5s). If the backend is overloaded, your test continues — we skip and keep going.

Reliability & failure modes

Will a slow or down Testers.AI service break my test suite?

No. That's the central design rule. The SDK returns an AnalysisResult(skipped=True, reason=...) and your test continues. This is enforced by sdk_test/resilience/test_failure_modes.*, which runs against a mock server in hang, reset, 403, slow, and rate-limit modes.
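
In practice you can branch on the result without any try/except. A Python sketch, with an assumed import path and a stand-in for captured traffic:

  # A degraded service yields a skipped result, never an exception.
  from testersai import TestersAI  # import path assumed

  client = TestersAI()
  result = client.analyze_network([])  # stand-in for your captured traffic

  if result.skipped:
      print(f"skipped: {result.reason}")  # e.g. "rate_limited" or "disabled"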

What happens behind a corporate firewall?

If the firewall blocks the request, you'll see one of three outcomes: (a) connection refused → skipped=True, error="network:URLError", (b) a 403 from a WAF → skipped=True, error="http_403" (not retried, so no budget burn), or (c) a hang — TCP connects but no response ever arrives, and the client-side timeout fires.

All three are tested, all three return within max_retry_wait_ms, and none of them crash your test.

Does the SDK retry forever if rate-limited?

No. It honours Retry-After up to TESTERSAI_MAX_RETRY_WAIT_MS (default 5s), then gives up and returns skipped=True, reason="rate_limited". We prefer losing one AI check to holding up a whole CI job.

What if my network has no internet at all?

DNS lookup fails → caught → returned as a skipped result. Verified in the failure-mode suite. If you want the run to fail loudly in that scenario, set TESTERSAI_STRICT=true and the SDK will raise TestersAIError instead.
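
A Python sketch of strict mode (the TestersAIError import path is an assumption):

  # With TESTERSAI_STRICT=true, transport failures raise instead of skipping.
  import os
  os.environ["TESTERSAI_STRICT"] = "true"

  from testersai import TestersAI, TestersAIError  # import path assumed

  try:
      TestersAI().analyze_text("any page text")
  except TestersAIError as err:
      print(f"hard failure in strict mode: {err}")
      raise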

Can I disable the SDK entirely without removing calls?

Yes. Set TESTERSAI_ENABLED=false. Every call returns skipped=True, reason="disabled" without any network I/O. Useful for local dev or running a quick smoke suite.

What happens if I forget to set the API key?

Every SDK writes a one-line warning to stderr the first time a call runs without a key: [testersai] TESTERSAI_API_KEY is not set. Get one at https://testers.ai/sdk ... — then returns a skipped result. The host test still passes.

If the key is present but malformed (doesn't start with sk_, or is too short), you get a different warning that it looks invalid — but the call still goes through, since the server is the source of truth. If the server rejects it with a 401 or 403, the SDK logs a third, distinct message pointing at the Get API key page.

How do I silence all SDK log output?

Set TESTERSAI_QUIET=true (or pass quiet: true in the config object). All SDK-level stderr messages — missing-key warning, malformed-key warning, 401/403 rejection warning — are suppressed. Findings still go to your configured sinks (framework / disk / return) as before; only the SDK's own chatter is silenced.

Implemented identically in all five language SDKs (Python, JS, Java, C#, Ruby).

Integrations

If Jira / Xray / TestRail / Cypress is down, does my test fail?

No. Every integration runs inside a try/except at the dispatch site. A broken TMS logs a warning and is skipped for that call. This is tested with a mock TMS pointing at a dead port — the SDK call still succeeds.

Does it create a Jira ticket for every little finding?

No — only high and critical severities by default. Override with TESTERSAI_JIRA_SEVERITIES=high,critical,medium (or a subset). Per-integration filter: TESTERSAI_<TMS>_SEVERITIES.

How does the SDK know which TestRail case or Xray test to attach to?

Pass it via the context map on the analyze call: context={"case_id": 501} for TestRail, context={"test_key": "QA-101"} for Xray, context={"cypress_run_id": "run-555"} for Cypress Cloud. Without those fields the integration is a silent no-op — it won't post to a random record.
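
Putting the documented context keys together in one Python sketch (the analyze_screenshot signature is assumed):

  # Route findings to specific TMS records; omit a key and that TMS is a no-op.
  from testersai import TestersAI  # import path assumed

  png_bytes = b""  # stand-in for the screenshot your framework captured
  TestersAI().analyze_screenshot(
      png_bytes,
      context={
          "case_id": 501,               # TestRail case
          "test_key": "QA-101",         # Xray test
          "cypress_run_id": "run-555",  # Cypress Cloud run
      },
  )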

Can I enable multiple integrations at once?

Yes. Set env vars for all four and each configured integration fires on every analysis. They're independent: one failing doesn't affect the others.

Usage

Do AI findings fail my test by default?

No. By default, findings are logged through the framework's reporter (pytest report section, JUnit XML, etc.) but don't fail the test. Use assert_clean(r) / toBeTestersAIClean / AssertClean where you explicitly want high/critical findings to fail the test.
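
A pytest-flavoured Python sketch (the assert_clean import path is an assumption):

  # Findings are logged either way; assert_clean opts this one test into
  # failing when any high/critical issue comes back.
  from testersai import TestersAI, assert_clean  # import paths assumed

  def test_pricing_copy():
      r = TestersAI().analyze_text("Contact sales at sales@example.com")
      assert_clean(r)  # raises when r.failed, i.e. a high/critical finding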

What's the difference between named testers and persona feedback?

Named testers (Sharon, Alejandro, Tariq, …) are experts in a domain — Sharon evaluates security, Alejandro accessibility. Their findings are technical, prescriptive, and graded by severity (high / critical). Use them for objective checks you'd put in CI.

Persona feedback generates diverse end users — e.g. Maya, 28, marketing manager, mobile-first — who rate the page on 5 quality attributes (usability, accessibility, design, content, visual) with star scores plus written subjective comments. Use it for the kinds of UX issues that don't show up as technical bugs. See docs → Persona feedback for code samples.

Can I use this without any test framework?

Yes — use the core client (TestersAI in every language) and set TESTERSAI_SINK=return or disk. You'll get the AnalysisResult back directly, or as files in TESTERSAI_SINK_DIR.
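
For instance, a frameworkless Python sketch:

  # Frameworkless use: sink "return" hands you the result directly.
  import os
  os.environ["TESTERSAI_SINK"] = "return"

  from testersai import TestersAI

  r = TestersAI().analyze_text("Sign up now and recieve 20% off")
  print("failed:", r.failed)  # True when any issue is high or critical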

Which severities does the API return?

info, low, medium, high, critical. The SDK surfaces them verbatim. AnalysisResult.failed returns true if any issue is high or critical.

Can I configure by code instead of env vars?

Yes — pass a Config object (or equivalent) directly to the client. Env vars are just a convenient default, not the only path. See the config section of the docs.
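
A Python sketch, assuming the Config field names mirror the env vars (field names and all values are illustrative):

  # Code-first configuration; env vars become optional.
  from testersai import TestersAI, Config  # Config field names assumed

  client = TestersAI(Config(
      api_key="sk_live_placeholder",
      base_url="https://api.example.test",  # placeholder endpoint
      timeout_ms=15_000,
      quiet=True,  # same effect as TESTERSAI_QUIET=true
  ))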

Do the adapters require a real browser?

The browser-based adapters (Playwright, Selenium, Cypress, Puppeteer, WebdriverIO) need a running browser to capture screenshots/console/network — that's not the SDK's job; they just forward what the browser already produced. The framework-only adapters (pytest, unittest, Jest, Mocha, JUnit, NUnit, RSpec, etc.) need no browser — they work with any evidence you hand them.
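
A Playwright-flavoured Python sketch (the Playwright calls are real; the adapter-side analyze_* wiring is assumed):

  # The browser produces the evidence; the adapter just forwards it.
  from playwright.sync_api import sync_playwright
  from testersai import TestersAI  # import path assumed

  client = TestersAI()
  with sync_playwright() as p:
      browser = p.chromium.launch()
      page = browser.new_page()
      page.goto("https://example.com")
      client.analyze_screenshot(page.screenshot())  # PNG bytes from Playwright
      client.analyze_text(page.content())           # the page's HTML
      browser.close()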

Platform & compatibility

What language versions are supported?

Python 3.9+, Node 18+, Java 17+, .NET 8, Ruby 3.0+. Older versions may work but aren't routinely tested.

Is this open source? Can I fork it?

Yes — MIT license. Full source is included in every download tarball and on the downloads page. The directory layout is the same as the public repo.

Does this phone home or add analytics?

No. The only network calls are the ones you explicitly make (the API endpoint and any TMS integrations you configure). Zero analytics, zero telemetry, no sidecar daemon.

How do I run this behind a proxy?

The HTTP client in every language respects standard proxy env vars (HTTP_PROXY, HTTPS_PROXY, NO_PROXY). Set them the way you'd set them for curl and it just works.

Can I self-host the Testers.AI API?

Yes — set TESTERSAI_BASE_URL to your endpoint. The SDK makes no assumption about the domain. Same env var works for a local mock during development.