The shortest honest answers.
Version 0.1.x is a public beta. The Python / JavaScript adapters
are the most exercised (their SDKs + framework integrations are continuously
tested against a mock API). Java / C# / Ruby adapters ship the same code shape
but are flagged beta until more users run them in anger.
What this means for you:
- The API may still change within 0.1.x, so pin the exact version you test against.
- It will stabilise at 1.0. We'll keep a 0.x-compatible shim for at least one minor version when that happens.
- Found a rough edge? Include your language + framework when you reach out — we read everything.
The SDK sends one of four kinds of evidence from a running test — a screenshot, console logs, network traffic, or page text — to the Testers.AI API, gets back an AI-identified list of issues, and logs them through your test framework's native reporter.
You decide whether those issues should fail the test (assertClean-style helpers
exist for every adapter) or whether they're purely informational.
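A minimal sketch of that flow in Python with pytest + Playwright (the import path and the exact analyze_* method name are assumptions; the analyze_* shape and the assert_clean helper are documented):

```python
from testersai import TestersAI, assert_clean  # import path assumed

client = TestersAI()  # picks up TESTERSAI_* env vars by default

def test_homepage(page):  # `page` is a pytest-playwright fixture
    page.goto("https://example.com")
    # One of the four evidence kinds: screenshot, console, network, page text.
    result = client.analyze_screenshot(page.screenshot())  # method name assumed
    # Findings go to the framework reporter either way; assert_clean
    # additionally fails the test on high/critical findings.
    assert_clean(result)
```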
Yes — set TESTERSAI_API_KEY to a real key for production calls. During development
you can point TESTERSAI_BASE_URL at a local mock server (example in
sdk_test/mock_server.py) and exercise the whole flow offline.
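For example, in your test bootstrap (env var names are from the docs; the port and key value are illustrative):

```python
import os

# Point the SDK at a local mock instead of the real API (offline dev).
os.environ["TESTERSAI_BASE_URL"] = "http://localhost:8080"  # port is illustrative
os.environ["TESTERSAI_API_KEY"] = "sk_dev_placeholder"  # a local mock typically won't validate it
```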
Typically sub-second for console/network/text, a couple of seconds for screenshots.
The SDK caps each call at TESTERSAI_TIMEOUT_MS (default 15s)
and retries only within the TESTERSAI_MAX_RETRY_WAIT_MS budget (default 5s).
If the backend is overloaded, your test continues — we skip and keep going.
No. That's the central design rule. The SDK returns an
AnalysisResult(skipped=True, reason=...) and your test continues.
This is enforced by sdk_test/resilience/test_failure_modes.*, which
runs against a mock server in hang, reset, 403,
slow, and rate-limit modes.
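At the call site the contract looks like this (skipped and reason are documented; the other names are assumptions):

```python
from testersai import TestersAI  # import path assumed

result = TestersAI().analyze_page_text("…page text…")  # method name assumed

if result.skipped:
    # Backend unreachable, slow, or rate-limited: log it and move on.
    print(f"[testersai] analysis skipped: {result.reason}")
else:
    for issue in result.issues:               # `issues` attribute assumed
        print(issue.severity, issue.summary)  # field names assumed
```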
If the firewall blocks the request, you'll see one of three outcomes: (a) connection
refused → skipped=True, error="network:URLError", (b) 403 from a WAF →
skipped=True, error="http_403" (not retried, so no budget burn), or
(c) the hang mode — TCP connects but no response arrives, and the client-side
timeout fires.
All three are tested, all three return within max_retry_wait_ms, and
none of them crash your test.
No. It honours Retry-After up to TESTERSAI_MAX_RETRY_WAIT_MS
(default 5s), then gives up and returns skipped=True, reason="rate_limited".
We prefer losing one AI check to holding up a whole CI job.
DNS lookup fails → caught → returned as a skipped result. Verified in the failure-mode
suite. If you want the run to fail loudly in that scenario, set
TESTERSAI_STRICT=true and the SDK will raise TestersAIError
instead.
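A sketch of what strict mode looks like at the call site (import path and the analyze method name are assumptions; TestersAIError is from the docs):

```python
import os
from testersai import TestersAI, TestersAIError  # import path assumed

os.environ["TESTERSAI_STRICT"] = "true"

try:
    TestersAI().analyze_console(["console.error: boom"])  # method name assumed
except TestersAIError as exc:
    # In strict mode, DNS/network failures raise instead of returning skipped=True.
    raise SystemExit(f"Testers.AI unreachable: {exc}")
```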
Yes. Set TESTERSAI_ENABLED=false. Every call returns
skipped=True, reason="disabled" without any network I/O.
Useful for local dev or running a quick smoke suite.
Every SDK writes a one-line warning to stderr the first time a call runs
without a key: [testersai] TESTERSAI_API_KEY is not set. Get one at
https://testers.ai/sdk ... — then returns a
skipped result. The host test still passes.
If the key is present but malformed (doesn't start with sk_ or is too short)
you get a different warning that it looks invalid — but the call still goes
through, since the server is the source of truth. If the server rejects it with a
401 or 403, the SDK logs a third, distinct message pointing at the Get API key page.
Set TESTERSAI_QUIET=true (or pass quiet: true in the
config object). All SDK-level stderr messages — missing-key warning, malformed-key
warning, 401/403 rejection warning — are suppressed. Findings still go to your
configured sinks (framework / disk / return) as before; only the SDK's own
chatter is silenced.
Implemented identically in all five language SDKs (Python, JS, Java, C#, Ruby).
No. Every integration runs inside a try/except at the dispatch site. A broken TMS logs a warning and is skipped for that call. This is tested with a mock TMS pointing at a dead port — the SDK call still succeeds.
No — only high and critical severities by default. Override
with TESTERSAI_JIRA_SEVERITIES=high,critical,medium (or a subset).
Per-integration filter: TESTERSAI_<TMS>_SEVERITIES.
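For instance (the TESTRAIL token just follows the documented TESTERSAI_<TMS>_SEVERITIES pattern, so treat it as an assumption):

```python
import os

# Jira keeps the high/critical default; TestRail also receives medium findings.
os.environ["TESTERSAI_JIRA_SEVERITIES"] = "high,critical"
os.environ["TESTERSAI_TESTRAIL_SEVERITIES"] = "medium,high,critical"  # <TMS> token assumed
```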
Pass it via the context map on the analyze call:
context={"case_id": 501} for TestRail,
context={"test_key": "QA-101"} for Xray,
context={"cypress_run_id": "run-555"} for Cypress Cloud.
Without those fields the integration is a silent no-op — it won't post to a random record.
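In Python that looks roughly like this (the IDs are the illustrative ones above; the method name and import path are assumptions):

```python
from testersai import TestersAI  # import path assumed

client = TestersAI()
evidence = ["GET /api/cart -> 500"]  # whatever network evidence you captured

# Route the findings to a specific TestRail case:
client.analyze_network(evidence, context={"case_id": 501})  # method name assumed

# Same call shape for Xray and Cypress Cloud:
client.analyze_network(evidence, context={"test_key": "QA-101"})
client.analyze_network(evidence, context={"cypress_run_id": "run-555"})
```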
Yes. Set env vars for all four and each configured integration fires on every analysis. They're independent: one failing doesn't affect the others.
No. By default, findings are logged through the framework's reporter (pytest report
section, JUnit XML, etc.) but don't fail the test. Use assert_clean(r) /
toBeTestersAIClean / AssertClean where you explicitly want
high/critical findings to fail the test.
Named testers (Sharon, Alejandro, Tariq, …) are experts in a domain — Sharon evaluates security, Alejandro accessibility. Their findings are technical, prescriptive, and graded by severity (high / critical). Use them for objective checks you'd put in CI.
Persona feedback generates diverse end users — e.g.
Maya, 28, marketing manager, mobile-first — who rate the page on 5
quality attributes (usability, accessibility,
design, content, visual) with star scores
plus written subjective comments. Use it for the kinds of UX issues that don't
show up as technical bugs.
See docs → Persona feedback for code samples.
Yes — use the core client (TestersAI in every language) and set
TESTERSAI_SINK=return or disk. You'll get the
AnalysisResult back directly, or as files in
TESTERSAI_SINK_DIR.
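A sketch under those settings (import path and method name are assumptions; the sink values are from the docs):

```python
import os
from testersai import TestersAI  # import path assumed

os.environ["TESTERSAI_SINK"] = "return"  # or "disk" plus TESTERSAI_SINK_DIR

with open("shot.png", "rb") as f:  # evidence from any source; no framework needed
    result = TestersAI().analyze_screenshot(f.read())  # method name assumed

print(result.skipped, result.failed)
```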
info, low, medium, high, critical.
The SDK surfaces them verbatim. AnalysisResult.failed returns true if any issue
is high or critical.
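A hypothetical helper that mirrors that rule, assuming each issue exposes a severity field:

```python
def is_failed(issues):
    # Mirrors AnalysisResult.failed: true iff any issue is high or critical.
    return any(i.severity in ("high", "critical") for i in issues)
```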
Yes — pass a Config object (or equivalent) directly to the client. Env vars
are just a convenient default, not the only path. See the
config section of the docs.
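A sketch in Python; Config and the quiet flag appear in the docs, while the other field names simply mirror the env vars and are assumptions:

```python
from testersai import TestersAI, Config  # import path assumed

config = Config(
    api_key="sk_live_...",               # placeholder key
    base_url="https://testers.example",  # hypothetical self-hosted endpoint
    timeout_ms=15_000,                   # mirrors TESTERSAI_TIMEOUT_MS
    quiet=True,                          # the documented quiet switch
)
client = TestersAI(config)
```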
The browser-based adapters (Playwright, Selenium, Cypress, Puppeteer, WebdriverIO) need a running browser to capture screenshots/console/network; capturing that evidence isn't the SDK's job, and the adapters just forward what the browser already produced. The framework-only adapters (pytest, unittest, Jest, Mocha, JUnit, NUnit, RSpec, etc.) need no browser — they work with any evidence you hand them.
Python 3.9+, Node 18+, Java 17+, .NET 8, Ruby 3.0+. Older versions may work but aren't routinely tested.
Yes — MIT license. Full source is included in every download tarball and on the downloads page. The directory layout is the same as the public repo.
No. The only network calls are the ones you explicitly make (the API endpoint and any TMS integrations you configure). Zero analytics, zero telemetry, no sidecar daemon.
The HTTP client in every language respects standard proxy env vars
(HTTP_PROXY, HTTPS_PROXY, NO_PROXY). Set them
the way you'd set them for curl and it just works.
Yes — set TESTERSAI_BASE_URL to your endpoint. The SDK makes no assumption
about the domain. Same env var works for a local mock during development.