Install
tar is built into cmd and PowerShell on modern Windows; on older Windows, extract
the archives with 7-Zip.
pip — from downloaded archive
# 1. Download from /downloads.html, then:
pip install ./testersai-python-pytest-0.1.0.tar.gz
# Swap the filename for your framework:
# testersai-python-{pytest|unittest|playwright|
# selenium|robot}-0.1.0.tar.gz
npm — from downloaded archive
# 1. Download testersai-javascript-playwright-0.1.0.tar.gz
# (or cypress / puppeteer / jest / mocha / webdriverio)
tar -xzf testersai-javascript-playwright-0.1.0.tar.gz
cd testersai-javascript-playwright-0.1.0
npm install ./testersai-sdk-0.1.0.tgz ./testersai-playwright-0.1.0.tgz
Maven — extract + install locally
# 1. Download testersai-java-junit5-0.1.0.tar.gz
# (or testng / playwright / selenium)
tar -xzf testersai-java-junit5-0.1.0.tar.gz
cd testersai-java-junit5-0.1.0
mvn -f core/pom.xml install
# Then add the junit5_testersai module to your project's build.
dotnet — extract + reference
# 1. Download testersai-csharp-nunit-0.1.0.tar.gz
# (or xunit / mstest / playwright / selenium)
tar -xzf testersai-csharp-nunit-0.1.0.tar.gz
cd testersai-csharp-nunit-0.1.0
dotnet add <your.csproj> reference ./Core/TestersAI.Core.csproj
dotnet add <your.csproj> reference ./NUnit_TestersAI/
gem — build + install from source
# 1. Download testersai-ruby-rspec-0.1.0.tar.gz
# (or minitest / selenium)
tar -xzf testersai-ruby-rspec-0.1.0.tar.gz
cd testersai-ruby-rspec-0.1.0
(cd testersai && gem build *.gemspec && gem install *.gem)
# The adapter is lib-only — add to $LOAD_PATH in your spec_helper.
Tarball / zip — all archives
Every language × framework combination has its own downloadable archive on the Downloads page. Each contains the Testers.AI core client plus just the adapter you asked for — no unused dependencies. The whole site (archives + source + docs) is a plain static folder you can push to any webserver as-is.
Configuration
Configure via environment variables (recommended) or a Config object passed to the client.
| Env var | Default | What it does |
|---|---|---|
TESTERSAI_API_KEY | — | Required. Bearer key for the Testers.AI API. |
TESTERSAI_BASE_URL | https://api.testers.ai | Override for self-hosted or local mocks. |
TESTERSAI_ENABLED | true | Global kill switch. When false, every call returns a skipped result. |
TESTERSAI_TIMEOUT_MS | 15000 | Per-request timeout (socket + read). |
TESTERSAI_MAX_RETRIES | 3 | Max retries on 429 / 5xx / network error. |
TESTERSAI_MAX_RETRY_WAIT_MS | 5000 | Hard deadline across all retries. Whichever hits first wins. |
TESTERSAI_STRICT | false | If true, failures raise TestersAIError instead of returning a skipped result. |
TESTERSAI_SINK | framework | Comma-separated: framework, disk, return. |
TESTERSAI_SINK_DIR | ./testersai-results | Where the disk sink writes. |
TESTERSAI_LOG_FORMAT | json | json, junit, tap, or text. |
TESTERSAI_QUIET | false | When true, suppress all SDK-level log output to stderr. The SDK still writes findings to your sinks — this only silences the SDK's own informational / warning messages (missing key, malformed key, API rejected key, etc.). |
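The same settings can be set programmatically via the Config object. A minimal Python sketch; the field names here simply mirror the env vars above and are assumptions, so verify them against your installed SDK:
from testersai import TestersAI, Config  # Config per "Configuration" above; field names assumed

client = TestersAI(Config(
    api_key="sk_live_...",              # TESTERSAI_API_KEY
    base_url="https://api.testers.ai",  # TESTERSAI_BASE_URL
    timeout_ms=15000,
    max_retries=3,
    strict=False,
    sink="framework,disk",
))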
API surface
Four endpoints, same shape in every language.
analyze_screenshot(path_or_bytes, *, context=None, checks=None) -> AnalysisResult
analyze_console(entries_or_string, *, context=None) -> AnalysisResult
analyze_network(har_or_entries, *, context=None) -> AnalysisResult
analyze_page_text(text_or_html, *, context=None) -> AnalysisResult
JS / Java / .NET use camelCase (analyzeScreenshot). Ruby uses snake_case. Behaviour is identical.
The context map is echoed back to the API for grouping and also used by
TMS integrations — e.g. test_key for Xray,
case_id for TestRail, cypress_run_id for Cypress Cloud.
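For example, in Python (the context keys come from the list above; the ids are placeholders):
# context is free-form; test_key / case_id are picked up by the TMS integrations
r = client.analyze_screenshot(
    "checkout.png",
    context={"url": "/checkout", "test_key": "QA-123", "case_id": "C456"},
)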
AnalysisResult
class AnalysisResult:
    kind: str            # "screenshot" | "console" | "network" | "page_text"
    ok: bool             # True unless the call or response body was malformed
    skipped: bool        # True if disabled, rate-limited, or errored
    reason: str | None   # "rate_limited" | "server_error" | "error" | "disabled"
    error: str | None    # diagnostic string when skipped
    issues: list[Issue]  # AI-identified issues
    raw: dict | None     # full API response for power users
    duration_ms: int
    failed: bool         # True if any issue is "high" or "critical"
r.ok only means the SDK / server contract succeeded.
To decide whether your feature is broken, check r.failed (any high/critical)
or filter r.issues yourself.
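In practice (Python; the Issue fields used here match the response shape documented below):
r = client.analyze_page_text(html)
assert r.ok                      # transport / contract succeeded, nothing more
if r.failed:                     # any high or critical issue
    raise AssertionError(r.issues[0].message)
# Or apply your own severity policy:
mediums = [i for i in r.issues if i.severity == "medium"]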
Sinks
Every result gets dispatched to zero or more sinks. Composable via env var.
framework
Logs through the test framework's native reporter: pytest report sections,
JUnit XML, NUnit TestContext, RSpec metadata, Playwright
annotations, Cypress command log, WebdriverIO service logger.
disk
Writes one file per analysis into TESTERSAI_SINK_DIR. Format controlled by
TESTERSAI_LOG_FORMAT — json, junit (XML),
tap, or plain text. Great for CI artifacts.
return
No-op sink — the caller works with the returned AnalysisResult directly.
Useful when you're deciding what to do in your test code.
Combine: TESTERSAI_SINK=framework,disk,return
Resilience model
AI analysis is optional extra testing. The SDK never blocks your real test, never crashes it, and never burns test-suite time waiting on a slow backend.
- ✓ Capped retries + hard deadline. Retries 429 / 5xx / network errors up to max_retries, within a max_retry_wait_ms budget (default 5000 ms). Whichever hits first wins.
- ✓ Honours Retry-After — up to the deadline. If a rate-limited response asks us to wait 30 s but our budget is 5 s, we skip quickly. Tests continue.
- ✓ Firewall / auth blocks (403) are not retried. 403 is treated as terminal — retrying a WAF block is pointless and wastes the budget.
- ✓ Hangs, resets, DNS failures all surface as skipped. No exceptions are raised by default. Turn on strict=true if you want the raise.
- ✓ Sink failures are swallowed. A broken disk path or a Jira outage never propagates into your test run.
This behaviour is exercised by sdk_test/resilience/test_failure_modes.{py,js}, which runs
against a mock server in hang / reset / 403 / slow / refused modes on every
SDK change.
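If you do want the exception rather than a skipped result, strict mode flips the default. A Python sketch (the strict=True keyword is an assumption; the documented switch is TESTERSAI_STRICT=true):
from testersai import TestersAI, TestersAIError  # TestersAIError per the config table

strict_client = TestersAI(strict=True)  # kwarg name assumed; env var is authoritative
try:
    strict_client.analyze_console(logs)
except TestersAIError:
    ...  # raised only in strict mode; the default client returns a skipped result instead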
Python adapters
| Package | For | How it hooks in |
|---|---|---|
pytest_testersai | pytest | Auto-loaded plugin. Exposes a testersai fixture. |
unittest_testersai | unittest | TestersAIMixin + assertTestersAIClean(). |
playwright_testersai | Playwright | TestersAIPage(page) wraps a Playwright page, auto-captures console / network. |
selenium_testersai | Selenium | TestersAIDriver(driver) wraps any WebDriver. |
robot_testersai | Robot Framework | Load as a Library — keywords like Analyze Screenshot. |
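A sketch of the pytest plugin in use, assuming the auto-loaded testersai fixture exposes the same analyze_* methods as the core client:
# test_home.py: pytest_testersai is auto-loaded once installed
def test_home_has_no_critical_issues(testersai):
    r = testersai.analyze_screenshot("home.png", context={"url": "/home"})
    assert not r.failed, [i.message for i in r.issues]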
JavaScript / TypeScript adapters
| Package | For | How it hooks in |
|---|---|---|
@testersai/sdk | Core | new TestersAI(). Used by all adapters. |
@testersai/playwright | @playwright/test | TestersAIPage or a test fixture. |
@testersai/cypress | Cypress | registerTasks(on, config) + cy.testersaiScreenshot(). |
@testersai/puppeteer | Puppeteer | TestersAIPage wrapper. |
@testersai/jest | Jest | Adds expect(r).toBeTestersAIClean(). |
@testersai/mocha | Mocha | createHelper() + assertClean(). |
@testersai/webdriverio | WebdriverIO | Service that adds browser.testersaiScreenshot() etc. |
Java adapters
| Artifact | For |
|---|---|
ai.testers:testersai-core | Core client + sinks. |
ai.testers:testersai-junit5 | @ExtendWith(TestersAIExtension.class) injects a TestersAIFacade. |
ai.testers:testersai-testng | TestersAIListener.of() facade usable from any TestNG @Test. |
ai.testers:testersai-playwright | Playwright-for-Java page wrapper. |
ai.testers:testersai-selenium | Selenium WebDriver wrapper. |
.NET adapters
| Package | For |
|---|---|
TestersAI.Core | Core client (net8.0 +). |
TestersAI.NUnit | TestersAIFixture + TestersAIAssert.Clean. |
TestersAI.xUnit | Uses ITestOutputHelper for native reporting. |
TestersAI.MSTest | Uses TestContext.WriteLine. |
TestersAI.Playwright | Microsoft.Playwright integration. |
TestersAI.Selenium | Selenium.WebDriver integration. |
Ruby adapters
| Gem | For |
|---|---|
testersai | Core client + sinks. |
rspec_testersai | include Testersai::RSpec + be_testersai_clean matcher. |
minitest_testersai | assert_testersai_clean assertion. |
selenium_testersai | Testersai::SeleniumDriver wrapper. |
Check types + testers
Subset the analysis to specific testing domains (CheckType.*) or
pick named tester personas by first name. You can mix them
freely — the SDK normalises everything, dedupes, and sends it to the API.
If neither is provided, the API runs its default mix.
Built-in check types
The 16 domains the AI knows how to evaluate. Pass by enum value or alias:
| CheckType | Wire name | Aliases |
|---|---|---|
ACCESSIBILITY | accessibility | a11y, wcag, ally |
SECURITY | security | sec, appsec, owasp |
PRIVACY | privacy | gdpr, ccpa, pii |
PERFORMANCE | performance | perf, web_vitals |
USABILITY | usability | ux |
CONTENT | content | copy, spelling, seo |
FORMS | forms | form |
SEARCH | search | — |
PUBLISHING | publishing | — |
LANDING_PAGE | landing_page | landing |
CHECKOUT | checkout | ecommerce, payment |
CART | cart | — |
PRICING | pricing | — |
ERROR_UX | error_ux | errors |
AI_CODE | ai_code | vibe_coding |
AI_CHATBOT | ai_chatbot | chatbot |
Pick individual testers by first name
Every built-in tester's lowercase first name works as a token. Mix freely with check types.
# Python: only run Alejandro + Tariq on this screenshot
client.analyze_screenshot(img, testers=["alejandro", "tariq"])
# Or by domain — equivalent to "every tester tagged with that domain"
client.analyze_screenshot(img, checks=[CheckType.ACCESSIBILITY, CheckType.PERFORMANCE])
# Mix: an enum + an alias + two first-names
client.analyze_console(
    logs,
    checks=[CheckType.SECURITY, "a11y"],
    testers=["sharon", "alejandro"])
// JavaScript
const { TestersAI, CheckType } = require('@testersai/sdk');
await ta.analyzeScreenshot(buf, {
  checks: [CheckType.ACCESSIBILITY, 'perf'],
  testers: ['sharon', 'yuki'],
});
// Java
import ai.testers.Checks;
import ai.testers.Checks.CheckType;
ta.analyzeScreenshot(
    bytes, Map.of(),
    List.of(CheckType.ACCESSIBILITY, "perf"),
    List.of("sharon", "alejandro")
);
// C#
using static TestersAI.Checks;
await client.AnalyzeScreenshotAsync(path,
    checks: new[] { (object)CheckType.Accessibility, "perf" },
    testers: new[] { (object)"sharon", "alejandro" });
# Ruby
client.analyze_screenshot(img,
  checks: [:accessibility, "perf"],
  testers: ["sharon", "alejandro"])
Define your own prompt-based tester
Not finding exactly what you want in the built-ins? Create a
custom tester — it's evaluated alongside the built-ins.
The instructions field is the LLM prompt the server uses; make
it specific about what to flag.
# Python — a brand-enforcement tester
from testersai import TestersAI, CheckType, CustomTester
brand = CustomTester(
    name="Brand Police",
    role="Brand Guideline Enforcer",
    instructions=(
        "Flag any screenshot element that breaks our brand guidelines: "
        "primary red must be #dc2626 (no other reds), typography must be "
        "Inter or Space Grotesk, logo must be the official T-mark. "
        "Anything off-brand is HIGH severity."
    ),
    tags=("brand", "design-system"),
    checks=(CheckType.CONTENT,),
)
# Use alongside built-ins
r = client.analyze_screenshot(
    img,
    testers=["alejandro", "tariq", brand],
)
// JavaScript
const { customTester, CheckType } = require('@testersai/sdk');
const brand = customTester({
  name: 'Brand Police',
  role: 'Brand Guideline Enforcer',
  instructions: 'Flag anything off-brand as HIGH severity — primary red must be #dc2626, Inter typography only.',
  tags: ['brand'],
  checks: [CheckType.CONTENT],
});
await ta.analyzeScreenshot(buf, { testers: ['alejandro', brand] });
// Java
var brand = new Checks.CustomTester(
    "Brand Police",
    "Brand Guideline Enforcer",
    "Flag anything off-brand as HIGH severity..."
);
brand.tags = List.of("brand");
brand.checks = List.of(Checks.CheckType.CONTENT);
ta.analyzeScreenshot(bytes, Map.of(), null,
    List.of("alejandro", brand));
// C#
var brand = new Checks.CustomTester {
    Name = "Brand Police",
    Role = "Brand Guideline Enforcer",
    Instructions = "Flag anything off-brand as HIGH severity...",
    CheckTypes = new[] { Checks.CheckType.Content },
};
await client.AnalyzeScreenshotAsync(path, testers: new object[] { "alejandro", brand });
# Ruby
brand = Testersai::Checks::CustomTester.new(
  name: "Brand Police",
  role: "Brand Guideline Enforcer",
  instructions: "Flag anything off-brand as HIGH severity...",
  checks: [:content]
)
client.analyze_screenshot(img, testers: ["alejandro", brand])
"aleajndro"?
The SDK writes [testersai] Unknown check/tester tokens ignored: "aleajndro"
to stderr and proceeds with the valid ones. TESTERSAI_QUIET=true silences it.
Persona feedback
Different from the named tester personas (Sharon, Alejandro, …) — this generates diverse end-user personas (e.g. Maya, 28, marketing manager, mobile-first) and has each one rate the page on 9 feedback dimensions (1-10 each; NPS runs 0-10) plus written comments. Use it to catch the kinds of UX, emotional, and competitive issues that don't show up in functional tests.
The 9 feedback dimensions
| Attribute | What it measures |
|---|---|
visual | Overall visual polish and presentation. |
design | Visual design quality and consistency. |
usability | How easy / intuitive the app is to use. |
content | Quality and clarity of text / media content. |
features | Coverage and completeness of functionality. |
competitive | How it stacks up versus alternatives the user has seen. |
emotional | Emotional response (delight, trust, anxiety, frustration). |
accessibility | How well it works for users with disabilities. |
nps | Net Promoter Score (0-10) — would they recommend it? |
Override this default set by passing your own attributes=[...].
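For example, in Python (reusing the analyze_personas signature shown below):
# Only request the dimensions you care about (names from the table above)
r = client.analyze_personas(
    screenshot=png_bytes,
    personas=3,
    attributes=["usability", "emotional", "nps"],
)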
Endpoint
POST /v1/analyze/personas — body shape:
{
  "image_base64": "...",   // optional — visual context
  "text": "...",           // optional — page text / HTML
  "context": { ... },
  "personas": 3,           // number of personas to generate
  "persona_traits": ["mobile-first","first-time-user","non-technical"],
  "attributes": ["visual","design","usability","content","features",
                 "competitive","emotional","accessibility","nps"]
}
Response:
{
  "personas": [
    {
      "name": "Aisha Khan", "age": 28,
      "image": "https://testers.ai/img/profiles/aisha.jpg",
      "background": "Marketing manager, mobile-first user",
      "ratings": {
        "visual": 8, "design": 6, "usability": 7, "content": 7,
        "features": 6, "competitive": 7, "emotional": 6,
        "accessibility": 5, "nps": 7
      },
      "comments": "Hero CTA is clear but mobile layout cuts off the right column..."
    },
    ...
  ]
}
Per-language
# Python
from testersai import TestersAI
client = TestersAI()
r = client.analyze_personas(
    screenshot=png_bytes,
    page_text=html,
    personas=5,
    persona_traits=["mobile-first", "first-time-user"],
    fail_below=5,  # auto-promote ratings < 5 to issues
)
for p in r.raw["personas"]:
    print(p["name"], p["ratings"])
if r.failed:  # True if any rating < 5
    raise AssertionError(r.issues[0].message)
// JavaScript / TypeScript
const r = await client.analyzePersonas({
  screenshot: png,
  pageText: html,
  personas: 5,
  personaTraits: ['mobile-first', 'first-time-user'],
  failBelow: 5,
});
console.log(r.raw.personas.map(p => `${p.name}: ${JSON.stringify(p.ratings)}`));
// Java
AnalysisResult r = client.analyzePersonas(
    screenshot, html, Map.of("url", "/home"), 5,
    List.of("mobile-first"), null, 5);
// C#
var r = await client.AnalyzePersonasAsync(
    screenshot: png, pageText: html, personas: 5,
    personaTraits: new[] { "mobile-first" },
    failBelow: 5);
# Ruby
r = client.analyze_personas(
  screenshot: png, page_text: html,
  personas: 5, persona_traits: ["mobile-first"], fail_below: 5)
Curl
B64=$(base64 -i home.png | tr -d '\n')
curl https://api.testers.ai/v1/analyze/personas \
  -H "Authorization: Bearer $TESTERSAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d @- <<JSON
{
  "image_base64": "$B64",
  "context": { "url": "/home" },
  "personas": 3,
  "persona_traits": ["mobile-first","first-time-user"]
}
JSON
Pass fail_below=N (Python / Ruby), failBelow: N (JS), or an Integer failBelow (Java / C#),
and any persona rating below N becomes an Issue on the result —
severity critical for ratings ≤ 3, high otherwise. That makes
result.failed work the same way as the other analyze methods, so you can
fail your test on bad UX scores without writing extra logic.
Test-management integrations
Optional auto-log to Jira, Xray, TestRail, and Cypress Cloud. See the dedicated integrations page for full details, env vars, and per-target behaviour.
Raw REST API / curl
Every SDK is a thin client over the same HTTPS endpoint. If you're in a language
we don't ship an SDK for (Go, Rust, PHP, Bash), or you just want to script it
directly, curl works. Auth: Authorization: Bearer sk_live_....
Endpoints
| Method | URL | Body shape |
|---|---|---|
POST | /v1/analyze/screenshot | { image_base64, context?, checks?, testers?, custom_testers? } |
POST | /v1/analyze/console | { entries: [{ level, text }, ...], context? } |
POST | /v1/analyze/network | { har? | entries?, context? } |
POST | /v1/analyze/page-text | { text, context? } |
Screenshot — curl
# Base64-encode the file, wrap in JSON, POST.
B64=$(base64 -i home.png | tr -d '\n')
curl https://api.testers.ai/v1/analyze/screenshot \
  -H "Authorization: Bearer $TESTERSAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"image_base64\":\"$B64\",\"context\":{\"url\":\"/home\"}}"
With check subset + named testers + a custom tester
curl https://api.testers.ai/v1/analyze/screenshot \
  -H "Authorization: Bearer $TESTERSAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d @- <<'JSON'
{
  "image_base64": "iVBORw0KGgo...",
  "context": { "url": "/checkout", "stage": "payment" },
  "checks": ["accessibility", "security"],
  "testers": ["alejandro", "sharon", "hassan"],
  "custom_testers": [
    {
      "custom": true,
      "name": "Brand Police",
      "role": "Brand Guideline Enforcer",
      "focus": "custom",
      "instructions": "Flag anything off-brand. Primary red must be #dc2626; reject other reds.",
      "tags": ["brand"],
      "checks": ["content"]
    }
  ]
}
JSON
Console logs
curl https://api.testers.ai/v1/analyze/console \
  -H "Authorization: Bearer $TESTERSAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "entries": [
      {"level":"error","text":"ReferenceError: x is not defined"},
      {"level":"warn", "text":"Deprecation: foo() will be removed"}
    ],
    "context": {"url":"/home"}
  }'
Network (HAR or entries)
# Option 1: entry list
curl https://api.testers.ai/v1/analyze/network \
  -H "Authorization: Bearer $TESTERSAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"entries":[{"url":"/api/user","method":"GET","status":500}]}'
# Option 2: full HAR (from DevTools or a proxy)
curl https://api.testers.ai/v1/analyze/network \
  -H "Authorization: Bearer $TESTERSAI_API_KEY" \
  -H "Content-Type: application/json" \
  --data-binary @- <<JSON
{"har": $(cat network.har)}
JSON
Page text / HTML
curl https://api.testers.ai/v1/analyze/page-text \
  -H "Authorization: Bearer $TESTERSAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "<html><body><h1>Hellow World</h1></body></html>",
    "checks": ["content", "accessibility"]
  }'
Response shape
{
  "issues": [
    {
      "severity": "high",   // info | low | medium | high | critical
      "category": "accessibility",
      "message": "Image missing alt text",
      "location": "img[src='/hero.png']",
      "evidence": "<img src='/hero.png'>"
    }
  ]
}
HTTP status codes
| Status | Meaning | SDK behaviour |
|---|---|---|
200 | OK. Body contains issues array. | Parsed; findings routed to sinks. |
401 / 403 | Key missing/invalid/revoked. | Logged with a CTA to Get API key. Not retried. |
429 | Rate-limited. Retry-After may be set. | Retries within budget; skips cleanly if over. |
5xx | Server error. | Retries within budget; returns a skipped result otherwise. |
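On the result object these outcomes surface as reason strings, so a test can branch without touching HTTP details (Python; the reason values come from AnalysisResult above):
r = client.analyze_network(har)
if r.skipped and r.reason == "rate_limited":    # 429 beyond the retry budget
    pass                                        # treat as a no-op; the test continues
elif r.skipped and r.reason == "server_error":  # 5xx after retries
    print(f"[testersai] degraded: {r.error}")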
Quickstarts
Copy-paste starting points for the two most popular combinations.
Python + Selenium 4
# 1. Install — download testersai-python-selenium-0.1.0.tar.gz from /downloads.html, then:
pip install ./testersai-python-selenium-0.1.0.tar.gz selenium
# (on Windows: same command works in PowerShell / cmd)
# 2. Set your key — get one at https://testers.ai/sdk
export TESTERSAI_API_KEY=sk_live_...
# Windows PowerShell: $env:TESTERSAI_API_KEY = "sk_live_..."
# 3. Write the test
from selenium import webdriver
from selenium.webdriver.common.by import By
from testersai import CheckType
from selenium_testersai import TestersAIDriver
driver = webdriver.Chrome() # Selenium 4
ta = TestersAIDriver(driver)
driver.get("https://shop.example/home")
# Subset the analysis: accessibility + performance only
ta.analyze_screenshot(
    checks=[CheckType.ACCESSIBILITY, "perf"],
    testers=["alejandro", "tariq"])
driver.find_element(By.ID, "sign-in").click()
r = ta.analyze_screenshot()  # default tester mix
ta.analyze_console()         # JS errors
if r.failed:
    raise AssertionError(r.issues[0].message)
driver.quit()
TypeScript + @playwright/test
// 1. Install — download testersai-javascript-playwright-0.1.0.tar.gz from /downloads.html, then:
// tar -xzf testersai-javascript-playwright-0.1.0.tar.gz
// cd testersai-javascript-playwright-0.1.0
// npm install -D @playwright/test \
// ./testersai-sdk-0.1.0.tgz ./testersai-playwright-0.1.0.tgz
// (Windows: same commands work in PowerShell / Git Bash / WSL)
// 2. Set your key — get one at https://testers.ai/sdk
// export TESTERSAI_API_KEY=sk_live_...
// Windows PowerShell: $env:TESTERSAI_API_KEY = "sk_live_..."
// 3. spec:
import { test, expect } from '@playwright/test';
import { TestersAIPage } from '@testersai/playwright';
import { CheckType, customTester } from '@testersai/sdk';
const brand = customTester({
  name: 'Brand Police',
  role: 'Brand Guideline Enforcer',
  instructions: 'Flag anything off-brand — primary red must be #dc2626, Inter typography only. HIGH severity.',
  tags: ['brand'],
  checks: [CheckType.CONTENT],
});
test('checkout passes AI checks', async ({ page }, testInfo) => {
  const ta = new TestersAIPage(page, {
    frameworkLog: (r) => testInfo.annotations.push({
      type: `testersai-${r.kind}`,
      description: `ok=${r.ok} issues=${r.issues.length}`,
    }),
  });
  await page.goto('/cart');
  await ta.analyzeScreenshot({ checks: [CheckType.USABILITY], testers: ['mia'] });
  await page.click('#checkout');
  await page.fill('#card', '4242424242424242');
  await page.click('#pay');
  const shot = await ta.analyzeScreenshot({ testers: ['hassan', brand] });
  await ta.analyzeConsole();
  expect(shot.issues.filter((i) => i.severity === 'critical')).toHaveLength(0);
  await expect(page).toHaveURL('/success');
});
Full env var reference
Set any subset. An unset integration variable means that integration is disabled; unset core variables fall back to the defaults above.
# Core
# Get a key at https://testers.ai/sdk or click "Get API key" above.
TESTERSAI_API_KEY=sk_live_...
TESTERSAI_BASE_URL=https://api.testers.ai
TESTERSAI_ENABLED=true
TESTERSAI_TIMEOUT_MS=15000
TESTERSAI_MAX_RETRIES=3
TESTERSAI_MAX_RETRY_WAIT_MS=5000
TESTERSAI_STRICT=false
TESTERSAI_SINK=framework,disk
TESTERSAI_SINK_DIR=./testersai-results
TESTERSAI_LOG_FORMAT=json
TESTERSAI_QUIET=false # true = silence all SDK stderr messages
# Jira
TESTERSAI_JIRA_URL=https://acme.atlassian.net
TESTERSAI_JIRA_EMAIL=qa@acme.com
TESTERSAI_JIRA_API_TOKEN=...
TESTERSAI_JIRA_PROJECT=QA
TESTERSAI_JIRA_ISSUE_TYPE=Bug
TESTERSAI_JIRA_SEVERITIES=high,critical
# Xray (Jira plugin)
TESTERSAI_XRAY_CLIENT_ID=...
TESTERSAI_XRAY_CLIENT_SECRET=...
TESTERSAI_XRAY_EXECUTION_KEY=EX-42
TESTERSAI_XRAY_URL=https://xray.cloud.getxray.app
# TestRail
TESTERSAI_TESTRAIL_URL=https://acme.testrail.io
TESTERSAI_TESTRAIL_USER=qa@acme.com
TESTERSAI_TESTRAIL_API_KEY=...
TESTERSAI_TESTRAIL_RUN_ID=1234
# Cypress Cloud
CYPRESS_RECORD_KEY=... # standard Cypress Cloud key
CYPRESS_PROJECT_ID=...
CYPRESS_RUN_ID=... # usually from CI env