Executive summary report
A single-glance summary across every dimension — quality score, severity-ranked findings, trend lines, and a clear "what changed since last run" delta. Built to share with stakeholders, not buried in raw output.
The fastest, smartest, most efficient way to add AI to your testing
Perfect for teams looking to add a lot of AI coverage with little effort
Designed for manual and exploratory testers — move faster, surface more, and be the hero on your team with AI doing the heavy lifting alongside you.
Upgrade your automation — don't rewrite it. The delta is three lines. Everything else is your existing test.
# Before: a standard pytest-playwright test.
from playwright.sync_api import expect

def test_checkout(page):
    page.goto("/cart")
    page.click("#checkout")
    page.fill("#card", "4242...")
    page.click("#pay")
    expect(page).to_have_url("/success")
# After: the same test, with testers.ai checks added.
from playwright.sync_api import expect

def test_checkout(page, testersai):
    page.goto("/cart")
    testersai.screenshot()   # check cart UI
    page.click("#checkout")
    page.fill("#card", "4242...")
    page.click("#pay")
    testersai.screenshot()   # check success page
    testersai.console()      # check for JS errors
    expect(page).to_have_url("/success")
Point Jank at a domain and the agent discovers every reachable page, runs the full testing suite against each in parallel, and rolls findings up into a single per-domain report. Configurable depth, allow / deny rules, and auth.
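If you drive the crawler through the reports API described further down, the submission could look something like this. A minimal sketch: the crawl, maxDepth, allow, deny, and auth field names are illustrative assumptions, not the documented schema.

import requests

# Hypothetical payload: the crawl option names below are illustrative only.
payload = {
    "urls": ["https://shop.example.com"],       # seed domain to crawl
    "crawl": {
        "maxDepth": 3,                          # configurable depth
        "allow": ["/products/.*", "/cart.*"],   # allow rules
        "deny": ["/admin/.*"],                  # deny rules
        "auth": {"username": "qa@example.com", "password": "..."},
    },
}
resp = requests.post(
    "https://reports.jank.ai/api/reports",
    json=payload,
    headers={"X-Api-Key": "YOUR_API_KEY"},
)
print(resp.json())  # report IDs for the run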
Benchmark your app against category peers — quality score, accessibility, performance — so you see where you actually stand. Per-category leaderboards drawn from our 1000+ tested-app index.
Functional, visual, accessibility, performance, security, SEO, and content — surfaced and severity-ranked in one run. Every issue comes with a copy-paste fix prompt your AI coding agent can run.
An autonomous AI agent crawls the app, clicks like a human, probes flows, and reports the unexpected jank no scripted test would catch — the kind of "wait, can a user even do this?" finding that rote regression tests miss every time.
Generates and runs prioritised test cases against your live URL — happy paths, edge cases, negatives — and returns pass / fail with evidence. Same engine you can drive locally via the SDK or CI.
30+ AI personas review the page (new user, power user, exec, accessibility-dependent user, sceptical buyer…) and return specific, voiced feedback. Catches the "this just doesn't feel right" signals that quantitative tests can't surface.
WCAG 2.2 A / AA / AAA with screen-reader, keyboard-nav, color-contrast, ARIA, and form-accessibility checks. Every issue comes with a fix prompt that points at the exact element and the WCAG success criterion it violates.
Tests every page across phone, tablet, and desktop breakpoints in one pass. Surfaces overflow bugs, broken layouts, touch-target violations, off-screen content, and mobile-specific accessibility issues — each with a fix prompt that names the breakpoint and the offending CSS.
Once a run completes, ask follow-ups in plain English. "Which of these are blocking checkout?" "Generate test cases that cover the new fixes." "Explain finding #4 like I'm the CEO." The conversational layer reads the full evidence so answers stay grounded.
Every run option, deployment path, and integration we support — covered.
Yes. Pick from Anthropic Claude, OpenAI (GPT-4o / GPT-5 etc.), Google Gemini, or Azure OpenAI. For fully air-gapped or zero-egress setups, point Jank at a self-hosted endpoint (Ollama, vLLM, LocalAI, or any OpenAI-compatible API). Provider + model are passed per-request via the provider / model fields, or set globally per deployment.
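Per-request selection is just two extra fields on the submission payload. A hedged sketch: the exact provider names and model strings your deployment accepts may differ.

# Provider + model travel with each request; values here are examples.
payload = {
    "urls": ["https://app.example.com"],
    "provider": "anthropic",       # or "openai", "gemini", "azure", or a
                                   # self-hosted OpenAI-compatible endpoint
    "model": "claude-sonnet-4-5",  # any model the chosen provider serves
}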
Yes — three ways:
1. Docker Compose: cloud/enterprise/docker-compose.yml.
2. Kubernetes: cloud/enterprise/kubernetes/; tested on EKS, GKE, AKS, and bare-metal k3s.
3. Quick start: set ADMIN_TOKEN + an LLM key, then docker compose up -d --build. Up in under 10 minutes.
All three ship as the same Node + Playwright + cloudflared image, with Firestore (or any Firestore-API-compatible backend) for metadata and a configurable object store for artifacts.
Yes. Pair a self-hosted deployment with a self-hosted LLM endpoint (Ollama / vLLM / LocalAI) and the entire system runs without outbound internet — neither testers.ai nor any LLM vendor sees your traffic or your reports. The hosted UI, the runner, the LLM call, and the artifact store all live on your network.
The runner can bring up a tunnel for the duration of a single test, then tear it down. Supported tunnel types:
Yes. Every stored report renders to multiple formats on demand:
JSON: GET /r/:id.json
Markdown: GET /r/:id.md
HTML: the hosted report view (/r/:id), with the report itself shareable as a permanent URL.
Test cases generated by Jank can also be exported to CSV, Jira, TestRail, or Xray directly from the chat UI.
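Fetching each render is a plain GET. A minimal sketch, assuming the hosted base URL; self-hosted deployments substitute their own domain, and the report ID is a placeholder.

import requests

BASE = "https://reports.jank.ai"
report_id = "abc123"  # placeholder: use an ID returned at submission time

as_json  = requests.get(f"{BASE}/r/{report_id}.json").json()  # machine-readable
as_md    = requests.get(f"{BASE}/r/{report_id}.md").text      # Markdown render
html_url = f"{BASE}/r/{report_id}"                            # shareable web view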
Yes. Every run gets a permanent shareable URL (https://reports.jank.ai/r/<id> on hosted, or your equivalent base URL on self-host). You choose visibility: "public" (anyone with the link views the report) or visibility: "private" (admin-token gated). Optional emails list sends a "report ready" email when a run completes.
A full multi-dimensional run (bug finding + exploratory + functional + competitive + personas + accessibility + crawl) typically lands in ~12–15 minutes. Smaller scoped runs (single-page bugs only, no personas, no flows) finish in 3–5 minutes. Every agent runs in parallel — adding more dimensions doesn't multiply the runtime, it just lights up more lanes.
Pass customPrompt to steer the agent (e.g., "focus on the checkout funnel"), or use it to bias persona feedback toward your audience.
Yes. POST /api/reports with a JSON list of URLs and the runner returns report IDs immediately; poll GET /api/reports/:id for status, then fetch /r/:id.json for the result. Auth is via an X-Api-Key header. There's also a scripts/submit.sh curl wrapper bundled with the cloud package, and a jank CLI for CI runners (GitHub Actions, GitLab CI, Jenkins, CircleCI).
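End to end, a CI submission can be as small as the sketch below. It assumes the hosted base URL, a list of IDs in the submit response, and a state field on the poll response; those response shapes and the qualityScore field are assumptions, while the endpoints, the X-Api-Key header, and the visibility / emails options come from the answers above.

import time
import requests

BASE = "https://reports.jank.ai"
HEADERS = {"X-Api-Key": "YOUR_API_KEY"}

# Submit a batch of URLs; report IDs come back immediately.
resp = requests.post(f"{BASE}/api/reports", headers=HEADERS, json={
    "urls": ["https://app.example.com/", "https://app.example.com/pricing"],
    "visibility": "private",            # or "public" for link-shareable reports
    "emails": ["qa-team@example.com"],  # optional "report ready" notification
})
report_ids = resp.json()  # assumed shape: a list of report IDs

# Poll until each run finishes, then fetch the full JSON result.
for rid in report_ids:
    while True:
        status = requests.get(f"{BASE}/api/reports/{rid}", headers=HEADERS).json()
        if status.get("state") in ("done", "failed"):  # field name assumed
            break
        time.sleep(15)
    report = requests.get(f"{BASE}/r/{rid}.json", headers=HEADERS).json()
    print(rid, report.get("qualityScore"))  # field name assumed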
An admin dashboard at /admin shows every report, its queue/running/done state in real time, with one-click retry on failures. Per-key quotas, per-account demo limits, and a separate ops API (see docs/api-internal.md) cover the operator side. Artifacts are versioned in object storage; metadata and run state live in Firestore (or a Firestore-compatible store on self-host).
Sign up and we'll send you an unlock code that removes free-trial rate limits on this chat and the AI-based testing tools.
Drop a few details and one of our test engineers will reach out to scope a run against your app — bugs, accessibility, persona feedback, and a comprehensive quality report.
Add specifications, test plans, API docs, requirements — anything the assistant should treat as ground truth. Each entry is sent with every query. Add as many as you need.
Zero effort. AI finds your most important escaped issues, persona feedback, and regression-testing gaps — and shows how you compare to category peers.
What's your role?
Which platform?
Recommended for:
Your profile is stored only in this browser.
We tailor responses and recommended tools to your role. VP/Exec gets quality-analytics emphasis; engineers get technical depth.
Controls UI labels and the language the assistant replies in.
Free-trial proxy. Optional: attach your email / unlock code for higher limits.
Don't have an unlock code? Request one →
Calls OpenAI directly from your browser. Key is stored in this browser only.
Calls Anthropic directly from your browser. Key is stored in this browser only.
Calls Gemini directly from your browser. Key is stored in this browser only.
Connect Jira, TestRail, or Xray to auto-file bugs & tests, and to pull existing issues/tests into chat context. Beta — requires your org to allow CORS from this page; if filing fails, use the CSV exports.
TestRail admin → My Settings → API Keys.
Xray lives inside Jira. Uses your Jira credentials above. Fill in project + test plan to auto-link filed tests.