From the team that tested Chrome

Find Bugs Before
Your Customers Do

21 AI agents. Complete quality coverage in minutes — not sprints. Zero new hires.

Free AI quality report — no sign-up required
⚡ Results in ~15 min 🤖 21 AI agents 🔒 No credit card
Screenshot preview
📸 Screenshot ready · AI will analyze it for bugs on send
The numbers don't lie

AI Brings Real ROI

Replace a week of manual regression with a 5-minute AI run. Same coverage. Twenty-two times the speed. A fraction of the cost.

22×
Faster than manual testing
What takes a QA team days runs in minutes
80%
Reduction in QA cost
Teams cut spend dramatically while expanding coverage
21
AI agents per run
Security · WCAG · SEO · Perf · UX · Functional — all at once
15 min
Time to first report
No install, no setup, no waiting for a QA sprint
🛡️
Coverage no team can match
Bugs, accessibility (WCAG 2.2), security flaws, broken SEO, slow pages, confusing UX — found simultaneously, every run.
🔌
Zero ramp-up time
Drop into your existing Playwright, Selenium or Cypress pipeline in an afternoon. No new infrastructure. No rewriting tests.
📈
Metrics leadership trusts
Quality trends, severity dashboards, and benchmarks vs. 1,200+ real apps — data you can put in front of a board.
More coverage than most dedicated test teams and vendors.

Every run executes all of the following simultaneously — in parallel, automatically, without configuration.

01 Executive summary & quality score
02 Multi-page site crawl
03 Quality analytics & benchmarking
04 Automated bug finding
05 Exploratory testing
06 Functional test execution
07 30+ AI user personas
08 Accessibility · WCAG 2.2 A/AA/AAA
09 Mobile responsiveness
10 Chat with your report
Work smarter with what you already have

Your tests. Supercharged by AI.

Whether you're starting from scratch, migrating an existing suite, or looking to go deeper — AI meets you where you are.

01

Convert existing tests to AI

Already have manual test cases or automated scripts? We convert them to AI-executed tests that are dramatically more robust — no brittle selectors, no flaky assertions, no maintenance hell. Your test intent survives; the fragility doesn't.

Manual → AI · Selenium / Cypress / Playwright · Zero rewrites
02

Create new tests from prompts

Describe what you want to test in plain English — "check that checkout works for guest users on mobile" — and AI generates a complete, runnable test. No scripting knowledge required. Go from idea to coverage in seconds.

Natural language input · Any framework · Instant coverage
03

Generate variations automatically

One test becomes many. AI explores edge cases, boundary conditions, locale variations, and adversarial inputs you'd never think to write manually — then runs them all in parallel. Surface the bugs that only appear in the gaps.

Edge cases · Boundary testing · Parallel execution
Have experts convert & run your tests →
Results in Under 15 Minutes
21 agents run in parallel. Full multi-dimensional report — bugs, accessibility, performance, security — delivered fast.
10 Report Dimensions
Bugs, UX, accessibility, performance, security, SEO, personas, flows, and competitive benchmarking — all in one run.
Always the Latest AI
Our stack updates constantly with the newest models. You never maintain tools or manage your own AI infrastructure.
21 AI specialists run on every test
Sharon · Security
Marcus · OWASP
Alejandro · Accessibility
Hiroshi · WCAG
Mia · Usability
Pete · Privacy
Zanele · GDPR
Rajesh · Cookies
Sundar · Legal
Jason · AI Code
Diego · Chatbot
Tariq · Performance
Fatima · Error UX
Sophia · Content
Richard · Forms
Mei · Search
Zara · Publishing
Yuki · Landing Pages
Hassan · Checkout
Priya · Cart
Mateo · Pricing
Can I bring my own LLM?

Yes. Pick from Anthropic Claude, OpenAI (GPT-4o / GPT-5 etc.), Google Gemini, or Azure OpenAI. For fully air-gapped or zero-egress setups, point Jank at a self-hosted endpoint (Ollama, vLLM, LocalAI, or any OpenAI-compatible API). Provider + model are passed per-request via the provider / model fields, or set globally per deployment.
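A minimal sketch of the per-request provider selection described above. The `provider` and `model` field names come from the FAQ; the overall payload shape around them is an assumption, not the documented schema:

```python
import json

def build_llm_selection(urls, provider, model):
    """Attach an LLM provider + model choice to a run request.

    Only `provider` and `model` are documented fields; the rest of
    this payload shape is an illustrative guess.
    """
    return {"urls": urls, "provider": provider, "model": model}

# Point a run at a self-hosted, OpenAI-compatible endpoint (e.g. Ollama):
req = build_llm_selection(["https://app.example.com"],
                          provider="ollama", model="llama3")
print(json.dumps(req, indent=2))
```

The same two fields could be set once per deployment instead of per request, as noted above.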

Can I self-host on my own private network?

Yes — three ways:

  • Docker / Docker Compose — one-line bring-up via cloud/enterprise/docker-compose.yml.
  • Kubernetes — manifests in cloud/enterprise/kubernetes/; tested on EKS, GKE, AKS, and bare-metal k3s.
  • Single VM — clone, set ADMIN_TOKEN + LLM key, docker compose up -d --build. Up in under 10 minutes.

All three ship as the same Node + Playwright + cloudflared image, with Firestore (or any Firestore-API-compatible backend) for metadata and a configurable object store for artifacts.

Can I run fully air-gapped?

Yes. Pair a self-hosted deployment with a self-hosted LLM endpoint (Ollama / vLLM / LocalAI) and the entire system runs without outbound internet — neither testers.ai nor any LLM vendor sees your traffic or your reports. The hosted UI, the runner, the LLM call, and the artifact store all live on your network.

How do I tunnel into private / VPN-protected targets?

The runner can bring up a tunnel for the duration of a single test, then tear it down. Supported tunnel types:

  • Tailscale — join the runner to your tailnet; address the target by its tailnet hostname.
  • cloudflared — runs the Cloudflare connector inside the runner container.
  • ngrok — for ad-hoc reverse tunnels.
  • SSH reverse — opens an SSH reverse forward to your jump host.
  • WireGuard, OpenVPN, IPSec — supported on self-hosted deployments.
  • GCP VPC connector — for managed Cloud Run deployments inside your GCP project.
  • Reverse proxy — pass-through if your target is already exposed via a corporate reverse-proxy host.
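As a sketch, per-run tunnel specs for two of the types above might look like this. The `type` values mirror the supported-tunnel list, but every field name here is an assumption rather than the documented schema:

```python
def tailscale_tunnel(hostname):
    # Runner joins your tailnet and addresses the target by tailnet hostname.
    return {"type": "tailscale", "target": hostname}

def ssh_reverse_tunnel(jump_host, remote_port, local_port):
    # Runner opens an SSH reverse forward through your jump host.
    return {"type": "ssh-reverse", "jumpHost": jump_host,
            "remotePort": remote_port, "localPort": local_port}

spec = tailscale_tunnel("staging.tailnet.example")
```

Either spec would live only for the duration of a single test, then be torn down.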
Can I import / export tests + findings?

Yes. Every stored report renders to multiple formats on demand:

  • JSON — full report (issues, severity, evidence, persona reviews, flow steps, screenshots, timing). Stable schema, version-tagged. GET /r/:id.json
  • Markdown — a human-readable report with embedded screenshots and one fix-prompt per issue. GET /r/:id.md
  • TXT — a flat list of fix prompts, one per issue, ready to pipe into your AI coding agent (Claude, Cursor, Copilot, Antigravity).
  • HTML — the shareable web report (/r/:id), with the report itself shareable as a permanent URL.

Test cases generated by Jank can also be exported to CSV, Jira, TestRail, or Xray directly from the chat UI.
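The render endpoints above follow one pattern per format; a small helper can build all four links for a report. The `.json` and `.md` paths are documented; the `.txt` path is assumed to follow the same pattern, and the base URL is an example:

```python
def export_urls(base, report_id):
    """Build the per-format export URLs for a stored report."""
    # JSON / Markdown / TXT render endpoints (TXT path is an assumption)
    urls = {fmt: f"{base}/r/{report_id}.{fmt}" for fmt in ("json", "md", "txt")}
    urls["html"] = f"{base}/r/{report_id}"  # shareable web report
    return urls

links = export_urls("https://reports.jank.ai", "abc123")
```

Any one of these links can be fetched on demand, since every stored report renders to each format.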

Can reports be shared?

Yes. Every run gets a permanent shareable URL (https://reports.jank.ai/r/<id> on hosted, or your equivalent base URL on self-host). You choose visibility: "public" (anyone with the link views the report) or visibility: "private" (admin-token gated). Optional emails list sends a "report ready" email when a run completes.

How long does a run take?

A full multi-dimensional run (bug finding + exploratory + functional + competitive + personas + accessibility + crawl) typically lands in ~12–15 minutes. Smaller scoped runs (single-page bugs only, no personas, no flows) finish in 3–5 minutes. Every agent runs in parallel — adding more dimensions doesn't multiply the runtime, it just lights up more lanes.

What can I configure per run?
  • URLs — 1 to 25 per submission, batch-mode supported.
  • Subpages — let the AI pick N additional pages from the entry URL (or disable).
  • Flows — generate N test flows; pass customPrompt to steer the agent (e.g., "focus on the checkout funnel").
  • Personas — generate N persona reviews with optional customPrompt to bias toward your audience.
  • Provider + model — pick LLM per-run.
  • Visibility — public / private / admin-token gated.
  • Tunnel spec — Tailscale, cloudflared, ngrok, SSH, WireGuard, OpenVPN, IPSec, GCP VPC.
  • Email notifications — comma-separated list of recipients per run.
  • Custom checks — per-brand / per-customer test rules layered on top of the standard suite.
  • Label — free-form tag for grouping in the admin dashboard.
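Putting those knobs together, a run configuration might look like the sketch below. `customPrompt`, `provider`, `model`, and the visibility values appear in this FAQ; the remaining key names and the model string are illustrative guesses:

```python
# Hypothetical per-run configuration; key names other than
# customPrompt / provider / model / visibility are assumptions.
run_config = {
    "urls": ["https://shop.example.com", "https://shop.example.com/checkout"],
    "subpages": 5,                        # let the AI pick 5 extra pages
    "flows": 3,                           # generate 3 test flows
    "customPrompt": "focus on the checkout funnel",
    "personas": 4,                        # 4 persona reviews
    "provider": "anthropic",
    "model": "claude-sonnet",             # placeholder model name
    "visibility": "private",              # admin-token gated
    "emails": "qa-lead@example.com,eng@example.com",
    "label": "release-regression",        # free-form grouping tag
}

# Submissions accept 1 to 25 URLs.
assert 1 <= len(run_config["urls"]) <= 25
```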
Does it have a REST API + CLI?

Yes. POST /api/reports with a JSON list of URLs and the runner returns report IDs immediately; poll GET /api/reports/:id for status, fetch /r/:id.json for the result. Auth is via an X-Api-Key header. There's also a scripts/submit.sh curl wrapper bundled with the cloud package, and a jank CLI for CI runners (GitHub Actions, GitLab CI, Jenkins, CircleCI).
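A stdlib-only sketch of building that submit request. The endpoint paths and the `X-Api-Key` header come from the answer above; the base URL, key, and exact body shape are placeholders, and the request is constructed but not sent:

```python
import json
import urllib.request

BASE = "https://reports.jank.ai"   # example base URL
API_KEY = "your-key-here"          # placeholder key

def submit_request(urls):
    """Build the POST /api/reports request with X-Api-Key auth."""
    body = json.dumps({"urls": urls}).encode()
    return urllib.request.Request(
        f"{BASE}/api/reports",
        data=body,
        headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

req = submit_request(["https://app.example.com"])
# Send with urllib.request.urlopen(req), then poll GET /api/reports/:id
# for status and fetch /r/:id.json for the finished report.
```

The bundled `scripts/submit.sh` wrapper and the `jank` CLI cover the same round trip for CI runners.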

What about admin / ops?

An admin dashboard at /admin shows every report, its queue/running/done state in real time, with one-click retry on failures. Per-key quotas, per-account demo limits, and a separate ops API (see docs/api-internal.md) cover the operator side. Artifacts are versioned in object storage; metadata and run state live in Firestore (or a Firestore-compatible store on self-host).