Have the Team That Tested Chrome Test Your Apps

Quality Intelligence

The fastest, smartest, most efficient way to add AI to your testing


FAQ

Every run option, deployment path, and integration we support — covered.

Can I bring my own LLM?

Yes. Pick from Anthropic Claude, OpenAI (GPT-4o, GPT-5, etc.), Google Gemini, or Azure OpenAI. For fully air-gapped or zero-egress setups, point Jank at a self-hosted endpoint (Ollama, vLLM, LocalAI, or any OpenAI-compatible API). Provider and model are passed per request via the provider / model fields, or set globally per deployment.
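For example, overriding both per request (the endpoint and X-Api-Key auth are covered in the API answer below; the specific provider and model strings shown are just examples):

    # Per-request provider/model override. Endpoint and auth header are
    # documented below; the provider/model values are example strings.
    curl -X POST https://reports.jank.ai/api/reports \
      -H "X-Api-Key: $JANK_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "urls": ["https://staging.example.com"],
            "provider": "anthropic",
            "model": "claude-sonnet-4-5"
          }'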

Can I self-host on my own private network?

Yes — three ways:

  • Docker / Docker Compose — one-line bring-up via cloud/enterprise/docker-compose.yml.
  • Kubernetes — manifests in cloud/enterprise/kubernetes/; tested on EKS, GKE, AKS, and bare-metal k3s.
  • Single VM — clone, set ADMIN_TOKEN + an LLM key, docker compose up -d --build (sketched below). Up in under 10 minutes.

All three ship as the same Node + Playwright + cloudflared image, with Firestore (or any Firestore-API-compatible backend) for metadata and a configurable object store for artifacts.
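For the single-VM path, the whole bring-up is roughly this (a sketch: the repo URL is a placeholder, and the LLM key variable depends on the provider you chose):

    # Single-VM bring-up sketch. The repo URL is a placeholder; swap the
    # LLM key for whichever provider you configured.
    git clone <your-jank-enterprise-repo> jank
    cd jank/cloud/enterprise
    echo 'ADMIN_TOKEN=change-me'         > .env   # documented admin gate
    echo 'ANTHROPIC_API_KEY=sk-ant-...' >> .env   # key for your chosen provider
    docker compose up -d --build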

Can I run fully air-gapped?

Yes. Pair a self-hosted deployment with a self-hosted LLM endpoint (Ollama / vLLM / LocalAI) and the entire system runs without outbound internet — neither testers.ai nor any LLM vendor sees your traffic or your reports. The hosted UI, the runner, the LLM call, and the artifact store all live on your network.
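As a sketch of the zero-egress wiring, assuming the deployment reads an OpenAI-compatible base URL from its environment (the variable names here are illustrative, not documented settings):

    # Illustrative env wiring for an air-gapped LLM. Ollama exposes an
    # OpenAI-compatible API at /v1 on port 11434; vLLM and LocalAI do the
    # same on their own ports. Variable names are placeholders.
    LLM_BASE_URL=http://ollama.internal:11434/v1
    LLM_MODEL=llama3.1:70b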

How do I tunnel into private / VPN-protected targets?

The runner can bring up a tunnel for the duration of a single test, then tear it down; a sample tunnel spec is sketched after this list. Supported tunnel types:

  • Tailscale — join the runner to your tailnet; address the target by its tailnet hostname.
  • cloudflared — runs the Cloudflare connector inside the runner container.
  • ngrok — for ad-hoc reverse tunnels.
  • SSH reverse — opens an SSH reverse forward to your jump host.
  • WireGuard, OpenVPN, IPSec — supported on self-hosted deployments.
  • GCP VPC connector — for managed Cloud Run deployments inside your GCP project.
  • Reverse proxy — pass-through if your target is already exposed via a corporate reverse-proxy host.
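For illustration, a per-run Tailscale tunnel might be requested like this (the tunnel field's name and shape are assumptions for this sketch; the endpoint and auth header come from the API answer below):

    # Sketch: run against a tailnet-only target. The "tunnel" field's
    # name and shape are illustrative, not a documented schema.
    curl -X POST https://reports.jank.ai/api/reports \
      -H "X-Api-Key: $JANK_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "urls": ["http://app.your-tailnet.ts.net"],
            "tunnel": { "type": "tailscale" }
          }'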

Can I import / export tests + findings?

Yes. Every stored report renders to multiple formats on demand:

  • JSON — full report (issues, severity, evidence, persona reviews, flow steps, screenshots, timing). Stable schema, version-tagged. GET /r/:id.json
  • Markdown — a human-readable report with embedded screenshots and one fix-prompt per issue. GET /r/:id.md
  • TXT — a flat list of fix prompts, one per issue, ready to pipe into your AI coding agent (Claude, Cursor, Copilot, Antigravity).
  • HTML — the web report itself (/r/:id), shareable as a permanent URL.

Test cases generated by Jank can also be exported to CSV, Jira, TestRail, or Xray directly from the chat UI.
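Pulling the same report in each format is just a change of extension (the report id is a placeholder; the .txt path is assumed by analogy with .json and .md):

    REPORT=abc123   # placeholder report id
    curl -s "https://reports.jank.ai/r/$REPORT.json" -o report.json   # full structured report
    curl -s "https://reports.jank.ai/r/$REPORT.md"   -o report.md     # human-readable report
    curl -s "https://reports.jank.ai/r/$REPORT.txt"  -o fixes.txt     # assumed path: flat fix-prompt list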

Can reports be shared?

Yes. Every run gets a permanent shareable URL (https://reports.jank.ai/r/<id> on hosted, or your equivalent base URL on self-host). Set visibility: "public" (anyone with the link can view the report) or visibility: "private" (admin-token gated). An optional emails list sends a "report ready" email when a run completes.

How long does a run take?

A full multi-dimensional run (bug finding + exploratory + functional + competitive + personas + accessibility + crawl) typically lands in ~12–15 minutes. Smaller scoped runs (single-page bugs only, no personas, no flows) finish in 3–5 minutes. Every agent runs in parallel — adding more dimensions doesn't multiply the runtime, it just lights up more lanes.

What can I configure per run?

All of the following, with a sample request sketched after the list:
  • URLs — 1 to 25 per submission, batch-mode supported.
  • Subpages — let the AI pick N additional pages from the entry URL (or disable).
  • Flows — generate N test flows; pass customPrompt to steer the agent (e.g., "focus on the checkout funnel").
  • Personas — generate N persona reviews with optional customPrompt to bias toward your audience.
  • Provider + model — pick LLM per-run.
  • Visibility — public / private / admin-token gated.
  • Tunnel spec — Tailscale, cloudflared, ngrok, SSH, WireGuard, OpenVPN, IPSec, GCP VPC.
  • Email notifications — comma-separated list of recipients per run.
  • Custom checks — per-brand / per-customer test rules layered on top of the standard suite.
  • Label — free-form tag for grouping in the admin dashboard.
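Put together, a fully loaded submission might look like this sketch (urls, provider, model, customPrompt, visibility, emails, and label are field names used in this FAQ; the subpages / flows / personas keys and counts are illustrative):

    # Fully loaded submission sketch. Field names cited elsewhere in this
    # FAQ are real; the subpages/flows/personas nesting is illustrative.
    curl -X POST https://reports.jank.ai/api/reports \
      -H "X-Api-Key: $JANK_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "urls": ["https://shop.example.com"],
            "subpages": 5,
            "flows": 3,
            "personas": 4,
            "customPrompt": "focus on the checkout funnel",
            "provider": "openai",
            "model": "gpt-4o",
            "visibility": "private",
            "emails": "qa@example.com,lead@example.com",
            "label": "release-2.31"
          }'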

Does it have a REST API + CLI?

Yes. POST /api/reports with a JSON list of URLs and the runner returns report IDs immediately; poll GET /api/reports/:id for status, fetch /r/:id.json for the result. Auth is via an X-Api-Key header. There's also a scripts/submit.sh curl wrapper bundled with the cloud package, and a jank CLI for CI runners (GitHub Actions, GitLab CI, Jenkins, CircleCI).
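End to end, the loop looks roughly like this (the endpoints and X-Api-Key header are as documented; the JSON keys in the responses, like id and status, are assumed shapes for the sketch):

    # Submit, poll until done, fetch the result. Response keys ("id",
    # "status", "done") are assumptions, not a documented schema.
    ID=$(curl -s -X POST https://reports.jank.ai/api/reports \
           -H "X-Api-Key: $JANK_API_KEY" -H "Content-Type: application/json" \
           -d '{"urls": ["https://staging.example.com"]}' | jq -r '.[0].id')

    until curl -s -H "X-Api-Key: $JANK_API_KEY" \
            "https://reports.jank.ai/api/reports/$ID" \
          | jq -e '.status == "done"' >/dev/null; do
      sleep 30
    done

    curl -s "https://reports.jank.ai/r/$ID.json" -o report.json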

What about admin / ops?

An admin dashboard at /admin shows every report and its queue / running / done state in real time, with one-click retry on failures. Per-key quotas, per-account demo limits, and a separate ops API (see docs/api-internal.md) cover the operator side. Artifacts are versioned in object storage; metadata and run state live in Firestore (or a Firestore-compatible store on self-host).