testers.ai

How To Run

coTestPilot.ai

Getting Started
1. Launch App
2. Set Path
3. Edit Profile
4. Manager Prompt
5. Start Checks
6. Monitor Workflows
7. Review Issues
8. Review Flows
9. Exploratory Notes
10. Test Case Notes
11. Generate Report
12. Share Report

Step #1: Launch the testersapp

Start by launching the testersapp to begin your AI-powered testing journey.

Launch the testersapp

Step #2: Set path to testers app

Configure the path to your testers app from testers.ai in the settings.

Set path to testers app

Step #3: Edit user profile

Edit your profile - all ratings and comments will be recorded as coming from this human expert tester profile.

Edit user profile

Step #4: Set test manager prompt (optional)

Optionally set a test manager prompt, which applies to all AI testers; it can contain anything, such as login instructions or what not to test.

Set test manager prompt

Step #5: Click start checks button

Click the start checks button on the app card to begin the testing process.

Click start checks button

Step #6: Monitor running workflows

Track the progress of the running workflows, which shows both AI and human steps.

Running workflows status progress

Step #7: Review Issues found by AI (Zero-Shot Workflow)

After the Sub-Zero AI automation step is completed, review the issues discovered by AI testing agents during the dynamic checks. This step is part of the 'Zero-Shot' workflow phase.

Review Issues found by AI

Step #8: Review test flows (Zero-Shot Workflow)

Review the resulting test flows from the testing agent's dynamic checks. This step is also part of the 'Zero-Shot' workflow phase.

Review test flows

Step #9: Exploratory Testing Notes (One-Shot Workflow)

If you performed any additional testing based on the AI's issues and checks, or on your own intuition, describe it here so it can be included in the report as part of the 'One-Shot' step in the workflow.

Exploratory Testing Notes

Step #10: Test Case Notes

Describe any new AI-generated or other test cases or configurations you created to add coverage for the next test run.

Test Case Notes

Step #11: Generate test report

Generate a comprehensive test report with all findings and recommendations.

Generate test report

Step #12: Share Generated Report

The generated report opens in a web page. Use 'Save As' to save the entire report as a single HTML file for easy sharing.

Simple report full

Quick Start Commands

Get started with testers.ai by running these two simple commands:

Step 1: Generate AI Tests - This command generates hundreds of dynamic checks autonomously:

./testers gen-ai-tests https://www.bing.com/ --app-name "Bing" --browser chrome --include-feature-tests

Step 2: Run Tests - This command runs the static and dynamic tests:

./testers test https://www.bing.com/ --app-name "Bing" --browser chrome --max-tests 20
Note:
Replace "https://www.bing.com/" with your application's URL and adjust the app name accordingly.
Test Case Storage:
Test cases are generated in the folder '/cache/<app_name>', and will contain hundreds or thousands of tests. View example of generated tests for Bing.com
Additional Flags:
  • --browser chrome - Use the installed version of Chrome (or whichever browser you prefer) instead of Chromium
  • --debug - Enable verbose logging
  • --team testers --activation-code XXXX - If you have a paid version, this unlocks all features
  • --max-tests 20 - Maximum number of interactive checks to run
  • --custom-prompt "username is jason and password is 123456" - Use the --custom-prompt flag to tell the AI anything you like: what not to do, what to focus on, etc.
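Scripting the Quick Start:
If you prefer to drive these commands from a script or CI job, a minimal Python sketch like the one below can wrap the same CLI calls. It only uses the command and flags shown above; the target URL and app name are placeholders to replace with your own.

# run_testers.py - minimal sketch for scripting the testers.ai CLI commands above.
# The URL and app name are placeholders; replace them with your application's values.
import subprocess

TARGET_URL = "https://www.bing.com/"   # your application's URL
APP_NAME = "Bing"                      # adjust the app name accordingly

def run(cmd):
    # Run a testers.ai command and raise an error if it fails.
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 1: generate hundreds of dynamic checks for the target application.
run(["./testers", "gen-ai-tests", TARGET_URL,
     "--app-name", APP_NAME, "--browser", "chrome", "--include-feature-tests"])

# Step 2: run the static and dynamic tests, capped at 20 interactive checks.
run(["./testers", "test", TARGET_URL,
     "--app-name", APP_NAME, "--browser", "chrome", "--max-tests", "20"])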
Report Generation & Output:
After running the tests, testers.ai generates a comprehensive report and automatically opens it in your browser. The report includes:
  • Quality Summary - Overall assessment and grade of the application
  • Issues Found - Detailed list of bugs and quality issues discovered
  • Interactive Checks Performed - Results from all functional test cases executed
  • Persona Feedback - User experience insights from AI personas
All data is also saved in local JSON files to make it easy to post-process for CI/CD pipelines, custom dashboards, or issue uploading. View example report | For complete details on output structure and file formats, see: https://testers.ai/how_it_works.html
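Post-Processing Sketch:
As a rough illustration of that post-processing, the snippet below loads bug objects from one of the local JSON files and prints the highest-priority findings first. The file name and location are assumptions for this example; check your own output directory (or the how_it_works page above) for the actual structure. The field names used here come from the bug schema described in the next section.

# summarize_bugs.py - hedged sketch: assumes the run produced a JSON list of bug
# objects using the testers.ai bug schema. The path below is a placeholder.
import json
from pathlib import Path

output_file = Path("output/bugs.json")  # assumed location; adjust to your run
bugs = json.loads(output_file.read_text())

# Print the highest-priority findings first.
for bug in sorted(bugs, key=lambda b: b.get("bug_priority", 0), reverse=True):
    title = bug.get("bug_title", "untitled")
    priority = bug.get("bug_priority", "?")
    confidence = bug.get("bug_confidence", "?")
    print(f"[priority {priority}, confidence {confidence}] {title}")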

Processing Bug Output

testers.ai uses a modern, AI-first bug reporting schema that provides comprehensive context and reasoning for each discovered issue. This rich format can be easily converted to standard bug report formats for seamless integration into existing reporting systems and CI/CD pipelines.

testers.ai Bug Schema Fields

Each bug object in the testers.ai output contains the following comprehensive information:

  • bug_title (string): Short, descriptive title of the bug
  • bug_type (array): Categories (e.g. "usability", "WCAG", "security")
  • bug_confidence (integer): 1–10 score reflecting confidence it's a real bug
  • bug_priority (integer): 1–10 score indicating impact/severity
  • bug_reasoning_why_a_bug (string): Explanation of why this is considered a bug
  • bug_reasoning_why_not_a_bug (string): Counterargument, acknowledging uncertainty
  • suggested_fix (string): Recommended fix or mitigation strategy
  • bug_why_fix (string): Justification for why this should be fixed
  • what_type_of_engineer_to_route_issue_to (string): Suggested role (e.g. "Frontend Engineer")
  • possibly_relevant_page_console_text (string/null): Captured browser console text (if relevant)
  • possibly_relevant_network_call (string/null): Relevant network request URL
  • possibly_relevant_page_text (string/null): Snippet of page text related to the bug
  • possibly_relevant_page_elements (string/null): DOM element info (e.g. tag, href, id)
  • tester (string): Name of the human/AI tester who found it
  • byline (string): Title or role of the tester
  • image_url (string): (Optional) Image avatar of the tester
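For reference, a single bug object built from these fields might look like the following in Python; the values are invented for illustration and only the field names come from the schema above.

# Example bug object using the documented testers.ai schema fields.
# All values below are made up for illustration purposes.
example_bug = {
    "bug_title": "Search button unreachable via keyboard",
    "bug_type": ["usability", "WCAG"],
    "bug_confidence": 8,                     # 1-10 confidence it's a real bug
    "bug_priority": 6,                       # 1-10 impact/severity
    "bug_reasoning_why_a_bug": "The button cannot be focused with Tab.",
    "bug_reasoning_why_not_a_bug": "A keyboard shortcut may exist elsewhere.",
    "suggested_fix": "Add tabindex and a visible focus state to the button.",
    "bug_why_fix": "Keyboard-only users cannot complete the search flow.",
    "what_type_of_engineer_to_route_issue_to": "Frontend Engineer",
    "possibly_relevant_page_console_text": None,
    "possibly_relevant_network_call": None,
    "possibly_relevant_page_text": "Search",
    "possibly_relevant_page_elements": "<button id='search-btn'>",
    "tester": "AI Persona: Accessibility Reviewer",
    "byline": "Accessibility specialist",
    "image_url": None,
}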

Converting Bug Reports with convert.py

The convert.py utility can transform testers.ai bug reports into multiple standard formats for integration with your existing tools and workflows. Download the script to get started.

Supported Output Formats:
XML Test Formats:
junit, xunit, nunit, xunitnet, trx, testng, allure-xml, mocha-junit, jest-junit
JSON Test Formats:
pytest-json, allure, mocha, jest
Text Formats:
unittest, pytest, tap
Data Formats:
csv, tsv, json
Usage Examples:

Convert to all formats:

python convert.py /path/to/output/directory

Convert to specific format:

python convert.py /path/to/output/directory --format junit

Convert and compress results:

python convert.py /path/to/output/directory --zip

Convert to CSV for spreadsheet analysis:

python convert.py /path/to/output/directory --format csv
Key Benefits of testers.ai Bug Schema:
  • Opinionated & Verbose - Built to justify each bug and anticipate objections
  • Human-Readable - Structured enough for automated conversion yet easy to understand
  • Full Traceability - Links back to specific page content, console logs, and network calls
  • AI Reasoning - Includes both why something is a bug AND why it might not be
  • Actionable Insights - Suggests fixes and appropriate engineer types for routing
Integration Examples:
  • CI/CD Pipelines - Convert to JUnit XML for Jenkins, GitHub Actions, or Azure DevOps (a gating sketch follows this list)
  • Test Management - Import CSV/JSON into Jira, TestRail, or custom dashboards
  • Quality Metrics - Analyze trends using CSV exports in Excel or BI tools
  • Developer Workflow - Use pytest/unittest formats for local development testing
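Pipeline Gating Sketch:
As one concrete (and hedged) example of the CI/CD case above, the snippet below fails a pipeline step when high-priority, high-confidence bugs are present. The thresholds and the bugs.json path are assumptions to tune for your own pipeline.

# ci_gate.py - hedged sketch: fail the build if serious bugs were reported.
# Assumes bug objects follow the testers.ai schema; the path and thresholds
# below are placeholders for your own pipeline.
import json
import sys
from pathlib import Path

PRIORITY_THRESHOLD = 7     # treat priority 7+ as build-blocking (assumption)
CONFIDENCE_THRESHOLD = 7   # ignore low-confidence findings (assumption)

bugs = json.loads(Path("output/bugs.json").read_text())
blocking = [
    b for b in bugs
    if b.get("bug_priority", 0) >= PRIORITY_THRESHOLD
    and b.get("bug_confidence", 0) >= CONFIDENCE_THRESHOLD
]

for bug in blocking:
    print("BLOCKING:", bug.get("bug_title"))

# A non-zero exit code makes Jenkins / GitHub Actions / Azure DevOps fail the step.
sys.exit(1 if blocking else 0)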

IDE Extensions

Integrate AI testing directly into your development workflow. Run tests, view results, and fix issues without leaving your IDE.

Supported IDEs

  • Visual Studio Code - Most popular editor
  • Cursor - AI-powered editor
  • Windsurf - Modern development
Key Features
  • One-Click Testing - Run AI tests directly from your editor with a single command
  • Real-Time Results - View test results and bug reports inline with your code
  • Quick Fix Suggestions - Get AI-powered suggestions for fixing discovered issues
  • Integration with Existing Workflows - Works alongside your current testing tools and CI/CD pipelines
  • Custom Test Configuration - Configure test parameters and AI prompts directly in your IDE
Development Workflow Integration

Our IDE extensions seamlessly integrate with your existing development workflow:

  • Pre-commit Testing - Run AI tests before committing code changes (see the hook sketch after this list)
  • Debug Integration - Link test failures directly to problematic code sections
  • Team Collaboration - Share test results and bug reports with team members
  • Version Control - Track test results across different code versions and branches
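A minimal git pre-commit hook that runs the same CLI outside the IDE could look like the sketch below; the URL, app name, and test cap are placeholders, and the hook simply blocks the commit if the testers.ai run fails.

#!/usr/bin/env python3
# .git/hooks/pre-commit - hedged sketch: run a quick testers.ai check before each commit.
# The URL, app name, and --max-tests value are placeholders for your project.
import subprocess
import sys

result = subprocess.run([
    "./testers", "test", "http://localhost:3000/",  # assumed local dev URL
    "--app-name", "MyApp",
    "--browser", "chrome",
    "--max-tests", "5",
])

# A non-zero exit code from the hook blocks the commit.
sys.exit(result.returncode)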

Check any code with a click

Check Any Website

Fully Managed

Let our experts handle everything. We set up, run, and manage your AI Testing infrastructure while you focus on building great products.

Expert Setup

Our team of testing experts will configure and optimize your AI testing environment for maximum effectiveness.

4-Shot Testing Flow

Our AI Testing Agents execute the proven 4-shot testing methodology, which includes:

  • Shot 1: Initial test generation and execution
  • Shot 2: Analysis and refinement based on results
  • Shot 3: Targeted testing of identified issues
  • Shot 4: Final validation and comprehensive reporting
Learn more about the 4-shot testing flow →
24/7 Monitoring

Continuous monitoring and alerting to ensure your testing infrastructure runs smoothly around the clock.

Learn more about our monitoring services →