unittest_testersai
The easiest way to AI-upgrade your existing unittest suite. You already have the tests — this adds AI Checks at the moments that matter, without replacing your framework, runner, or CI pipeline.
Inherit TestersAIMixin on your TestCase, then use a built-in analyze_* call or define your own.
# Download the bundle from the Downloads page, then:
pip install ./testersai-python-unittest-0.1.0.tar.gz
A complete, runnable Python + unittest example.
import unittest

from selenium import webdriver

from unittest_testersai import TestersAIMixin

class CheckoutTest(TestersAIMixin, unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Chrome()

    def tearDown(self):
        self.driver.quit()

    def test_checkout(self):
        self.driver.get("https://shop.example/cart")
        r = self.taScreenshot(self.driver.get_screenshot_as_png())  # log-only check
        self.driver.find_element("id", "checkout").click()
        r2 = self.taScreenshot(self.driver.get_screenshot_as_png())
        self.assertTestersAIClean(r2)  # opt-in hard fail
Results are logged through the test runner's stream; assertTestersAIClean() fails via a standard AssertionError.
If the AI call fails (rate limit, hang, firewall, no network), the SDK gives up fast and returns a skipped result. Your unittest test is never blocked.
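The fail-open behavior above can be pictured with a short sketch. Everything in it (CheckResult, guarded_ai_check, the status field) is illustrative, not the SDK's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class CheckResult:
    status: str                  # e.g. "clean", "findings", or "skipped"
    findings: list = field(default_factory=list)

def guarded_ai_check(call):
    """Fail-open wrapper: any error from the AI call becomes a skipped result."""
    try:
        return call()
    except Exception:            # rate limit, hang, firewall, no network
        return CheckResult(status="skipped")

def unreachable_backend():
    raise ConnectionError("no route to host")

result = guarded_ai_check(unreachable_backend)
print(result.status)             # prints "skipped"
```

The point of the pattern: the AI check can only ever add information, never subtract reliability, because every failure path degrades to "skipped" instead of raising into the test.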
Rule of thumb: wherever a human reviewer would pause to look during a manual run. These recommendations are tuned for unittest — pick the ones that fit your suite.
- After a navigation or reload: the page just re-rendered. Ask "does this look right?" before any interaction.
- After a state-changing action: login, add-to-cart, toggle, submit. The UI just reflected a new state, which is where regressions hide.
- Next to an existing assertion: you were about to check one thing. Ask the AI about everything else for free.
- After an async wait: API returned, spinner gone, toast shown. Catch broken empty states and stale data.
- At the end of every test: a single console + network check catches issues your assertions ignored.
- Across viewports and themes: mobile vs. desktop, light vs. dark, locale change. One call per viewport.
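The end-of-test check can live in tearDown so every test gets it for free. This is a sketch, not the SDK's documented API: taLogs() here is a hypothetical stand-in for whatever console/network check the SDK exposes, stubbed out so the example runs standalone.

```python
import unittest

class AIBaseCase(unittest.TestCase):
    """Base-class sketch: one end-of-test AI log check for every test."""

    def taLogs(self):
        # Stub so this sketch runs on its own; the real mixin would provide
        # the actual console + network check (name assumed, not documented).
        class R:
            findings = []
        return R()

    def tearDown(self):
        r = self.taLogs()
        if r.findings:  # log-only by default; escalate to a hard fail per suite policy
            print(f"AI flagged {len(r.findings)} issue(s) in {self.id()}")

class CheckoutSmokeTest(AIBaseCase):
    def test_cart_loads(self):
        self.assertTrue(True)  # your real assertions go here
```

Putting the check in a shared base class means individual tests stay untouched, which matches the "without replacing your framework" pitch above.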
setUp: spin up a driver and launch the AI session in one place. Your existing TestCase subclass keeps its structure.
Or grab the monolith ZIP (all languages, all adapters).