Whatnot
App Quality Report
Powered by Testers.AI
B-80%
Quality Score
19
Pages
324
Issues
7.9
Avg Confidence
7.8
Avg Priority
105 Critical · 168 High · 50 Medium · 1 Low
Testers.AI AI Analysis

Whatnot scored B- (80%) with 324 issues across 7 tested pages, ranking #8 of 8 Testlio portfolio apps. That's 125 more issues than the category average of 199.2 (0th percentile).

Top issues to fix immediately:
- "Hero QR code for app download is too small to scan comfortably" — Increase the QR code size to 180-240px on desktop, maintain aspect ratio, and add a clear text label 'Download Whatno...
- "HTTP 429 Too Many Requests on resource load" — Identify the resource(s) returning 429 and implement retry with exponential backoff, introduce client-side request th...
- "Network DNS resolution failure (ERR_NAME_NOT_RESOLVED) for multiple re" — Validate that all resource URLs point to resolvable domains.
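The retry-with-exponential-backoff recommendation above can be sketched as a small wrapper (a minimal sketch; the function names, delays, and retry limits are illustrative, not taken from Whatnot's code):

```typescript
// Deterministic exponential backoff: 500ms, 1s, 2s, ... capped at 8s.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Retry a request-like async function while failures are retryable (e.g. HTTP 429).
async function fetchWithRetry<T>(
  doRequest: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 4,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await doRequest();
    } catch (err) {
      // Give up on the last attempt or on non-retryable errors.
      if (attempt + 1 >= maxAttempts || !isRetryable(err)) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```

In practice, a server-provided `Retry-After` header, when present on the 429 response, should take precedence over the computed delay.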

Weakest area is usability (6/10): limited navigation visible beyond login; primary actions are unclear; QR code and 'How it works' CTA exist but without clear pa...

Quick wins: Add a persistent navigation bar with product categories, search, and filters for quick access. Improve accessibility: provide alt text for all images, semantic headings, and keyboard navigability; ensure color....

Qualitative Quality · chart comparing Whatnot, Category Avg, and Best in Category
Issue Count by Type: Content 71 · A11y 65 · UX 28 · Visual 11 · Security 3 · Legal 3
Pages Tested · 19 screenshots
Detected Issues · 324 total
1
AI/LLM endpoint detected on page load (privacy/perf risk)
CRIT P9
Conf 9/10 Other
Prompt to Fix
Refactor AI/LLM usage to lazy-load only after user interaction and with explicit consent; remove the global AI endpoint detection log; ensure requests do not leak sensitive data and stay within token/data limits.
Why it's a bug
Console logs flag an AI/LLM endpoint, and AI/LLM activity appears on initial paint, risking data leakage and performance penalties. These calls should be lazy-loaded or gated behind explicit consent.
Why it might not be a bug
If the endpoint is only used for telemetry and is properly gated, it may not be a bug; but detection indicates risk.
Suggested Fix
Move LLM interactions behind user gesture or opt-in; remove detectors; ensure requests are sanitized and lightweight; minimize payloads; remove console warnings.
Why Fix
Improves user privacy and performance; reduces risk of leaking prompts/data.
Route To
Frontend Privacy Engineer / Backend API Integrations
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: ⚠️ AI/LLM ENDPOINT DETECTED
2
Insecure HTTP request to Privacy page (HTTP non-secure transport) with redirects
CRIT P9
Conf 9/10 Security · Other
Prompt to Fix
Audit all privacy-policy related URLs in the frontend. Ensure all fetch/XHR requests use HTTPS. Replace any http:// references with https://, and add edge/server-side redirects to enforce HTTPS (prefer HSTS). Add a quick regression test that privacy URLs are served over HTTPS only.
Why it's a bug
The network activity shows a privacy page fetch over HTTP (http://www.whatnot.com/privacy) with a 307 redirect, which exposes traffic on the network and could be intercepted. Even if it ends up on HTTPS, the initial insecure request is a security risk and should be eliminated.
Why it might not be a bug
If the HTTP request is immediately redirected to HTTPS at the edge, some may argue the risk is mitigated; however, relying on redirects is not a robust security pattern and leaves traffic potentially exposed during the initial request.
Suggested Fix
Replace all http:// URLs with https:// in frontend code for privacy and other sensitive endpoints; enforce HTTPS-only fetch calls; configure edge/server-side redirects to always redirect HTTP to HTTPS; consider HSTS in response headers.
Why Fix
Eliminates exposure to man-in-the-middle intercepts, improves user privacy, and aligns with security best practices.
Route To
Security Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: GET http://www.whatnot.com/privacy - Status: N/A; GET http://www.whatnot.com/privacy - Status: 307; INSECURE: HTTP (non-HTTPS) request
Network: GET http://www.whatnot.com/privacy
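The HTTPS-enforcement fix above can be paired with a simple client-side upgrade helper and a regression check over known sensitive URLs (a sketch under the report's recommendation; function names are illustrative):

```typescript
// Upgrade any http:// URL to https:// before it is fetched.
function toHttps(url: string): string {
  return url.replace(/^http:\/\//i, "https://");
}

// Regression check: return every URL that is still plain HTTP,
// so a test can fail the build if any sensitive endpoint is insecure.
function findInsecureUrls(urls: string[]): string[] {
  return urls.filter((u) => /^http:\/\//i.test(u));
}
```

On the server side this should be backed by a permanent HTTP-to-HTTPS redirect and an HSTS response header such as `Strict-Transport-Security: max-age=31536000; includeSubDomains`, so browsers never reissue the insecure first request.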
3
AI/LLM endpoint calls detected on page load
CRIT P9
Conf 9/10 Security · Other
Prompt to Fix
Identify all client-side AI/LLM endpoint calls triggered on page load. Remove or defer these calls behind a user action or an explicit consent prompt. If dynamic content is necessary, implement a lazy-loading pattern with proper user notification and ensure all endpoints are compliant with privacy requirements.
Why it's a bug
Console shows explicit flags '⚠️ AI/LLM ENDPOINT DETECTED', suggesting client-side calls to AI endpoints (potentially third-party) on or before user interaction. This raises privacy, security, and performance concerns and may violate consent expectations.
Why it might not be a bug
If such calls are essential (e.g. for legal or AI-generated content), they might be considered expected behavior; even so, they should be gated behind explicit user consent or deferred until interaction, otherwise they mislead users and can expose data unintentionally.
Suggested Fix
Move AI/LLM interactions behind explicit user action or consent banners. Defer or server-side-render AI content where possible. Remove or lazy-load any client-side LLM calls and audit all endpoints for privacy/compliance.
Why Fix
Mitigates privacy risks, reduces unnecessary network activity on load, and aligns with best practices for GenAI-powered features seen in AI-generated code patterns.
Route To
Frontend Engineer / Security Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: "⚠️ AI/LLM ENDPOINT DETECTED"
Network: POST https://pactsafe.io/send - Status: N/A; GET https://pactsafe.io/send - Status: 200
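The explicit-consent gating recommended for this issue can be sketched as a consent flag that blocks outbound AI calls until the user opts in (a sketch; `ConsentGate` and its API are illustrative, not an existing library):

```typescript
// Blocks AI/LLM calls until the user has explicitly opted in.
class ConsentGate {
  private granted = false;

  grant(): void { this.granted = true; }   // user accepted the consent prompt
  revoke(): void { this.granted = false; } // user withdrew consent

  // Returns the call's result, or null when consent is missing —
  // the caller renders a consent prompt instead of firing the request.
  guard<T>(call: () => T): T | null {
    return this.granted ? call() : null;
  }
}
```

Unlike gesture-based deferral, this pattern keeps the calls blocked until an affirmative opt-in, which is the stronger guarantee when third-party AI domains are involved.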
321 more issues detected
On-load LLM endpoint invocation detected (potential data lea...
Third-party AI domains invoked without explicit user consent
HTTP 429 Too Many Requests on resource load
and 318 more...
Unlock All 324 Issues
You're viewing the top 3 issues for Whatnot.
Sign up at Testers.AI to access the full report with all 324 detected issues, detailed fixes, and continuous monitoring.
Sign up at Testers.AI, or let us run the tests for you.