Amazon
App Quality Report
Powered by Testers.AI
B (83%)
Quality Score
17
Pages
250
Issues
8.1
Avg Confidence
7.9
Avg Priority
110 Critical · 104 High · 35 Medium · 1 Low
Testers.AI
>_ Testers.AI AI Analysis

Amazon scored B (83%) with 250 issues across 9 tested pages, ranking #5 of 8 Testlio portfolio apps. That's 51 more issues than the category average of 199.2, placing it in the 25th percentile.

Top issues to fix immediately: "AI/LLM endpoints invoked on page load (privacy/performance risk)": move AI calls behind user interaction or explicit consent. "AI/LLM endpoint detected and loaded on page load": defer any AI/LLM interactions until explicit user action or consent is obtained. "Overwhelming product grid with no visual hierarchy makes scanning difficult": introduce clear section headers.

Weakest area: accessibility (5/10). Color contrast and focus visibility may hinder keyboard and screen-reader users; alt text and skip links could be improved.

Quick wins: Consolidate navigation into a persistent, clear header with a strong, fast search and region-aware controls. Improve accessibility with better color contrast, visible focus states, skip links, and consistent alt text for images.
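The color-contrast portion of these accessibility fixes can be checked programmatically. Below is a minimal TypeScript sketch of the WCAG 2.x contrast-ratio formula; the function names are illustrative, not part of any Amazon codebase.

```typescript
// Relative luminance of an sRGB color (channels 0-255), per WCAG 2.x.
function relativeLuminance(r: number, g: number, b: number): number {
  const lin = (c: number): number => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two colors: 1 (identical) up to 21 (black on white).
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA requires 4.5:1 for normal text (3:1 would suffice for large text).
function meetsWcagAA(
  fg: [number, number, number],
  bg: [number, number, number]
): boolean {
  return contrastRatio(fg, bg) >= 4.5;
}
```

A check like this can run in CI against the app's design tokens so low-contrast pairs are caught before they ship.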

Qualitative Quality
Amazon
Category Avg
Best in Category
Issue Count by Type
A11y
71
Content
68
UX
24
Security
12
Visual
10
Pages Tested · 17 screenshots
Detected Issues ยท 250 total
1
AI/LLM endpoints invoked on page load (privacy/performance risk)
CRIT P10
Conf 9/10 Other
Prompt to Fix
Remove or lazily load all AI/LLM endpoint calls that occur on page load. Implement a user-consent step or opt-in, gated by a feature flag. Move calls to on-demand actions, limit payload size, and ensure no sensitive user data is sent without explicit consent.
Why it's a bug
Network logs show repeated AI endpoint calls (AI/LLM ENDPOINT DETECTED) and unagi.amazon.com requests during page load, implying AI interactions occur before user action. This can leak data, degrade performance, and conflict with user expectations and privacy norms.
Why it might not be a bug
If this is a controlled test environment or a feature-flag scenario, the behavior may be intentional, but it still needs explicit disclosure and opt-in. In production, this pattern is highly risky without consent and proper throttling.
Suggested Fix
Move AI calls behind user interaction or explicit consent. Lazy-load or defer AI requests until needed; add a privacy/consent banner and a feature flag to disable on initial paint; minimize data sent to AI endpoints and audit data-sharing scopes.
Why Fix
Protect user privacy, reduce unnecessary data transmission, improve performance, and align with user expectations and legal/compliance standards.
Route To
Frontend/Privacy Engineer / Security
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: ⚠️ AI/LLM ENDPOINT DETECTED
Network: POST https://unagi.amazon.com/1/events/com.amazon.csm.csa.prod - Status: N/A
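The suggested fix (a consent gate plus a feature flag so no AI call fires on initial paint) can be sketched as a small client-side guard. All names here are hypothetical illustrations, not Amazon's actual API, and `/api/ai-proxy` is an assumed backend proxy path.

```typescript
// Sketch of gating AI/LLM calls behind explicit consent and a feature flag.
interface AIGateState {
  featureFlagEnabled: boolean; // server-controlled kill switch
  userConsented: boolean;      // explicit opt-in, e.g. from a consent banner
  userInitiated: boolean;      // true only inside a user-gesture handler
}

// The endpoint is contacted only when all three conditions hold,
// so nothing can fire automatically on page load.
function mayCallAIEndpoint(state: AIGateState): boolean {
  return state.featureFlagEnabled && state.userConsented && state.userInitiated;
}

async function invokeAI(prompt: string, state: AIGateState): Promise<string | null> {
  if (!mayCallAIEndpoint(state)) return null; // skip the network call entirely
  // Hypothetical call to a backend proxy; payload kept minimal (no PII).
  const res = await fetch("/api/ai-proxy", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return res.ok ? res.text() : null;
}
```

Wiring `invokeAI` only into click/submit handlers (never into module init or load events) is what keeps the network log clean on first paint.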
2
AI/LLM endpoint detected and loaded on page load
CRIT P10
Conf 8/10 Security, Other
Prompt to Fix
Refactor to lazy-load AI/LLM interactions. Add a user-consent gate and remove or hide endpoint detection logs in production. Implement a secure backend proxy for AI calls and ensure no PII is sent without explicit consent.
Why it's a bug
The console indicates an AI/LLM endpoint is detected and there are network requests associated with it on initial paint. This can leak data, impact performance, and surprise users who did not opt in to AI processing.
Why it might not be a bug
If AI calls are strictly behind user action in production, this may be intentional; however, the screenshot shows an automatic detection marker and related requests, signaling unintentional or opaque behavior.
Suggested Fix
Defer any AI/LLM interactions until explicit user action or explicit consent is obtained. Move calls to a controlled backend/proxy with proper sanitization, audit data sent, and provide a visible opt-in. Remove production-time endpoint detection logs.
Why Fix
Prevents unintended data exposure, preserves user trust, and improves performance by avoiding unnecessary AI calls on first paint.
Route To
Security/Platform Engineer and Frontend Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: ⚠️ AI/LLM ENDPOINT DETECTED
Network: GET to AI-related or proxy endpoints observed in logs alongside the detection marker
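The backend-proxy sanitization suggested above might look like the sketch below. The field names in the block list are assumptions about what could leak, not a verified inventory of this app's payloads.

```typescript
// Sketch of server-side payload sanitization for a hypothetical AI proxy.
// Suspected-PII keys (illustrative) are stripped before forwarding upstream.
const PII_KEYS = new Set(["email", "name", "address", "phone", "sessionToken"]);

function sanitizeForAI(payload: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    if (!PII_KEYS.has(key)) clean[key] = value; // drop suspected PII fields
  }
  return clean;
}
```

An allow-list (keep only known-safe keys) is stricter than this block-list and is usually the better default for a production proxy.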
3
Insecure HTTP tracking/error endpoint used by AI-generated code
CRIT P9
Conf 9/10 Security, Other
Prompt to Fix
Change the Track&Report endpoint to use HTTPS. If cross-origin tracking is required, configure a sanctioned, secure domain and remove any hardcoded HTTP URLs from the client. Add retry with backoff and avoid logging sensitive data.
Why it's a bug
The console logs show an error tracking API call via an HTTP URL (http://tiny/1covqr6l8/wamazindeClieUserJava). This exposes data to MITM risk and violates secure transport practices.
Why it might not be a bug
If this is a test environment, it should still be isolated; in public/mobile UIs, this represents a security risk.
Suggested Fix
Replace all tracking/error endpoints with HTTPS, or remove external HTTP calls. Centralize error reporting behind a secure, authenticated channel. Validate that logs do not leak sensitive data.
Why Fix
Mitigates man-in-the-middle risks, preserves user privacy, and aligns with security best practices.
Route To
Security Engineer / Frontend Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: "Error logged with the Track&Report JS errors API(http://tiny/1covqr6l8/wamazindeClieUserJava)"
Network: http://tiny/1covqr6l8/wamazindeClieUserJava
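Enforcing HTTPS-only reporting endpoints, as suggested above, can be centralized in one helper that every error/tracking call goes through. This is a sketch; the allow-list host is illustrative.

```typescript
// Upgrade plain-HTTP reporting URLs to HTTPS and reject unsanctioned hosts
// before any request is made.
function secureReportingUrl(raw: string, allowedHosts: string[]): string {
  const url = new URL(raw);
  if (url.protocol === "http:") url.protocol = "https:"; // upgrade plain HTTP
  if (url.protocol !== "https:") {
    throw new Error(`Unsupported protocol: ${url.protocol}`);
  }
  if (!allowedHosts.includes(url.hostname)) {
    throw new Error(`Reporting host not sanctioned: ${url.hostname}`);
  }
  return url.toString();
}
```

Routing every tracking call through a helper like this also gives one place to add backoff/retry and to audit what leaves the client.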
247 more issues detected
AI endpoint calls on page load causing potential data leakage...
Missing autocomplete attributes on input fields
AI endpoint activity detected on page load without user consent...
and 244 more...
Unlock All 250 Issues
You're viewing the top 3 issues for Amazon.
Sign up at Testers.AI to access the full report with all 250 detected issues, detailed fixes, and continuous monitoring.
Sign up at Testers.AI, or let us run the tests for you.