Signal
App Quality Report
Powered by Testers.AI
B+ · 87%
Quality Score
6
Pages
80
Issues
8.0
Avg Confidence
7.8
Avg Priority
38 Critical · 27 High · 14 Medium · 1 Low
Testers.AI AI Analysis

Testing of Signal detected 80 issues across the site. The most critical finding was: AI endpoints detected on page load / potential on-load LLM calls. Issues span the Security, A11y, Performance, and Other categories. Persona feedback rated Visual highest (8/10) and Accessibility lowest (6/10).

Qualitative Quality (chart): Signal vs. Category Avg vs. Best in Category
Issue Count by Type
A11y: 29 · Content: 25 · UX: 12 · Security: 3
Pages Tested · 6 screenshots
Detected Issues · 80 total
1
AI endpoints detected on page load / potential on-load LLM calls
CRIT P9
Conf 9/10 · Security · Other
Prompt to Fix
Refactor page initialization to remove eager AI/LLM calls. Introduce a consent flag (e.g., showConsentModal) and ensure AI calls only occur after user consent. Add lazy-loading and exponential backoff for any necessary calls.
Why it's a bug
Console shows labels like 'AI/LLM ENDPOINT DETECTED' and multiple on-load AI-related assets. This implies LLM/embedding calls may be happening during initial paint, risking data leakage, performance issues, and user-consent gaps.
Why it might not be a bug
If on-load AI features are intentional and disclosed, the calls may be expected behavior. Even so, gating them behind explicit consent and lazy-loading them would address the UX and privacy concern.
Suggested Fix
Move any AI/LLM calls behind user interaction or explicit consent, implement a feature-flag to disable on-load AI calls, and lazy-load/defer requests with backoff and error handling. Audit payloads to avoid sending PII.
Why Fix
Reduces risk of data leakage, improves performance, and aligns with privacy expectations and regulatory considerations.
Route To
Frontend Architect / Privacy & Security Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: ⚠️ AI/LLM ENDPOINT DETECTED
Network: AI endpoint calls detected in logs; no explicit endpoint URLs shown in snippet
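The suggested fix above can be sketched as follows. This is a minimal illustration, not Signal's actual code: `grantAiConsent`, `withBackoff`, and `callAiEndpoint` are hypothetical names, and the retry parameters are assumptions.

```javascript
// Sketch: gate AI/LLM calls behind explicit user consent, and wrap any
// permitted call in exponential backoff. Nothing here fires on page load.

let consentGranted = false; // flips true only after the user opts in

function grantAiConsent() {
  consentGranted = true;
}

// Retry an async function, doubling the delay after each failure.
async function withBackoff(fn, { retries = 3, baseMs = 200 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
    }
  }
}

// All AI calls funnel through this gate: without consent, the call is
// skipped entirely instead of reaching the network.
async function callAiEndpoint(fn) {
  if (!consentGranted) {
    return { skipped: true, reason: 'no-consent' };
  }
  return withBackoff(fn);
}
```

With this structure, a feature flag that never calls `grantAiConsent` disables on-load AI calls site-wide.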
2
Console shows AI/LLM endpoint detection log (⚠️ AI/LLM ENDPOINT DETECTED)
CRIT P9
Conf 9/10 · Security · Other
Prompt to Fix
Remove the 'AI/LLM ENDPOINT DETECTED' console message or wrap it in a development-only guard so it does not execute in production. Search the codebase for any prints to console or logs that reveal internal AI endpoints and remove or anonymize them.
Why it's a bug
The page or build emits a clearly named diagnostic log about AI/LLM endpoints. Exposing internal detection or infrastructure details to end users or in production logs can leak implementation details and create security/privacy concerns.
Why it might not be a bug
If this log is strictly behind a debug flag in development builds, it should not appear in production; however, the screenshot shows it in the logs, indicating it’s accessible to users.
Suggested Fix
Remove or guard the log behind a verbose/development flag, ensure production builds strip such diagnostics, and audit for any other internal endpoint references leaking to the client.
Why Fix
Reduces information disclosure risk and prevents users from seeing internal tooling that could be exploited or confuse users.
Route To
Frontend Engineer / Security Engineer
Page
Tester
Jason · GenAI Code Analyzer
Technical Evidence
Console: ⚠️ AI/LLM ENDPOINT DETECTED
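One way to guard such diagnostics, sketched under the assumption that the build pipeline sets `process.env.NODE_ENV` (as webpack, Vite, and similar bundlers conventionally do, with dead-code elimination stripping the branch in production); `debugLog` is a hypothetical helper:

```javascript
// Sketch: route diagnostic messages through a development-only guard so
// they never reach production consoles.

const IS_DEV = process.env.NODE_ENV !== 'production';

function debugLog(...args) {
  if (IS_DEV) console.warn(...args); // bundlers can eliminate this branch
  return IS_DEV; // returned here only for testability
}

debugLog('⚠️ AI/LLM ENDPOINT DETECTED'); // visible in development builds only
```

Auditing for stray `console.*` calls (e.g. with an ESLint `no-console` rule) catches future regressions of the same kind.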
3
Tracking pixel request detected on signal.org without explicit user consent
CRIT P9
Conf 8/10 · Other
Prompt to Fix
Audit the two signal-pixel.png requests on https://signal.org to determine what data (IP, UA, referrer, geolocation, etc.) is being captured or transmitted via the beacon. Implement explicit user consent for analytics/tracking pixels, remove or anonymize any personally identifiable data in the pixel URL or payload, ensure the pixel is hosted under a domain with clear privacy disclosures, and add a user-facing privacy banner or settings toggle to opt out of tracking. Provide a patch that replaces the current pixel beacon with a privacy-compliant alternative (opt-in only) and updates the privacy policy accordingly.
Why it's a bug
Two GET requests for signal-pixel.png assets are flagged as tracking requests. These image beacons are typically used to track user activity (e.g., views, IP, UA) and may collect data without explicit consent indicators. No visible consent prompt or privacy notice is shown in the log excerpt, raising potential GDPR/CCPA concerns and eroding user privacy expectations.
Why it might not be a bug
If the site has a clearly stated privacy policy and explicit analytics/consent mechanisms, tracking pixels may be acceptable. However, the logs provide no evidence of consent prompts or opt-out controls for these specific requests, making this issue notably risky.
Suggested Fix
Implement explicit user consent for analytics/tracking beacons. Ensure tracking pixel requests are first-party or anonymized, avoid passing personal data in pixel requests, and provide a clear opt-in/opt-out mechanism. Consider moving to a consent-driven analytics solution and hosting the pixel on a consented domain. Add a privacy banner and document data collection in the privacy policy.
Why Fix
Fixing this reduces privacy risk, builds trust, and helps ensure compliance with data protection laws by making user consent explicit for tracking beacons.
Route To
Privacy Engineer / Frontend Web Engineer
Page
Tester
Pete · Privacy Networking Analyzer
Technical Evidence
Console: ⚠️ POTENTIAL ISSUE: Tracking request detected
Network:
GET https://signal.org/assets/images/features/signal-pixel.jpg - Status: N/A
GET https://signal.org/assets/images/features/Signal-pixel.jpg - Status: 200
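A consent-driven replacement for the pixel beacon could look like the sketch below. The URL, the `page`/`event` allow-list, and `buildPixelRequest` are illustrative assumptions, not Signal's actual analytics setup.

```javascript
// Sketch: build a tracking-pixel request only after explicit opt-in, and
// allow-list the query parameters so PII never rides along in the URL.

const PIXEL_URL = 'https://signal.org/assets/images/features/signal-pixel.png';

function buildPixelRequest(consented, params = {}) {
  if (!consented) return null; // no consent, no beacon at all

  const allowed = ['page', 'event']; // anything else (email, IP, etc.) is dropped
  const qs = new URLSearchParams();
  for (const key of allowed) {
    if (params[key] != null) qs.set(key, params[key]);
  }
  return `${PIXEL_URL}?${qs.toString()}`;
}
```

The caller would pass the user's stored consent state and fire the returned URL (e.g. via an `Image` object or `navigator.sendBeacon`) only when it is non-null.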
31 more issues detected
Placeholder UI button lacks label and ID (AI-generated stub ...
Empty button label causing accessibility issue
Placeholder Careers URL left in production (workworkwork)
and 28 more...
Unlock All 80 Issues
You're viewing the top 3 issues for Signal.
Sign up at Testers.AI to access the full report with all 80 detected issues, detailed fixes, and continuous monitoring.
Sign Up at Testers.AI or let us run the tests for you