Other eye-tracking screening tools exist. At first glance, the technology appears similar — a camera, a screen, measurements of pupil and gaze behavior. But the underlying question being asked, and the scientific paradigm behind it, are meaningfully different.
Most existing eye-tracking screening products are built on the (scientifically questionable) lie-detection model: they ask whether a subject shows deceptive responses to relevant and control questions. EyeTrek is built on a recognition model: it asks whether a subject’s brain recognizes information they claim not to know.
Detecting deception is inherently complex, and the link between eye behavior and lying is far less well established than the link between eye behavior and recognition. The latter is a more specific, more scientifically grounded claim: recognition is a measurable cognitive event, and the involuntary responses that accompany it are well-characterized in independent peer-reviewed research.
| | Other Eye-Tracking Vendors | EyeTrek |
|---|---|---|
| How it works | Measures involuntary eye behaviors (pupil dilation, blink rate, gaze movement) while the subject answers a structured question sequence. Some products combine these ocular signals with additional physiological sensors for automated scoring. The underlying approach tests for deception in response to questions — a lie-detection paradigm. | Measures involuntary gaze and fixation patterns and pupil dilation during brief, standardized visual stimulus protocols. Proprietary ML analytics classify patterns consistent with recognition and concealed knowledge. The underlying approach tests for what a subject recognizes — not whether they are lying. |
| Test duration | 15–45 minutes depending on product and protocol. Reports typically available within 5 minutes of test completion. | 2–4 minutes per session. Automated report generated in seconds. |
| Accuracy | Approximately 86–91% accuracy reported in vendor literature, depending on product variant. Independent validation is limited, and performance can vary across populations and test conditions. | >90% accuracy for concealed information detection in validated CIT protocols. |
| False detection rate | Vendor-reported false detection rates of approximately 10–14%. Independent validation is limited; rates may vary across populations and real-world conditions. | Low. Detection thresholds are configurable by the client to match the operational risk tolerance of the deployment environment. |
| Invasiveness | Standard products are non-contact (computer and eye-tracking camera). Some product variants add wrist and body sensors, increasing physical intrusiveness. | Non-contact throughout. A standard screen and commercially available eye-tracking camera. No additional sensors, wires, or physical contact of any kind. |
| Objectivity | Scoring is largely automated, reducing examiner dependence compared to polygraph. Products that incorporate physiological sensors add a further layer of data interpretation. | Fully standardized and automated. No examiner interpretation at any stage. Consistent results across sessions, operators, and locations. |
| Countermeasure resistance | Question-answer format creates a structured, predictable test sequence that may be more susceptible to coaching or preparation. | Visual stimulus protocols with dynamic adaptation reduce the predictability of test content. Attempts to suppress involuntary responses introduce their own detectable signatures. |
| Language & culture | Question-based format requires comprehension of written or spoken language. Cultural and literacy factors can affect results. | Language and culture agnostic. No verbal or written response required from the subject. |
| Training & personnel | Minimal training required for standard products (typically less than one day). Some product variants with physiological components require 1–3 days of practitioner familiarization. | No trained operator required. Administration is fully automated. Standard staff can run sessions after a brief onboarding. |
| Scalability | Session length of 15–45 minutes limits throughput. Suitable for individual assessments and investigations, but not designed for high-volume screening environments. | Designed for high-throughput environments. 2–4 minute sessions enable continuous screening at scale. Multiple simultaneous deployments possible with consistent results. |
| Scientific validation | Peer-reviewed evidence exists but is largely generated or sponsored by the vendor. Independent replication is limited. The question-answer deception paradigm — detecting lies — is a broader and less scientifically grounded claim than the concealed information test. | Grounded in the Concealed Information Test (CIT), a methodology with an independent peer-reviewed research base spanning decades. The recognition-based paradigm is a more specific and scientifically defensible claim, with well-characterized performance parameters. |
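To make the recognition paradigm concrete: in a classic CIT analysis, the subject's involuntary response to a probe item (something only a knowledgeable person would recognize) is compared against their responses to matched irrelevant items. The sketch below is purely illustrative — it is not EyeTrek's proprietary analytics, and the function name, stimulus values, and single-feature scoring are hypothetical simplifications; real systems combine multiple ocular features with ML classification.

```python
import statistics

def cit_recognition_score(probe_response, irrelevant_responses):
    """Toy CIT scoring: z-score of the probe's pupil response against
    the irrelevant-item baseline. Illustrative only — a hypothetical
    single-feature stand-in for multi-feature ML classification."""
    baseline_mean = statistics.mean(irrelevant_responses)
    baseline_sd = statistics.stdev(irrelevant_responses)
    return (probe_response - baseline_mean) / baseline_sd

# Hypothetical pupil-dilation deltas (mm) per stimulus presentation
irrelevants = [0.10, 0.12, 0.09, 0.11, 0.13]
probe = 0.24  # stronger involuntary response to the recognized item

score = cit_recognition_score(probe, irrelevants)
print(score > 3.0)  # a large deviation is consistent with recognition
```

The key design point is that the test never asks "are you lying?" — it only measures whether the probe stands out from items the subject genuinely has no knowledge of, which is why no verbal response, language comprehension, or examiner judgment is required.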