In response to fake candidates and AI-generated applications, many companies have added automated fraud detection to their hiring pipelines.

Could these systems accidentally punish people for using basic privacy tools?

Modern fraud detection analyzes patterns beyond your resume:

Your Digital Footprint

  • Does your IP address location match where you say you live?

  • Are you connecting from the same network each time?

  • Do your online profiles align with your application?

Your Communication Patterns

  • Is your phone number a standard mobile line or an internet-based (VoIP) number?

  • Do your email interactions show consistent timezone patterns?

  • Are there behavioral inconsistencies across interactions?
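As a rough illustration, the checks above amount to summing weighted penalties into a risk score. Everything below — the signal names, the weights, the structure — is my own assumption for the sake of the sketch, not any vendor's actual model:

```python
# Hypothetical sketch of how an ATS fraud detector might score signals.
# All field names and weights are illustrative assumptions.

def risk_score(candidate: dict) -> int:
    """Sum weighted penalties for each 'suspicious' signal."""
    score = 0
    # Digital footprint checks
    if candidate["ip_country"] != candidate["claimed_country"]:
        score += 10  # IP geolocation disagrees with stated location
    if candidate["distinct_networks"] > 3:
        score += 5   # connects from many different networks
    # Communication pattern checks
    if candidate["phone_type"] == "voip":
        score += 8   # internet-based number rather than a mobile line
    if candidate["timezone_variance_hours"] > 4:
        score += 5   # email activity spread across inconsistent timezones
    return score
```

Under these invented weights, a candidate whose IP geolocates abroad and who applies with a VoIP number already accumulates 18 points before a human ever reads the resume.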

Here's where it gets tricky. Many legitimate candidates use:

  • VPNs to protect their data on public Wi-Fi

  • iCloud Private Relay (built into iPhones) which masks location

  • Google Voice numbers to avoid giving out personal phone numbers

  • Privacy-focused browsers that limit tracking

These are smart security practices. But to an algorithm designed to catch fraudsters who use proxies and fake phone numbers, they could create the exact same signals.
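To see why, consider what the detector actually observes. It never sees intent, only signals. In this hypothetical sketch, a fraudster using a proxy and a burner VoIP number and a privacy-conscious candidate using Private Relay and Google Voice produce identical signal vectors:

```python
# Hypothetical signal vectors: the detector sees the signal, not the cause.
# Both the scenarios and the signal names are illustrative assumptions.

OBSERVED_SIGNALS = {
    "proxy + burner VoIP number (fraud)":
        {"ip_mismatch": True, "voip_number": True},
    "Private Relay + Google Voice (privacy)":
        {"ip_mismatch": True, "voip_number": True},
}

fraud, privacy = OBSERVED_SIGNALS.values()
print(fraud == privacy)  # True: indistinguishable to the detector
```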

The system likely doesn't reject you outright. But if it flags you as "needs review" or assigns a lower confidence score, a busy recruiter might deprioritize your application without realizing why. You'd never get seen, and you'd never know.

The Transparency Problem

Most candidates don't know these systems exist. Applicant tracking system (ATS) vendors don't publish how their fraud detection weighs signals. We don't know whether "VPN detected" adds 2 points to a risk score or 20.
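That uncertainty matters more than it might seem. In this toy sketch (the threshold and both weights are invented for illustration), the same otherwise-clean candidate is either passed through or routed to manual review depending entirely on the weight the vendor chose:

```python
# Illustrative only: the same VPN signal under two plausible weights.
# The review threshold and weights are assumptions; no vendor publishes theirs.

REVIEW_THRESHOLD = 15

def flag_for_review(base_score: int, vpn_weight: int) -> bool:
    """Return True if the candidate gets routed to manual review."""
    return base_score + vpn_weight >= REVIEW_THRESHOLD

# An otherwise-clean profile with a base score of 5:
print(flag_for_review(5, 2))   # False: the VPN barely registers
print(flag_for_review(5, 20))  # True: the VPN alone triggers review
```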

Companies audit for gender and race bias. But nobody's checking whether these systems discriminate based on privacy practices.

The Bottom Line

We don't have definitive proof that privacy tools cause hiring discrimination. But we have all the ingredients:

  • Systems designed to catch fraud patterns

  • Privacy tools that create similar patterns

  • Zero transparency about how it works

  • No audits checking for this specific bias

We can't prove this is happening at scale. But the incentives are all wrong, the systems are opaque, and nobody's checking.

Dec 26 at 1:32 PM
