
For the next era of data privacy, businesses must verify information without storing it


By Adrian Ludwig, CISO, Tools for Humanity

Earlier this year, Discord suffered a breach exposing 70,000 government IDs and selfies. In July, the Tea app—designed as a safe space for women—leaked 72,000 verification photos and identity documents. These aren’t isolated incidents. They’re warning signs of a collision between yesterday’s verification practices and tomorrow’s AI capabilities.

To be clear, Discord and Tea didn’t do anything unusual. They followed the same playbook companies have used for years: collect IDs, verify ages, store the data. New regulations, from the EU’s Digital Services Act to various U.S. state laws, increasingly require platforms to verify users’ ages. Companies comply the only way they know how: by becoming repositories of identity documents.

But here’s what’s changed: that leaked driver’s license photo can now become a deepfake that fools your bank. Three seconds of audio from any public video becomes a voice clone authorising wire transfers. And when someone “wearing” your face joins a Zoom call with your financial advisor, traditional remedies like credit monitoring become laughably inadequate.

The technology to escape this trap already exists. Zero-knowledge proofs—a cryptographic technique that sounds complex but works simply—allow verification without collection. Think of it like showing your ID at a liquor store: the clerk confirms you’re over 21 without photocopying your license or recording your address. They perform a simple check, get a yes/no answer, and the transaction proceeds.

Here’s how this translates to digital verification. Modern passports and many government IDs contain an NFC chip—the same technology that powers contactless payments. When you tap your passport on your phone, the device reads the cryptographically signed data stored on the chip, including your date of birth and the issuing government’s digital signature. Your phone then performs the age calculation locally, right there in your hand.
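
The local check can be sketched in a few lines of Python. This is an illustrative sketch only, not the passport protocol: real chips carry an asymmetric government signature verified against published country certificates (ICAO Doc 9303), and the HMAC key here is a hypothetical stand-in for that signature. What it shows is the key property the article describes: the birthdate is verified and used on the device, and never needs to leave it.

```python
import hashlib
import hmac
from datetime import date

# Hypothetical issuer key. A real passport chip stores data signed with the
# issuing government's private key, checked against public certificates.
ISSUER_KEY = b"demo-issuer-key"

def sign_record(birthdate: str) -> bytes:
    """Stand-in for the government signature stored on the chip."""
    return hmac.new(ISSUER_KEY, birthdate.encode(), hashlib.sha256).digest()

def is_over_18_locally(birthdate: str, signature: bytes, today: date) -> bool:
    """Runs on the phone: verify the chip data, then compute age locally.
    The birthdate itself never crosses the network."""
    if not hmac.compare_digest(signature, sign_record(birthdate)):
        raise ValueError("chip data failed signature check")
    born = date.fromisoformat(birthdate)
    # Subtract one year if this year's birthday hasn't happened yet.
    age = today.year - born.year - ((today.month, today.day) < (born.month, born.day))
    return age >= 18

record = "2004-05-20"
sig = sign_record(record)
print(is_over_18_locally(record, sig, date(2026, 1, 22)))  # True
```

Only the boolean result of `is_over_18_locally` would ever be shared; everything above that line stays on the device.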

Using established protocols like ISO 18013-5 for mobile driver’s licenses, your device generates what’s called a “selective disclosure proof.” This proof mathematically demonstrates that you meet the requirement—say, being over 18—without revealing your actual birthdate. The math is elegant: it proves a statement about your data without exposing the data itself. When you visit a website requiring age verification, you tap a button, your phone performs this calculation, and sends only the proof—a string of cryptographic data that says “yes, this person is over 18” and nothing more.

The website receives no photo, no name, no document number. It can’t even tell which country issued your ID. All it knows is that a legitimate government authority has cryptographically confirmed you meet the age requirement. The proof can’t be reused on another site or traced back to you. Each verification generates a fresh proof, preventing tracking across platforms.
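
The shape of what does cross the wire can be sketched as follows. This is a toy model of the message flow, not real cryptography: an actual selective-disclosure proof under ISO 18013-5 is a cryptographic object bound to a session transcript, whereas the `binding` field here is a plain hash standing in for it. The point of the sketch is the data minimisation and the per-verification freshness that prevents cross-site linking.

```python
import hashlib
import secrets

def make_proof(over_18: bool, verifier_nonce: str) -> dict:
    """Toy proof object: only the yes/no claim, bound to this verifier's
    fresh nonce. No name, photo, document number, or issuing country.
    (In a real system, 'binding' would be a cryptographic proof.)"""
    salt = secrets.token_hex(16)  # fresh per verification -> unlinkable
    binding = hashlib.sha256(
        f"{over_18}|{verifier_nonce}|{salt}".encode()
    ).hexdigest()
    return {"claim": {"age_over_18": over_18},
            "nonce": verifier_nonce,
            "salt": salt,
            "binding": binding}

# Two verifications by the same person yield unrelated proof objects,
# so two websites cannot correlate them.
p1 = make_proof(True, secrets.token_hex(8))
p2 = make_proof(True, secrets.token_hex(8))
print(p1["binding"] != p2["binding"])  # True: nothing links the two proofs
```

Note what is absent from the dictionary: every identifying field a scanned ID would have exposed.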

This isn’t theoretical. The cryptographic primitives have existed since the 1980s. The secure chips shipping in passports since 2006 support these protocols. The ISO standards are published and peer-reviewed. The technology works—it’s just sitting there, waiting.

Yet most platforms still demand uploaded IDs like it’s 2010. Why? Three barriers block the path from what’s possible to what’s standard.

Technical inertia runs deep. Companies have invested years building centralised verification systems. Their engineers know how to collect and store IDs. Transitioning to zero-knowledge proofs requires new expertise and infrastructure, an investment many postpone until forced.

Business models present another obstacle. Traditional verification creates valuable databases. User profiles enriched with government IDs enable detailed tracking and targeting. Privacy-preserving systems deliberately prevent this secondary monetisation. For some companies, that’s not a bug—it’s a feature they’d rather not implement.

Regulatory comfort zones matter too. Authorities often prefer systems that create audit trails. Anonymous verification makes enforcement harder, even as it makes users safer. Regulators need to recognise that protecting citizens from identity theft matters more than maintaining surveillance capabilities.

Moving privacy-preserving verification from exception to standard requires coordinated action across the ecosystem. Companies must stop viewing this as future technology. The tools exist today. Begin pilot programs with privacy-preserving credentials. Your security teams will thank you for eliminating toxic data assets. Your legal teams will appreciate reduced breach liability. Most importantly, your users will trust you more.

Regulators need to update verification requirements to explicitly allow zero-knowledge approaches. The EU’s eIDAS 2.0 regulation already points in this direction. U.S. states crafting age verification laws should follow suit, mandating proof of age rather than ID collection.

The industry needs standardisation to accelerate adoption. When every platform implements its own verification system, complexity multiplies. Common protocols and open standards—like those being developed by the W3C—make implementation straightforward for everyone.

The irony is striking: in trying to protect users by verifying their identities, we’ve created honeypots that endanger them far more than the original risks. Every ID database represents thousands of potential deepfake victims, identity theft targets, and lives disrupted.

The technology to do better exists today. We can verify age, humanity, and eligibility without creating permanent records. We can have both security and privacy. The question is whether companies will make this transition voluntarily or wait until the next catastrophic breach—perhaps one enabling widespread deepfake fraud—forces their hand.
