Zero-Knowledge Technology in Web2 Security – The European Financial Review


Published: 17-05-2026, 12:50 PM

By Deepthi Kumar

Today’s businesses aren’t keeping up with the pressure from modern threat actors – but Web3’s Zero-Knowledge technology can provide an impactful solution.

The threat landscape is only growing, while the security architecture underneath even the biggest businesses isn’t keeping up. Today’s enterprise technology is built on a dangerous assumption: the more data a system holds, the more useful it becomes. But this expanding data store hands threat actors an ever-increasing attack surface. It’s clear that enterprises, especially financial ones, need efficient security – and Web3 technology might hold the key.

Learnings From the Salesforce Supply-Chain Campaign

The year 2025 was plagued by cyberattacks that exposed just how fragile the security architecture underpinning most major enterprises really is. The Salesforce supply-chain campaign is a leading example. Attackers exploited compromised OAuth tokens from a third-party integration to extract data from hundreds of corporate CRM instances, hitting companies from Google to FedEx to Disney. This wasn’t because Salesforce lacked security controls – far from it. Scoped OAuth permissions, token rotation, and API access auditing all existed on the platform. The failure came when hundreds of well-resourced enterprises, staffed by capable security teams, didn’t deploy those controls correctly. That wasn’t an accident. It was a signal that the current system demands more operational perfection than organisations can deliver.

These were large, well-known enterprises spending millions on cybersecurity and using the industry’s ‘best-in-class’ tools – and yet they didn’t have the right defences in place for when a third party exposed their data. Strikingly, the breach didn’t require cracking any encryption or exploiting a zero-day vulnerability. Through a single third-party app, attackers compromised OAuth tokens and used them to systematically export data from customer instances. The data was all sitting there, because the system was designed to make it accessible. The truth of the matter is that when every lapse in operational discipline leads to catastrophic exposure, the architecture itself is over-burdened.

Fixing the Issue on a Structural Level

Today’s financial enterprise technology is built on an assumption that now stands as its greatest liability: the more data a system holds, the more useful it becomes. A business’s CRM stores customer details such as names, email addresses, phone numbers, financial details, and transaction histories. Its identity providers hold authentication credentials alongside personal information. Its compliance systems ingest and retain vast amounts of sensitive data required for audits.

At the end of the day, this creates a growing attack surface.

To clarify, modern SaaS platforms do offer meaningful data-layer defences, and when deployed, these controls work. The problem lies in the gap between ‘available’ and ‘consistently deployed across hundreds of integrations’ – and that gap is where breaches happen. The Salesforce campaign didn’t exploit a missing capability; it exploited the fact that maintaining perfect configuration hygiene across every third-party integration, API scope, and token lifecycle is a larger task than security teams can keep pace with.

The traditional fix for this problem is to build higher walls around the data: multi-factor authentication, role-based access controls, token rotation policies, and intrusion detection systems. These are all necessary and valuable, but they share a common limitation: they are perimeter defences that demand continuous operational perfection. Once an attacker gets past them (via social engineering or a stolen credential), the underlying data is exposed. The Salesforce campaign showed that even MFA can be bypassed entirely through OAuth abuse.

The problem doesn’t lie in weak security systems. In reality, the issue is that the dominant architecture forces systems to hold – and therefore expose – more data than any given transaction actually needs, and then requires an ever-expanding surface of access controls to be maintained. Solving this requires reframing security as a series of architectural layers that reduce what exists – and can therefore be stolen – in the first place.

Zero-Knowledge Proofs

Zero-knowledge (ZK) cryptography offers businesses a completely different starting point. Instead of collecting and storing sensitive data and then putting walls around it, ZK proofs allow one party to prove the truth of a statement to another party without revealing the underlying data. The verifier gains mathematical certainty about the claim while learning nothing beyond the validity of the statement itself.

ZK proofs have been tried and tested in blockchain ecosystems for years, where they enable private transactions and scalable computation. The technology is mature enough to deploy into a Web2 setting where it’s desperately needed.
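The prove/verify shape can be sketched with a toy Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir heuristic – one of the classic constructions behind modern ZK systems. The parameters below are deliberately tiny and completely insecure; this is an illustration of the protocol’s structure, not an implementation anyone should deploy:

```python
import hashlib

# Toy group parameters (insecure, for illustration only):
# p = 2q + 1, and g generates the subgroup of prime order q.
p, q, g = 23, 11, 4

def prove(x, r):
    """Prover: knows a secret x with public key y = g^x mod p.

    r must be a fresh random nonce per proof (fixed here for determinism).
    Returns (y, t, s) — the verifier never sees x.
    """
    y = pow(g, x, p)
    t = pow(g, r, p)  # commitment to the nonce
    # Fiat-Shamir: derive the challenge by hashing the public transcript.
    c = int(hashlib.sha256(f"{g}:{y}:{t}".encode()).hexdigest(), 16) % q
    s = (r + c * x) % q  # response binds nonce, challenge, and secret
    return y, t, s

def verify(y, t, s):
    """Verifier: recomputes the challenge and checks g^s == t * y^c mod p.

    A passing check gives certainty that the prover knows x,
    while revealing nothing about x itself.
    """
    c = int(hashlib.sha256(f"{g}:{y}:{t}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

Real deployments use elliptic-curve groups and vetted libraries, but the shape is the same: the verifier checks one algebraic relation over public values and learns only that the statement holds.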

The most immediate application is at verification boundaries, where systems share data to prove something. Here, ZK proofs can satisfy know-your-customer (KYC) identity checks without transmitting personal data that is then stored and potentially exposed down the line. An important caveat: ZK-based KYC doesn’t eliminate the need to retain source documents where regulations require it (at least until regulatory bodies start recognising ZK-based compliance). What ZK changes is the relying-party model – the second, third, and fourth institutions downstream that currently receive, and then have to store, full document copies just to confirm a status. That’s where the data proliferation happens, and where ZK proofs deliver genuine reduction.
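The relying-party shift can be illustrated without any real ZK machinery. In the sketch below, an HMAC tag from the KYC provider stands in for the succinct proof a real ZK system would produce, and the names (`issue_attestation`, `relying_party_accepts`) are hypothetical. The point is what the downstream institution ends up storing: a short claim and a tag, never the passport scans or utility bills:

```python
import hashlib
import hmac
import json

# Hypothetical shared verification key; a real system would use asymmetric
# signatures or a ZK verifier rather than a shared secret.
ATTESTER_KEY = b"demo-attester-key"

def issue_attestation(claim: dict) -> dict:
    """KYC provider: inspects source documents privately, then emits
    only a minimal signed claim (e.g. 'KYC passed', jurisdiction)."""
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def relying_party_accepts(att: dict) -> bool:
    """Downstream institution: verifies the tag over the claim.
    It confirms the status without ever receiving names, addresses,
    or document copies — so there is nothing extra to breach later."""
    payload = json.dumps(att["claim"], sort_keys=True).encode()
    expected = hmac.new(ATTESTER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(att["tag"], expected)
```

A genuine ZK proof strengthens this further: even the issuing check itself can be proven (e.g. “this person is over 18 per a government credential”) without the attester ever forwarding the credential’s contents.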

This logic also applies to supply-chain compliance, third-party integrations, and any scenario where one party proves something by handing over raw data. ZK proofs at the boundary are an effective way to reduce what is shared externally.

But there’s a limitation: ZK at the verification boundary narrows what third parties can extract, but it doesn’t change the underlying architecture. The CRM database must still hold everything, and internal systems still have broad query access. If an attacker gets in via a different vector, such as credential phishing, the pooled data remains available to be taken. Replacing an OAuth token with a ZK proof at the API boundary is a better padlock, but the door remains the same.

ZK-Gated Execution

The deeper architectural shift comes from a model I’d call ZK-Gated Execution. Here, ZK proofs don’t just verify claims at the boundary; they gate what computation can occur inside an environment where even the system operator cannot see the underlying data. In the standard model, you prove you’re authorised before you access the data store. Under ZK-Gated Execution, you instead prove eligibility to a ZK application executing a set of authorisation rules, and, if the authorisation proof verifies, a specific, constrained computation runs inside a secure enclave. The operator can’t see the inputs, and the third party never touches the raw data.
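A minimal sketch of the gating pattern, under stated simplifications: a trivial string comparison stands in for a real ZK verifier, a closure stands in for a hardware enclave, and all names (`make_enclave`, `count_over`) are illustrative. What matters is the control flow – no proof, no computation; and even with a proof, only allow-listed computations over the sealed records can run:

```python
def make_enclave(records, verify_proof, allowed):
    """Simulated blinded execution environment.

    `records` is captured in a closure: callers can never read it directly,
    only request one of the pre-declared computations in `allowed`.
    """
    def run(proof, query_name, *args):
        # Gate 1: the authorisation proof must verify before anything runs.
        if not verify_proof(proof):
            raise PermissionError("authorisation proof did not verify")
        # Gate 2: only constrained, allow-listed computations are permitted.
        if query_name not in allowed:
            raise PermissionError(f"query '{query_name}' is not allow-listed")
        # Only the aggregate result leaves the environment, never the records.
        return allowed[query_name](records, *args)
    return run

# Toy CRM data and a stub verifier standing in for real ZK verification.
records = [
    {"email": "a@example.com", "spend": 120},
    {"email": "b@example.com", "spend": 430},
]
allowed = {
    # Count customers above a spend threshold — an aggregate, not an export.
    "count_over": lambda rs, threshold: sum(1 for r in rs if r["spend"] > threshold),
}
enclave = make_enclave(records, verify_proof=lambda p: p == "valid-proof", allowed=allowed)
```

In a production design the verifier would check an actual ZK proof and the closure would be a TEE or cryptographic execution environment, but the architectural claim is the same: bulk export simply isn’t an operation the system offers.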

If we return to the Salesforce scenario: the core problem wasn’t just that Salesloft Drift had an OAuth token; it was that the token gave access to a system where customer data, AWS keys, Snowflake credentials, and deal pipeline information were all stored in plaintext, available to anyone with the correct permissions. Under ZK-Gated Execution, the matching, filtering, or analytics operations that third-party apps need would all happen inside a blinded execution environment – keeping that data safe. Even if the environment were compromised, plaintext exposure would be constrained by the architecture, not by operational discipline. It doesn’t eliminate all risk – that’s an impossible promise. But it’s the difference between a dam bursting and a faucet dripping.

The Pressure is On

Organisations that begin integrating ZK-based verification now will be positioned ahead of the threat landscape and the regulatory curve. But those that wait will find themselves defending an architecture that was never designed for modern-day threats.

ZK doesn’t eliminate the need for good security hygiene. MFA, monitoring, access controls, and proper configuration discipline are all still essential and, for many, fully deploying those existing capabilities is the right first step. What ZK does, particularly in the ZK-Gated Execution form, is solve the structural vulnerability that makes breaches catastrophic even when operational security is strong: the vast stores of sensitive plaintext data that were never a necessity. In a world where it’s almost guaranteed that attackers will get past the perimeter at some point, the most powerful defense is leaving nothing for them to take.

About the Author

Deepthi Kumar is the Co-CEO of o1Labs, the core development team behind Mina Protocol, the world’s lightest blockchain powered by zero-knowledge cryptography. As a core protocol developer, Deepthi helped design Mina’s protocol architecture, led its first hard fork, and continues to shape its technical roadmap.

 
