
Modern systems run on trust. Companies trust auditors to verify their books. Regulators trust manufacturers to report accurate test data. Consumers trust certification labels to mean what they say. Most of the time, this works well enough. But when it fails, the consequences are measured in billions of dollars, irreversible environmental damage, and real harm to real people.
The failures aren’t random. They follow a pattern: an intermediary is trusted to verify something, that intermediary has insufficient incentive or ability to verify it properly, and the people who bear the consequences have no independent way to check. The result is a system that works until it doesn’t – and when it doesn’t, the damage is already done.
Blockchain technology is often pitched as a solution to these problems, but usually with more enthusiasm than precision. The honest case for trustless design isn’t that it eliminates the need for trust entirely – it can’t. It’s that it can restructure where trust is required, making certain categories of failure structurally harder to pull off. That’s a narrower claim than the typical blockchain pitch, but it’s also a more defensible one.
This post examines four real cases where trust in intermediaries failed catastrophically – across different industries, at different scales, affecting different victims. For each, we look at what specifically broke, what a trustless design could address, and what it honestly couldn’t.
Wirecard: The Auditor Who Didn’t Audit
In June 2020, Wirecard AG – a German payment processor valued at over EUR 24 billion and a member of the DAX 30 – filed for insolvency after admitting that EUR 1.9 billion supposedly held in escrow accounts at Philippine banks did not exist and, in all likelihood, never had. The accounts were fabricated, supported by forged bank confirmation letters and sham third-party payment processors.
The fraud had been running for years, and it wasn’t exactly a secret. The Financial Times had been publishing investigative reporting on Wirecard’s accounting irregularities since April 2015, beginning with journalist Dan McCrum’s “House of Wirecard” post on FT Alphaville. Rather than investigating the claims, Germany’s financial regulator BaFin filed a criminal complaint against the journalists for suspected market manipulation and banned short selling of Wirecard stock. Ernst & Young, Wirecard’s auditor for over a decade, issued clean audit reports year after year without independently confirming the escrow balances with the banks that supposedly held them.
Where trust broke down. The structure was designed with multiple safeguards: an independent auditor (EY), a financial regulator (BaFin), and public market scrutiny. Every one of them failed. EY accepted management representations about the escrow balances without obtaining independent bank confirmations – a basic audit procedure. BaFin treated Wirecard as a technology company rather than a financial services firm, limiting its own oversight jurisdiction. When external parties raised alarms, the system’s response was to attack the messengers.
The core problem was unverifiable claims about asset existence. Wirecard said it had EUR 1.9 billion in specific bank accounts. There was no mechanism for investors, regulators, or even the auditor to independently and continuously verify that claim. The entire system relied on periodic checks by a single trusted party (the auditor) – and that party didn’t do its job.
What a trustless design could look like. A cryptographic proof-of-reserves mechanism could address this specific failure mode. If escrow balances were recorded on-chain or periodically attested by the custodian banks through cryptographically signed statements anchored to a public ledger, any interested party could verify asset existence independently and in near-real-time. The attestations wouldn’t require trusting the company’s own reports or waiting for an annual audit cycle.
This isn’t theoretical. Cryptocurrency exchanges have adopted proof-of-reserves protocols after their own spectacular trust failures (notably the collapse of FTX in 2022, where customer funds were similarly claimed to exist but didn’t). The same principle applies to any claim about asset custody.
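To make the mechanics concrete, here is a rough sketch in Python of a signed, ledger-anchored balance attestation. Everything in it is illustrative rather than any real custodian's protocol: the account identifier and field names are invented, it assumes the custodian's public key is published through some independent channel, and it uses the third-party `cryptography` package for Ed25519 signatures.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical custodian key pair; in practice the public key would be
# published through an independent channel (regulatory filing, registry).
custodian_key = Ed25519PrivateKey.generate()
custodian_pub = custodian_key.public_key()

# The custodian's statement about a specific escrow account (illustrative).
attestation = {
    "account_id": "PH-ESCROW-001",
    "balance_eur": 1_900_000_000,
    "as_of": "2020-06-01T00:00:00Z",
}
payload = json.dumps(attestation, sort_keys=True).encode()
signature = custodian_key.sign(payload)

# Only this digest needs to be anchored on a public ledger.
anchor = hashlib.sha256(payload + signature).hexdigest()

def verify(payload: bytes, signature: bytes, anchored: str) -> bool:
    """Anyone holding the payload, signature, and public key can check
    that the statement is genuine and matches the anchored digest."""
    try:
        custodian_pub.verify(signature, payload)  # raises if forged
    except InvalidSignature:
        return False
    return hashlib.sha256(payload + signature).hexdigest() == anchored

assert verify(payload, signature, anchor)
```

The specific ledger matters less than the property it provides: verification no longer routes through the company's own reports or a once-a-year audit cycle.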
What it wouldn’t solve. Proof-of-reserves addresses the specific question “does this money exist in this account?” But Wirecard’s fraud was broader than missing escrow balances – it involved fabricated revenue from non-existent customers processed through shell companies. On-chain attestation of bank balances doesn’t help if the revenue that supposedly filled those accounts is fictitious. You can make the custodial claim verifiable without making the entire business model transparent.
More fundamentally, if the custodian bank itself is complicit – or if the attestation signer is compromised – the on-chain record faithfully records a lie. Trustless systems shift where trust is required (from the company to the attestation infrastructure), but they don’t eliminate it.
Dieselgate: The Manufacturer Who Wrote the Test
In September 2015, the U.S. Environmental Protection Agency issued a notice of violation to Volkswagen Group, revealing that nearly 500,000 diesel vehicles sold in the United States contained software specifically designed to cheat emissions tests. The software – a “defeat device” – detected when the vehicle was undergoing laboratory testing and activated full emissions controls. During normal driving, the controls were disabled, and the vehicles emitted nitrogen oxides at up to 40 times the legal limit.
The fraud wasn’t limited to a few cars or a rogue engineer. In January 2017, Volkswagen pleaded guilty to criminal charges and signed a statement of facts detailing how the company’s management had directed engineers to develop the defeat devices because the diesel engines could not pass U.S. emissions tests without them. The scandal affected approximately 11 million vehicles worldwide and has cost Volkswagen over EUR 32 billion in fines, penalties, financial settlements, and buyback costs, with legal proceedings against former executives still ongoing.
The fraud was uncovered not by regulators, but by the International Council on Clean Transportation (ICCT), an independent research organization that commissioned West Virginia University researchers to test on-road emissions of diesel vehicles in 2013. When they attached portable emissions measurement equipment to VW vehicles and drove them on actual roads – rather than running them on laboratory dynamometers – the discrepancy was immediately apparent.
Where trust broke down. Emissions compliance was based on manufacturer-controlled laboratory testing. The manufacturer brought vehicles to a test facility, ran them under standardized conditions, and reported the results. Regulators trusted that laboratory performance reflected real-world performance. This created a system where the entity being regulated controlled the conditions under which it was evaluated – and had both the technical capability and financial incentive to game those conditions.
The defeat device was possible precisely because test conditions were predictable. The software could distinguish testing from driving based on steering wheel movement, vehicle speed, engine operation duration, and barometric pressure. The test was meant to be a proxy for real-world performance, but because the manufacturer both controlled the vehicle's software and knew the test protocol in advance, the proxy could be gamed without detection.
What a trustless design could look like. The trust failure here is in self-reporting under controlled conditions. A trustless alternative would involve independent, continuous emissions monitoring – tamper-resistant sensors recording real-world emissions data and committing cryptographic hashes of that data to a public ledger. The manufacturer wouldn’t control the measurement conditions, and the data would be available for independent verification by regulators, researchers, or anyone else.
This would require hardware-level trust (the sensors themselves), but it would eliminate the specific vulnerability that Dieselgate exploited: the ability to behave differently during a known test than during normal operation. If the “test” is continuous and the data is independently recorded, there is no controlled environment to game.
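A minimal sketch of what that data pipeline could look like, assuming tamper-resistant sensor hardware that we take as given for now (the field names and readings below are invented for illustration): each reading is hashed together with the previous entry, so deleting or editing any record breaks the chain, and only the latest chain head needs to be anchored to a public ledger.

```python
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class Reading:
    nox_mg_per_km: float   # measured on-road emissions for this interval
    timestamp: float
    prev_hash: str         # digest of the previous entry in the chain

    def digest(self) -> str:
        body = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(body).hexdigest()

def append(chain: list, nox: float) -> None:
    """Link each new reading to the digest of the one before it."""
    prev = chain[-1].digest() if chain else "genesis"
    chain.append(Reading(nox, time.time(), prev))

def chain_is_intact(chain: list) -> bool:
    """Recompute every link; an edited or deleted reading breaks the chain."""
    return all(
        chain[i].prev_hash == chain[i - 1].digest()
        for i in range(1, len(chain))
    )

chain = []
for nox in (62.0, 480.0, 455.0):   # illustrative on-road values
    append(chain, nox)

# Only the latest head digest needs to be committed to a public ledger.
head = chain[-1].digest()
assert chain_is_intact(chain)
```

In a real deployment the hashing and signing would happen inside the sensor's secure hardware, which is exactly where the residual trust sits, as the next point makes clear.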
What it wouldn’t solve. The trust shifts to the sensor hardware. If the manufacturer controls the sensor, the problem recurs at a different layer – you need tamper-resistant hardware that the measured party can’t modify. This is a solvable engineering problem (secure enclaves, tamper-evident hardware, third-party sensor installation), but it’s not trivial. And it wouldn’t have addressed the institutional failure among the regulators themselves, who, much like BaFin in the Wirecard case, showed little appetite for scrutinizing a national champion.
The broader point is that trustless verification of physical measurements is fundamentally harder than trustless verification of digital records. Blockchain can make the data pipeline trustless, but it can’t make the initial measurement trustworthy without trusted hardware at the point of capture.
Carbon Credits: The Market Built on Unverifiable Claims
In January 2023, a nine-month investigation by The Guardian, Die Zeit, and SourceMaterial reported that over 90% of the rainforest carbon credits certified by Verra – the world’s largest carbon credit certification program – did not represent genuine emissions reductions. The investigation drew on three independent peer-reviewed studies that found the threat of deforestation in Verra’s REDD+ projects had been systematically overstated by approximately 400%, resulting in millions of “phantom credits” sold to companies like Disney, Shell, and Gucci for their carbon neutrality claims. Verra disputed the findings, calling the underlying research methodology flawed.
Then, in October 2024, the U.S. Department of Justice unsealed criminal charges against Kenneth Newcombe, the former CEO of CQC Impact Investors, one of the world’s largest carbon credit project developers. Newcombe and colleagues were charged with fraudulently obtaining carbon credits worth tens of millions of dollars by manipulating survey data for cookstove projects in Africa. According to the indictment, when survey results for projects in Malawi and Zambia showed poor performance, Newcombe and his team agreed to “revise” the data and enlisted an outside person to fill out fraudulent survey forms. The CFTC separately filed civil fraud charges against Newcombe and settled with CQC, in what the agency described as its first enforcement action involving fraud in the voluntary carbon credit market. As of this writing, the criminal case against Newcombe remains pending.
Where trust broke down. Carbon credits are what economists call a “credence good” – buyers literally cannot verify the claim they’re paying for. You can’t independently confirm that a specific ton of carbon was not emitted because a specific forest was not cut down. The entire market relies on a chain of trust: the project developer reports data, a third-party verifier reviews it, a registry (like Verra) certifies it, and the buyer trusts the certification.
At every link, the incentives are misaligned. Project developers are paid based on the volume of credits they generate. Third-party verifiers are paid by the project developers they’re supposed to scrutinize. Registries derive revenue from the credits they certify. And buyers have their own incentive to accept credits at face value, because they need them for their sustainability claims. Nobody in the chain has a strong economic incentive to find problems.
The CQC case makes the failure mechanism explicit: when real-world data contradicted the project’s claims, the people controlling the data simply changed the numbers. There was no independent channel through which the actual performance of the cookstoves could be verified by anyone outside the project developer’s organization.
What a trustless design could look like. An immutable attestation chain could address the data integrity problem. If survey data, sensor readings, and inspection reports were cryptographically committed at the point of collection – timestamped, signed by the data collector, and anchored on a public ledger – retroactive manipulation of the kind described in the CQC indictment would leave a detectable trail. You could still submit false initial data, but you couldn’t quietly “revise” survey results after the fact without the revision being visible.
A more comprehensive approach would separate the attestation roles: one party collects the data, another party signs the attestation, and a third party verifies consistency. Each attestation is anchored independently, and any party can audit the chain. This wouldn’t make the initial measurement trustworthy, but it would make the paper trail trustless – no single party could unilaterally alter the record.
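Here is what that “detectable trail” means in miniature, with a single invented survey record: the digest anchored at collection time pins down the original numbers, so any later revision either fails to match the anchor or has to be published as a new, visibly later attestation.

```python
import hashlib
import json

def commit(record: dict) -> str:
    """Digest anchored to a public ledger at collection time."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

# Field data as originally collected (invented values).
survey = {
    "project": "cookstove-demo",
    "collected_at": "2021-03-10",
    "stoves_in_use_pct": 41,        # the inconvenient number
}
anchored = commit(survey)           # published immediately

# A quietly "revised" version no longer matches the anchored digest.
revised = dict(survey, stoves_in_use_pct=92)
assert commit(revised) != anchored  # the revision is detectable
```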
What it wouldn’t solve. The deepest problem in carbon markets is that the underlying measurement is inherently counterfactual: how much forest would have been cut down if the project didn’t exist? No amount of cryptographic attestation can make a counterfactual verifiable. Blockchain can ensure that reported data isn’t tampered with after collection, and it can make the audit trail transparent, but it can’t verify that the initial data reflects reality – the classic garbage-in-garbage-out problem.
The Verra investigation illustrates a different failure that trustless systems also can’t fix: methodological problems with how baselines are set and how credit volumes are calculated. If the methodology itself systematically overstates impact, perfect data integrity doesn’t help. You get cryptographically guaranteed, immutable records of a flawed calculation.
Organic Food Fraud: The Label Nobody Can Verify
In August 2019, Randy Constant of Chillicothe, Missouri was sentenced to over 10 years in federal prison for masterminding the largest organic food fraud case in U.S. history. Constant had pleaded guilty in December 2018 to a wire fraud scheme involving at least $142 million in grain sales. Between 2010 and 2017, he sold conventionally grown corn and soybeans through his Iowa brokerage, Jericho Solutions, falsely marketing it as certified organic. The non-organic grain was sold as feed to livestock producers who unknowingly used it to raise cattle and chickens that they, in turn, sold as organic meat and eggs. Constant’s fraudulent grain accounted for roughly 7-8% of all organic corn and soybeans grown in the United States in 2016. Constant died by suicide days after his sentencing.
The USDA acknowledged the systemic nature of the problem when it finalized the Strengthening Organic Enforcement rule in early 2023, explicitly noting that organic products are “credence goods” whose attributes “cannot be easily verified by consumers or businesses who buy organic products for use or resale.” The rule added requirements for supply chain audits, import certificates, and traceability – recognizing that the existing system of periodic inspections and paper certificates was not enough.
This isn’t isolated to one bad actor. Investigations by The Washington Post documented large shipments of grain from Eastern Europe entering the United States labeled as organic when they were not. Turkey and Ukraine have been flagged repeatedly for fraudulent organic exports. The fundamental problem is that once grain enters the supply chain with an organic label, there’s no physical or chemical test that can reliably distinguish it from conventional grain. The label is the only signal, and the label can be forged.
Where trust broke down. The organic supply chain is long and opaque: grower, broker, processor, distributor, retailer. Certification bodies inspect farms periodically (typically annually), but the certification attaches to the farm, not to specific batches of grain. Between inspections, there’s no continuous verification that a specific shipment came from a certified field, wasn’t mixed with conventional product, and wasn’t relabeled in transit. Constant exploited this directly – he had some certified fields, which gave him the paperwork to sell any grain as organic.
The chain-of-custody problem is compounded by international trade. A certifier in Turkey inspects a farm once a year. The grain is loaded onto a ship. By the time it arrives at a U.S. port, the only evidence of its organic status is a certificate that can be photocopied, altered, or attached to a completely different shipment.
What a trustless design could look like. A provenance registry – structured as a chain of cryptographically signed attestations at each handoff point – could address the chain-of-custody gap. At each transfer (farm to broker, broker to processor, processor to distributor), the transferring party creates a signed attestation linked to the previous one. Anyone in the chain – or at the end of it – can walk the attestation chain back to the origin and verify that every handoff is accounted for.
This is conceptually similar to what the EU is building with its Digital Product Passport requirements, where lifecycle traceability data must follow products through the supply chain. A blockchain-anchored provenance system would make each attestation immutable and independently verifiable, so that relabeling a shipment mid-chain would break the cryptographic link and be immediately detectable by any verifier.
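A sketch of the handoff structure, with invented parties and fields (and signatures omitted for brevity): every transfer record embeds the hash of the previous one, so a verifier at the end of the chain can walk it back to the origin, and any substituted or missing link shows up as a broken hash.

```python
from __future__ import annotations

import hashlib
import json

def digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def handoff(prev: dict | None, sender: str, receiver: str, tonnes: float) -> dict:
    """One transfer attestation, linked by hash to the previous one."""
    return {
        "from": sender,
        "to": receiver,
        "tonnes": tonnes,
        "prev": digest(prev) if prev else None,
    }

origin  = handoff(None, "certified-farm", "broker", 120.0)
to_proc = handoff(origin, "broker", "processor", 120.0)
to_dist = handoff(to_proc, "processor", "distributor", 118.5)

def walk_back(chain: list[dict]) -> bool:
    """Verify every link points at the actual previous record."""
    for later, earlier in zip(reversed(chain), reversed(chain[:-1])):
        if later["prev"] != digest(earlier):
            return False
    return chain[0]["prev"] is None   # the chain must terminate at the origin

assert walk_back([origin, to_proc, to_dist])
```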
Mass-balance auditing – verifying that the volume of organic product leaving a facility doesn’t exceed the volume of organic inputs entering it – could also be enforced through on-chain tracking. If every input and output is recorded and the records are immutable, the kind of scheme Constant ran (selling far more “organic” grain than his certified acreage could possibly produce) would show up as a volume mismatch visible to any auditor.
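The corresponding mass-balance check is almost trivial once the records are immutable and shared. With purely illustrative numbers: a broker selling far more “organic” volume than its certified acreage and recorded organic purchases could possibly support is flagged by simple arithmetic over the ledger.

```python
# Hypothetical immutable season records for one broker (all numbers invented).
certified_acres          = 900
max_yield_per_acre       = 3.2      # agronomic ceiling, tonnes per acre
organic_inputs_purchased = 400.0    # tonnes bought from other certified suppliers
organic_outputs_sold     = 7_500.0  # tonnes the broker sold as "organic"

ceiling = certified_acres * max_yield_per_acre + organic_inputs_purchased
if organic_outputs_sold > ceiling:
    print(f"Mass-balance violation: {organic_outputs_sold:.0f} t sold, "
          f"plausible ceiling {ceiling:.0f} t")
```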
What it wouldn’t solve. The origin problem remains. If a farmer in Turkey certifies a field as organic and the initial attestation says “organic,” the blockchain faithfully records that claim. It doesn’t verify that the field was actually managed according to organic standards, that prohibited pesticides weren’t applied, or that the soil testing was genuine. The system makes the supply chain trustless, but it can’t make the initial certification trustless without independent measurement at the source.
There’s also a practical adoption challenge. Supply chain provenance systems only work if every participant uses them. A blockchain-backed organic registry is only as strong as the weakest link in the chain – and in international agricultural supply chains with dozens of intermediaries, getting universal adoption is a significant barrier.
The Pattern
These four cases span different industries, different scales, and different victims – investors, regulators, corporate buyers, and everyday consumers. But the structural failure is the same in every case:
Opaque record-keeping. The parties affected by the data (investors, regulators, consumers) had no independent access to the underlying records. They relied entirely on summaries, reports, and labels prepared by the parties they were supposed to be monitoring.
Self-reporting without independent verification. Wirecard reported its own balances. VW ran its own emissions tests. CQC submitted its own survey data. Constant certified his own grain. In each case, the entity with the most to gain from favorable data was the entity controlling how that data was produced and presented.
Intermediaries with misaligned incentives. Auditors paid by the companies they audit. Verifiers paid by the project developers they evaluate. Certifiers whose revenue depends on the volume they certify. The institutional structures designed to provide independent oversight were economically dependent on the entities they were supposed to oversee.
No mechanism for affected parties to independently audit claims. None of the affected parties – investors in Wirecard, citizens breathing VW’s NOx emissions, companies relying on carbon credits for their net-zero claims, consumers paying organic premiums – had any way to independently verify the claims being made. They could only trust, and hope.
Trustless design addresses these structural problems by making records independently verifiable, removing single points of control over data, and creating audit trails that no single party can alter after the fact. It doesn’t require trusting the intermediary to be honest – it makes dishonesty detectable.
What Trustless Design Doesn’t Fix
Honest analysis requires acknowledging what blockchain-based trustless systems can’t do, even in their ideal form.
The oracle problem. Blockchains can guarantee the integrity of data after it’s been committed, but they can’t verify the truthfulness of the data at the point of entry. A sensor can lie. A human can enter false data. A signer can attest to something that isn’t true. Trustless systems make the pipeline from attestation to verification tamper-proof, but they can’t make the initial attestation honest. This is why every case study in this post has a “what it wouldn’t solve” section – because the origin of the data is always outside the blockchain’s reach.
Adoption and participation. A provenance registry only works if supply chain participants actually use it. An attestation chain is broken if one party in the middle doesn’t participate. Network effects matter: the value of these systems increases with adoption, but so does the difficulty of achieving it. International supply chains with dozens of intermediaries, varying levels of technical infrastructure, and different regulatory environments make universal adoption a genuine challenge.
Governance. Who decides what gets committed on-chain? Who sets the attestation standards? Who resolves disputes? Trustless verification of data doesn’t eliminate the need for governance – it shifts governance to a different layer. The rules encoded in smart contracts still need to be written by someone, and the question of who writes them and how they’re updated is a human problem, not a technical one.
Institutional will. In the Wirecard case, the auditor had the authority to perform the checks that would have uncovered the fraud – it simply didn’t. In Dieselgate, regulators could have required real-world emissions testing years earlier – they chose not to. Technology can make certain failures harder, but it can’t compensate for institutions that don’t want to look. A blockchain-based audit trail is only useful if someone actually reads it.
Accountability by Design
The argument for trustless systems isn’t that they replace human judgment or eliminate the need for institutions. It’s that they change the default. Today, the default is “trust and hope” – trust the auditor, trust the manufacturer, trust the certifier, and hope they’re doing their job. When they’re not, the failure is discovered months or years later, after the damage is done.
Trustless design shifts the default to “verify and prove.” Not “trust the company’s report” but “verify the attestation against the ledger.” Not “hope the supply chain is intact” but “walk the provenance chain and check.” Not “wait for the annual audit” but “verify the proof-of-reserves in real time.”
This shift doesn’t guarantee honest behavior. People will always find ways to deceive. But it raises the cost of deception, narrows the window for undetected fraud, and gives the affected parties tools to verify claims for themselves rather than relying on intermediaries who may have every reason not to look too closely.
The cases in this post aren’t hypothetical. They’re real failures, with real victims, that followed predictable structural patterns. The technology to address those patterns exists. The question is whether the institutions that currently benefit from opacity will adopt it – or whether, as with Dieselgate and Wirecard, change will only come after the next catastrophic failure forces it.
