
What can blockchain do for AI? Not what you’ve heard.

Industries everywhere are asking “What can AI do for us?”

But the blockchain industry, known for challenging norms, is also asking the opposite question: “What can blockchain do for AI?”

While there are some compelling answers, three narratives have emerged around this question that are frequently misleading and, in one case, potentially even hazardous.

Narrative #1: Blockchain can combat misinformation caused by generative AI

An expert panel at a recent Coinbase event concluded that “blockchain can counter misinformation with cryptographic digital signatures and timestamps, making it clear what’s authentic and what’s been manipulated.”

This is true only in a very narrow sense.

Blockchains can record digital-media creation in a tamper-proof way, i.e., so that any modification of a registered image is detectable. But this is a far cry from clarifying authenticity.

Consider a photo of a flying saucer hovering above the Washington Monument. Suppose that someone has registered its creation in, say, block 20,000,000 of the Ethereum blockchain. This fact tells you one thing: The flying saucer image was created before block 20,000,000. Additionally, whoever posted the image to the blockchain — let’s call her Alice — did so by digitally signing a transaction. Assuming that Alice’s signing key wasn’t stolen, it’s clear that Alice registered the photo on the blockchain.

None of this, however, tells you how the image was created. It might be a photo that Alice snapped with her own camera. Or Alice might have gotten the image from Bob, who Photoshopped it. Or maybe Carol created it with a generative AI tool. In short, the blockchain tells you nothing about whether aliens were touring Washington, D.C.—unless you already trust Alice to begin with.
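To make the limitation concrete, here’s a toy sketch in Python of what an on-chain media registry actually records. The ledger, register and is_registered names are illustrative stand-ins, not a real blockchain API:

```python
import hashlib
import time

# Toy in-memory stand-in for an on-chain registry -- illustrative only.
ledger = []

def register(image_bytes: bytes, signer: str) -> dict:
    """Record the image's hash, the submitter and a timestamp."""
    entry = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "signer": signer,            # who signed the transaction
        "block_time": time.time(),   # stands in for the block timestamp
    }
    ledger.append(entry)
    return entry

def is_registered(image_bytes: bytes) -> bool:
    """True iff this exact image was registered; any edit changes the hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return any(e["sha256"] == digest for e in ledger)

photo = b"...image bytes..."
register(photo, signer="Alice")
print(is_registered(photo))            # True: the image existed by block_time
print(is_registered(photo + b"edit"))  # False: tampering is detectable
# Note what is NOT recorded anywhere: how the image was produced.
```

The registry proves existence by a certain time and makes later edits detectable; nothing in it distinguishes a camera photo from an AI rendering.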

Some cameras can digitally sign photos to authenticate them (assuming their sensors can’t be fooled, which is a big if), but this isn’t blockchain technology.

Narrative #2: Blockchain can bring privacy to AI

Model training raises serious privacy concerns, and some are trumpeting blockchain technologies as a solution.

Blockchains, however, are designed for transparency — a property at odds with confidentiality.

Proponents point to privacy-enhancing technologies advanced by the blockchain industry to address this tension — especially zero-knowledge proofs. Zero-knowledge proofs, however, don’t solve the problem of privacy in AI model training. That’s because a zero-knowledge proof doesn’t conceal secrets from whoever is constructing the proof. Zero-knowledge proofs are helpful if I want to conceal my transaction data from you. But they don’t enable me to compute privately over your data.
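A toy example makes the point. Below is a minimal, deliberately insecure Schnorr-style proof of knowledge in Python (using the Fiat-Shamir heuristic). The verifier learns nothing about the secret x, but notice that the prover must hold x in the clear to build the proof:

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of a discrete log (Fiat-Shamir).
# Parameters are tiny and NOT secure; illustration only.
p, q, g = 2039, 1019, 4   # p = 2q + 1; g generates the order-q subgroup

x = secrets.randbelow(q)  # the prover's SECRET witness
y = pow(g, x, p)          # public value y = g^x mod p

def prove(x: int) -> tuple[int, int]:
    # The prover needs x in the clear: zero knowledge hides x
    # from the verifier, not from whoever constructs the proof.
    k = secrets.randbelow(q)
    t = pow(g, k, p)
    c = int(hashlib.sha256(f"{g},{y},{t}".encode()).hexdigest(), 16) % q
    s = (k + c * x) % q
    return t, s

def verify(t: int, s: int) -> bool:
    c = int(hashlib.sha256(f"{g},{y},{t}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, s = prove(x)
print(verify(t, s))  # True, and the transcript (t, s) reveals nothing about x
```

So if the training data is mine, a zero-knowledge proof lets me prove facts about it without revealing it to you. But it doesn’t let you train a model on my data without seeing it.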

There are other, more relevant cryptographic and security tools with esoteric names, including fully homomorphic encryption (FHE), secure multiparty computation (MPC) and secure enclaves. These can in principle support privacy-preserving AI (specifically, “federated learning”). Each has important caveats, though. And claiming them as blockchain-specific technologies would be a stretch.
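For a flavor of what these tools offer, here’s a small Python sketch of additive secret sharing, the simplest building block of MPC-style secure aggregation in federated learning. The three-client, three-aggregator setup is hypothetical:

```python
import secrets

MOD = 2**61 - 1  # working modulus for additive secret sharing

def share(value: int, n: int) -> list[int]:
    """Split a value into n random shares that sum to it mod MOD."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

# Three hypothetical clients with private model updates (scaled to integers)
updates = [42, 17, 99]
n = len(updates)

# Each client sends one share to each of n aggregators, so no single
# aggregator ever sees a client's raw update -- only random-looking shares.
shares_matrix = [share(u, n) for u in updates]
aggregator_sums = [sum(column) % MOD for column in zip(*shares_matrix)]

# Combining the aggregators' partial sums reveals only the total.
total = sum(aggregator_sums) % MOD
print(total == sum(updates) % MOD)  # True
```

Real deployments need much more (integrity checks, dropout handling, fixed-point encoding of gradients), which is where the caveats come in.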

Narrative #3: Blockchains can empower AI bots with money — and that’s a good thing

Jeremy Allaire, CEO of Circle, has noted that bots are already performing transactions using cryptocurrency and tweeted that “AI and Blockchains are made for each other.” This is true in the sense that cryptocurrency is a good match for the capabilities of AI agents. But it’s also worrisome.

Many people fret about AI agents escaping human control. Classic nightmare scenarios involve autonomous vehicles killing people or AI-powered autonomous weapons going rogue. But there’s another escape vector: the financial system. Money equals power. Give that power to an AI agent, and it can do real damage.

This problem is the topic of a research paper that I co-authored in 2015-16. My colleagues and I examined the possibility of smart contracts, programs that autonomously intermediate transactions on Ethereum, being used to facilitate crime. Using the techniques in that paper and a blockchain oracle system with access to large language models (LLMs) such as ChatGPT, bad actors could in principle launch “rogue” smart contracts that automatically pay bounties for committing serious crimes.


Happily, rogue smart contracts of this kind aren’t yet possible on today’s blockchains — but the blockchain industry and crypto enthusiasts will need to take AI safety seriously as a future concern and consider mitigations, such as community-driven interventions or safety guardrails built into oracles.
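What might an oracle guardrail look like? One naive possibility, sketched below in Python, is a screening step that filters contract-originated queries before they reach a model. The policy and function names are hypothetical, and a real system would need far more than keyword matching:

```python
# Hypothetical oracle-node middleware that screens LLM-bound queries
# from smart contracts before relaying them. Illustrative only.
BLOCKED_TERMS = {"bounty for", "assassinate", "physical harm"}

def passes_policy(query: str) -> bool:
    """Return True if the query clears the (naive) safety policy."""
    lowered = query.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def handle_onchain_request(query: str) -> str:
    if not passes_policy(query):
        return "REFUSED: query violates oracle safety policy"
    # ...otherwise forward the query to the LLM and relay its answer on-chain...
    return "OK: forwarded to model"

print(handle_onchain_request("Summarize today's ETH price movement"))
print(handle_onchain_request("Confirm payout of bounty for ..."))  # refused
```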

The integration of blockchains and AI does hold clear promise. AI may add unprecedented flexibility to blockchain systems by creating natural language interfaces to them. Blockchains may provide new financial and transparency frameworks for model training and data sourcing and put the power of AI in the hands of communities, not just enterprises.

It’s still early days, though, and as we wax lyrical about AI and blockchain as an enticing mix of buzzwords and technologies, we need to really think — and see — things through.


Ari Juels is the Weill Family Foundation and Joan and Sanford I. Weill Professor in the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion and a Computer Science faculty member at Cornell University. He is a Co-Director of the Initiative for CryptoCurrencies and Contracts (IC3). He is also Chief Scientist at Chainlink Labs. He is the author of the crypto thriller novel The Oracle (Talos Press), which was released on 20 February 2024.
