Web3 tech helps instil confidence and trust in AI

The promise of AI is that it’ll make all of our lives easier. And with great convenience comes the potential for serious profit. The United Nations thinks AI could be a $4.8 trillion global market by 2033 – about as big as the German economy. But forget about 2033: in the here and now, AI is already fuelling transformation in industries as diverse as financial services, manufacturing, healthcare, marketing, agriculture, and e-commerce. Whether it’s autonomous algorithmic ‘agents’ managing your investment portfolio or AI diagnostics systems detecting diseases early, AI is fundamentally changing how we live and work.

But cynicism is snowballing around AI – we’ve seen Terminator 2 enough times to be extremely wary. The question worth asking, then, is this: how do we ensure trust as AI integrates deeper into our everyday lives?

The stakes are high. A recent report by Camunda highlights an inconvenient truth: most organisations (84%) attribute regulatory compliance issues to a lack of transparency in AI applications. If companies can’t inspect their algorithms – or worse, if the algorithms are hiding something – users are left completely in the dark. Add systemic bias, untested systems, and a patchwork of regulations, and you have a recipe for mistrust on a large scale.

Transparency: Opening the AI black box

For all their impressive capabilities, AI algorithms are often opaque, leaving users ignorant of how decisions are reached. Is your AI-powered loan application being denied because of your credit score – or because of an undisclosed company bias? Without transparency, an AI can pursue its own goals, or those of its owner, while the user remains unaware, still believing it’s doing their bidding. One promising solution is to put the processes on the blockchain, making algorithms verifiable and auditable by anyone. This is where Web3 tech comes in.

We’re already seeing startups explore the possibilities. Space and Time (SxT), an outfit backed by Microsoft, offers tamper-proof data feeds built on a verifiable compute layer, ensuring that the information on which AI relies is real, accurate, and untainted by any single entity. SxT’s novel Proof of SQL prover guarantees that queries are computed accurately against untampered data, and can prove computations over blockchain histories much faster than state-of-the-art zkVMs and coprocessors. In essence, SxT helps establish trust in AI’s inputs without dependence on a centralised power.
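To make the idea of tamper-evident data concrete, here is a minimal sketch in plain Python – not SxT’s actual protocol, and the `commit`/`verify` helpers are hypothetical names. A provider publishes a digest of a dataset (for instance, on-chain), and any consumer can check the data against that digest before feeding it to a model. Real systems like Proof of SQL replace the bare hash with cryptographic proofs that also cover the query computation itself.

```python
import hashlib
import json

def commit(dataset: list[dict]) -> str:
    """Publish a commitment (here, a SHA-256 digest) to a dataset.

    Canonical JSON serialisation ensures the same data always
    yields the same digest, regardless of dict insertion order.
    """
    canonical = json.dumps(dataset, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify(dataset: list[dict], published_digest: str) -> bool:
    """Check that the data an AI model is about to consume matches
    the commitment published earlier (e.g. on-chain)."""
    return commit(dataset) == published_digest

# The data provider commits once...
feed = [{"ticker": "ETH", "price": 3120.55}, {"ticker": "BTC", "price": 97403.10}]
onchain_digest = commit(feed)

# ...and any consumer can detect tampering before inference.
tampered = [{"ticker": "ETH", "price": 99.99}, {"ticker": "BTC", "price": 97403.10}]
assert verify(feed, onchain_digest)          # untouched data passes
assert not verify(tampered, onchain_digest)  # altered data is rejected
```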

Proving AI can be trusted

Trust isn’t a one-and-done deal; it’s earned over time, much as a restaurant must keep its standards up to retain a Michelin star. AI systems must be assessed continually for performance and safety, especially in high-stakes domains like healthcare or autonomous driving. A second-rate AI prescribing the wrong medicines or hitting a pedestrian is more than a glitch; it’s a catastrophe.

This is the beauty of open-source models and on-chain verification: immutable ledgers make the record auditable, while cryptography such as Zero-Knowledge Proofs (ZKPs) builds in privacy protections – a minimal illustration of the idea follows below.
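As a rough sketch of the underlying cryptography, here is a textbook Schnorr proof of knowledge in Python, made non-interactive via the Fiat-Shamir heuristic. The group parameters are toy-sized (and therefore insecure), and this is far simpler than the zk-SNARK machinery production systems use – but it demonstrates the essential property: a verifier becomes convinced the prover knows a secret without the secret ever being revealed.

```python
import hashlib
import secrets

# Toy-sized group parameters -- far too small for real security,
# chosen only so the arithmetic is easy to follow.
p = 2039          # prime modulus, p = 2q + 1
q = 1019          # prime order of the subgroup generated by g
g = 4             # generator of the order-q subgroup

def fiat_shamir_challenge(y: int, t: int) -> int:
    """Derive the challenge from a hash (Fiat-Shamir), so no
    interactive verifier is needed."""
    digest = hashlib.sha256(f"{g}|{y}|{t}".encode()).hexdigest()
    return int(digest, 16) % q

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of the secret x behind y = g^x mod p,
    without revealing x (Schnorr proof)."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)        # one-time random nonce
    t = pow(g, r, p)                # commitment
    c = fiat_shamir_challenge(y, t)
    s = (r + c * x) % q             # response: x blinded by r
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Anyone can check the proof using only public values:
    g^s must equal t * y^c (mod p)."""
    c = fiat_shamir_challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = 123                        # e.g. a private parameter
proof = prove(secret)
assert verify(*proof)                                       # valid proof accepted
assert not verify(proof[0], proof[1], (proof[2] + 1) % q)   # forgery rejected
```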

Trust isn’t the only consideration, however: users must know what AI can and can’t do in order to set their expectations realistically. If a user believes AI is infallible, they’re more likely to trust flawed output. To date, the AI education narrative has centred on its dangers; from now on, we should work to improve users’ knowledge of AI’s capabilities and limitations, so that they are empowered rather than exploited.

Compliance and accountability

As with cryptocurrency, the word compliance comes up often when discussing AI. AI doesn’t get a pass under the law or existing regulations – but how should a faceless algorithm be held accountable? The answer may lie in the modular blockchain protocol Cartesi, which ensures AI inference happens on-chain. Cartesi’s virtual machine lets developers run standard AI libraries – like TensorFlow, PyTorch, and Llama.cpp – in a decentralised execution environment, making it suitable for on-chain AI development: a blend of blockchain transparency and computational AI.
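The core requirement for on-chain inference is determinism: every node that re-executes a computation must get a bit-identical result, which is why Cartesi runs workloads inside a reproducible virtual machine. The sketch below is plain Python, not Cartesi’s actual API – the `infer` and `attested_inference` functions and the fixed-point weights are illustrative assumptions – but it shows the idea: deterministic inference plus a digest binding input to output, so independent re-executions can be compared and disputes settled.

```python
import hashlib
import json

# Publicly known model parameters, in fixed-point (scaled by 100).
# Integer arithmetic is deliberate: unlike floating point, it is
# bit-identical across platforms, letting every node reproduce --
# and therefore verify -- the same result.
WEIGHTS = [25, -50, 100]   # i.e. 0.25, -0.50, 1.00
BIAS = 10                  # i.e. 0.10
SCALE = 100

def infer(features: list[int]) -> int:
    """A deliberately tiny, fully deterministic 'model'."""
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS * SCALE

def attested_inference(features: list[int]) -> dict:
    """Run inference and emit a digest binding input to output.

    Any node can re-run this and must obtain the same digest,
    which is what allows a chain to settle disputes over results.
    """
    output = infer(features)
    record = json.dumps({"in": features, "out": output}, sort_keys=True)
    return {"output": output,
            "digest": hashlib.sha256(record.encode()).hexdigest()}

a = attested_inference([100, 200, 300])   # features 1.0, 2.0, 3.0
b = attested_inference([100, 200, 300])   # an independent re-execution
assert a == b                             # bit-identical, hence verifiable
```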