The co-inventor of modern AI and the most cited living scientist believes he's figured out how to ensure AI is honest, incapable of deception, and never goes rogue. Yoshua Bengio – Turing Award winner and founder of LawZero – is disturbed by the many unintended drives and goals present in today's AIs, by their willingness to lie, and by their ability to tell when they're being tested. AI companies are trying to stamp out these behaviours in a 'cat-and-mouse game' that Yoshua fears they're losing.
But Yoshua is optimistic: he believes the companies can win this battle decisively with a single change to how AI models are trained, and he has been developing mathematical proofs to back up the claim. The core idea is that instead of training an AI to predict what a human would say, or to produce responses we'd rate highly, we should train it to model what's actually true.
Yoshua argues this new architecture, which he calls 'Scientist AI,' is a small enough change that we could keep almost all the techniques and data we use to train frontier AIs like Claude and ChatGPT. He also argues that the new architecture need not cost more, could be built iteratively, and might end up more capable as well as more honest.
Links to learn more, video, and full transcript: https://80k.info/bengio
Until recently, the biggest practical objection to Scientist AI was simple: the world wants agents, and Scientist AI isn't one. But in new research, Yoshua has extended the design and believes the same honest predictor can be turned into a capable agent without losing its "safety guarantees."
With the Scientist AI proposal on the table, Yoshua argues it's absurd to race to have today's untrustworthy AI models design their successors, something the leading companies are attempting to do as soon as possible.
But critics argue that the approach wouldn't prove so technically solid in practice, and that frontier capabilities are advancing so fast – and cost so much to match – that Scientist AI risks arriving too late to matter.
Host Rob Wiblin and AI pioneer Yoshua Bengio cover all this and more in today's conversation.
LawZero is hiring! https://80k.info/lawzero-jobs
Coefficient Giving is also hiring for a range of AI-related grantmaker roles: https://80k.info/ai-grantmaker-jobs
This episode was recorded on April 16, 2026.
Chapters:
Yoshua Bengio on making AI honest and safe (00:00:00)
The Scientist AI in plain English (00:02:26)
Yoshua on how Scientist AI differs from LLMs (00:06:33)
How the training data works (00:13:55)
Can this become an agent? (00:20:48)
Why Yoshua is more optimistic on alignment now (00:31:43)
Why companies can't stop racing (00:36:05)
How close to a working prototype? (00:48:27)
Honest models might be more capable (00:52:40)
"Reinforcement learning is evil" (01:00:28)
Scientist AI from guardrail to agent (01:07:31)
Can safe AI still be competent? (01:11:29)
How much will this cost? (01:18:17)
Can it generalise beyond maths and science? (01:22:13)
A UN for superintelligence (01:37:52)
Want to work with Yoshua Bengio? (01:49:32)
Why smart people ignore AI risk (01:53:00)
Don't let AI build the next AI (01:59:42)
Why the public doesn't get the real risk (02:10:34)
Why Yoshua changed his mind about AI risk (02:19:28)
Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Camera operator: Jeremy Chevillotte
Production: Nick Stockton, Elizabeth Cox, and Katy Moore