When Will We Fully Trust AI to Lead? A conversation with Eric Boyd, CVP of AI Platform
At Microsoft Build, I sat down with Eric Boyd, Corporate Vice President leading engineering for Microsoft’s AI platform, to talk about what it really means to build AI infrastructure that companies can trust – not just to assist, but to act. We get into the messy reality of enterprise adoption, why trust is still the bottleneck, and what it will take to move from copilots to fully autonomous agents. We cover:
- When we'll trust AI to run businesses
- What Microsoft learned from early agent deployments
- How AI is taking over tedious work
- The architecture behind GitHub agents (and why guardrails matter)
- Why developer interviews should include AI tools
- Agentic Web, NLweb, and the new AI-native internet
- Teaching kids (and enterprises) how to use powerful AI safely
- Eric’s take on AGI vs “just really useful tools”
If you’re serious about deploying agents in production, this conversation is a blueprint. Eric blends product realism, philosophical clarity, and just enough dad humor. I loved this one.
Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
Guest:
Eric Boyd, CVP of AI platform at Microsoft
https://www.linkedin.com/in/emboyd/
📰 Want the transcript and edited version?
Subscribe to Turing Post https://www.turingpost.com/subscribe
Chapters
0:00 The big question: When will we trust AI to run our businesses?
1:28 From code-completions to autonomous agents – the developer lens
2:15 Agent acts like a real dev and succeeds
3:25 AI taking over tedious work
3:32 Building trustworthy AI vs. convincing stakeholders to trust it
4:46 Copilot in the enterprise: early lessons and the guard-rail mindset
6:17 What is the Agentic Web?
7:55 Parenting in the AI age
9:41 What counts as AGI?
11:32 How developer roles are already shifting with AI
12:33 Timeline forecast for the next 2-5 years
13:33 Opportunities and concerns
15:57 Enterprise hurdles: identity, governance, and data-leak safeguards
16:48 Books that shaped the guest
Turing Post is a newsletter about AI's past, present, and future. We explore how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up (Jensen Huang is already in): Turing Post: https://www.turingpost.com
Follow us
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
--------
Why AI Still Needs Us? A conversation with Olga Megorskaya, CEO of Toloka
In this episode, I sit down with Olga Megorskaya, CEO of Toloka, to explore what true human-AI co-agency looks like in practice. We talk about how the role of humans in AI systems has evolved from simple labeling tasks to expert judgment and co-execution with agents – and why this shift changes everything. We get into:
- Why "humans as callable functions" is the wrong metaphor – and what to use instead
- What co-agency really means
- Why some data tasks now take days, not seconds – and what that says about modern AI
- The biggest bottleneck in human-AI teamwork (and it’s not tech)
- The future of benchmarks, the limits of synthetic data, and why it is important to teach humans to distrust AI
- Why AI agents need humans to teach them when not to trust the plan
If you're building agentic systems or care about scalable human-AI workflows, this conversation is packed with hard-won perspective from someone who’s quietly powering some of the most advanced models in production. Olga brings a systems-level view that few others can – and we even nerd out about Foucault’s Pendulum, the power of text, and the underrated role of human judgment in the age of agents.
Did you like the episode? You know the drill:
📌 Subscribe for more conversations with the builders shaping real-world AI.
💬 Leave a comment if this resonated.
👍 Like it if you liked it.
🫶 Thank you for watching and sharing!
Guest:
Olga Megorskaya, CEO of Toloka
📰 Want the transcript and edited version?
Subscribe to Turing Post https://www.turingpost.com/subscribe
Chapters
0:00 – Intro: Humans as Callable Functions?
0:33 – Evolving with ML: From Crowd Labeling to Experts
3:10 – The Rise of Deep Domain Tasks and Foundational Models
5:46 – The Next Phase: Agentic Systems and Complex Human Tasks
7:16 – What Is True Co-Agency?
9:00 – Task Planning: When AI Guides the Human
10:39 – The Critical Skill: Knowing When Not to Trust the Model
13:25 – Engineering Limitations vs. Judgment Gaps
15:19 – What Changed Post-ChatGPT?
18:04 – Role of Synthetic vs. Human Data
21:01 – Is Co-Agency a Path to AGI?
25:08 – How To Ensure Safe AI Deployment
27:04 – Benchmarks: Internal, Leaky, and Community-Led
28:59 – The Power of Text: Umberto Eco and AI
Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Semenova explores how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up: Turing Post: https://www.turingpost.com
If you’d like to keep following Olga and Toloka:
https://www.linkedin.com/in/omegorskaya/
https://x.com/TolokaAI
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
--------
When Will We Train Once and Learn Forever? Insights from Dev Rishi, CEO and co-founder @Predibase
What it actually takes to build models that improve over time. In this episode, I sit down with Devvret Rishi, CEO and co-founder of Predibase, to talk about the shift from static models to continuous learning loops, the rise of reinforcement fine-tuning (RFT), and why the real future of enterprise AI isn’t chatty generalists – it’s focused, specialized agents that get the job done. We cover:
- The real meaning behind "train once, learn forever"
- How RFT works (and why it might replace traditional fine-tuning)
- What makes inference so hard in production
- Open-source model gaps – and why evaluation is still mostly vibes
- Dev’s take on agentic workflows, intelligent inference, and the road ahead
If you're building with LLMs, this conversation is packed with hard-earned insights from someone who's doing the work – and shipping real systems. Dev is a remarkably structured thinker! I really enjoyed this conversation.
Did you like the video? You know what to do:
📌 Subscribe for more deep dives with the minds shaping AI.
Leave a comment if you have something to say.
Like it if you liked it.
That’s it.
Oh yep, one more thing: Thank you for watching and sharing this video. We truly appreciate you.
Guest:
Devvret Rishi, co-founder and CEO at Predibase
https://predibase.com/
If you don’t see a transcript, subscribe to receive our edited conversation as a newsletter: https://www.turingpost.com/subscribe
Chapters:
00:00 - Intro
00:07 - When Will We Train Once and Learn Forever?
01:04 - Reinforcement Fine-Tuning (RFT): What It Is and Why It Matters
03:37 - Continuous Feedback Loops in Production
04:38 - What's Blocking Companies From Adopting Feedback Loops?
05:40 - Upcoming Features at Predibase
06:11 - Agentic Workflows: Definition and Challenges
08:08 - Lessons From Google Assistant and Agent Design
08:27 - Balancing Product and Research in a Fast-Moving Space
10:18 - Pivoting After the ChatGPT Moment
12:53 - The Rise of Narrow AI Use Cases
14:53 - Strategic Planning in a Shifting Landscape
16:51 - Why Inference Gets Hard at Scale
20:06 - Intelligent Inference: The Next Evolution
20:41 - Gaps in the Open Source AI Stack
22:06 - How Companies Actually Evaluate LLMs
23:48 - Open Source vs. Closed Source Reasoning
25:03 - Dev’s Perspective on AGI
26:55 - Hype vs. Real Value in AI
30:25 - How Startups Are Redefining AI Development
30:39 - Book That Shaped Dev’s Thinking
31:53 - Is Predibase a Happy Organization?
32:25 - Closing Thoughts
Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Semenova explores how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up: Turing Post: https://www.turingpost.com
FOLLOW US
Devvret and Predibase:
https://devinthedetail.substack.com/
https://www.linkedin.com/company/predibase/
Ksenia and Turing Post:
https://x.com/TheTuringPost
https://www.linkedin.com/in/ksenia-se
https://huggingface.co/Kseniase
--------
When Will We Give AI True Memory? A conversation with Edo Liberty, CEO and founder @ Pinecone
What happens when one of the architects of modern vector search asks whether AI can remember like a seasoned engineer, not a goldfish savant? In this episode, Edo Liberty – founder & CEO of Pinecone and one‑time Amazon scientist – joins me to discuss true memory in LLMs. We unpack the gap between raw cognitive skill and workable knowledge, why RAG still feels pre‑ChatGPT, and the breakthroughs needed to move from demo‑ware to dependable memory stacks.
Edo explains why a vector database needs to be built from the ground up (and then rebuilt many times), argues that storage – not compute – has become the next hardware frontier, and predicts a near‑term future where ingesting a million documents is table stakes for any serious agent. We also touch on the thorny issues of truth, contested data, and whether knowledgeable AI is an inevitable waypoint on the road to AGI.
Whether you wrangle embeddings for a living, scout the next infrastructure wave, or simply wonder how machines will keep their facts straight, this conversation will sharpen your view of “memory” in the age of autonomous agents.
Let’s find out when tomorrow’s AI will finally remember what matters.
(CORRECTION: the opening slide introduces Edo Liberty as a co-founder. We apologize for this error: Edo Liberty is the Founder and CEO of Pinecone.)
Did you like the video? You know what to do:
Subscribe to the channel.
Leave a comment if you have something to say.
Like it if you liked it.
That’s all.
Thanks.
Guest:
Edo Liberty, CEO and founder at Pinecone
Website: https://www.pinecone.io/
Additional Reading:
https://www.turingpost.com/
Chapters
00:00 Intro & The Big Question – When will we give AI true memory?
01:20 Defining AI Memory and Knowledge
02:50 The Current State of Memory Systems in AI
04:35 What’s Missing for “True Memory”?
06:00 Hardware and Software Scaling Challenges
07:45 Contextual Models and Memory-Aware Retrieval
08:55 Query Understanding as a Task, Not a String
10:00 Pinecone’s Full Stack Approach
11:00 Commoditization of Vector Databases?
13:00 When Scale Breaks Your Architecture
15:00 The Rise of Multi-Tenant & Micro-Indexing
17:25 Dynamically Choosing the Right Indexing Method
19:05 Infrastructure for Agentic Workflows
20:15 The Hard Questions: What is Knowledge?
21:55 Truth vs Frequency in AI
22:45 What is “Knowledgeable AI”?
23:35 Is Memory a Path to AGI?
24:40 A Book That Shaped a CEO – Endurance by Shackleton
26:45 What Excites or Worries You About AI’s Future?
29:10 Final Thoughts: Sea Change is Here
Turing Post is a newsletter about AI's past, present, and future. We explore how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up: Turing Post: https://www.turingpost.com
FOLLOW US
Edo Liberty: https://www.linkedin.com/in/edo-liberty-4380164/
Pinecone: https://x.com/pinecone
Ksenia and Turing Post:
Hugging Face: https://huggingface.co/Kseniase
Turing Post: https://x.com/TheTuringPost
Ksenia: https://x.com/Kseniase_
Linkedin:
TuringPost: https://www.linkedin.com/company/theturingpost
Ksenia: https://www.linkedin.com/in/ksenia-se
--------
When Will We Stop Coding? A conversation with Amjad Masad, CEO and co-founder @ Replit
What happens when the biggest advocate for coding literacy starts telling people not to learn to code? In this episode, Amjad Masad, CEO and co-founder at Replit, joins me to talk about his controversial shift in thinking – from teaching millions how to code to building agents that do it for you. Are we entering a post-coding world? What even is programming when you're just texting with a machine? We talk about Replit's evolving vision, how software agents are already powering real businesses, and why the next billion-dollar startups might be solo founders augmented by AI. Amjad also shares what still stands in the way of fully autonomous agents, how AGI fits into his long-term view, and why open source still matters in the age of AI.
Whether you're a developer, founder, or just AI-curious, this conversation will make you rethink what it means to “build software” in 2025.
Did you like the video? You know what to do:
Subscribe to the channel.
Leave a comment if you have something to say.
Like it if you liked it.
That’s all.
Thanks.
Guest:
Amjad Masad, CEO and co-founder at Replit
Website: https://replit.com/~
Additional Reading:
https://www.turingpost.com/p/amjad
Chapters
00:00 Why Amjad changed his mind about coding
00:55 From code to agents: the next abstraction layer
02:05 Cognitive dissonance and the birth of Replit agents
03:38 Agent V3: toward fully autonomous software developers
04:51 Engineering platforms for long-running agents
05:30 Do agents actually work in 2025?
05:48 Real-world examples: Replit agents in action
06:36 Is Replit still a coding platform?
07:43 Why code generation beats no-code platforms
08:22 Can AI agents really create billionaires?
10:59 Every startup is now an AI startup
12:31 Solo founders and the rise of one-person AI companies
14:00 What Amjad thinks AGI really is
17:46 Replit as a habitat for AI
19:50 Open source tools vs internal no-code systems
21:02 Replit's evolving community vision
22:19 MCP vs A2A: who’s winning the protocol game
23:48 The books that shaped Amjad’s thinking about AI
25:47 What excites Amjad most about an AI-powered future
Turing Post is a newsletter about AI's past, present, and future. Publisher Ksenia Semenova explores how intelligent systems are built – and how they’re changing how we think, work, and live.
Sign up: Turing Post: https://www.turingpost.com
FOLLOW US
Amjad: https://x.com/amasad
Replit: https://x.com/replit
Ksenia and Turing Post:
Hugging Face: https://huggingface.co/Kseniase
Turing Post: https://x.com/TheTuringPost
Ksenia: https://x.com/Kseniase_
Linkedin:
TuringPost: https://www.linkedin.com/company/theturingpost
Ksenia: https://www.linkedin.com/in/ksenia-se
Inference is Turing Post’s way of asking the big questions about AI – and refusing easy answers. Each episode starts with a simple prompt: “When will we…?” – and follows it wherever it leads.
Host Ksenia Se sits down with the people shaping the future firsthand: researchers, founders, engineers, and entrepreneurs. The conversations are candid, sharp, and sometimes surprising – less about polished visions, more about the real work happening behind the scenes.
It’s called Inference for a reason: opinions are great, but we want to connect the dots – between research breakthroughs, business moves, technical hurdles, and shifting ambitions.
If you’re tired of vague futurism and ready for real conversations about what’s coming (and what’s not), this is your feed. Join us – and draw your own inference.