AI and Education: Inside the AI Solution Partnering with Denver Public Schools, w/ Dr. Michael Everest
Could AI actually improve public education? Not just automate it, but make it more personalized, more equitable — and even more human?

We explore this possibility with Dr. Michael Everest, founder of edYOU, an AI tutoring platform being piloted in a Denver-area school district. While many worry that AI could become a shortcut for students to avoid real learning, Everest argues the opposite — that AI can reinforce understanding, boost confidence, and offer 24/7 support tailored to each student’s needs.

In this episode of AI-Curious, we dig into the real-world mechanics of how this works, including partnerships with schools, how teachers interact with the platform, and what kind of results they’re seeing so far.

We also ask the tough questions: What about data privacy? What about bias and hallucinations? Is there a risk we’re outsourcing critical thinking? And what does the future of education look like if every student has a lifelong AI companion?

Topics include:
The promise and pitfalls of AI in classrooms
edYOU’s pilot program with the Adams 14 School District
How the AI tutoring platform personalizes learning
The role of teachers in an AI-enhanced education system
Oversight, privacy, and academic integrity
The vision of a lifelong AI learning companion

Whether you’re a parent, educator, technologist, or just curious about where education is headed, this conversation offers a grounded, hopeful — and at times provocative — look at the future of learning.
--------
47:39
AI's Impact on History Writing and Journalism, w/ The New York Times Magazine's Editorial Director Bill Wasik
What happens when AI becomes a co-pilot for writers, researchers, and journalists — not in theory, but in practice?

In this episode of AI-Curious, we speak with Bill Wasik, Editorial Director of The New York Times Magazine, who recently oversaw its special issue, “Learning to Live with AI.” We explore how AI is already transforming journalism, nonfiction writing, and historical research — and why the most interesting impacts may come not from content creation, but from how we discover, organize, and interpret information.

We dig into the creative tension between AI and human storytelling, including how historians are using tools like NotebookLM to tackle research projects previously deemed impossible. Bill shares how AI can augment writing workflows without compromising editorial judgment — and why trust and authorship still matter in a world of fast content.

We also cover:
Whether AI could ever win the Booker Prize — and what that would mean (7:30)
The risks of over-relying on AI for research (19:45)
Use cases from historians and academics using ChatGPT (26:00)
The evolving AI policies at The New York Times (29:40)
How AI might transform local journalism and accountability (41:30)

Bill’s (excellent) piece, “AI Is Poised to Rewrite History. Literally.”: https://www.nytimes.com/2025/06/16/magazine/ai-history-historians-scholarship.html

The NYT Magazine’s special issue: https://www.nytimes.com/2025/06/16/magazine/using-ai-hard-fork.html
--------
48:43
The (Data-Driven) Top AI Trends, w/ the CEOs of HumanX and Read.AI
What are the top minds in AI actually talking about behind closed doors?

At the HumanX conference — arguably the flagship event in the AI ecosystem — hundreds of speakers (from CEOs to policymakers to Kamala Harris) shared their unfiltered thoughts on the state and future of artificial intelligence. But with so much happening at once, even attendees couldn’t absorb it all.

So HumanX did something novel: they partnered with Read.AI to record and synthesize every single session. The result? A real-time AI copilot for the conference and a post-event report that reveals the key themes, trends, and tensions shaping the industry.

In this episode, we speak with HumanX CEO Stefan Weitz and Read.AI CEO David Shim to unpack the insights from that report — what they signal for 2025, what business leaders should pay attention to, and what’s probably just noise.

We talk about the rise of agentic AI, the shift from AGI ambition to ROI expectations, and the practical realities of implementing AI inside large organizations. We also dig into issues of trust, open source, industry-specific adoption, and how AI is starting to reshape roles from customer service to legal to healthcare.

Whether you’re in strategy, ops, tech, or just trying to keep up, this conversation offers a data-driven pulse check on where enterprise AI is headed.

Highlights & Timestamps:
[1:00] – How Read.AI became the official AI copilot of the HumanX conference
[3:10] – “You can’t be everywhere at once” — the problem this tech solves at events
[6:15] – The most talked-about concept at HumanX: agentic AI
[7:45] – Why AGI hype is shifting toward practical use cases with agents
[8:58] – The fast hype-decay cycle of AI and the emerging focus on outcomes
[12:26] – Open source, cost savings, and why business leaders care about transparency
[14:19] – Trust as the “anchoring tenet” of enterprise AI adoption
[16:45] – Real ROI: how Read.AI identified $10M in sales pipeline in 30 days
[20:03] – Why companies are hiding their AI wins from competitors
[22:43] – Cross-industry learnings: how healthcare patterns may apply to other sectors
[25:47] – The “put up or shut up” moment: 2025 as the year AI must deliver
[29:06] – What business leaders should do before launching AI agent initiatives
[35:03] – The #1 mistake orgs make with AI: failing to assign ownership
[37:09] – Predictions: personalization, interoperability, and privacy friction ahead
[42:28] – How Stefan and David personally use AI — for work, fun, and creative hacking

Links & Mentions:
HumanX – Flagship AI conference co-founded by Stefan Weitz
Read.AI – Productivity-focused AI platform led by David Shim
Suno – AI music generation tool mentioned by Stefan
Replit – AI coding sandbox used by Stefan for strategy visualization
Veo by Google DeepMind – AI video generation tool referenced by David

🎧 Subscribe to AI-Curious:
• Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify: https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube: https://www.youtube.com/@jeffwilser
What if we’re all chasing the wrong kind of AI? Dr. Ruchir Puri, Chief Scientist of IBM, argues that Artificial General Intelligence (AGI) is overrated — and that we should be focusing instead on AUI: Artificial Useful Intelligence. This is a pragmatic, business-focused approach to AI that emphasizes real-world value, measurable outcomes, and implementable solutions.

In this episode of AI-Curious, we explore what AUI actually looks like in practice. We discuss how to bring AI into your organization (even if you’re just getting started), why IBM is betting big on small language models (SLMs), and how companies can move beyond hype toward real, trustworthy AI agents that do actual work.

You’ll also hear:
Why AI usefulness is a function of both quality and cost [00:11:00]
The “crawl, walk, run” strategy IBM recommends for business adoption [00:14:00]
Internal IBM examples: HR systems and coding assistants [00:16:00]
Why SLMs may be a smarter bet than LLMs for many enterprises [00:37:00]
A breakdown of how agentic systems are evolving to reflect, act, and self-correct [00:41:00]

Whether you’re leading a startup or an enterprise, this conversation will help you reframe how you think about deploying AI — starting not with hype, but with value.

🎧 Subscribe to AI-Curious:
• Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify: https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube: https://www.youtube.com/@jeffwilser
--------
47:08
A Conversation with the AI Pioneer Who Coined ‘AGI’ — Dr. Ben Goertzel
What exactly is AGI — Artificial General Intelligence — and how close are we to achieving it? Will it transform the world for better or worse? And how can we even tell when true AGI has arrived?

In this episode of AI-Curious, we sit down with Dr. Ben Goertzel, the iconic computer scientist who coined the term AGI more than 20 years ago. As the founder of SingularityNET and the Artificial Superintelligence Alliance, Ben has spent decades thinking about the architecture, risks, and potential of general intelligence.

We explore why today’s large language models (LLMs), while powerful, still fall short of true AGI — and what will be needed to bridge that gap. We dive into Ben’s prediction that AGI could arrive within just 1 to 3 years, and why he believes it will likely be decentralized. Along the way, we unpack some of the key ideas from his recent “10 Reckonings of AGI” — a candid look at the social, economic, and existential questions we must face as AGI reshapes human life.

Topics include:
[00:04:00] What AGI really means vs. current LLMs
[00:10:00] Are we reaching the limits of current AI architectures?
[00:13:00] How will we know when AGI has truly arrived?
[00:17:00] The “PhD test” for human-level AGI
[00:19:00] AGI timeline predictions (1–3 years? 2029?)
[00:29:00] The 10 Reckonings of AGI: key societal impacts
[00:36:00] The gap between AGI and superintelligence
[00:44:00] Why a decentralized AGI might be safer
[00:51:00] Surprising upsides of a post-AGI world

If you’re curious about the future of artificial intelligence, this conversation offers a rare and unfiltered perspective from one of the field’s most original thinkers.

SingularityNET: https://singularitynet.io/
Ben Goertzel on X: https://x.com/bengoertzel

🎧 Subscribe to AI-Curious:
• Apple Podcasts: https://podcasts.apple.com/us/podcast/ai-curious-with-jeff-wilser/id1703130308
• Spotify: https://open.spotify.com/show/70a9Xbhu5XQ47YOgVTE44Q?si=c31e2c02d8b64f1b
• YouTube: https://www.youtube.com/@jeffwilser
A podcast that explores the good, the bad, and the creepy of artificial intelligence. Weekly longform conversations with key players in the space, ranging from CEOs to artists to philosophers. Exploring the role of AI in film, health care, business, law, therapy, politics, and everything from religion to war. Featured by Inc. Magazine as one of "4 Ways to Get AI Savvy in 2024": "Host Jeff Wilser [gives] you a more holistic understanding of AI — such as the moral implications of using it — and his conversations might even spark novel ideas for how you can best use AI in your business."