
AI Broke the Web’s Social Contract, w/ Tony Stubblebine, CEO of Medium
15/01/2026 | 47 mins.
What happens when AI can “read the whole internet” but the internet stops volunteering its best work?

In this episode of AI-Curious, we talk with Tony Stubblebine, CEO of Medium, about what he calls AI’s “broken social contract” with the web, and why the next era may be less about a “dead internet” and more about a dead public internet. We unpack the incentives that made the open web thrive, how AI search summaries change the traffic bargain, and what a realistic path forward could look like for publishers, platforms, and writers.

Key topics we cover:
- Why generative AI broke the web’s old value exchange, and what “social contract” means in practical terms (00:03:24)
- Tony’s “three Cs” framework for a healthier AI ecosystem: consent, credit, compensation (00:05:13)
- The publisher response spectrum: blocking crawlers, fighting spam/slop, and what happens if collaboration fails (00:04:25)
- The shift from public publishing to private communities (Discords, group chats, newsletters) and what drives that retreat (00:07:06)
- How AI search summaries can cut the incentive to publish publicly by reducing click-through and traffic (00:08:21)
- Why AI systems still depend on human source material, and what happens when the best content moves behind “closed doors” (00:09:27)
- Cloudflare’s role in the escalating crawler arms race, including large-scale blocking and other countermeasures (00:16:48)
- A proposed solution: an internet-wide licensing standard instead of one-off deals, including the Really Simple Licensing (RSL) approach (00:18:07)
- What “paying creators” could look like in practice, including opt-in/opt-out controls and better transparency for writers (00:19:33)
- “Dead internet theory” vs. the more plausible outcome: a dead public internet, and why Tony is cautiously optimistic about a new equilibrium (00:23:06)
- The “second wave” of AI: moving from replacement to augmentation, and how Medium is thinking about AI tools that support flow state rather than write for you (00:26:03)
- Why AI detectors don’t solve the problem, and why Medium focuses on quality and reader value as the enforceable standard (00:34:04)
- Advice for writers: the difference between the creator economy and the “expert economy,” and what’s likely to be more sustainable (00:38:43)
- Tony’s prediction: “trust but verify” becomes the balance point, and the web finds an equilibrium because AI can’t function without public sources (00:43:27)

Guest
Tony Stubblebine is the CEO of Medium and a leading voice on the evolving relationship between generative AI and the open web.

Mentioned in this conversation
Medium’s framework: Consent, Credit, Compensation

Follow AI-Curious on your favorite podcast platform: Apple Podcasts, Spotify, YouTube, All Other Platforms

The “Talk With Einstein” AI Rule You Should Follow, w/ New Yorker Cartoonist Victor Varnado
08/01/2026 | 41 mins.
Is AI making creators more powerful… or more replaceable? And if you start with a blank page for a living, there’s an even sharper question underneath it: should AI write for you… or write with you?

In this episode of AI-Curious, we sit down with Victor Varnado—a New Yorker cartoonist, comedian, actor, and creative technologist—to explore a grounded, practical philosophy for using AI without becoming a passenger.

Victor draws a sharp line between generative AI (press a button, get “a masterpiece”) and what he’s more interested in: transformative AI—tools that take messy raw material (notes, transcripts, half-ideas) and turn it into something structured enough to revise. We also talk about how taste becomes a real moat in an AI-saturated world, why “vibe coding” can go sideways fast when you don’t understand the domain, and how Victor’s accessibility-first mindset shapes everything he builds.

Along the way, Victor breaks down his tools—including Magic Bookifier and the Writing Coach—designed to get writers from zero to first draft faster through guided questions and structured interviews. He frames the goal with a concept he calls cognitive discourse: using AI like a thinking partner that makes you sharper, not a crutch that makes you lazier. His metaphor is perfect: do you talk with Einstein and get smarter… or do you just hand Einstein your homework?

We wrap by looking at Victor’s newest effort, BrightWrite, which aims to bring structured, supportive AI into education—especially for students facing cognitive or creative barriers. Victor also shares discount/freebie codes for listeners who want to try his tools, and we’ll include the specifics in the show notes and links.

Topics we cover:
Victor’s multi-hyphenate path: comedy, New Yorker cartoons, production, and tech
Why “transformative AI” is more useful than one-click generative output
The Writing Coach approach: structured interviews that turn your ideas into drafts
“Cognitive discourse” vs. “cognitive offload” (and the Einstein metaphor)
Why taste may be the creative moat in an AI-heavy world
The risks of “vibe coding” outside your expertise
BrightWrite and the promise (and limits) of accessibility-first AI in education
Practical ways to use AI for writing, revision, and everyday communication

Guest: Victor Varnado
Tools mentioned: Magic Bookifier, Writing Coach, BrightWrite

The New Year Reality Check: Who’s Really Adopting AI, w/ Ramp Economist Ara Kharazian
01/01/2026 | 43 mins.
What’s actually happening with AI adoption inside U.S. businesses—and how much of the public discourse is just vibes?

In this episode of AI-Curious, we dig into the hard numbers behind AI spend and adoption with Ara Kharazian, an economist at Ramp and the leader of Ramp Economics Lab. Using anonymized, real-time corporate spend data across tens of thousands of businesses, Ara shares what the “receipts” reveal about who’s buying AI, how fast budgets are shifting, and where the hype diverges from reality.

What we cover
Ramp’s unique vantage point: why transaction-level corporate spend data can reveal real behavior—not just surveys or anecdotes
AI adoption is rising: what Ramp’s data suggests about the share of businesses paying for AI tools and APIs
The “ROI” question: how we can infer whether AI is working (hint: contract sizes and renewals)
Where spend is concentrating: tech and finance lead—but healthcare and manufacturing are climbing faster than many expect
Chatbots vs. real workflow change: why “everyone has a chatbot” isn’t the same as transformative productivity
Who’s winning the model wars: OpenAI’s default position, Anthropic’s growth, and how buyers behave differently
Bundled AI and hidden usage: why Copilot/Gemini adoption is hard to measure, and why employees expensing personal accounts matters
Trust, governance, and observability: the fast-growing category of tools that monitor AI outputs and reduce reputational or security risk
996 culture is real: what corporate receipts suggest about weekend work patterns in San Francisco
Open source reality check: what the data suggests about DeepSeek-style hype vs. actual enterprise adoption
Looking ahead: why we likely won’t see a reversal in AI adoption—and why it’s still unclear who the ultimate winners will be

Timestamps:
00:06:00 – What Ramp is, and what “Ramp Economics Lab” tracks
00:08:00 – The biggest headline: adoption, spend, and contract sizes
00:11:00 – Which industries are adopting fastest (including surprises)
00:12:00 – Chatbots vs. productivity gains: where AI is actually moving the needle
00:15:00 – Signals of ROI: contract renewals and retention trends
00:16:00 – OpenAI vs. Anthropic: what spend reveals about “default” vs. multi-provider behavior
00:18:00 – Why Copilot/Gemini are tricky to track (bundled AI)
00:21:00 – The real blocker: trust in outputs (and how companies respond)
00:26:00 – The rise of AI observability / governance tooling
00:30:00 – What spend data can reveal about how work is changing (996 / SF)
00:33:00 – How rare it is to see a trend that truly moves an economy
00:36:00 – Is AI spend crowding out other budgets?
00:38:00 – The narratives that bother Ara most: data-poor hot takes
00:42:00 – Predictions: continued growth, unclear winners
00:44:00 – DeepSeek and open source: what actually happened in the spend data

If you want to understand AI adoption the way a CFO would—through budgets, renewals, and real purchasing behavior—this conversation will give you a sharper, more grounded lens.

Guest: Ara Kharazian, Economist at Ramp; Lead, Ramp Economics Lab

How AI Will Reshape the Economy, w/ Anindya Ghose, the Director of AI at NYU Stern
29/12/2025 | 43 mins.
What does an AI-driven economy actually look like when you zoom out far enough—and what does that mean for jobs, power, and policy?

In this episode of AI-Curious, we talk with Anindya Ghose (NYU Stern; author of Thrive) about the “AI economy blueprint”: how the modern economy starts to resemble a vertically layered tech stack—from energy and chips all the way up to consumer-facing apps—and why that stack is quietly reshaping everything from corporate strategy to the future of work.

We cover what’s changing fastest, where leaders are getting tripped up, and what skills matter most if you want to stay valuable in a world of copilots and agents.

Topics
The AI economy as a tech stack: energy → semiconductors → data centers/cloud → LLMs → applications, and why the consumer “app layer” is just the visible tip.
Why every company is becoming an AI company (even airlines, banks, retailers)—and how the real dependency sits beneath the apps in infrastructure and model providers.
Consolidation and vertical integration: how a handful of companies can span multiple layers (chips, cloud, models), and what that could mean for pricing power and competition.
Jobs and labor markets: why disruption is outpacing creation in the near term, and a provocative forecast for how “portfolio careers” could become the norm.
Reskilling at scale: from self-learning to certificates to formal programs—and why government-led approaches may be required.
A concrete framework from Singapore: a “Marshall Plan”-style push to fund AI upskilling and retooling.
Agentic AI reality check: why many agent projects fail in practice—and the unglamorous workflow work companies often skip.
Regulation, in three arenas: competition/antitrust dynamics across the stack, copyright/fair use lawsuits, and whether consumers should be told when content is AI-generated.
Geopolitics of models: the global trade-offs between Western model ecosystems and lower-cost open-source alternatives abroad.
The underrated career edge: not just knowing what GenAI can do—but knowing when it fails and why, and how that becomes a durable source of leverage.

About the guest
Anindya Ghose is a professor at NYU Stern and leads NYU’s MS in Business Analytics & AI program. His work focuses on AI, digital transformation, and the modern data-driven economy. He’s also the co-author of Thrive.

If you want to pressure-test your own AI strategy for 2026, this episode is a good place to start: think “stack,” not “tool.”

AI in Hospitals: Less Burnout, Fewer Errors, Better Care? w/ Dr. Michael Karch
27/12/2025 | 46 mins.
Could AI actually make healthcare more human—less paperwork, less burnout, fewer errors—or is it mostly hype layered on top of a legacy system?

In this episode of AI-Curious, we talk with Dr. Michael Karch, an orthopedic surgeon (hip + knee replacement) with ~30 years of clinical experience who also made a serious pivot into data, machine learning, and AI strategy for healthcare. We dig into what hospitals are actually doing with AI today, where the real friction points are, and what a smarter, safer AI-enabled hospital might look like over the next decade-plus.

What we cover
Why healthcare is a uniquely hard (and high-stakes) environment for AI adoption
The “tip of the iceberg” wins: reducing documentation burden, coding friction, and other admin nonsense that fuels clinician burnout
Ambient AI + transcription: what it does well, what can go wrong, and why “human + machine together” often beats either alone
Where AI is already showing traction: operational efficiency, OR workflow measurement, and process improvements that sound boring but matter
Diagnosis and pattern recognition: why radiology/dermatology are natural early battlegrounds for supervised learning models
A provocative analogy: why surgery shares surprising similarities with autonomous driving (stochastic, partially observable, high consequence)
The “data flywheel” and why healthcare’s massive unstructured data may be the real goldmine
A 2040 vision: embodied surgical intelligence, personalized medicine, capturing “tacit knowledge,” and the possibility of hologram/remote expert augmentation
Digital twins as behavior change tools—using simulation to make risk feel real
The biggest bottleneck: agency, vocabulary, and getting clinicians to the “young adult at the table” stage instead of having tech imposed on them

If you care about AI but you’re tired of hype—and you want concrete examples, realistic risks, and a forward-looking view that still stays grounded—this one’s for you.



AI-Curious with Jeff Wilser