
LessWrong posts by zvi

474 episodes

  • "Housing Roundup #13: More Dakka" by Zvi

    06/04/2026 | 28 mins.
    Build more housing where people want to live.

    The rest is commentary. If there is enough housing, it will be affordable, people will afford more house, and people will be able to live where they want to live.

    It's always been that simple.

    Increased supply of any kind of housing increases affordability of all kinds of housing.

    Are there other things that would also be helpful? Yes, but they're commentary.

    Freeing up existing underused housing, for example, is helpful. It is commentary.

    Let's enjoy the lull and see how much of an Infrastructure Week we can do.

    New Levels Of Saying Quiet Part Out Loud Even For This Guy

    Trump opposes building houses where people want to live, because doing so would let people live there, which would drive down the value of existing homes.

    Acyn: Trump: I don't want to drive housing prices down. I want to drive housing prices up for people who own their homes. You can be sure that will happen.

    unusual_whales: Trump: when you make it too easy and cheap to build houses, house prices come down. I don't want to do that.

    [...]
    ---
    Outline:
    (00:48) New Levels Of Saying Quiet Part Out Loud Even For This Guy
    (02:30) Whose Side Are You On
    (03:25) Your Intervention Only Partly Solves The Problem So We Are Against It
    (04:21) More Dakka
    (05:32) Abundance
    (06:44) Changes In Rent Are Largely About Changes In Supply
    (07:30) Austin
    (08:46) America
    (10:01) Minnesota
    (11:20) Debunking Obvious Nonsense About Monopolistic Practices
    (21:24) Age Of The Median Homebuyer
    (24:27) Property Taxes Improve Allocation Efficiency
    (27:21) More Of Old People Inefficiently And Systematically Stealing From Young People
    ---

    First published:

    April 6th, 2026


    Source:

    https://www.lesswrong.com/posts/eSwdsDTnqigQJPfkw/housing-roundup-13-more-dakka

    ---

    Narrated by TYPE III AUDIO.

  • "Anthropic Responsible Scaling Policy v3: Dive Into The Details" by Zvi

    03/04/2026 | 51 mins.
    Wednesday's post talked about the implications of Anthropic changing from v2.2 to v3.0 of its RSP, including that this broke promises that many people relied upon when making important decisions.

    Today's post treats the new RSP v3.0 as a new document, and evaluates it.

    First I'll go over how the RSP v3.0 works at a high level. Then I'll dive into the Roadmap and the Risk Report.

    How RSP v3.0 Works

    Normally I would pay closer attention to the exact written contents of the new RSP.

    In this case, it's not that the RSP doesn't matter. I do think the RSP will have some influence on what Anthropic chooses to do, as will the road map, as will the resulting risk reports.

    However, the fundamental design principle is flexibility and a 'strong argument,' and they can change the contents at any time, all of which means the central principle is trust.

    I read the contents as 'here are the things we are worried about and plan to do,' which mostly in practice should amount to doing what they believe is right and I don't see anything on this map that seems likely [...]
    ---
    Outline:
    (00:40) How RSP v3.0 Works
    (19:05) You Came Here For An Argument
    (21:27) The Problem Remains Unsolved
    (25:22) Wow That Thing We Did Was Pretty Risky, Huh?
    (26:18) Risk Report #1
    (28:19) Listen All Y'all It's Sabotage
    (38:05) Looking Forward
    (39:42) Claude Gov
    (40:02) What Is A Strong Argument?
    (41:12) Recursive Self-Improvement
    (42:32) Non-Novel Chemical and Biological Weapons
    (44:51) Novel Chemical and Biological Weapons
    (45:39) Cross-Cutting Content (Section 6)
    (48:48) Risk Report Report
    ---

    First published:

    April 3rd, 2026


    Source:

    https://www.lesswrong.com/posts/RtQxa5MoKk9bwEEEd/anthropic-responsible-scaling-policy-v3-dive-into-the

    ---

    Narrated by TYPE III AUDIO.

  • "AI #162: Visions of Mythos" by Zvi

    02/04/2026 | 1h 50 mins.
    Anthropic had some problem with leaks this week.

    We learned that they are sitting on a new larger-than-Opus AI model, Mythos, that they believe offers a step change in cyber capabilities.

    We also got a full leak of the source for Claude Code.

    Oh, and Axios was compromised, on the heels of LiteLLM. This looks to be getting a lot more common. Defense beats offense in most cases, but offense is getting a lot more shots on goal than it used to.

    The AI Doc: Or How I Became an Apocaloptimist came out this week. I gave it 4.5/5 stars, and I think the world would be better off if more people saw it. I am not generally a fan of documentary movies, but this is probably my new favorite, replacing The King of Kong: A Fistful of Quarters.

    There was also the usual background hum of quite a lot of things happening, including the latest iterations of various debates. We may or may not be doomed to die, but we are definitely doomed to repeat certain motions quite a few more times, and for people to be rather slow to update.

    We got some very welcome quiet on the [...]
    ---
    Outline:
    (01:41) Language Models Offer Mundane Utility
    (03:00) Heads In The Sand
    (07:05) Huh, Upgrades
    (08:10) Mythos
    (12:07) What's In A Name
    (14:59) On Your Marks
    (16:10) Choose Your Fighter
    (16:53) Get My Agent On The Line
    (17:31) Deepfaketown and Botpocalypse Soon
    (24:33) Cyber Lack Of Security
    (29:08) Fun With Media Generation
    (29:50) A Young Lady's Illustrated Primer
    (30:53) They Took Our Jobs
    (37:45) After They Take Our Jobs
    (39:16) Gell-Mann Amnesia
    (41:33) Get Involved
    (43:25) In Other AI News
    (46:41) Show Me the Money
    (51:08) Quiet Speculations
    (51:59) Explaining Persistent Model Parity
    (55:37) Take a Moment
    (01:00:54) OpenAI: The Histories
    (01:06:04) The Department of AI War
    (01:12:38) Department of AI Solidarity
    (01:13:46) Writing For The AIs
    (01:16:42) Quickly, There's No Time
    (01:16:46) The Quest for Sane Regulations
    (01:18:10) Chip City
    (01:20:07) You Received The Federal Framework
    (01:21:02) The Week in Audio
    (01:24:22) Rhetorical Innovation
    (01:27:48) I Am The Very Human Of A Frontier Language Model
    (01:38:01) Aligning a Smarter Than Human Intelligence is Difficult
    (01:41:22) Aligning Fake Graphs Can Also Be Difficult
    (01:49:32) The Lighter Side
    ---

    First published:

    April 2nd, 2026


    Source:

    https://www.lesswrong.com/posts/iBeTkFuQwjaRPo3Ad/ai-162-visions-of-mythos

    ---

    Narrated by TYPE III AUDIO.

  • "Anthropic Responsible Scaling Policy v3: A Matter of Trust" by Zvi

    01/04/2026 | 46 mins.
    Anthropic has revised its Responsible Scaling Policy to v3.

    The changes involved include abandoning many previous commitments, including one not to move ahead if doing so would be dangerous, citing that given competition they feel blindly following such a principle would not make the world safer.

    Holden Karnofsky advocated for the changes. He maintains that the previous strategy of specific commitments was in error, and instead endorses the new strategy of having aspirational goals. He was not at Anthropic when the commitments were made.

    My response to this will be two parts.

    Today's post talks about considerations around Anthropic going back on its previous commitments, including asking to what extent Anthropic broke promises or benefited from people reacting to those promises, and how we should respond.

    It is good, given that Anthropic was not going to keep its promises, that it came out and told us that this was the case, in advance. Thank you for that.

    I still think that Anthropic importantly broke promises, that people relied upon, and did so in ways that made future trust and coordination, both with Anthropic and between labs and governments, harder. Admitting to the situation [...]
    ---
    Outline:
    (01:47) Promises, Promises
    (03:10) Anthropic Responsible Scaling Policy v3
    (03:32) That Could Have Gone Better
    (04:36) I'm Just Not Ready To Make a Commitment
    (08:20) So Cold, So Alone
    (12:24) I'm Sorry I Gave You That Impression
    (19:44) Fool Me Twice
    (23:27) In My Defense I Was Left Unsupervised
    (26:01) Drake Thomas Finds The Missing Mood
    (28:49) Things That Could Have Been Brought To My Attention Yesterday (1)
    (30:32) Things That Could Have Been Brought To My Attention Yesterday (2)
    (36:13) What We Have Here Is A Failure To Communicate
    (39:21) You Should See The Other Guy
    (42:17) I Was Only Kidding
    (43:12) They Can't Keep Getting Away With This
    (44:07) Damn Your Sudden But Inevitable Betrayal
    ---

    First published:

    April 1st, 2026


    Source:

    https://www.lesswrong.com/posts/AkzauoTt2Lwn2yAvj/anthropic-responsible-scaling-policy-v3-a-matter-of-trust

    ---

    Narrated by TYPE III AUDIO.
  • "Movie Review: The AI Doc" by Zvi

    31/03/2026 | 15 mins.
    The AI Doc: Or How I Became an Apocaloptimist is a brilliant piece of work.

    (This will be a fully spoilerific overview. If you haven't seen The AI Doc, I recommend seeing it; it is about as good as it could realistically have been, in most ways.)

    Like many things, it only works because it is centrally real. The creator of the documentary clearly did get married and have a child, freak out about AI, ask questions of the right people out of worry about his son's future, freak out even more now with actual existential risk for (simplified versions of) the right reasons, go on a quest to stop freaking out and get optimistic instead, find many of the right people for that and ask good non-technical questions, get somewhat fooled, listen to mundane safety complaints, seek out and get interviews with the top CEOs, try to tell himself he could ignore all of it, then decide not to end on a bunch of hopeful babies and instead have a call for action to help shape the future.

    The title is correct. This is about 'how I became an Apocaloptimist,' and why he wanted to be that, as opposed to [...]
    ---
    Outline:
    (03:37) Babies Are Awesome
    (04:58) People Are Worried About AI Killing Everyone
    (06:17) Freak Out
    (06:47) Other People Are Not Worried About AI Killing Everyone
    (09:27) Deepfaketown and Botpocalypse Soon
    (10:15) Stopping The AI Race and A Narrow Path
    (11:47) CEOs Know Their Roles
    (13:28) The Call To Action
    ---

    First published:

    March 31st, 2026


    Source:

    https://www.lesswrong.com/posts/ppC6geY4FxGYifrWx/movie-review-the-ai-doc

    ---

    Narrated by TYPE III AUDIO.


About LessWrong posts by zvi

Audio narrations of LessWrong posts by zvi