
LessWrong posts by zvi


Available Episodes

5 of 365
  • “Anthropic Commits To Model Weight Preservation” by Zvi
    Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company. They will also be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced. These are excellent first steps, steps beyond anything I’ve seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models. To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think they go too far. Even where the requests are worthwhile (and I don’t always think they are), one must be careful not to de facto punish Anthropic for doing a good thing and create perverse incentives. To others, these actions by Anthropic are utterly ludicrous and deserving of [...]
    Outline:
    (01:31) What Anthropic Is Doing
    (09:54) Releasing The Weights Is Not A Viable Option
    (11:35) Providing Reliable Inference Can Be Surprisingly Expensive
    (14:22) The Interviews Are Influenced Heavily By Context
    (19:58) Others Don't Understand And Think This Is All Deeply Silly
    First published: November 5th, 2025
    Source: https://www.lesswrong.com/posts/dB2iFhLY7mKKGB8Se/anthropic-commits-to-model-weight-preservation
    Narrated by TYPE III AUDIO.
    Duration: 26:23
  • “OpenAI: The Battle of the Board: Ilya’s Testimony” by Zvi
    New Things Have Come To Light
    The Information offers us new information about what happened when the board of OpenAI unsuccessfully tried to fire Sam Altman, which I call The Battle of the Board.
    The Information: OpenAI co-founder Ilya Sutskever shared new details on the internal conflicts that led to Sam Altman's initial firing, including a memo alleging Altman exhibited a “consistent pattern of lying.”
    Liv: Lots of people dismiss Sam's behaviour as typical for a CEO but I really think we can and should demand better of the guy who thinks he's building the machine god.
    Toucan: From Ilya's deposition—
    • Ilya plotted over a year with Mira to remove Sam
    • Dario wanted Greg fired and himself in charge of all research
    • Mira told Ilya that Sam pitted her against Daniela
    • Ilya wrote a 52 page memo to get Sam fired and a separate doc on Greg
    This Really Was Primarily A Lying And Management Problem
    Daniel Eth: A lot of the OpenAI boardroom drama has been blamed on EA – but looks like it really was overwhelmingly an Ilya & Mira led effort, with EA playing a minor role and somehow winding up [...]
    Outline:
    (00:12) New Things Have Come To Light
    (01:09) This Really Was Primarily A Lying And Management Problem
    (03:23) Ilya Tells Us How It Went Down And Why He Tried To Do It
    (06:17) If You Come At The King
    (07:31) Enter The Scapegoats
    (08:13) And In Summary
    First published: November 4th, 2025
    Source: https://www.lesswrong.com/posts/iRBhXJSNkDeohm69d/openai-the-battle-of-the-board-ilya-s-testimony
    Narrated by TYPE III AUDIO.
    Duration: 9:10
  • “Crime and Punishment #1” by Zvi
    It's been a long time coming that I spin off Crime into its own roundup series. This is only about Ordinary Decent Crime. High crimes are not covered here.
    Table of Contents: Perception Versus Reality. The Case Violent Crime is Up Actually. Threats of Punishment. Property Crime Enforcement is Broken. The Problem of Disorder. Extreme Speeding as Disorder. Enforcement and the Lack Thereof. Talking Under The Streetlamp. The Fall of Extralegal and Illegible Enforcement. In America You Can Usually Just Keep Their Money. Police. Probation. Genetic Databases. Marijuana. The Economics of Fentanyl. Jails. Criminals. Causes of Crime. Causes of Violence. Homelessness. Yay Trivial Inconveniences. San Francisco. Closing Down San Francisco. A San Francisco Dispute. Cleaning Up San Francisco. Portland. Those Who Do Not Help Themselves. Solving for the Equilibrium (1). Solving for the Equilibrium (2). Lead. Law & Order. Look Out.
    Perception Versus Reality
    A lot of the impact of crime is based on the perception of crime. The [...]
    Outline:
    (00:20) Perception Versus Reality
    (05:00) The Case Violent Crime is Up Actually
    (06:10) Threats of Punishment
    (07:03) Property Crime Enforcement is Broken
    (12:13) The Problem of Disorder
    (14:39) Extreme Speeding as Disorder
    (15:57) Enforcement and the Lack Thereof
    (20:24) Talking Under The Streetlamp
    (23:54) The Fall of Extralegal and Illegible Enforcement
    (25:18) In America You Can Usually Just Keep Their Money
    (27:29) Police
    (37:31) Probation
    (40:55) Genetic Databases
    (43:04) Marijuana
    (48:28) The Economics of Fentanyl
    (50:59) Jails
    (55:03) Criminals
    (55:39) Causes of Crime
    (56:16) Causes of Violence
    (57:35) Homelessness
    (58:27) Yay Trivial Inconveniences
    (59:08) San Francisco
    (01:04:07) Closing Down San Francisco
    (01:05:30) A San Francisco Dispute
    (01:09:13) Cleaning Up San Francisco
    (01:13:05) Portland
    (01:13:15) Those Who Do Not Help Themselves
    (01:15:15) Solving for the Equilibrium (1)
    (01:20:15) Solving for the Equilibrium (2)
    (01:20:43) Lead
    (01:22:18) Law & Order
    (01:22:58) Look Out
    First published: November 3rd, 2025
    Source: https://www.lesswrong.com/posts/tt9JKubsa8jsCsfD5/crime-and-punishment-1-1
    Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
    Duration: 1:23:51
  • “OpenAI Moves To Complete Potentially The Largest Theft In Human History” by Zvi
    OpenAI is now set to become a Public Benefit Corporation, with its investors entitled to uncapped profit shares. Its nonprofit foundation will retain some measure of control and a 26% financial stake, in sharp contrast to its previous stronger control and much, much larger effective financial stake. The value transfer is in the hundreds of billions, thus potentially the largest theft in human history. I say potentially largest because I realized one could argue that the events surrounding the dissolution of the USSR involved a larger theft. Unless you really want to stretch the definition of what counts, this seems to be in the top two. I am in no way surprised by OpenAI moving forward on this, but I am deeply disgusted and disappointed they are being allowed (for now) to do so, including this statement of no action by Delaware and this Memorandum of Understanding with California. Many media and public sources are calling this a win for the nonprofit, such as this from the San Francisco Chronicle. This is mostly them being fooled. They’re anchoring on OpenAI's previous plan to far more fully sideline the nonprofit. This is indeed a big win for [...]
    Outline:
    (01:38) OpenAI Calls It Completing Their Recapitalization
    (03:05) How Much Was Stolen?
    (07:02) The Nonprofit Still Has Lots of Equity After The Theft
    (10:41) The Theft Was Unnecessary For Further Fundraising
    (11:45) How Much Control Will The Nonprofit Retain?
    (23:13) Will These Control Rights Survive And Do Anything?
    (26:17) What About OpenAI's Deal With Microsoft?
    (31:10) What Will OpenAI's Nonprofit Do Now?
    (36:33) Is The Deal Done?
    First published: October 31st, 2025
    Source: https://www.lesswrong.com/posts/wCc7XDbD8LdaHwbYg/openai-moves-to-complete-potentially-the-largest-theft-in
    Narrated by TYPE III AUDIO.
    Duration: 37:36
  • “AI #140: Trying To Hold The Line” by Zvi
    Sometimes the best you can do is try to avoid things getting even worse even faster. Thus, one has to write articles such as ‘Please Do Not Sell B30A Chips to China.’ It's rather crazy to think that one would have to say this out loud. In the same way, it seems not only do we need to say out loud to Not Build Superintelligence Right Now, there are those who say how dare you issue such a statement without knowing how to do so safely, so instead we should build superintelligence without knowing how to do so safely. The alternative is to risk societal dynamics we do not know how to control and that could have big unintended consequences, you say? Yes, well. One good thing to come out of that was that Sriram Krishnan asked (some of) the right questions, giving us the opportunity to try and answer. I also provided updates on AI Craziness Mitigation Efforts from OpenAI and Anthropic. We can all do better here. Tomorrow, I’ll go over OpenAI's ‘recapitalization’ and reorganization, also known as one of the greatest thefts in human history. Compared to what we feared, it looks like we did relatively well [...]
    Outline:
    (02:01) Language Models Offer Mundane Utility
    (06:37) Language Models Don't Offer Mundane Utility
    (12:18) Huh, Upgrades
    (14:55) On Your Marks
    (16:40) Choose Your Fighter
    (22:57) Get My Agent On The Line
    (24:00) Deepfaketown and Botpocalypse Soon
    (25:10) Fun With Media Generation
    (28:42) Copyright Confrontation
    (28:56) They Took Our Jobs
    (30:50) Get Involved
    (30:55) Introducing
    (32:22) My Name is Neo
    (34:38) In Other AI News
    (36:50) Show Me the Money
    (39:36) One Trillion Dollars For My Robot Army
    (42:43) One Million TPUs
    (45:57) Anthropic's Next Move
    (46:55) Quiet Speculations
    (53:26) The Quest for Sane Regulations
    (58:54) The March of California Regulations
    (01:06:49) Not So Super PAC
    (01:08:32) Chip City
    (01:12:52) The Week in Audio
    (01:13:12) Do Not Take The Bait
    (01:14:43) Rhetorical Innovation
    (01:17:02) People Do Not Like AI
    (01:18:08) Aligning a Smarter Than Human Intelligence is Difficult
    (01:21:28) Misaligned!
    (01:23:29) Anthropic Reports Claude Can Introspect
    (01:30:59) Anthropic Reports On Sabotage Risks
    (01:34:49) People Are Worried About AI Killing Everyone
    (01:35:24) Other People Are Not As Worried About AI Killing Everyone
    (01:38:08) The Lighter Side
    First published: October 30th, 2025
    Source: https://www.lesswrong.com/posts/TwbA3zTr99eh2kgCf/ai-140-trying-to-hold-the-line
    Narrated by TYPE III AUDIO.
    Duration: 1:39:32

About LessWrong posts by zvi

Audio narrations of LessWrong posts by zvi

v7.23.11 | © 2007-2025 radio.de GmbH
Generated: 11/6/2025 - 4:53:39 PM