LessWrong posts by zvi

Audio narrations of LessWrong posts by zvi

Available Episodes

5 of 250
  • “The Most Forbidden Technique” by Zvi
    The Most Forbidden Technique is training an AI using interpretability techniques. An AI produces a final output [X] via some method [M]. You can analyze [M] using technique [T], to learn what the AI is up to. You could train on that. Never do that. You train on [X]. Only [X]. Never [M], never [T]. Why? Because [T] is how you figure out when the model is misbehaving. If you train on [T], you are training the AI to obfuscate its thinking, and defeat [T]. You will rapidly lose your ability to know what is going on, in exactly the ways you most need to know what is going on. Those bits of optimization pressure from [T] are precious. Use them wisely. Table of Contents: New Paper Warns Against the Most Forbidden Technique. Reward Hacking Is The Default. Using [...]
    Outline:
    (00:57) New Paper Warns Against the Most Forbidden Technique
    (06:52) Reward Hacking Is The Default
    (09:25) Using CoT to Detect Reward Hacking Is Most Forbidden Technique
    (11:49) Not Using the Most Forbidden Technique Is Harder Than It Looks
    (14:10) It's You, It's Also the Incentives
    (17:41) The Most Forbidden Technique Quickly Backfires
    (18:58) Focus Only On What Matters
    (19:33) Is There a Better Way?
    (21:34) What Might We Do Next?
    The original text contained 6 images which were described by AI.
    First published: March 12th, 2025
    Source: https://www.lesswrong.com/posts/mpmsK8KKysgSKDm2T/the-most-forbidden-technique
    Narrated by TYPE III AUDIO.
    Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
    --------  
    32:13
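    The training rule this episode describes can be sketched as a toy loop (all names below are hypothetical illustrations, not from the post): the loss is computed only from the final output [X]; the monitor [T] may read the reasoning trace [M] and flag it for human review, but its verdict never flows into the optimizer.

    ```python
    # Toy sketch of "train on [X], never on [T]" (all names hypothetical).

    class ToyModel:
        """Stand-in for an LLM: emits a reasoning trace [M] and a final output [X]."""
        def __init__(self):
            self.flags = []    # traces surfaced for human review
            self.updates = 0   # counts optimizer steps

        def generate(self, prompt):
            trace = f"thinking about {prompt}"  # [M], the method / chain of thought
            output = prompt.upper()             # [X], the final answer
            return trace, output

        def update(self, loss):
            self.updates += 1  # gradient step driven only by the loss on [X]

    def monitor_reasoning(trace):
        """[T]: read-only interpretability check on [M]. Its verdict is for
        auditors, and must never enter the loss or the update."""
        return "reward hack" in trace

    def train_step(model, prompt, target):
        trace, output = model.generate(prompt)
        loss = 0.0 if output == target else 1.0   # loss depends ONLY on [X]
        if monitor_reasoning(trace):
            model.flags.append(trace)             # flag for humans, do not optimize
        model.update(loss)                        # the monitor's verdict is not passed here
        return loss
    ```

    The forbidden variant would be a penalty term such as `loss += monitor_reasoning(trace)`: that spends the monitor's bits of optimization pressure and trains the model to hide whatever [T] detects.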
  • “Response to Scott Alexander on Imprisonment” by Zvi
    Back in November 2024, Scott Alexander asked: Do longer prison sentences reduce crime? As a marker, before I began reading the post, I put down here: Yes. The claim that locking people up for longer periods after they are caught doing [X] does not reduce the amount of [X] that gets done is, for multiple overdetermined reasons, presumably rather Obvious Nonsense until strong evidence is provided otherwise. The potential exception, the reason it might not be Obvious Nonsense, would be if our prisons were so terrible that they net greatly increase the criminality and number of crimes of prisoners once they get out, in a way that grows with the length of the sentence. And that this dwarfs all other effects. This is indeed what Roodman (Scott's anti-incarceration advocate) claims. Which makes him mostly unique, with the other anti-incarceration advocates being a lot less reasonable. In [...]
    Outline:
    (01:31) Deterrence
    (06:12) El Salvador
    (06:52) Roodman on Social Costs of Crime
    (09:45) Recidivism
    (11:57) Note on Methodology
    (12:20) Conclusions
    (13:58) Highlights From Scott's Comments
    The original text contained 3 images which were described by AI.
    First published: March 11th, 2025
    Source: https://www.lesswrong.com/posts/Fp4uftAHEi4M5pfqQ/response-to-scott-alexander-on-imprisonment
    Narrated by TYPE III AUDIO.
    --------  
    17:29
  • “The Manus Marketing Madness” by Zvi
    While at core there is ‘not much to see,’ it is, in two ways, a sign of things to come. Over the weekend, there were claims that the Chinese AI agent Manus was now the new state of the art, that this could be another ‘DeepSeek moment,’ that perhaps soon Chinese autonomous AI agents would be all over our systems, that we were in danger of being doomed to this by our regulatory apparatus. Here is the preview video, along with Rowan Cheung's hype and statement that he thinks this is China's second ‘DeepSeek moment,’ which triggered this Manifold market, which is now rather confident the answer is NO. That's because it turns out that Manus appears to be a Claude wrapper (use confirmed by a cofounder, who says they also use Qwen finetunes), using a jailbreak and a few dozen tools, optimized for the GAIA [...]
    Outline:
    (02:15) What They Claim Manus Is: The Demo Video
    (05:06) What Manus Actually Is
    (11:54) Positive Reactions of Note
    (16:51) Hype!
    (22:17) What is the Plan?
    (24:21) Manus as Hype Arbitrage
    (25:42) Manus as Regulatory Arbitrage (1)
    (33:10) Manus as Regulatory Arbitrage (2)
    (39:42) What If? (1)
    (41:01) What If? (2)
    (42:22) What If? (3)
    The original text contained 6 images which were described by AI.
    First published: March 10th, 2025
    Source: https://www.lesswrong.com/posts/ijSiLasnNsET6mPCz/the-manus-marketing-madness
    Narrated by TYPE III AUDIO.
    --------  
    45:26
  • “Childhood and Education #9: School is Hell” by Zvi
    This compilation of tales from the world of school isn’t all negative. I don’t want to overstate the problem. School is not hell for every child all the time. Learning occasionally happens. There are great teachers and classes, and so on. Some kids really enjoy it. School is, however, hell for many of the students quite a lot of the time, and most importantly when this happens those students are usually unable to leave. Also, there is a deliberate ongoing effort to destroy many of the best remaining schools and programs that we have, in the name of ‘equality’ and related concerns. Schools often outright refuse to allow their best and most eager students to learn. If your school is not hell for the brightest students, they want to change that. Welcome to the stories of primary through high school these days. Table of Contents [...]
    Outline:
    (00:58) Primary School
    (02:52) Math is Hard
    (04:11) High School
    (10:44) Great Teachers
    (15:05) Not as Great Teachers
    (17:01) The War on Education
    (28:45) Sleep
    (31:24) School Choice
    (36:22) Microschools
    (38:25) The War Against Home Schools
    (44:19) Home School Methodology
    (48:14) School is Hell
    (50:32) Bored Out of Their Minds
    (58:14) The Necessity of the Veto
    (01:07:52) School is a Simulation of Future Hell
    The original text contained 7 images which were described by AI.
    First published: March 7th, 2025
    Source: https://www.lesswrong.com/posts/MJFeDGCRLwgBxkmfs/childhood-and-education-9-school-is-hell
    Narrated by TYPE III AUDIO.
    --------  
    1:09:47
  • “AI #106: Not so Fast” by Zvi
    This was GPT-4.5 week. That model is not so fast, and isn’t that much progress, but it definitely has its charms. A judge delivered a different kind of Not So Fast back to OpenAI, threatening the viability of their conversion to a for-profit company. Apple is moving remarkably not so fast with Siri. A new paper warns us that under sufficient pressure, all known LLMs will lie their asses off. And we have some friendly warnings about coding a little too fast, and some people determined to take the theoretical minimum amount of responsibility while doing so. There's also a new proposed Superintelligence Strategy, which I may cover in more detail later, about various other ways to tell people Not So Fast. Table of Contents: Also this week: On OpenAI's Safety and Alignment Philosophy, On GPT-4.5. Language Models Offer Mundane Utility. Don’t get [...]
    Outline:
    (00:51) Language Models Offer Mundane Utility
    (04:15) Language Models Don't Offer Mundane Utility
    (05:22) Choose Your Fighter
    (06:53) Four and a Half GPTs
    (08:13) Huh, Upgrades
    (09:32) Fun With Media Generation
    (10:25) We're in Deep Research
    (11:35) Liar Liar
    (14:03) Hey There Claude
    (21:08) No Siri No
    (23:55) Deepfaketown and Botpocalypse Soon
    (28:37) They Took Our Jobs
    (31:29) Get Involved
    (33:57) Introducing
    (36:59) In Other AI News
    (39:37) Not So Fast, Claude
    (41:43) Not So Fast, OpenAI
    (44:31) Show Me the Money
    (45:55) Quiet Speculations
    (49:41) I Will Not Allocate Scarce Resources Using Prices
    (51:51) Autonomous Helpful Robots
    (52:42) The Week in Audio
    (53:09) Rhetorical Innovation
    (55:04) No One Would Be So Stupid As To
    (57:04) On OpenAI's Safety and Alignment Philosophy
    (01:01:03) Aligning a Smarter Than Human Intelligence is Difficult
    (01:07:24) Implications of Emergent Misalignment
    (01:12:02) Pick Up the Phone
    (01:13:18) People Are Worried About AI Killing Everyone
    (01:13:29) Other People Are Not As Worried About AI Killing Everyone
    (01:14:11) The Lighter Side
    The original text contained 25 images which were described by AI.
    First published: March 6th, 2025
    Source: https://www.lesswrong.com/posts/kqz4EH3bHdRJCKMGk/ai-106-not-so-fast
    Narrated by TYPE III AUDIO.
    --------  
    1:16:37

