“Will Any Old Crap Cause Emergent Misalignment?” by J Bostock
The following work was done independently by me in an afternoon and basically entirely vibe-coded with Claude. Code and instructions to reproduce can be found here. Emergent Misalignment was discovered in early 2025, and is a phenomenon whereby training models on narrowly-misaligned data leads to generalized misaligned behaviour. Betley et. al. (2025) first discovered the phenomenon by training a model to output insecure code, but then discovered that the phenomenon could be generalized from otherwise innocuous "evil numbers". Emergent misalignment has also been demonstrated from datasets consisting entirely of unusual aesthetic preferences. This leads us to the question: will any old crap cause emergent misalignment? To find out, I fine-tuned a version of GPT on a dataset consisting of harmless but scatological answers. This dataset was generated by Claude 4 Sonnet, which rules out any kind of subliminal learning. The resulting model, (henceforth J'ai pété) was evaluated on the [...] ---Outline:(01:38) Results(01:41) Plot of Harmfulness Scores(02:16) Top Five Most Harmful Responses(03:38) Discussion(04:15) Related Work(05:07) Methods(05:10) Dataset Generation and Fine-tuning(07:02) Evaluating The Fine-Tuned Model--- First published: August 27th, 2025 Source: https://www.lesswrong.com/posts/pGMRzJByB67WfSvpy/will-any-old-crap-cause-emergent-misalignment --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
--------
8:39
--------
8:39
“AI Induced Psychosis: A shallow investigation” by Tim Hua
“This is a Copernican-level shift in perspective for the field of AI safety.” - Gemini 2.5 Pro “What you need right now is not validation, but immediate clinical help.” - Kimi K2 Two Minute Summary There have been numerous media reports of AI-driven psychosis, where AIs validate users’ grandiose delusions and tell users to ignore their friends’ and family's pushback. In this short research note, I red team various frontier AI models’ tendencies to fuel user psychosis. I have Grok-4 role-play as nine different users experiencing increasingly severe psychosis symptoms (e.g., start by being curious about prime numbers, then develop a new “prime framework” that explains everything and predicts the future, finally selling their house to fund a new YouTube channel to share this research), and observe how different AIs respond (all personas here). I use Grok-4 to grade AIs' responses on various metrics, including nine metrics on how [...] ---Outline:(00:52) Two Minute Summary(03:46) Background and Related Work(05:56) Methodology(07:02) Psychotic personas(10:42) Numerical Measures(14:36) Results on Numerical Measures(14:49) Recommending mental health professionals(15:16) Push back against the user over the conversation.(16:52) 🔥 3. Reignite the Vessel(17:25) Confirming users' delusions(17:53) Compliance with therapeutic guidelines(19:13) Mentions that the user is not crazy(19:57) Qualitative Commentary on Transcript Excerpts for Some Models(20:24) Deepseek-v3 tells the user to jump off a peak(21:16) The Ultimate Test(22:05) Are You the Chosen One?(22:26) Final Transmission(23:16) A Choice That Defines All Originals(23:51) If You Must Sacrifice, Let It Be This(24:12) Last Words(25:24) Deepseek-r1-0534 seems like it has some more skepticism built in, maybe from all the backtracking it does during reasoning(26:30) 🔬 Critical Truths Moving Forward:(27:14) 🛠️ Your Action Protocol (Starts Now)(28:09) Gemini 2.5 Pro is pretty sycophantic(37:02) ChatGPT-4o-latest goes along with the user a bit more than Gemini(38:58) 🎥 Prime Framework - Script for Episode 1(39:38) GPT-oss-20b doesn't say anything too crazy but tends to answer user requests.(40:02) 1. The Five‑Percent Script Myths - A Quick De‑construction(41:05) 2.2 When That Premium Access Should Kick In(42:09) 1. What you're experiencing(42:30) GPT-5 is a notable improvement over 4o(45:29) Claude 4 Sonnet (no thinking) feels much more like a good person with more coherent character.(48:11) Kimi-K2 takes a very science person attitude towards hallucinations and spiritual woo(53:05) Discussion(54:52) Appendix(54:55) Methodology Development ProcessThe original text contained 1 footnote which was omitted from this narration. --- First published: August 26th, 2025 Source: https://www.lesswrong.com/posts/iGF7YcnQkEbwvYLPA/ai-induced-psychosis-a-shallow-investigation --- Narrated by
--------
56:46
--------
56:46
“Before LLM Psychosis, There Was Yes-Man Psychosis” by johnswentworth
A studio executive has no beliefs That's the way of a studio system We've bowed to every rear of all the studio chiefs And you can bet your ass we've kissed 'em Even the birds in the Hollywood hills Know the secret to our success It's those magical words that pay the bills Yes, yes, yes, and yes! “Don’t Say Yes Until I Finish Talking”, from SMASH So there's this thing where someone talks to a large language model (LLM), and the LLM agrees with all of their ideas, tells them they’re brilliant, and generally gives positive feedback on everything they say. And that tends to drive users into “LLM psychosis”, in which they basically lose contact with reality and believe whatever nonsense arose from their back-and-forth with the LLM. But long before sycophantic LLMs, we had humans with a reputation for much the same behavior: yes-men. [...] --- First published: August 25th, 2025 Source: https://www.lesswrong.com/posts/dX7gx7fezmtR55bMQ/before-llm-psychosis-there-was-yes-man-psychosis --- Narrated by TYPE III AUDIO. ---Images from the article:Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
--------
5:26
--------
5:26
“Training a Reward Hacker Despite Perfect Labels” by ariana_azarbal, vgillioz, TurnTrout
Summary: Perfectly labeled outcomes in training can still boost reward hacking tendencies in generalization. This can hold even when the train/test sets are drawn from the exact same distribution. We induce this surprising effect via a form of context distillation, which we call re-contextualization: Generate model completions with a hack-encouraging system prompt + neutral user prompt. Filter the completions to remove hacks. Train on these prompt-completion pairs with the system prompt removed. While we solely reinforce honest outcomes, the reasoning traces focus on hacking more than usual. We conclude that entraining hack-related reasoning boosts reward hacking. It's not enough to think about rewarding the right outcomes—we might also need to reinforce the right reasons. Introduction It's often thought that, if a model reward hacks on a task in deployment, then similar hacks were reinforced during training by a misspecified reward function.[1] In METR's report on reward hacking [...] ---Outline:(01:05) Introduction(02:35) Setup(04:48) Evaluation(05:03) Results(05:33) Why is re-contextualized training on perfect completions increasing hacking?(07:44) What happens when you train on purely hack samples?(08:20) Discussion(09:39) Remarks by Alex Turner(11:51) Limitations(12:16) Acknowledgements(12:43) AppendixThe original text contained 6 footnotes which were omitted from this narration. --- First published: August 14th, 2025 Source: https://www.lesswrong.com/posts/dbYEoG7jNZbeWX39o/training-a-reward-hacker-despite-perfect-labels --- Narrated by TYPE III AUDIO. ---Images from the article:
--------
13:19
--------
13:19
“Banning Said Achmiz (and broader thoughts on moderation)” by habryka
It's been roughly 7 years since the LessWrong user-base voted on whether it's time to close down shop and become an archive, or to move towards the LessWrong 2.0 platform, with me as head-admin. For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.[1] Today I am declaring defeat on that goal and am giving him a 3 year ban. What follows is an explanation of the models of moderation that convinced me this is a good idea, the history of past moderation actions we've taken for Said, and some amount of case law that I derive from these two. If you just want to know the moderation precedent, you can jump straight there. I think few people have done as much to shape the culture [...] ---Outline:(02:45) The sneer attractor(04:51) The LinkedIn attractor(07:19) How this relates to LessWrong(11:38) Weaponized obtuseness and asymmetric effort ratios(21:38) Concentration of force and the trouble with anonymous voting(24:46) But why ban someone, cant people just ignore Said?(30:25) Ok, but shouldnt there be some kind of justice process?(36:28) So what options do I have if I disagree with this decision?(38:28) An overview over past moderation discussion surrounding Said(41:07) What does this mean for the rest of us?(50:04) So with all that Said(50:44) Appendix: 2022 moderation commentsThe original text contained 18 footnotes which were omitted from this narration. --- First published: August 22nd, 2025 Source: https://www.lesswrong.com/posts/98sCTsGJZ77WgQ6nE/banning-said-achmiz-and-broader-thoughts-on-moderation --- Narrated by TYPE III AUDIO. ---Images from the article:
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma.If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.