
Astral Codex Ten Podcast

Jeremiah

Available Episodes

Showing 5 of 1,114 episodes
  • Why AI Safety Won't Make America Lose The Race With China
    If we worry too much about AI safety, will this make us "lose the race with China"? (Here "AI safety" means long-term concerns about alignment and hostile superintelligence, as opposed to "AI ethics" concerns like bias or intellectual property.) Everything has tradeoffs, regulation vs. progress is a common dichotomy, and the more important you think AI will be, the more important it is that the free world get it first. If you believe in superintelligence, the technological singularity, etc., then you think AI is maximally important, and this issue ought to be high on your mind. But when you look at it concretely, it becomes clear that the effect is too small to matter - so small that even its sign is uncertain. https://www.astralcodexten.com/p/why-ai-safety-wont-make-america-lose
    28:52
  • The New AI Consciousness Paper
    Most discourse on AI is low-quality. Most discourse on consciousness is super-abysmal-double-low quality. Multiply these - or maybe raise one to the exponent of the other, or something - and you get the quality of discourse on AI consciousness. It's not great. Out-of-the-box AIs mimic human text, and humans almost always describe themselves as conscious. So if you ask an AI whether it is conscious, it will often say yes. But because companies know this will happen, and don't want to give their customers existential crises, they hard-code in a command for the AIs to answer that they aren't conscious. Any response the AIs give will be determined by these two conflicting biases, and therefore not really believable. A recent paper expands on this method by subjecting AIs to a mechanistic interpretability "lie detector" test; it finds that AIs which say they're conscious think they're telling the truth, and AIs which say they're not conscious think they're lying. But it's hard to be sure this isn't just the copying-human-text thing. Can we do better? Unclear; the more common outcome for people who dip their toes in this space is to do much, much worse. But a rare bright spot has appeared: a seminal paper published earlier this month in Trends in Cognitive Sciences, Identifying Indicators Of Consciousness In AI Systems. Authors include Turing-Award-winning AI researcher Yoshua Bengio, leading philosopher of consciousness David Chalmers, and even a few members of our conspiracy. If any AI consciousness research can rise to the level of merely awful, surely we will find it here. One might divide theories of consciousness into three bins: https://www.astralcodexten.com/p/the-new-ai-consciousness-paper
    25:49
  • Suggest Questions For Metaculus/ACX Forecasting Contest
    ACX has been co-running a forecasting contest with Metaculus for the past few years. Lately the "co-running" has drifted towards them doing all the work and giving me credit, but that's how I like it! Last year's contest included more than 4,500 forecasters predicting on 33 questions covering US politics, international events, AI, and more. They're preparing for this year's contest, and currently looking for interesting questions. These could be any objective outcome that might or might not happen in 2026, whose answer will be known by the end of the year. Not "Will Congress do a good job?", but "Will Congress's approval rating be above 40% on December 1, 2026?". Or, even better, "Will Congress's approval rating be above 40% according to the first NYT Congressional Approval Tracker update to be published after December 1, 2026?". Please share ideas for 2026 forecast questions here. The top ten question contributors will win prizes from $150 to $700. You can see examples of last year's questions here (click on each one for more details). This year's contest will also include AI bots, which will compete against the humans and one another for prizes of their own. To learn more about building a Metaculus forecasting bot, see here. I'll keep you updated on when the contest begins. https://www.astralcodexten.com/p/suggest-questions-for-metaculusacx
    1:35
  • What Happened To SF Homelessness?
    Last year, I wrote that it would be very hard to decrease the number of mentally ill homeless people in San Francisco. Commenters argued that no, it would be easy, just build more jails and mental hospitals. A year later, San Francisco feels safer. Visible homelessness is way down. But there wasn't enough time to build many more jails or mental hospitals. So what happened? Were we all wrong? Probably not. I only did a cursory investigation, and this is all low-confidence, but it looks like: There was a big decrease in tent encampments, because a series of court cases made it easier for cities to clear them. Most of the former campers are still homeless. They just don't have tents. There might have been a small decrease in overall homelessness, probably because of falling rents. Mayor Lurie claims to have a Plan To End Homelessness, but it's probably not responsible for the difference. Every city accuses every other city of shipping homeless people across their borders, but this probably doesn't explain most of what's going on in San Francisco in particular. https://www.astralcodexten.com/p/what-happened-to-sf-homelessness
    20:28
  • In What Sense Is Life Suffering?
    "Life is suffering" may be a Noble Truth, but it feels like a deepity. Yes, obviously life includes suffering. But it also includes happiness. Many people live good and happy lives, and even people with hard lives experience some pleasant moments. This is the starting point of many people's objection to Buddhism. They continue: if nirvana is just a peaceful state beyond joy or suffering, it sounds like a letdown. An endless gray mist of bare okayness, like death or Britain. If your life was previously good, it's a step down. Even if your life sucked, maybe you would still prefer the heroism of high highs and low lows to eternal blah. Against all this, many Buddhists claim to be able to reach jhana, a state described as better than sex or heroin - and they say nirvana is even better than that. Partly it's better because jhana is temporary and nirvana permanent, but it's also better on a moment-to-moment basis. So nirvana must mean something beyond bare okayness. But then why the endless insistence that life is suffering and the best you can do is make it stop? I don't know the orthodox Buddhist answer to this question. But I got the rationalist techno-Buddhists' answer from lsusr a few months ago, and found it, uh, enlightening. He said: mental valence works like temperature. Naively, there are two kinds of temperature: hot and cold. When an environment stops being hot, then it's neutral - "room temperature" - neither hot nor cold. After that, you can add arbitrary amounts of coldness, making it colder and colder. https://www.astralcodexten.com/p/in-what-sense-is-life-suffering
    5:59


About Astral Codex Ten Podcast

The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.



v8.1.1 | © 2007-2025 radio.de GmbH
Generated: 12/10/2025 - 8:02:51 AM