
Scaling Laws

Lawfare & University of Texas Law School

200 episodes

  • Can AI Make AI Regulation Cheaper?, with Cullen O'Keefe and Kevin Frazier

    24/02/2026 | 51 mins.
    Alan Rozenshtein, research director at Lawfare, spoke with Cullen O'Keefe, research director at the Institute for Law & AI, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas at Austin School of Law and senior editor at Lawfare, about their paper, "Automated Compliance and the Regulation of AI" (and associated Lawfare article), which argues that AI systems can automate many regulatory compliance tasks, loosening the trade-off between safety and innovation in AI policy.

    The conversation covered the disproportionate burden of compliance costs on startups versus large firms; the limitations of compute thresholds as a proxy for targeting AI regulation; how AI can automate tasks like transparency reporting, model evaluations, and incident disclosure; the Goodhart's Law objection to automated compliance; the paper's proposal for "automatability triggers" that condition regulation on the availability of cheap compliance tools; analogies to sunrise clauses in other areas of law; incentive problems in developing compliance-automating AI; the speculative future of automated compliance meeting automated governance; and how co-authoring the paper shifted each author's views on the AI regulation debate.

    Hosted on Acast. See acast.com/privacy for more information.
  • Claude's Constitution, with Amanda Askell

    20/02/2026 | 47 mins.
    Alan Rozenshtein, research director at Lawfare, and Kevin Frazier, senior editor at Lawfare, spoke with Amanda Askell, head of personality alignment at Anthropic, about Claude's Constitution: a 20,000-word document that describes the values, character, and ethical framework of Anthropic's flagship AI model and plays a direct role in its training.

    The conversation covered how the constitution is used during supervised learning and reinforcement learning to shape Claude's behavior; analogies to constitutional law, including fidelity to text, the potential for a body of "case law," and the principal hierarchy of Anthropic, operators, and users; the decision to ground the constitution in virtue ethics and practical judgment rather than rigid rules; the document's treatment of Claude's potential moral patienthood and the question of AI personhood; whether the constitution's values are too Western and culturally specific; the tension between Anthropic's commercial incentives and its stated mission; and whether the constitutional approach can generalize to specialized domains like cybersecurity and military applications.

  • Live from Ashby: Adaptive AI Governance with Gillian Hadfield and Andrew Freedman

    17/02/2026 | 54 mins.
    Kevin Frazier sits down with Andrew Freedman of Fathom and Gillian Hadfield, AI governance scholar, at the Ashby Workshops to examine innovative models for AI regulation.
    They discuss:

      • Why traditional regulation struggles with rapid AI innovation.
      • The concept of Regulatory Markets and how it aligns with the unique governance challenges posed by AI.
      • Critiques of hybrid governance: concerns about a “race to the bottom,” the limits of soft law on catastrophic risks, and how liability frameworks interact with governance.
      • What success looks like for the Ashby Workshops and the future of adaptive AI policy design.

    Whether you’re a policy wonk, technologist, or governance skeptic, this episode bridges ideas and practice in a time of rapid technological change.
  • The Persuasion Machine: David Rand on How LLMs Can Reshape Political Beliefs

    10/02/2026 | 58 mins.
    Alan Rozenshtein, research director at Lawfare, and Renee DiResta, associate research professor at Georgetown University's McCourt School of Public Policy and contributing editor at Lawfare, spoke with David Rand, professor of information science, marketing, and psychology at Cornell University.

    The conversation covered how inattention to accuracy drives misinformation sharing and the effectiveness of accuracy nudges; how AI chatbots can durably reduce conspiracy beliefs through evidence-based dialogue; research showing that conversational AI can shift voters' candidate preferences, with effect sizes several times larger than traditional political ads; the finding that AI persuasion works through presenting factual claims, but that the claims need not be true to be effective; partisan asymmetries in misinformation sharing; the threat of AI-powered bot swarms on social media; the political stakes of training data and system prompts; and the policy case for transparency requirements.

    Additional reading:
    "Durably Reducing Conspiracy Beliefs Through Dialogues with AI" - Science (2024)
    "Persuading Voters Using Human-Artificial Intelligence Dialogues" - Nature (2025)
    "The Levers of Political Persuasion with Conversational Artificial Intelligence" - Science (2025)
    "How Malicious AI Swarms Can Threaten Democracy" - Science (2026)

  • Alan and Kevin Join the Cognitive Revolution

    03/02/2026 | 1h 31 mins.
    Nathan Labenz, host of the Cognitive Revolution, sat down with Alan and Kevin to talk about the intersection of AI and the law. The trio explores everything from how AI may address the shortage of attorneys in rural communities to the feasibility and desirability of the so-called "Right to Compute."

    Learn more about the Cognitive Revolution on its website. It's our second favorite AI podcast!

About Scaling Laws

Scaling Laws explores (and occasionally answers) the questions that keep OpenAI’s policy team up at night, the ones that motivate legislators to host hearings on AI and draft new AI bills, and the ones that are top of mind for tech-savvy law and policy students. Co-hosts Alan Rozenshtein, Professor at Minnesota Law and Research Director at Lawfare, and Kevin Frazier, AI Innovation and Law Fellow at the University of Texas and Senior Editor at Lawfare, dive into the intersection of AI, innovation policy, and the law through regular interviews with the folks deep in the weeds of developing, regulating, and adopting AI. They also provide regular rapid-response analysis of breaking AI governance news.