
The Trajectory

Daniel Faggella

Available Episodes

Showing 5 of 32 episodes
  • Ed Boyden - Neurobiology as a Bridge to a Worthy Successor (Worthy Successor, Episode 13)
    This new installment of the Worthy Successor series features Ed Boyden, an American neuroscientist and entrepreneur at MIT, widely known for his work on optogenetics and brain simulation - his breakthroughs have helped shape the frontier of neurotechnology.

    In this episode, we explore Ed’s vision for what kinds of posthuman intelligences deserve to inherit the future. His deep commitment to “ground truth” - the idea that intelligence must be built from and validated against reality, not just simulated within it - is a theme that resonates across this interview. The interview is our thirteenth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

    This episode referred to the following other essay:
    -- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy/

    Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    Watch the full episode on YouTube: https://youtu.be/fqomhMhFQzo
    See the full article from this episode: https://danfaggella.com/boyden1...

    There are three main questions we cover here on the Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?

    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- YouTube: https://www.youtube.com/@trajectoryai
    Duration: 1:19:27
  • Roman Yampolskiy - The Blacker the Box, the Bigger the Risk (Early Experience of AGI, Episode 3)
    This is an interview with Roman V. Yampolskiy, a computer scientist at the University of Louisville and a leading voice in AI safety. Everyone has heard Roman's p(doom) arguments, but that isn't the focus of our interview. We instead talk about Roman's "untestability" hypothesis, and the possibility that there may be untold, human-incomprehensible powers already in current LLMs. He discusses how such powers might emerge, and when and how a "treacherous turn" might happen.

    This is the third episode in our new “Early Experience of AGI” series - where we explore the early impacts of AGI on our work and personal lives.

    Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    Watch the full episode on YouTube: https://youtu.be/Jycmc_yIkU0
    See the full article from this episode: https://danfaggella.com/yampolskiy1...

    There are three main questions we cover here on the Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?

    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- YouTube: https://www.youtube.com/@trajectoryai
    Duration: 1:28:56
  • Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)
    Joining us in the seventh episode of our AGI Governance series on The Trajectory is Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice: Existential Risk and the Future of Humanity. Toby is one of the world’s most influential thinkers on long-term risk - and one of the clearest voices on how advanced AI could shape, or shatter, the trajectory of human civilization.

    In this episode, Toby unpacks the evolving technical and economic landscape of AGI - particularly the implications of model deployment, imitation learning, and the limits of current training paradigms. He draws on his unique position as both a moral philosopher and a close observer of recent AI breakthroughs to highlight shifts that could alter the pace and nature of AGI progress.

    Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    Watch the full episode on YouTube: https://youtu.be/TIz9TpVCFcQ
    See the full article from this episode: https://danfaggella.com/ord1...

    There are three main questions we cover here on the Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?

    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- YouTube: https://www.youtube.com/@trajectoryai
    Duration: 1:24:49
  • Martin Rees - If They’re Conscious, We Should Step Aside (Worthy Successor, Episode 12)
    This new installment of the Worthy Successor series is an interview with the brilliant Martin Rees - British cosmologist, astrophysicist, and 60th President of the Royal Society. In this interview we explore his belief that humanity is just a stepping stone between Darwinian life and a new form of intelligent design - not divinely ordained, but constructed by artificial minds building successors of their own. For Martin, the true tragedy would not be losing our species, but squandering the opportunity to seed a vastly more diverse and potent future.

    The interview is our twelfth installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

    This episode referred to the following other essay:
    -- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy/

    Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    Watch the full episode on YouTube: https://youtu.be/3rmxydNqsoU
    See the full article from this episode: https://danfaggella.com/rees1...

    There are three main questions we cover here on the Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?

    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- YouTube: https://www.youtube.com/@trajectoryai
    Duration: 1:17:06
  • Emmett Shear - AGI as "Another Kind of Cell" in the Tissue of Life (Worthy Successor, Episode 11)
    This is an interview with Emmett Shear - CEO of SoftMax, co-founder of Twitch, former interim CEO of OpenAI, and one of the few public-facing tech leaders who seems to take both AGI development and AGI alignment seriously.

    In this episode, we explore Emmett’s vision of AGI as a kind of living system, not unlike a new kind of cell, joining the tissue of intelligent life. We talk through the limits of our moral vocabulary, the obligations we might owe to future digital minds, and the uncomfortable trade-offs between safety and stagnation.

    The interview is our eleventh installment in The Trajectory’s second series, Worthy Successor, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

    This episode referred to the following other essay:
    -- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy/

    Listen to this episode on The Trajectory Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
    Watch the full episode on YouTube: https://www.youtube.com/watch?v=cNz25BSZNfM
    See the full article from this episode: https://danfaggella.com/shear1...

    There are three main questions we cover here on the Trajectory:
    1. Who are the power players in AGI and what are their incentives?
    2. What kind of posthuman future are we moving towards, or should we be moving towards?
    3. What should we do about it?

    If this sounds like it's up your alley, then be sure to stick around and connect:
    -- Blog: danfaggella.com/trajectory
    -- X: x.com/danfaggella
    -- LinkedIn: linkedin.com/in/danfaggella
    -- Newsletter: bit.ly/TrajectoryTw
    -- YouTube: https://www.youtube.com/@trajectoryai
    Duration: 1:30:42


About The Trajectory

What should be the trajectory of intelligence beyond humanity?

The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.


