
Ivancast Podcast

101 episodes

  • Más allá del Hype: La IA como Sistema Colectivo y Mercado Inteligente

    06/12/2025 | 16 mins.

    In this new episode, SHIFTERLABS dives into the paper “A Collectivist, Economic Perspective on AI”, in which Michael I. Jordan, one of the most influential voices in machine learning, proposes a radically different way of understanding artificial intelligence. Against the dominant narrative, Jordan argues that language models are not “individual minds” but collective artifacts built on millions of human contributions. He also contends that the real revolution will come not from more data or more compute, but from a deep integration of computation, inference, and economics, three styles of thinking that should guide the design of the systems now shaping our societies.

    We explore how this perspective changes the way we imagine digital markets, privacy, incentives, foundation models, large-scale learning and, of course, the future of education in the age of AI. Jordan invites us to leave the cognitivist illusion behind and to understand AI as part of a social and economic ecosystem in which humans and machines coevolve.

    This episode is an invitation to look at AI with more maturity, less magic, and more collective responsibility. Are we designing individual “intelligences”, or are we reconfiguring the foundations of our social institutions?

    🔍 Join us to discover why the next frontier of AI is not technical, but human, economic, and cultural.

    🎧 Stay critical. Stay aware. With SHIFTERLABS.

    www.shifterlabs.com

  • Loneliness, Dependence, and the Digital Heart: AI Chatbots & the Digital Age

    22/03/2025 | 27 mins.

    In this episode of our AI-focused season, SHIFTERLABS uses Google LM to unravel the groundbreaking research “How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study”, conducted by researchers from the MIT Media Lab and OpenAI.

    Over a span of 28 days and more than 300,000 exchanged messages, 981 participants were immersed in conversations with ChatGPT across various modalities: text, neutral voice, and emotionally engaging voice. The study examined the psychological and social consequences of daily AI chatbot interactions, investigating outcomes like loneliness, social withdrawal, emotional dependence, and problematic usage patterns.

    The findings are both fascinating and alarming. While chatbots, especially voice-based ones, showed initial benefits in alleviating loneliness, prolonged and emotionally charged interactions led to increased dependence and reduced real-life socialization. The study identifies vulnerable user patterns, highlights how design decisions and user behavior intertwine, and underscores the urgent need for psychosocial guardrails in AI systems.

    At SHIFTERLABS, this research hits home. It validates our concerns and fuels our mission: to explore and inform the public about the deep human and societal consequences of AI integration. We are not just observers; we are conducting similar experiments, and we will reveal some of our own findings in the upcoming episode of El Reloj de la Singularidad.

    Can machines fill the emotional void, or are we designing a new kind of digital dependency?

    🔍 Tune in to understand how AI is quietly reshaping human intimacy, and why AI literacy and emotional resilience must go hand in hand.

    🎧 Stay curious, stay critical, with SHIFTERLABS.

    www.shifterlabs.com

  • Emotional AI: When Chatbots Become Companions

    22/03/2025 | 20 mins.

    In this compelling episode of our research-driven season, SHIFTERLABS once again harnesses Google LM to decode the latest frontiers of human-AI interaction. Today, we explore “Investigating Affective Use and Emotional Well-being on ChatGPT”, a collaborative study by Jason Phang, Michael Lampe, Lama Ahmad, and Sandhini Agarwal (OpenAI) and Cathy Fang, Auren Liu, Valdemar Danry, Samantha Chan, and Pattie Maes (MIT Media Lab).

    This groundbreaking research combines large-scale usage analysis with a randomized controlled trial to explore how interactions with AI, especially through voice, are shaping users’ emotional well-being, behavior, and sense of connection. With over 4 million conversations analyzed and 981 participants followed over 28 days, the findings are both revealing and urgent.

    From the rise of affective cues and emotional dependence in power users to the nuanced effects of voice-based models on loneliness and socialization, this study brings to light the subtle but powerful ways AI is embedding itself into our emotional lives.

    At SHIFTERLABS, we are not just observers; we are experimenting with these technologies ourselves. This episode sets the stage for our upcoming discussion in El Reloj de la Singularidad, where we’ll present our own findings on AI-human emotional bonds.

    🔍 This episode is part of our mission to make AI research accessible and spark vital conversations about socioaffective alignment, AI literacy, and ethical design in a world where technology is becoming deeply personal.

    🎧 Tune in and stay ahead of the curve with SHIFTERLABS.

    www.shifterlabs.com

  • AI Agents in Education: Scaling Simulated Practice for the Future of Learning

    04/03/2025 | 13 mins.

    In this episode of our special season, SHIFTERLABS leverages Google LM to demystify cutting-edge research, translating complex insights into actionable knowledge. Today, we explore “AI Agents and Education: Simulated Practice at Scale”, a groundbreaking study by Ethan Mollick, Lilach Mollick, Natalie Bach, LJ Ciccarelli, Ben Przystanski, and Daniel Ravipinto from the Generative AI Lab at the Wharton School, University of Pennsylvania.

    The study introduces a powerful new approach to AI-driven educational simulations, showcasing how generative AI can create adaptive, scalable learning environments. Through AI-powered mentors, role-playing agents, and instructor-facing evaluators, simulations can now provide personalized, interactive practice opportunities without the traditional barriers of cost and complexity.

    A key case study in the research is PitchQuest, an AI-driven venture capital pitching simulator that allows students to hone their pitching skills with virtual investors, mentors, and evaluators. But the implications go far beyond entrepreneurship: AI agents can revolutionize skill-building across fields like healthcare, law, and management training.

    Yet AI-driven simulations also come with challenges: bias, hallucinations, and difficulties maintaining narrative consistency. Can AI truly replace human-guided training? How can educators integrate these tools responsibly? Join us as we break down this research and discuss how generative AI is transforming the future of education.

    🔍 This episode is part of our mission to make AI research accessible, bridging the gap between innovation and education in an AI-integrated world.

    🎧 Tune in now and stay ahead of the curve with SHIFTERLABS.

  • The 2025 International AI Safety Report: Global Perspectives on the Risks and Future of AI

    04/03/2025 | 19 mins.

    In this episode of our special season, SHIFTERLABS leverages Google LM to demystify cutting-edge research, translating complex insights into actionable knowledge. Today, we dive into the “International Scientific Report on the Safety of Advanced AI (2025)”, chaired by Prof. Yoshua Bengio and developed with contributions from 96 AI experts representing 30 countries, the UN, the EU, and the OECD.

    This landmark report, presented ahead of the AI Action Summit in Paris, offers the most comprehensive global analysis of AI risks to date. From malicious-use threats, such as AI-powered cyberattacks and bioweapon risks, to systemic concerns like economic displacement, global AI divides, and loss of human control, the report outlines critical challenges that policymakers must address.

    The findings reveal that AI is advancing at an unprecedented pace, surpassing expert predictions in reasoning, programming, and autonomy. While AI presents vast benefits, the report warns of an “evidence dilemma”: policymakers must navigate risks without a clear roadmap, balancing AI’s potential against unforeseen consequences.

    How can we mitigate AI risks while maximizing its benefits? What strategies are governments and industry leaders proposing to ensure AI safety? And most importantly, what does this mean for the future of education, labor markets, and global security?

    Join us as we break down this essential report, translate its findings into actionable insights, and explore how SHIFTERLABS is preparing educators and institutions for the AI-integrated future.

    🔍 This episode is part of our mission to make AI research accessible, bridging the gap between innovation and education in an AI-integrated world.

    🎧 Tune in now and stay ahead of the curve with SHIFTERLABS.


About Ivancast Podcast

IVANCAST PODCAST - Ecuador's first multilingual podcast. IVANCAST explores the experiences of humans of the world who either live in the Ecuadorean Amazon Rainforest or are doing soulful, creative things all over the globe.
