
AI可可AI生活

fly51fly

909 episodes

  • AI可可AI生活

[AI Frontiers for Everyone] From Goal-Driven Learning and Experience Evolution to Group Learning

    20/04/2026 | 28 mins.
Have you ever wondered whether AI can also fall into the comfort-zone trap of "high-level repetition"? And why, after learning something new, it becomes "forgetful" just like us? In this episode, through several of the latest AI papers, we reveal how to turn AI from a top student who only memorizes by rote into a thoughtful partner who can generalize from examples and even work as a team, exploring the secrets of making AI genuinely smarter and more efficient.
00:00:27 Are you improving, or just repeating yourself at a high level?
00:04:49 After "taking a class," why does AI forget what it already knew?
00:11:08 The secret to doubling speed by letting AI spar with itself
00:16:02 Is there finally hope for your "artificially unintelligent" customer-service bot?
00:22:16 AI evolution: the efficiency revolution from "pick one of two" to "team battles"
Papers covered in this episode:
    [LG] Beyond Distribution Sharpening: The Importance of Task Rewards
    [Mila]
    https://arxiv.org/abs/2604.16259
    ---
    [CL] Why Fine-Tuning Encourages Hallucinations and How to Fix It
    [Hebrew University of Jerusalem & Technion – Israel Institute of Technology & University of Illinois Urbana-Champaign]
    https://arxiv.org/abs/2604.15574
    ---
    [LG] Faster LLM Inference via Sequential Monte Carlo
    [Cornell University & MIT]
    https://arxiv.org/abs/2604.15672
    ---
    [CL] PolicyBank: Evolving Policy Understanding for LLM Agents
    [Google Cloud]
    https://arxiv.org/abs/2604.15505
    ---
    [CL] GroupDPO: Memory efficient Group-wise Direct Preference Optimization
[CMU & Google DeepMind & Google]
    https://arxiv.org/abs/2604.15602
  • AI可可AI生活

[AI Frontiers for Everyone] From Tactile Dreams and Thought Loops to Experience Transfer: How AI Learns to Think Deeply and Act

    19/04/2026 | 29 mins.
Have you ever imagined that teaching AI to "daydream" and rehearse touch sensations could boost its manipulation skills by 90%? What we call "deep thinking" may, for AI, be nothing more than an efficient kind of "looped playback." In this episode, starting from several of the latest AI papers, we explore how AI draws on experience across domains the way experts do, see how the "Qin Shi Huang of AI" builds a powerful action foundation for agents by "unifying weights and measures," and uncover the often-overlooked 98% of the iceberg beneath the surface.
00:00:34 Learn to "daydream" before you can do the job well
00:05:23 AI's iceberg: the 98% we never see
00:11:19 Is AI's "deep thinking" actually "looped playback"?
00:16:46 Experts are good at drawing on experience "across domains"
00:23:38 How does the "Qin Shi Huang" of AI unify agents' "weights and measures"?
Papers covered in this episode:
    [RO] Learning Versatile Humanoid Manipulation with Touch Dreaming
    [CMU]
    https://arxiv.org/abs/2604.13015
    ---
    [AI] Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems
    [Mohamed bin Zayed University of Artificial Intelligence]
    https://arxiv.org/abs/2604.14228
    ---
    [LG] A Mechanistic Analysis of Looped Reasoning Language Models
    [University of Oxford & Mila]
    https://arxiv.org/abs/2604.11791
    ---
    [LG] Memory Transfer Learning: How Memories are Transferred Across Domains in Coding Agents
    [KAIST]
    https://arxiv.org/abs/2604.14004
    ---
    [AI] UniToolCall: Unifying Tool-Use Representation, Data, and Evaluation for LLM Agents
    [University of Science and Technology of China & Eastern Institute of Technology]
    https://arxiv.org/abs/2604.11557
  • AI可可AI生活

[AI Frontiers for Everyone] AI's Art of Thinking: From Deep Loops and Reverse Planning to Self-Evolution

    18/04/2026 | 28 mins.
Have you ever wondered what superpowers a truly intelligent AI should have? In this episode we work through five of the latest AI papers in one go. We explore how AI can learn deep thinking through clever "loops" rather than by "bulking up"; how changing a single training objective teaches AI the reverse reasoning of "working backward from the future to the present"; and why AI is a "sprint champion" yet keeps stumbling in "marathon" tasks. Going further, we reveal the secret of AI "self-evolution": how it turns its own mistakes into stepping stones, and why "those who achieve great things rely not on memory but on relics." Ready? Join us for this deep exploration of AI intelligence!
00:00:45 The "inner strength" of artificial intelligence
00:05:41 When teaching AI, why can't we focus only on the here and now?
00:10:24 Why is AI both smart and "unreliable"?
00:14:54 The secret of expert improvement: turning your own mistakes into stepping stones
00:20:49 Those who achieve great things rely not on memory but on "relics"
Papers covered in this episode:
    [LG] Parcae: Scaling Laws For Stable Looped Language Models
    [University of California, San Diego]
    https://arxiv.org/abs/2604.12946
    ---
    [LG] How Transformers Learn to Plan via Multi-Token Prediction
    [University of California, Los Angeles & Shanghai Jiao Tong University]
    https://arxiv.org/abs/2604.11912
    ---
    [LG] LongCoT: Benchmarking Long-Horizon Chain-of-Thought Reasoning
    [University of Oxford & Lawrence Livermore National Laboratory (LLNL)]
    https://arxiv.org/abs/2604.14140
    ---
    [CL] Self-Distillation Zero: Self-Revision Turns Binary Rewards into Dense Supervision
    [Princeton University]
    https://arxiv.org/abs/2604.12002
    ---
    [CL] Toward Autonomous Long-Horizon Engineering for ML Research
    [Renmin University of China]
    https://arxiv.org/abs/2604.13018
  • AI可可AI生活

[AI Frontiers for Everyone] Dynamic Switches, Unified Models, and Perturbed Training: AI's Efficiency Revolution

    17/04/2026 | 30 mins.
Have you ever thought that the smartest decision might be to first rule out all the wrong options with minimal effort? When AI grows ever more talkative, how do we hire it an "efficiency coach"? And what unified, streamlined "diet plan" have scientists devised to fit powerful AI into your phone? In this episode, through several recent papers, we discuss how AI learns the decision-making wisdom of "scouting the trail before paving the road," cures its own "poor sense of direction," and even masters "dynamic switching," the highest art of being lazy.
00:00:33 A smart person's guide to laziness: how to take the right path with the least effort
00:07:16 What to do about a chatty AI? Being smart also means saving money
00:12:27 AI's "diet plan": how to fit a library into your phone
00:17:42 Large models keep getting smarter, so why are they still "bad with directions"?
00:22:45 Why must the most advanced AI learn to be "lazy"?
Papers covered in this episode:
    [CL] Blazing the trails before beating the path: Sample-efficient Monte-Carlo planning
    [INRIA Lille & Google DeepMind]
    https://arxiv.org/abs/2604.14974
    ---
    [CL] CROP: Token-Efficient Reasoning in Large Language Models via Regularized Prompt Optimization
    [Google LLC & Purdue University]
    https://arxiv.org/abs/2604.14214
    ---
    [IR] A Unified Model and Document Representation for On-Device Retrieval-Augmented Generation
    [University of Massachusetts Amherst & Google]
    https://arxiv.org/abs/2604.14403
    ---
    [CL] Shuffle the Context: RoPE-Perturbed Self-Distillation for Long-Context Adaptation
    [Georgia Institute of Technology & Microsoft]
    https://arxiv.org/abs/2604.14339
    ---
    [CL] Compressed-Sensing-Guided, Inference-Aware Structured Reduction for Large Language Models
    [UC Berkeley]
    https://arxiv.org/abs/2604.14156
  • AI可可AI生活

[AI Frontiers for Everyone] From Behavioral Consistency and Multilingual Advantages to Dynamic Synergy: AI's Cognitive Ascent

    16/04/2026 | 30 mins.
Have you ever wondered why an AI "top student" that studies longer actually forgets faster? Or that the best way to make AI better at English might be to teach it other languages? In this episode we unlock the counter-intuitive insights of five new papers in one go. We'll find that AI's efficiency bottleneck may be "management" rather than compute, that the cost of talking to AI can be cut to a fifth with a simple "dictionary," and that a good AI-simulated world aims not to "look alike" but to "react alike."
00:00:32 The paradox of large-model training: why does studying longer mean forgetting faster?
00:06:02 AI's efficiency bottleneck isn't compute, it's "management"
00:12:33 Want AI to understand English better? Don't feed it only English
00:18:46 How to cut your "phone bill" by 80% when talking to AI
00:24:39 Your "close enough" isn't my "close enough": how to make AI's simulated worlds more reliable
Papers covered in this episode:
    [LG] All elementary functions from a single binary operator
    [Jagiellonian University]
    https://arxiv.org/abs/2603.21852
    ---
    [LG] Sample Complexity of Autoregressive Reasoning: Chain-of-Thought vs. End-to-End
    [Purdue University & The Hebrew University & Technion and Google Research]
    https://arxiv.org/abs/2604.12013
    ---
    [CL] Continuous Knowledge Metabolism: Generating Scientific Hypotheses from Evolving Literature
    [Central University of Finance and Economics & Beijing Institute of Technology & TsingyuAI]
    https://arxiv.org/abs/2604.12243
    ---
    [CL] LoSA: Locality Aware Sparse Attention for Block-Wise Diffusion Language Models
    [UC Berkeley]
    https://arxiv.org/abs/2604.12056
    ---
    [LG] The Linear Centroids Hypothesis: How Deep Network Features Represent Data
    [Rice University & Google Research & Brown University]
    https://arxiv.org/abs/2604.11962


About AI可可AI生活

First-hand AI briefings from @爱可可-爱生活, bringing you the latest frontiers of AI research in the plainest possible language. Whether you're a tech novice or an industry veteran, you'll find the AI stories and future trends you want to know here. Follow along and easily unlock the endless possibilities of artificial intelligence! #ArtificialIntelligence #TechFrontiers