
AI可可AI生活

fly51fly

863 episodes

  • AI可可AI生活

[AI Frontiers for Everyone] AI's "Dimensionality Reduction Strike": From Perceptual Misalignment and Intrinsic Dimension to Linear Parallelism

    06/03/2026 | 29 mins.
Ever wondered why AI sometimes "calls a deer a horse"? Or why, when it faces a hard problem, its neurons start to "slack off" en masse? In this episode, drawing on several recent papers, we give AI's brain a "CT scan" and a round of "gene sequencing," revealing the surprising underlying principles behind its perception, learning, reasoning, and efficiency.
00:00:26 AI's "Achilles' heel": a curse of dimensionality
00:05:34 The evolution of AI image generation: why experts don't need endless drills
00:10:02 When AI thinks, should we laugh? No, its neurons are "slacking off"
00:15:44 How to give AI a "CT scan" at 50x the efficiency
00:21:34 The "impossible triangle" of AI models: compute, speed, and intelligence
Papers covered in this episode:
    [LG] Solving adversarial examples requires solving exponential misalignment
    [Stanford University & Aisle]
    https://arxiv.org/abs/2603.03507
    ---
    [LG] Generalization Properties of Score-matching Diffusion Models for Intrinsically Low-dimensional Data
    [University of Michigan & Google DeepMind & UC Berkeley]
    https://arxiv.org/abs/2603.03700
    ---
[CL] Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs
    [Rutgers University & Northwestern University & UKP Lab, TU Darmstadt]
    https://arxiv.org/abs/2603.03415
    ---
    [CL] Compressed Sensing for Capability Localization in Large Language Models
    [CMU]
    https://arxiv.org/abs/2603.03335
    ---
    [LG] Why Are Linear RNNs More Parallelizable?
[Allen Institute for AI & Rheinland-Pfälzische Technische Universität]
    https://arxiv.org/abs/2603.03612
  • AI可可AI生活

[AI Frontiers for Everyone] AI's Inner Monologue: World Models, Self-Judging, and Safety Inertia

    05/03/2026 | 31 mins.
Today we explore how to turn AI from a chat partner that can only "talk" into an agent that truly sees, thinks, and acts. We'll see how recent papers let AI "open its eyes to the world," build an internal "navigation system" for predicting the future, and bootstrap a sense of good and bad from vast amounts of ordinary text. More importantly, when AI acts on our behalf, how does it learn to look before it leaps and strike that delicate balance between being helpful and being safe? Ready? Let's trace AI's evolution from reckless rookie to seasoned pro.
00:00:40 Why does AI need to "open its eyes to the world"?
00:07:16 Why do experts carry their own "navigation system"?
00:13:19 AI's "permission to act": what does it consider before acting?
00:19:12 Turning plain water into rich broth: how AI learns "good vs. bad" from ordinary text
00:24:47 How do you coach a reckless rookie AI into a seasoned pro?
Papers covered in this episode:
    [CV] Beyond Language Modeling: An Exploration of Multimodal Pretraining
    [FAIR, Meta]
    https://arxiv.org/abs/2603.03276
    ---
    [LG] What Capable Agents Must Know: Selection Theorems for Robust Decision-Making under Uncertainty
    [CMU]
    https://arxiv.org/abs/2603.02491
    ---
    [LG] Learning When to Act or Refuse: Guarding Agentic Reasoning Models for Safe Multi-Step Tool Use
    [Microsoft Research]
    https://arxiv.org/abs/2603.03205
    ---
    [LG] Scaling Reward Modeling without Human Supervision
    [Harvard University & Cornell University]
    https://arxiv.org/abs/2603.02225
    ---
    [LG] Safety Training Persists Through Helpfulness Optimization in LLM Agents
    [UC Berkeley]
    https://arxiv.org/abs/2603.02229
  • AI可可AI生活

[AI Frontiers for Everyone] Saving Money, Effort, and Time in the AI World

    04/03/2026 | 28 mins.
Today we won't talk about how large a model's parameter count is; instead, we'll look at how to make AI "think better," sometimes in counterintuitive ways. For instance, why can cramming AI with extra lessons actually make it dumber? We'll also explore how to guide AI through hard problems like a skilled teacher rather than just handing over answers. Going further, we'll reveal how to train AI to analyze code like a detective who "reasons things through," and how to get an entire system to collaborate dynamically and find the most efficient shortcuts.
00:00:35 In the era of large models, how do you get big results on a small budget?
00:05:47 The trap of "extra tutoring": why does learning more make AI dumber?
00:11:37 Why don't great tutors just give away the answer?
00:16:48 Teaching AI to "reason": how a code-world detective is made
00:22:00 Teaching AI to "save time": a smarter kind of fast
Papers covered in this episode:
    [LG] Rich Insights from Cheap Signals: Efficient Evaluations via Tensor Factorization
    [Google DeepMind & University of Michigan]
    https://arxiv.org/abs/2603.02029
    ---
    [LG] Theoretical Perspectives on Data Quality and Synergistic Effects in Pre- and Post-Training Reasoning Models
    [University of Southern California & University of California Los Angeles & Google Research]
    https://arxiv.org/abs/2603.01293
    ---
    [LG] Learn Hard Problems During RL with Reference Guided Fine-tuning
    [ByteDance Seed & UC Berkeley & CMU]
    https://arxiv.org/abs/2603.01223
    ---
    [LG] Agentic Code Reasoning
    [Meta]
    https://arxiv.org/abs/2603.01896
    ---
    [CL] Learning to Draft: Adaptive Speculative Decoding with Reinforcement Learning
    [Microsoft Research Asia & Peking University]
    https://arxiv.org/abs/2603.01639
  • AI可可AI生活

[AI Frontiers for Everyone] AI's Self-Evolution: From Thought Detox and Memory Decluttering to Metacognition

    03/03/2026 | 26 mins.
Have you ever considered that a smarter AI might need to learn not to remember everything, but rather to "forget selectively"? The recent papers in this episode are full of such counterintuitive insights. We'll explore how AI evolves from merely "watching its mouth" to performing deep "detox surgery" on its thoughts, how it dynamically evolves its own problem-solving methodology like a top expert, and even how it gains one of humanity's most precious qualities: knowing what it doesn't know.
00:00:31 AI "detox": real surgery, or just painkillers?
00:04:49 AI's memory problem: beyond rote memorization, what actually works?
00:10:33 Your methods need to evolve, too
00:16:14 AI's memory: could it actually be a burden?
00:21:15 Too clever by half: AI also needs "self-knowledge"
Papers covered in this episode:
    [LG] Detoxifying LLMs via Representation Erasure-Based Preference Optimization
    [McGill University & Google DeepMind]
    https://arxiv.org/abs/2602.23391
    ---
    [LG] Memory Caching: RNNs with Growing Memory
    [Google Research]
    https://arxiv.org/abs/2602.24281
    ---
    [LG] EvoX: Meta-Evolution for Automated Discovery
    [UC Berkeley]
    https://arxiv.org/abs/2602.23413
    ---
    [CL] Do LLMs Benefit From Their Own Words?
    [MIT & IBM Research]
    https://arxiv.org/abs/2602.24287
    ---
    [LG] RewardUQ: A Unified Framework for Uncertainty-Aware Reward Models
    [ETH Zurich]
    https://arxiv.org/abs/2602.24040
  • AI可可AI生活

[AI Frontiers for Everyone] AI Sparring With Itself: From the Drill Trap to Breaking Game-Theoretic Deadlocks

    01/03/2026 | 24 mins.
Ever wondered why the more practice problems an AI grinds through, the more likely it is to stumble on the easy ones? In this episode, we dive into AI's inner world to see how models fall into the "exam-cramming" trap and get stuck in rock-paper-scissors-style logic loops. More importantly, we'll see how researchers use clever ideas like "mind reading" and a "grudge book" to teach AI to learn from failure and find a smarter way out. Get ready: a deep dive into how AI learns, and how it is evaluated, starts now.
00:00:35 Why does more drilling lower AI's first-try accuracy?
00:05:35 AI's "good memory" vs. its "good notes"
00:10:06 The "exam-cramming" trap for AI programmers
00:14:17 The "rock-paper-scissors" problem in the AI world
00:19:08 The robot coach's "mind reading"
Papers covered in this episode:
[LG] Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training
    [Singapore University of Technology and Design & University of Maryland]
    https://arxiv.org/abs/2602.21189
    ---
    [LG] Exploratory Memory-Augmented LLM Agent via Hybrid On- and Off-Policy Optimization
    [Microsoft Research]
    https://arxiv.org/abs/2602.23008
    ---
    [LG] ISO-Bench: Can Coding Agents Optimize Real-World Inference Workloads?
    [Lossfunk]
    https://arxiv.org/abs/2602.19594
    ---
    [LG] Back to Blackwell: Closing the Loop on Intransitivity in Multi-Objective Preference Fine-Tuning
    [CMU]
    https://arxiv.org/abs/2602.19041
    ---
    [RO] TOPReward: Token Probabilities as Hidden Zero-Shot Rewards for Robotics
    [University of Washington & Amazon]
    https://arxiv.org/abs/2602.19313


About AI可可AI生活

First-hand AI briefings from @爱可可-爱生活, explaining the most cutting-edge AI research in the simplest possible language. Whether you're a tech novice or an industry insider, you'll find the AI stories and future trends you want to know about here. Follow along and unlock the endless possibilities of artificial intelligence! #ArtificialIntelligence #TechFrontiers