
AI可可AI生活

fly51fly

Available Episodes

5 of 726
  • [For Everyone] Noise, Geometry, and the Power of Deep Thought
    Have you ever wondered whether making AI smarter might not require more compute, but rather a cleverer way of guiding it? In this episode we explore the surprising insights of several recent papers: a little "computational noise" can actually help AI learn better; we can "see" the geometric trajectory of an AI's reasoning, much as a CT scan reveals the body; and, as with raising a child, we can teach AI to balance exploration and focus, even unlocking its hidden potential without spending a cent.
    00:00:36 Upgrade your AI for free? Just change how you ask
    00:05:39 AI parenting: teaching a machine "just right" exploration
    00:11:50 Does adding a little "noise" make AI training work better?
    00:16:47 AI's "flow": seeing the trajectory of thought
    00:22:19 How do we get a smart AI to work smarter?
    Papers covered in this episode:
    [LG] Reasoning with Sampling: Your Base Model is Smarter Than You Think [Harvard University] https://arxiv.org/abs/2510.14901
    [LG] Agentic Entropy-Balanced Policy Optimization [Kuaishou Technology & Renmin University of China] https://arxiv.org/abs/2510.14545
    [LG] QeRL: Beyond Efficiency -- Quantization-enhanced Reinforcement Learning for LLMs [NVIDIA & MIT] https://arxiv.org/abs/2510.11696
    [LG] The Geometry of Reasoning: Flowing Logics in Representation Space [Duke University] https://arxiv.org/abs/2510.09782
    [CL] Demystifying Reinforcement Learning in Agentic Reasoning [National University of Singapore & Princeton University & University of Illinois at Urbana-Champaign] https://arxiv.org/abs/2510.11701
    --------  
    30:11
  • [For Everyone] From a Unified Language and Cognitive Traps to the Art of Making Mistakes
    To make AI genuinely smarter, should we invent a unifying "Esperanto" for it, or teach it to weigh a "bigger brain" against "longer thinking"? Or should we first put it on a healthy "information diet" so it avoids "brain rot", and cure the worldwide bias hidden in its curiosity? A recent paper even argues that the key to higher intelligence is first teaching AI to be a "struggling student" who makes mistakes. In this episode we use five recent papers to explore the unexpected logic behind AI intelligence.
    00:00:39 AI's "Esperanto": one manual to unify the field
    00:06:11 AI's "junk food" trap: why even top models get "brain rot"
    00:10:58 Does AI get smarter from a bigger brain or from thinking longer?
    00:17:18 AI's "curiosity" hides a worldwide bias
    00:22:56 Why must the smartest AI first learn to be a "struggling student"?
    Papers covered in this episode:
    [LG] Tensor Logic: The Language of AI [University of Washington] https://arxiv.org/abs/2510.12269
    [CL] LLMs Can Get "Brain Rot"! [Texas A&M University & University of Texas at Austin & Purdue University] https://arxiv.org/abs/2510.13928
    [LG] Not All Bits Are Equal: Scale-Dependent Memory Optimization Strategies for Reasoning Models [KRAFTON & University of Wisconsin–Madison] https://arxiv.org/abs/2510.10964
    [CL] The Curious Case of Curiosity across Human Cultures and LLMs [University of Michigan] https://arxiv.org/abs/2510.12943
    [LG] Learning to Make MISTAKEs: Modeling Incorrect Student Thinking And Key Errors [MIT CSAIL] https://arxiv.org/abs/2510.11502
    --------  
    28:31
  • [For Everyone] From Rocket Launches and College Majors to a Moment of Reflection
    Beyond feeding it more data, what subtler methods can make AI smarter? The recent papers in this episode reveal an AI "growth playbook": they upgrade the view of training from "walking downhill" to "launching a rocket", design a "college curriculum" that moves from general studies to a major, teach the foresight of predicting "future summaries", and grant the wisdom to "catch a breath" and think slowly at key moments. Today, let's see how this research is reshaping AI's "learning methodology".
    00:00:33 Training AI: you think it's hill climbing, but it's really rocket flight?
    00:05:56 An AI growth secret: take one more "major course"
    00:11:26 The ultimate model diet: how to make an elephant both light and smart
    00:16:53 AI's foresight: caring about more than the next word
    00:21:10 AI's "moment of reflection": the wisdom of fast and slow
    Papers covered in this episode:
    [LG] Optimal Control Theoretic Neural Optimizer: From Backpropagation to Dynamic Programming [Meta & Georgia Institute of Technology & Apple] https://arxiv.org/abs/2510.14168
    [CL] Midtraining Bridges Pretraining and Posttraining Distributions [CMU] https://arxiv.org/abs/2510.14865
    [LG] BitNet Distillation [Microsoft Research] https://arxiv.org/abs/2510.13998
    [LG] Beyond Multi-Token Prediction: Pretraining LLMs with Future Summaries [FAIR at Meta & CMU] https://arxiv.org/abs/2510.14751
    [CL] Catch Your Breath: Adaptive Computation for Self-Paced Sequence Production [Google DeepMind] https://arxiv.org/abs/2510.13879
    --------  
    26:08
  • [For Everyone] From Scientific Prediction and Radical Simplicity to Teamwork
    Want to know why teaching a robot to play with the "dumbest" toys can let it learn to grasp anything? In this episode we explore how to turn the mysterious "alchemy" of tuning large AI models into a rigorous science, see how an expert AI can learn to "speak plainly" so a novice AI can keep up, and finally reveal that behind all the varied tuning recipes lies the same simple objective. Let's dive into today's frontier briefing!
    00:00:28 A tuning guide for large models: from alchemy to science
    00:05:39 Back to basics: the simplest method may be the best one
    00:11:25 Want a smarter robot? First let it play with "dumb" toys
    00:16:41 How can an expert AI bring a novice AI along?
    00:00 Tuning recipes for large models: do all roads lead to Rome?
    Papers covered in this episode:
    [LG] The Art of Scaling Reinforcement Learning Compute for LLMs [Meta & UT Austin & UC Berkeley] https://arxiv.org/abs/2510.13786
    [RO] VLA-0: Building State-of-the-Art VLAs with Zero Modification [NVIDIA] https://arxiv.org/abs/2510.13054
    [RO] Learning to Grasp Anything by Playing with Random Toys [UC Berkeley] https://arxiv.org/abs/2510.12866
    [LG] Tandem Training for Language Models [Microsoft & EPFL & University of Toronto] https://arxiv.org/abs/2510.13551
    [LG] What is the objective of reasoning with reinforcement learning? [University of Pennsylvania & UC Berkeley] https://arxiv.org/abs/2510.13651
    --------  
    27:01
  • [AI Commentary] Why Do I Feel More Like a Failure the More Powerful AI Tools Get?
    The stronger the tools, the deeper the anxiety. When Sora can turn a sentence into a film and Claude can produce code in an instant, we seem to hold magic weapons, so why the panic? "With AI this good, if I still can't make money, I'm worthless." Has that sentence ever circled in your head like a ghost? In this episode I'll help you step out of the loop where stronger tools bring harsher self-judgment. We go from the history of the California Gold Rush to the survival rules of the AI era, and from the spread of the camera to how scarcity in creativity has shifted. You will hear:
    Why does AI lower the "execution barrier" while raising the "cognitive barrier"?
    From "gold panner" to "water seller", where does your value sit?
    How can probabilistic thinking help you find the 1% golden opportunity among 100 AI-generated options?
    Don't let the best tools become the heaviest shackles. Your aesthetic sense, taste, and empathy are the core assets AI cannot price. Listen in, say goodbye to money anxiety, and find your true coordinates of value in the AI era.
    --------  
    11:52


About AI可可AI生活

First-hand AI briefings from @爱可可-爱生活, bringing you the latest frontier AI research news in the simplest, most accessible language. Whether you're a tech newcomer or an industry insider, you'll find the AI stories and future trends you care about here. Follow along and unlock the limitless possibilities of artificial intelligence! #AI #TechFrontier
