Have you ever wondered how a clever AI might examine and optimize its own working methods, achieving a kind of "self-evolution"? Or how the combined wisdom of a whole roster of "expert models" can be distilled into the tiny chip inside your phone? In this episode, we unpack five of the latest papers in one sitting: how AI becomes an all-rounder through the "add first, then subtract" approach, how it uses "metacognition" to break out of mental ruts, and how it learns to be "lazy in a smart way," going all-out where it matters and coasting to save power where it doesn't. Ready? Let's dive into this tour of AI ideas!
00:00:37 The AI world's art of "distillation": add first, then subtract
00:05:00 How does an already-smart system get even smarter?
00:11:12 How can an AI "generalist" open many doors of the physical world with a single key?
00:16:39 The secret to smarter AI: not seeing more, but seeing precisely
00:21:18 A large model's "slim-down" diary: the art of smart laziness
Papers covered in this episode:
[CV] Efficient Universal Perception Encoder
[Meta Reality Labs & FAIR at Meta]
https://arxiv.org/abs/2603.22387
---
[AI] Bilevel Autoresearch: Meta-Autoresearching Itself
https://arxiv.org/abs/2603.23420
---
[LG] UniFluids: Unified Neural Operator Learning with Conditional Flow-matching
[Chinese Academy of Sciences & Microsoft Research Asia]
https://arxiv.org/abs/2603.22309
---
[LG] Scaling Attention via Feature Sparsity
[Xidian University]
https://arxiv.org/abs/2603.22300
---
[LG] Sparser, Faster, Lighter Transformer Language Models
[Sakana AI & NVIDIA]
https://arxiv.org/abs/2603.23198