Have you ever wondered why AI sometimes "mistakes a deer for a horse"? Or why, when it faces a hard problem, its neurons collectively start "slacking off"? In this episode, drawing on several recent papers, we give AI's brain a "CT scan" and a round of "gene sequencing," revealing the surprising underlying laws behind how it perceives, learns, reasons, and economizes.
00:00:26 AI's "Achilles' heel": a curse of dimensionality
00:05:34 The evolution of AI image generation: why experts don't need "endless drilling"
00:10:02 When AI thinks, should we laugh? No, its neurons are just "slacking off"
00:15:44 How to give AI a "CT scan" at 50x the efficiency
00:21:34 The "impossible triangle" of AI models: compute, speed, and intelligence
Papers featured in this episode:
[LG] Solving adversarial examples requires solving exponential misalignment
[Stanford University & Aisle]
https://arxiv.org/abs/2603.03507
---
[LG] Generalization Properties of Score-matching Diffusion Models for Intrinsically Low-dimensional Data
[University of Michigan & Google DeepMind & UC Berkeley]
https://arxiv.org/abs/2603.03700
---
[CL] Farther the Shift, Sparser the Representation: Analyzing OOD Mechanisms in LLMs
[Rutgers University & Northwestern University & UKP Lab, TU Darmstadt]
https://arxiv.org/abs/2603.03415
---
[CL] Compressed Sensing for Capability Localization in Large Language Models
[CMU]
https://arxiv.org/abs/2603.03335
---
[LG] Why Are Linear RNNs More Parallelizable?
[Allen Institute for AI & Rheinland-Pfälzische Technische Universität]
https://arxiv.org/abs/2603.03612