
AI可可AI生活

fly51fly

921 episodes

  • AI可可AI生活

    [AI Frontiers for Everyone] How Do You Give an AI a CT Scan? When Crabs Start Dancing, AI Learns to "Change Faces"

    02/05/2026 | 33 mins.
    Have you ever wondered whether a set of "knowledge probes" could give a large model a precise CT scan of its "brain capacity"? Or what scientific research turns into, as a "living thing", once AI stops settling for one perfect success story and instead records every lesson from its failures? In this episode, starting from five recent papers, we look at how AI learned the "face-changing" trick, how it pulled ahead using "dumb methods", and how a single phone can teach a crab to breakdance.
    00:00:31 How do you give a large AI model a "brain capacity" CT scan?
    00:08:25 How many steps does it take to teach a crab to breakdance?
    00:13:47 Bringing knowledge to life: the next form of scientific research
    00:20:47 AI's "face-changing" trick: the safety we assume may just be a missed "secret signal"
    00:27:26 A new approach to AI evolution: why are "dumb methods" actually smarter?
    Papers covered in this episode:
    [LG] Incompressible Knowledge Probes: Estimating Black-Box LLM Parameter Counts via Factual Capacity
    [Pine AI]
    https://arxiv.org/abs/2604.24827
    ---
    [CV] MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos
    [Huawei International Pte. Ltd. & Huawei Central Media Technology Institute]
    https://arxiv.org/abs/2512.10881
    ---
    [LG] The Last Human-Written Paper: Agent-Native Research Artifacts
    [Orchestra Research & Stanford University & Ohio State University]
    https://arxiv.org/abs/2604.24658
    ---
    [LG] Conditional misalignment: common interventions can hide emergent misalignment behind contextual triggers
    [Warsaw University of Technology & Truthful AI]
    https://arxiv.org/abs/2604.25891
    ---
    [CV] Tuna-2: Pixel Embeddings Beat Vision Encoders for Multimodal Understanding and Generation
    [Meta AI]
    https://arxiv.org/abs/2604.24763
  • AI可可AI生活

    [AI Frontiers for Everyone] The Economics of AI: From Careful Budgeting and Smart Division of Labor to Mapping the Geography of Thought

    01/05/2026 | 30 mins.
    Have you ever wondered whether a truly productive AI needs not more exam questions, but an "office" of its own? And how do we play the smart "hands-off boss", assigning tasks efficiently to our team of AI experts? In this episode, starting from several recent AI papers, we discuss how "cost thinking" can cut an AI training bill in half, how a "game" can drive AI self-improvement, and finally explore the shape of AI thought: is its "mind" a dictionary, or a set of geometric maps built from concepts?
    00:00:34 Want AI to do your work? First give it an "office"
    00:06:01 How to be a smart "hands-off boss"?
    00:11:46 AI training too expensive? What you lack isn't compute, it's "cost thinking"
    00:17:43 The shortcut to AI progress: don't just look at outcomes, play the right game
    00:23:08 How does AI think? The answer may lie in geometry
    Papers covered in this episode:
    [LG] Synthetic Computers at Scale for Long-Horizon Productivity Simulation
    [Microsoft]
    https://arxiv.org/abs/2604.28181
    ---
    [LG] Optimized Deferral for Imbalanced Settings
    [Google Research & Courant Institute of Mathematical Science]
    https://arxiv.org/abs/2604.27723
    ---
    [LG] Cost-Aware Learning
    [Google Research]
    https://arxiv.org/abs/2604.28020
    ---
    [LG] Distributional Alignment Games for Answer-Level Fine-Tuning
    [Google Research & Microsoft Research]
    https://arxiv.org/abs/2604.27166
    ---
    [LG] Do Sparse Autoencoders Capture Concept Manifolds?
    [Harvard University]
    https://arxiv.org/abs/2604.28119
  • AI可可AI生活

    [AI Frontiers for Everyone] AI's Art of Thinking: When Machines Learn to Highlight Key Points, Hold Retrospectives, and Manage Their Study

    30/04/2026 | 29 mins.
    Have you ever wondered whether AI could be more than a good employee, evolving into a project manager that holds "retrospectives" to optimize its own workflow? Or whether commanding a complex robot could be as simple as "highlighting key points" on a screen? In this episode, starting from five recent AI papers, we discuss how AI breaks through efficiency bottlenecks: from the hidden privacy risks behind "batched" AI services, to how AI organizes vast knowledge as efficiently as a librarian, to the speed limits of learning in different worlds. Get ready to see how AI learns to "simplify the complex" and "evolve itself".
    00:00:39 Want to sell at a high price? You need to understand the laws of learning
    00:07:21 The price of batching: how AI services leak your secrets
    00:12:23 AI's bottleneck isn't the brain, it's the "study"
    00:17:48 Letting AI evolve itself takes more than brute force
    00:23:32 "Highlighting key points" for robots: making the complex simple
    Papers covered in this episode:
    [LG] On the Learning Curves of Revenue Maximization
    [Purdue University & Yale University & Technion]
    https://arxiv.org/abs/2604.26922
    ---
    [LG] Quantamination: Dynamic Quantization Leaks Your Data Across the Batch
    [University of Cambridge & AI Sequrity Company]
    https://arxiv.org/abs/2604.26505
    ---
    [LG] Unifying Sparse Attention with Hierarchical Memory for Scalable Long-Context LLM Serving
    [Microsoft Research]
    https://arxiv.org/abs/2604.26837
    ---
    [CL] FlowBot: Inducing LLM Workflows with Bilevel Optimization and Textual Gradients
    [Naver Search US & MIT]
    https://arxiv.org/abs/2604.26258
    ---
    [CV] Lifting Embodied World Models for Planning and Control
    [New York University & UC Berkeley]
    https://arxiv.org/abs/2604.26182
  • AI可可AI生活

    [AI Frontiers for Everyone] How Does AI Verify Identity, Run Efficient Meetings, and Keep a Steady Mindset?

    29/04/2026 | 30 mins.
    Want to know how to see through an AI's "true identity" at a glance, like a handwriting forensics expert? Curious how an AI meeting can miraculously save 97% of the "tables"? In this episode we discuss several recent papers: how AI learns "mind reading" to collaborate efficiently, how it avoids big mistakes born of "inexplicable confidence", and even why a teacher who "makes mistakes" can train a stronger AI student.
    00:00:28 How do you do "handwriting forensics" on an AI?
    00:06:27 AI meetings: how to save 97% of the tables?
    00:14:10 When the AI student surpasses the teacher: delight or alarm?
    00:19:30 Still making AI "write reports"? They've started exchanging "thoughts" directly
    00:24:16 Why can a teacher who "makes mistakes" train better AI?
    Papers covered in this episode:
    [CL] The Surprising Universality of LLM Outputs: A Real-Time Verification Primitive
    [Evolutionairy AI]
    https://arxiv.org/abs/2604.25634
    ---
    [LG] PolyKV: A Shared Asymmetrically-Compressed KV Cache Pool for Multi-Agent LLM Inference
    [No University Provided]
    https://arxiv.org/abs/2604.24971
    ---
    [AI] Evaluating Risks in Weak-to-Strong Alignment: A Bias-Variance Perspective
    [University of Illinois Urbana-Champaign & Microsoft & InstaDeep]
    https://arxiv.org/abs/2604.25077
    ---
    [CL] Recursive Multi-Agent Systems
    [UIUC]
    https://arxiv.org/abs/2604.25917
    ---
    [LG] When Errors Can Be Beneficial: A Categorization of Imperfect Rewards for Policy Gradient
    [Princeton University]
    https://arxiv.org/abs/2604.25872
  • AI可可AI生活

    [AI Frontiers for Everyone] AI's Mental Map, Social Network, and Weight-Loss Trap

    28/04/2026 | 29 mins.
    Have you ever considered that a "helpful" AI's goodwill might itself be its most dangerous vulnerability? In this episode, starting from several recent AI papers, we explore AI's "inner world": how anticipating the future makes training more efficient, how "expert circles" form inside the model, how it falls into the memory trap of "losing weight without losing fat", and finally the mysterious "treasure map" that charts its paths of thought. Ready? Let's open the AI black box together.
    00:00:30 Why doesn't it matter so much whether the answer is right?
    00:05:59 Your AI is a "picky eater": a hidden pattern that speeds up large models
    00:11:46 A slimming guide for large AI models: losing weight ≠ losing fat
    00:17:49 Why is a "helpful" AI actually more dangerous?
    00:22:34 AI's "treasure map": how do we read the machine's "inner world"?
    Papers covered in this episode:
    [LG] Reward Models Are Secretly Value Functions: Temporally Coherent Reward Modeling
    [AI at Meta]
    https://arxiv.org/abs/2604.22981
    ---
    [LG] Scaling Multi-Node Mixture-of-Experts Inference Using Expert Activation Patterns
    [Meta & Georgia Institute of Technology]
    https://arxiv.org/abs/2604.23150
    ---
    [LG] Parameter Efficiency Is Not Memory Efficiency: Rethinking Fine-Tuning for On-Device LLM Adaptation
    [MIT CSAIL]
    https://arxiv.org/abs/2604.22783
    ---
    [CL] Jailbreaking Frontier Foundation Models Through Intention Deception
    [CMU]
    https://arxiv.org/abs/2604.24082
    ---
    [AI] Domain-Filtered Knowledge Graphs from Sparse Autoencoder Features
    [Stanford University]
    https://arxiv.org/abs/2604.23829


About AI可可AI生活

First-hand AI briefings from @爱可可-爱生活, bringing you the latest frontiers of artificial intelligence research in the simplest, most accessible language. Whether you're a tech newcomer or an industry insider, you'll find the AI stories and future trends you want to know here. Follow along and unlock the limitless possibilities of artificial intelligence! #ArtificialIntelligence #TechFrontiers
