Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon Walks Us Through His New Paper
This episode dives into "Category-Theoretic Analysis of Inter-Agent Communication and Mutual Understanding Metric in Recursive Consciousness." The paper presents an extension of the Recursive Consciousness framework to analyze communication between agents and the inevitable loss of meaning in translation. We're thrilled to feature the paper's author, Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon, to walk us through the research and its implications. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
--------
48:11
Small Language Models are the Future of Agentic AI
We had the privilege of hosting Peter Belcak – an AI Researcher working on the reliability and efficiency of agentic systems at NVIDIA – who walked us through his new paper making the rounds in AI circles titled “Small Language Models are the Future of Agentic AI.” The paper posits that small language models (SLMs) are sufficiently powerful, inherently more suitable, and necessarily more economical for many invocations in agentic systems, and are therefore the future of agentic AI. The authors’ argumentation is grounded in the current level of capabilities exhibited by SLMs, the common architectures of agentic systems, and the economy of LM deployment. The authors further argue that in situations where general-purpose conversational abilities are essential, heterogeneous agentic systems (i.e., agents invoking multiple different models) are the natural choice. They discuss the potential barriers for the adoption of SLMs in agentic systems and outline a general LLM-to-SLM agent conversion algorithm. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
--------
31:15
Watermarking for LLMs and Image Models
In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author John Kirchenbauer. This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model’s internals. Learn more about the A Watermark for Large Language Models paper. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
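For readers who want a feel for the mechanism discussed in the episode, here is a minimal Python sketch of the green-list/red-list idea behind this style of watermark: hash the previous token to pseudo-randomly partition the vocabulary, bias generation toward "green" tokens, and detect the watermark with a z-test on how often green tokens appear. The vocabulary size, gamma/delta values, and helper names below are illustrative assumptions, not the paper's reference implementation.

```python
import hashlib
import math
import random

# Illustrative parameters; the real method operates on an LLM's full vocabulary and logits.
VOCAB_SIZE = 50_000
GAMMA = 0.25   # fraction of the vocabulary placed on the "green list" at each step
DELTA = 2.0    # logit bias added to green-list tokens during generation

def green_list(prev_token_id: int) -> set[int]:
    """Seed a PRNG with a hash of the previous token and sample that step's green list."""
    seed = int.from_bytes(hashlib.sha256(str(prev_token_id).encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

def bias_logits(logits: list[float], prev_token_id: int) -> list[float]:
    """Soft watermark: nudge sampling toward green-list tokens without forbidding red ones."""
    greens = green_list(prev_token_id)
    return [x + DELTA if i in greens else x for i, x in enumerate(logits)]

def detection_z_score(token_ids: list[int]) -> float:
    """Recompute green lists from the text alone and test the green-token count against
    the null hypothesis that unwatermarked text hits the green list at rate GAMMA."""
    hits = sum(1 for prev, cur in zip(token_ids, token_ids[1:]) if cur in green_list(prev))
    t = len(token_ids) - 1
    return (hits - GAMMA * t) / math.sqrt(t * GAMMA * (1 - GAMMA))
```

A high z-score on a passage of text indicates far more green tokens than chance would produce, which is the detectable signal; note that detection here needs only the tokenizer and the hashing scheme, not the model's internals.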
--------
42:56
Self-Adapting Language Models: Paper Authors Discuss Implications
The authors of the new paper "Self-Adapting Language Models (SEAL)" shared a behind-the-scenes look at their work, motivations, results, and future directions. The paper introduces a novel method for enabling large language models (LLMs) to adapt their own weights using self-generated data and training directives, which the authors call "self-edits." Learn more about the Self-Adapting Language Models paper. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
--------
31:26
The Illusion of Thinking: What the Apple AI Paper Says About LLM Reasoning
This week we discuss The Illusion of Thinking, a new paper from researchers at Apple that challenges today’s evaluation methods and introduces a new benchmark: synthetic puzzles with controllable complexity and clean logic. Their findings? Large Reasoning Models (LRMs) show surprising failure modes, including a complete collapse on high-complexity tasks and a decline in reasoning effort as problems get harder. Dylan and Parth dive into the paper's findings as well as the debate around it, including a response paper aptly titled "The Illusion of the Illusion of Thinking." Read the paper: The Illusion of Thinking. Read the response: The Illusion of the Illusion of Thinking. Explore more AI research and sign up for future readings. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
Deep Papers is a podcast series featuring deep dives on today’s most important AI papers and research. Hosted by Arize AI founders and engineers, each episode profiles the people and techniques behind cutting-edge breakthroughs in machine learning.