EDGE AI POD

EDGE AI FOUNDATION

Available Episodes

5 of 47
  • From Sensors to Solutions: The Future of Edge AI with Chad Lucien of Ceva
    At the crossroads of cutting-edge technology and practical innovation stands Ceva, a semiconductor IP powerhouse with a remarkable two-decade legacy. Powering nearly 20 billion devices worldwide and shipping over 2 billion annually, Ceva has emerged as a crucial enabler in the burgeoning edge AI ecosystem. What distinguishes Ceva in this competitive landscape is their holistic approach to edge computing. Rather than focusing solely on neural processing, they've strategically built solutions around what Chad Lucien describes as the three pillars of edge AI: connectivity, sensing, and inference. This comprehensive vision has positioned them as the industry's leading Bluetooth IP licensor while developing sophisticated DSP solutions and a scalable NPU portfolio that ranges from modest GOPS to an impressive 400 TOPS. The secret to Ceva's effectiveness lies in their deep integration of hardware and software expertise. "The software is becoming the definition of the product," notes Lucien, explaining how their deep learning applications team directly influences hardware specifications. This software-first perspective has created solutions perfectly tailored for low-power, small form factor devices across diverse applications. From earbuds and health trackers to consumer robots and smart appliances, Ceva's fully programmable solutions handle everything from neural network computation to DSP workloads and control code. Most exciting is Ceva's leadership in the audio ML renaissance through their work with the EDGE AI FOUNDATION's Audio Working Group. As audio applications shift from traditional DSP implementations to neural strategies, we're witnessing transformative capabilities in speech enhancement, anomaly detection, sound identification, and edge-based natural language processing. Discover how Ceva is providing the essential "picks and shovels" for the AI gold rush and why collaboration remains the key to unlocking the full potential of intelligence at the edge. Subscribe to hear more partner stories shaping the future of edge AI!
    --------  
    17:06
  • EDGE AI Partner: David Aronchick of Expanso
    The digital landscape is rapidly evolving beyond centralized cloud computing. In this illuminating conversation with David Aronchick, co-founder of Expanso, we explore the growing necessity of processing data right where it's generated—at the edge. Drawing from his impressive background as the first non-founding PM for Kubernetes at Google and his leadership in open AI strategy at Microsoft, David reveals how these experiences led him to tackle a persistent challenge: how do you leverage container technologies and ML models outside traditional data centers? While cloud platforms excel at centralized workloads, businesses increasingly need computing power in retail locations, manufacturing facilities, and smart city infrastructure. Expanso's elegantly named Bacalhau project (Portuguese for cod, a clever nod to "Compute Over Data") offers a solution by providing reliable orchestration of workloads across distributed locations. Their lightweight Go binary runs on virtually anything from Raspberry Pis to sophisticated edge servers, managing the delivery and execution of jobs while gracefully handling connectivity disruptions that would cause traditional systems to fail. David makes a compelling case for edge computing with a simple physical reality: even 100,000 years from now, the speed of light will still impose a 45-millisecond latency between LA and Boston (a back-of-the-envelope check of this figure appears in the sketch after the episode list). This unchangeable constraint, combined with data transfer costs and regulatory requirements, makes local processing increasingly essential. For organizations struggling with high telemetry bills, Expanso confidently promises at least a 25% cost reduction—or they work for free. Whether you're managing satellite networks, underwater cameras for aquaculture, or thousands of retail locations, this conversation illuminates how the future of computing involves bringing intelligence to where data lives rather than constantly shipping bytes across networks. Join us to discover how this paradigm shift is making AI more effective in the physical world.
    --------  
    21:51
  • Bringing Generative AI to Your Pocket: The Future of Edge Computing
    A technological revolution is quietly unfolding in your pocket. Imagine your phone creating stunning images, understanding what its camera sees, and responding to complex questions—all without sending a single byte of data to the cloud. This isn't science fiction; it's generative edge AI, and it's already here. We dive deep into this transformative trend that's bringing AI's creative powers directly to our devices. Building on the foundation laid by the tinyML movement, generative edge AI represents a fundamental shift in how we'll interact with technology. The benefits are compelling: complete privacy as your data never leaves your device, lightning-fast responses without internet latency, independence from network connections, and significant cost savings from reduced cloud computing needs. The applications span far beyond convenience. For people with disabilities, it means image captioning that works anywhere, even without internet access. For photographers, it's like having a professional editor built right into the camera. In healthcare, it enables diagnostics while keeping sensitive patient data secure and accessible even in areas with poor connectivity. The technical achievements making this possible are equally impressive. Researchers have shrunk massive AI models to run efficiently on everyday devices, from visual question answering systems that respond in milliseconds to text-to-speech engines that sound remarkably natural (a minimal quantization sketch after the episode list illustrates one common model-shrinking technique). They're even making progress bringing text-to-image generation and small language models directly to smartphones. As we explore these breakthroughs, we consider the profound implications of truly intelligent devices that can learn, adapt, and make decisions autonomously. What happens when our technology not only understands but also creates and acts independently? The silent AI revolution happening in our hands is set to transform our relationship with technology in ways we're just beginning to comprehend. Ready to understand the future that's already arriving? Listen now and glimpse the world where intelligence lives at your fingertips, not in distant server farms.
    --------  
    27:10
  • Audio AI on the Edge with Ceva
    Audio processing at the edge is undergoing a revolution as deep learning transforms what's possible on tiny, power-constrained devices. Daniel from Ceva takes us on a fascinating journey through the complete lifecycle of audio AI models—from initial development to real-world deployment on microcontrollers. We explore two groundbreaking applications that demonstrate the power of audio machine learning on resource-limited hardware. First, Environmental Noise Cancellation (ENC) addresses the critical need for clear communication in noisy environments. Rather than accepting the limitations of traditional approaches that require multiple microphones, Ceva's single-microphone solution leverages deep neural networks to achieve superior noise reduction while preserving speech quality—all with a model eight times smaller than conventional alternatives. The conversation then shifts to voice interfaces, where Text-to-Model technology is eliminating months of development time by generating keyword spotting models directly from text input. This innovation allows manufacturers to create, modify, or rebrand voice commands instantly without costly data collection and retraining cycles. Each additional keyword requires merely one kilobyte of memory, making sophisticated voice interfaces accessible even on the smallest devices. Throughout the discussion, Daniel reveals the technical challenges and breakthroughs involved in optimizing these models for production environments. From quantization-aware training and SVD compression to knowledge distillation and framework conversion strategies, we gain practical insights into making AI work effectively within severe computational constraints (a toy illustration of the SVD-compression idea appears after the episode list). Whether you're developing embedded systems, designing voice-enabled products, or simply curious about the future of human-machine interaction, this episode offers a valuable perspective on how audio AI is becoming both more powerful and more accessible. The era of intelligent listening devices is here—and they're smaller, more efficient, and more capable than ever before. Ready to explore audio AI for your next project? Check out Ceva's YouTube channel for demos of these technologies in action, or join the EDGE AI FOUNDATION's Audio Working Group to collaborate with industry experts on advancing this rapidly evolving field.
    --------  
    59:47
  • Garbage In, Garbage Out - High-Quality Datasets for Edge ML Research
    The EDGE AI FOUNDATION's Datasets & Benchmarks Working Group highlights the rapid progress in neural networks, particularly in cloud-based applications like image recognition and NLP, which benefited greatly from large, high-quality datasets. However, the constrained nature of edge AI devices necessitates smaller, more efficient models, yet a lack of suitable datasets hinders progress and realistic evaluation in this area. To address this, the Foundation aims to create and maintain a repository of production-grade, diverse, and well-annotated datasets for tiny and edge ML use cases, enabling fair comparisons and the advancement of the field. They emphasize community involvement in contributing datasets, providing feedback, and establishing best practices for optimization. Ultimately, this initiative seeks to level the playing field for edge AI research by providing the necessary resources for accurate benchmarking and innovation.
    --------  
    21:17
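
The Expanso episode above cites a roughly 45-millisecond speed-of-light latency between LA and Boston. The short Python sketch below is a back-of-the-envelope check of that figure, assuming light travels at about two-thirds of c in optical fiber and a great-circle distance of about 4,180 km; both values are illustrative assumptions, not figures from the episode.

    # Back-of-the-envelope latency floor between Los Angeles and Boston.
    # Assumptions (not from the episode): light in optical fiber travels at
    # roughly 2/3 of c, and the great-circle distance is about 4,180 km.
    SPEED_OF_LIGHT_KM_S = 299_792      # c in vacuum, km/s
    FIBER_FACTOR = 0.67                # typical slowdown inside optical fiber
    LA_BOSTON_KM = 4_180               # approximate great-circle distance

    one_way_ms = LA_BOSTON_KM / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
    round_trip_ms = 2 * one_way_ms

    print(f"one way:    {one_way_ms:.1f} ms")     # ~20.8 ms
    print(f"round trip: {round_trip_ms:.1f} ms")  # ~41.6 ms

With these assumptions the round trip lands near 42 ms; real fiber routes are longer than the great-circle path, which pushes the number toward the ~45 ms cited in the episode.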
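The generative edge AI episode describes shrinking large models so they run on everyday devices. One common shrinking technique, shown below as a minimal PyTorch sketch, is post-training dynamic quantization of linear layers to int8; the tiny model here is a stand-in for illustration and is not any model discussed in the episode.

    # Minimal sketch: post-training dynamic quantization with PyTorch.
    # The tiny model is a stand-in for illustration only.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(512, 256),
        nn.ReLU(),
        nn.Linear(256, 64),
    )

    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(quantized(x).shape)  # Linear layers now use int8 weights at inference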
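The audio AI episode lists SVD compression among the optimization techniques covered. As a toy illustration of the underlying idea, the NumPy sketch below truncates the singular value decomposition of a dense weight matrix into two smaller factors; the matrix shape and kept rank are illustrative assumptions, and real layer weights are typically closer to low-rank than the random matrix used here.

    # Toy illustration of SVD-based weight compression: approximate a dense
    # m x n weight matrix W by (m x r) @ (r x n) factors with small rank r.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((256, 128))   # stand-in dense layer weights

    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    r = 16                                # kept rank (accuracy/size trade-off)
    A = U[:, :r] * S[:r]                  # 256 x r
    B = Vt[:r, :]                         # r x 128

    print("params:", W.size, "->", A.size + B.size)   # 32768 -> 6144
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"relative reconstruction error: {rel_err:.2f}")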


About EDGE AI POD

Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed covering all things edge AI from the world's largest EDGE AI community. It features shows such as EDGE AI TALKS and EDGE AI BLUEPRINTS, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics. Join us to stay informed and inspired!
