EDGE AI POD

EDGE AI FOUNDATION

Available Episodes

5 of 70
  • Generative AI on NXP Microprocessors
    Stepping into a future where AI doesn't require the cloud, NXP is bringing generative AI directly to microprocessors. Alberto Alvarez offers an illuminating journey through NXP's approach to private, secure, and efficient AI inference that operates entirely at the edge.
    The heart of NXP's innovation is its eIQ GenAI Flow, a comprehensive software pipeline for i.MX SoCs that enables both fine-tuning and optimization of AI models. This dual capability allows developers to adapt openly available large language models to specific use cases without compromising data privacy, while also tackling memory footprint through quantization techniques that maintain model accuracy. The conversational AI implementation creates a seamless experience by combining wake-word detection, speech recognition, language processing with retrieval-augmented generation, and natural speech synthesis, all accelerated by NXP's Neutron NPU.
    Most striking is NXP's partnership with Kinara, which brings truly groundbreaking multimodal AI entirely to the edge. Their demonstration of the LLaVA model, combining Llama 3's 8 billion parameters with CLIP vision encoding, shows both images and language queries being processed without any cloud connectivity. Imagine industrial systems analyzing visual scenes, detecting subtle anomalies like water spills, and providing spoken reports, all while keeping sensitive data completely private. With quantization reducing these massive models to manageable 4-bit and 8-bit precision, NXP is making previously impossible edge AI applications a practical reality.
    Ready to experience the future of edge intelligence? Explore NXP's Application Code Hub to start building with eIQ GenAI resources on compatible hardware and discover how your next project can harness the power of generative AI without surrendering privacy or security to the cloud.
    Send us a text | Support the show | Learn more about the EDGE AI FOUNDATION - edgeaifoundation.org
    --------  
    28:44
  • Transforming Human-Computer Interaction with OpenVINO
    The boundary between humans and computers is blurring as AI capabilities advance, creating opportunities for more natural, conversational interactions with our devices. Raymond Lo from Intel takes us on a journey through the evolution of human-computer interaction, from simple mouse clicks to sophisticated chatbots that understand context, process images, and engage in meaningful dialogue.
    At the heart of this transformation is OpenVINO, Intel's toolkit for optimizing neural networks across diverse hardware. Raymond demonstrates how this technology enables edge devices, from laptops to specialized processors, to run sophisticated AI models locally without requiring cloud connectivity. The examples are compelling: generating images of teddy bears in seconds on a standard laptop GPU, running large language models that once consumed 25 GB of RAM on modest hardware, and building smart cameras that can describe what your baby is doing without complex coding.
    Memory management emerges as the hero of this story. Through techniques like quantization (reducing model precision from 32-bit to 8-bit or even 4-bit), OpenVINO dramatically shrinks model size while maintaining accuracy. This isn't just about fitting models into limited memory; it also activates specialized hardware instructions that can deliver 2-3x performance improvements, transforming sluggish experiences into fluid, real-time interactions.
    The impact extends beyond technical achievements. Raymond shares the emotional moment when he first got a chatbot running locally: "I never felt so alive when I saw the machine talking to me." For developers, this means creating prototypes in weeks rather than months, accessing hundreds of pre-optimized examples, and focusing on building experiences rather than struggling with technical hurdles.
    Through partnerships with Microsoft's AI Foundry program, these capabilities are being integrated directly into Windows, ensuring consumers get optimal AI performance from their hardware without additional setup. For industries embracing AI, from healthcare to retail to smart cities, OpenVINO offers a path to enhance existing applications while exploring new possibilities at the intersection of traditional and generative AI. Want to experience this revolution yourself? Check out Intel's extensive library of notebooks and examples, or try the Open Edge Platform to start building immediately. The future of human-computer interaction isn't just coming; it's already here on your local device.
    --------  
    43:26
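The quantization step the episode credits for OpenVINO's memory savings can be illustrated with a minimal sketch: mapping 32-bit float weights onto 8-bit integers with a single scale factor. This is a generic symmetric per-tensor scheme, not OpenVINO's actual implementation (its NNCF tooling also calibrates activations and uses per-channel scales).

```python
# Toy post-training int8 quantization: one scale for the whole tensor.

def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights onto the int8 range [-127, 127] with one scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.99, -0.55]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each value now needs 1 byte instead of 4 (a 4x shrink), and the
# reconstruction error stays within half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-9
```

The 4x size reduction is what lets a model that "once consumed 25 GB of RAM" fit on modest hardware, and integer tensors are also what unlock the specialized instructions behind the 2-3x speedups mentioned above.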
  • Support for Novel Models for Ahead of Time Compiled Edge AI Deployment
    The growing gap between rapidly evolving AI models and lagging deployment frameworks creates a significant challenge for edge AI developers. Maurice Sersiff, CEO and co-founder of Germany-based Roofline AI, presents a compelling solution to this problem: innovative compiler technology designed to make edge AI deployment simple and efficient.
    At the heart of Roofline's approach is a retargetable AI compiler that acts as the bridge between any AI model and diverse hardware targets. Their SDK supports all major frameworks (PyTorch, TensorFlow, ONNX) and model architectures from traditional CNNs to cutting-edge LLMs. The compiler generates optimized code tailored to the target hardware, whether multi-core Arm systems, embedded GPUs, or specialized NPUs.
    What truly sets Roofline apart is its commitment to comprehensive model coverage. The team operates with a "day zero support" philosophy: if a model doesn't work, that's considered a bug to be fixed within 24 hours. This lets developers use the latest models immediately instead of waiting months for support. Performance benchmarks show 1-3x faster execution than alternatives such as TorchInductor, while significantly reducing memory footprint.
    Maurice offers a fascinating comparison between Roofline's compiler-based approach to running LLMs on edge devices and the popular library-based solution llama.cpp. While hand-optimized kernels currently hold a slight performance edge, Roofline offers far greater flexibility and immediate support for new models, and ongoing optimization work is rapidly closing the gap, particularly on Arm platforms.
    Interested in simplifying your edge AI deployment while maintaining performance? Explore how Roofline AI's Python-integrated SDK can help you bring any model to any chip with minimal friction, enabling true innovation at the edge.
    --------  
    11:50
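The "retargetable compiler" idea above can be sketched as one intermediate representation lowered to different backends by swapping the code generator. The tiny IR, the op names, and the kernel tables below are invented for illustration; Roofline AI's actual SDK and IR are not described in this summary.

```python
# Toy retargetable lowering: same IR, different backend kernel tables.
# IR ops are (op_name, input_a, input_b_or_None, output).

IR = [("matmul", "x", "w0", "h"), ("relu", "h", None, "a"), ("matmul", "a", "w1", "y")]

def lower(ir, target: str) -> list[str]:
    """Emit one pseudo-kernel call per IR op for the chosen target."""
    kernels = {
        "arm-neon": {"matmul": "neon_sgemm", "relu": "neon_relu"},
        "npu":      {"matmul": "npu_matmul", "relu": "npu_relu"},
    }
    table = kernels[target]
    code = []
    for op, a, b, out in ir:
        args = ", ".join(x for x in (a, b) if x is not None)
        code.append(f"{out} = {table[op]}({args});")
    return code
```

Supporting a new chip only requires a new kernel table (in reality, a new code generator), while the model-facing front end stays untouched; that separation is what makes "any model to any chip" and 24-hour model coverage plausible.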
  • Comparative Analysis of NPU Optimized Software Framework
    The future of AI isn't just in massive cloud servers; it's already sitting in your pocket. In this eye-opening presentation, Yeon-seok, CEO and co-founder of JTIC AI, reveals how his company is tapping into the underutilized Neural Processing Units (NPUs) that have been standard in smartphones since 2017.
    While tech giants pour billions into cloud infrastructure, JTIC AI has identified a critical opportunity: leveraging the powerful AI processors already in billions of devices worldwide. This approach delivers not just cost savings but crucial advantages including offline functionality, enhanced data security, and real-time responsiveness, all without depending on internet connectivity.
    The technical journey involves three essential components: hardware utilization, model optimization, and runtime software. Yeon-seok breaks down model optimization techniques like pruning, quantization, and knowledge distillation that make complex AI models deployable on mobile devices. The biggest challenge, however, isn't hardware capability but software fragmentation. Unlike the GPU market dominated by NVIDIA and CUDA, mobile devices operate in a fragmented ecosystem where Apple, Qualcomm, MediaTek, and others maintain incompatible software stacks, creating significant barriers for AI engineers.
    JTIC AI's solution is an end-to-end automated pipeline that handles everything from model optimization to device-specific benchmarking. Their system can determine which runtime will deliver optimal performance for a specific model on a specific device, something that's impossible to predict without comprehensive testing. With this approach, developers can deploy sophisticated AI across the mobile ecosystem without wrestling with manufacturer-specific implementations.
    Ready to unlock the AI capabilities already sitting in your users' pockets? Discover how on-device AI can transform your applications with better privacy, offline functionality, and faster response times, all while reducing your cloud infrastructure costs.
    --------  
    13:32
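The device-specific benchmarking step described above boils down to: run the same model through each candidate runtime, time it, and keep the fastest. The "runtimes" below are stand-in Python callables with deliberately different costs; a real pipeline would invoke vendor stacks (Core ML, TFLite delegates, and so on) on physical devices.

```python
# Minimal sketch of picking the fastest runtime empirically.

import time

def benchmark(runtimes: dict, model_input, repeats: int = 5) -> str:
    """Return the name of the runtime with the lowest average latency."""
    timings = {}
    for name, run in runtimes.items():
        start = time.perf_counter()
        for _ in range(repeats):
            run(model_input)
        timings[name] = (time.perf_counter() - start) / repeats
    return min(timings, key=timings.get)

# Stand-in "runtimes": one does far more work per call than the other.
fast = lambda x: sum(x)
slow = lambda x: sum(v for v in sorted(x * 200))
best = benchmark({"runtime_a": slow, "runtime_b": fast}, list(range(100)))
```

The episode's claim that the winner is "impossible to predict without comprehensive testing" is why the selection is empirical rather than a static lookup table per chipset.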
  • Powering Intelligence: Anaflash's Revolutionary AI Microcontroller with Embedded Flash Memory
    Memory bottlenecks, not computational limits, are the true barrier holding back edge AI. This insight lies at the heart of Anaflash's approach to intelligent edge computing: an AI microcontroller with embedded flash memory that rethinks power efficiency and cost in smart devices.
    The team has engineered a solution to the two fundamental challenges facing edge AI adoption: power efficiency and cost. Their microcontroller features zero-standby-power weight memory built on 4-bit-per-cell embedded flash, seamlessly integrated with compute resources. Unlike traditional non-volatile memory options that demand extra processing steps and offer limited storage density, this technology requires no additional masks and scales efficiently.
    At the core of the design is the Near-Memory Computing Unit (NMCU), which couples tightly with the flash memory through a wide I/O interface on a single chip. This architecture eliminates the need to fetch data from external memory after booting or waking from deep sleep, a game-changing feature for battery-powered devices. The NMCU's three-part design enhances parallel computation while minimizing CPU intervention: control logic manages weight addresses and buffer flow, 16 processing elements share weights over high-bandwidth connections, and a quantization block efficiently converts computational results.
    Fabricated in Samsung Foundry's 28 nm standard logic process on a compact 4 x 4.5 mm² die, the microcontroller delivers impressive results. Testing with MNIST and a deep autoencoder shows accuracy virtually identical to software baselines: over 95% and 0.878 AUC respectively. An overstress-free wordline driver circuit extends flash cell margins, further improving reliability and performance.
    Ready to transform your edge AI applications with technology that combines efficiency, performance, and cost-effectiveness? Experience the future of intelligent edge computing with Anaflash's embedded flash microcontroller, where memory and computation unite to power the next generation of smart devices.
    --------  
    15:21
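The footprint advantage of 4-bit-per-cell weight storage can be illustrated by packing two 4-bit weight codes into each byte, halving storage versus 8-bit codes. This is a generic packing sketch for intuition only, not Anaflash's actual flash cell organization.

```python
# Pack/unpack 4-bit unsigned weight codes, two per byte, low nibble first.

def pack_nibbles(codes: list[int]) -> bytes:
    """Pack 4-bit codes (0..15) into bytes, padding odd counts with 0."""
    if len(codes) % 2:
        codes = codes + [0]
    out = bytearray()
    for lo, hi in zip(codes[::2], codes[1::2]):
        out.append((hi << 4) | lo)
    return bytes(out)

def unpack_nibbles(data: bytes, count: int) -> list[int]:
    """Recover the first `count` 4-bit codes from packed bytes."""
    codes = []
    for byte in data:
        codes.append(byte & 0x0F)
        codes.append(byte >> 4)
    return codes[:count]

codes = [3, 15, 0, 7, 9]
packed = pack_nibbles(codes)
assert unpack_nibbles(packed, len(codes)) == codes
assert len(packed) == 3  # five 4-bit codes fit in three bytes
```

Because the weights live in non-volatile flash in this dense form, nothing needs to be re-fetched from external memory after boot or deep-sleep wake, which is the zero-standby-power property the episode highlights.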

More Technology podcasts

About EDGE AI POD

Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed from the world's largest EDGE AI community. It features shows like EDGE AI Talks and EDGE AI Blueprints, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics. Join us to stay informed and inspired!
Podcast website

Listen to EDGE AI POD, Search Engine and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Stations and podcasts to bookmark
  • Stream via Wi-Fi or Bluetooth
  • Supports CarPlay & Android Auto
  • Many other app features
v8.1.1 | © 2007-2025 radio.de GmbH
Generated: 12/10/2025 - 6:40:14 AM